ControlsFreak

5 followers   follows 0 users
joined 2022 October 02 23:23:48 UTC

User ID: 1422

No bio...

if that made it work better

It seems to me that you are saying that you have goals for what you want the end product to be like. As such, I think you're implicitly affirming that you would choose to not do things like train on the test set. That is, you wouldn't just clearly and directly give it the answers, even though you could.

Now, the question seems to me, "What do you even mean by benevolence?" You originally said:

Lack of benevolence: God created the world and all that is in it, and is able to interact with it, but doesn't actually care about us.

But this doesn't quite make direct sense. You care about the LLM you're creating. You deeply care about it, at least in that you very much care to "ma[k]e it work better". It seems like you're using some other sense of the words, one that is not fully fleshed out. Like, maybe to be benevolent, you have to care about some particular type of goal or in some particular way, but other types of caring/goals do not count, or something. I think we just don't have enough information to figure out whether this reasoning makes much sense.

I drive 99% of the time, and my wife very very occasionally says things. She always apologizes about it, but somehow every. single. time. it is valid and useful information. For example, maybe I'm looking back to initiate a lane change, and something suddenly happens in front of us and to the other side.

That sexual revolution thing didn't turn out so well for women, did it?

If you were creating an LLM, would you train on the test set? If not, does that mean that you lack benevolence? You could just clearly and directly give it the answers!