Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

If you were creating an LLM, would you train on the test set? If not, does that mean that you lack benevolence? You could just clearly and directly give it the answers!
Of course I lack benevolence towards an LLM. I can be polite to it out of habit, but I wouldn't hesitate to do horrible things to it if that made it work better.
It seems to me that you are saying that you have goals for what you want the end product to be like. As such, I think you're implicitly affirming that you would choose to not do things like train on the test set. That is, you wouldn't just clearly and directly give it the answers, even though you could.
Now, the question seems to me to be, "What do you even mean by benevolence?" You originally said, "Of course I lack benevolence towards an LLM."
But this sort of doesn't make direct sense. You care about the LLM you're creating. You deeply care about it, at least in that you very much care to "ma[k]e it work better". It seems like you're using the words in some other sense that is not fully fleshed out. Maybe to be benevolent, you have to care about some particular type of goal, or care in some particular way, and other types of caring/goals don't count. I think we just don't have enough information to figure out whether this reasoning makes much sense.
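For anyone unsure why "just giving it the answers" defeats the purpose, here is a toy sketch (purely illustrative, not anyone's actual training pipeline): a "model" that memorizes its training data scores perfectly on anything it has seen, which is exactly why a test set it was trained on stops measuring generalization.

```python
# Toy illustration (hypothetical example): memorization vs. generalization.
# A model trained on the test set gets a perfect score that means nothing.

def train(examples):
    """'Train' by memorizing question -> answer pairs."""
    return dict(examples)

def evaluate(model, examples):
    """Fraction of questions the model answers correctly."""
    correct = sum(model.get(q) == a for q, a in examples)
    return correct / len(examples)

train_set = [("2+2", "4"), ("3+3", "6")]
test_set = [("5+5", "10"), ("7+7", "14")]

honest_model = train(train_set)
cheating_model = train(train_set + test_set)  # trained on the test set

print(evaluate(honest_model, test_set))    # 0.0 -- memorization doesn't generalize
print(evaluate(cheating_model, test_set))  # 1.0 -- perfect score, but meaningless
```

The point of holding the test set back is that the score is only informative about unseen data; handing over the answers helps the score, not the model.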