This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I disagree: LLMs remain pretty terrible at any task requiring strict precision, accuracy, and rigor. And from what I understand of the underlying mechanisms, this is unlikely to be resolved anytime soon.
Imagine the full range of legal opinions that exist on the internet: intelligent, retarded, and everything in between. Now imagine what the average of that mass of opinions would look like. That's effectively what you're getting when you ask an LLM for legal advice. For some traditionally wordcel-oriented tasks like "summarize this text" or "write an essay about ____", this is more than adequate, perhaps even excellent. But for an application requiring a clear and correct answer that isn't necessarily the average/default (i.e. the kind of thing a "shape-rotator" might be hired to calculate), LLMs are worse than useless, because they give you something that looks plausible but may very well be completely wrong, and you will still have to take the time to work out the correct answer yourself, if only to verify it.
This just isn't a good model of how LLMs work. If it were doing some naive averaging of all the text it was trained on for a subject, shouldn't it randomly insert words in Spanish or Chinese? But it doesn't. If you ask an LLM whether it's a man or a woman (one without "as an AI language model" post-training), it doesn't present itself as the hermaphroditic average of the people described in its training set, it chooses one and at least tries to stick to its answer. Now, either way it's incorrect, obviously, but it's clearly not an average; a mode, perhaps. But it doesn't just naively take the mode either: If you ask it whether Harry Potter is a real person it will correctly tell you he's fictional, despite the overwhelming majority of the text concerning Harry Potter -- How many billions of words of Harry Potter fanfiction are there? -- treating him as real.
A lot of people argue that LLMs are incapable of understanding context or judging the quality of sources, but that's just... obviously untrue? Ask Gemini whether magic is real, and it'll tell you about sleight of hand and historical beliefs about witchcraft, but conclude the answer is very likely 'no.' Ask it what the spell Create or Destroy Water does and it'll quote the 5th edition rulebook. It understands what was meant by each question perfectly. And it does understand: respond to the second with 'But magic isn't real, right?' and it'll explain the implied category error as well as you could wish.
It's not that it doesn't learn the incorrect ideas in its training set -- tell it to emulate a Yahoo Answers poster and it can do so -- it just also learns contextual information about those ideas (such as that they're false) much as we do. Tell it you want a good answer (which is largely what post-training does) and it'll know to discount those sources. It doesn't do so perfectly, but the notion they lack the capacity altogether is not credible.
Regarding @dr_analog's point:
This is true as far as I know; did you actually try it? LLMs are bad at tasks requiring strict precision, accuracy, and rigor that can't be objectively and automatically judged. There's a huge disconnect between performance on math/coding, where it's trivial to generate good/bad responses for DPO and similar post-training, and subjects like law, where it isn't. @dr_analog is right: LLMs are currently much better at math and coding than they are at essay writing, purely because of how easy it is to generate high-quality synthetic data for the former.
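To make the synthetic-data point concrete, here's a rough sketch in Python of why math is the easy case: a script can mechanically produce verified 'chosen'/'rejected' pairs for DPO-style preference training, with no human grader in the loop. The function name and record format here are made up for illustration, not any particular library's API.

```python
import random

def make_arithmetic_preference_pair():
    """Build one DPO-style preference example for a simple arithmetic task.

    The 'chosen' response is verifiably correct; the 'rejected' one is a
    plausible-looking wrong answer. No human labeling required.
    """
    a, b = random.randint(10, 99), random.randint(10, 99)
    prompt = f"What is {a} * {b}?"
    correct = a * b
    # A plausible wrong answer: nudge the correct result by a small amount.
    wrong = correct + random.choice([-100, -10, -1, 1, 10, 100])
    return {
        "prompt": prompt,
        "chosen": f"{a} * {b} = {correct}",
        "rejected": f"{a} * {b} = {wrong}",
    }

if __name__ == "__main__":
    # Generate a small synthetic preference dataset in seconds.
    dataset = [make_arithmetic_preference_pair() for _ in range(5)]
    for example in dataset:
        print(example)
```

There is no analogous script that can verify "write a sound legal memo", which is exactly the disconnect being described.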
That still fits my experience - I have spent some time mucking about with LLMs, and when I ask one about something I know, it is frequently confidently, even hilariously wrong. It is not aware of any difference between truth and falsehood and will freely mix them together. I want to avoid some kind of AI Gell-Mann Amnesia. When I ask it questions I know the answer to, it consistently prioritises producing something that looks like a confident, helpful, well-written answer, in total agnosticism as to whether or not that answer is true. It surely does the same thing with questions I don't know the answer to. The only sensible course of action is to assign zero credence to anything an LLM says. What it says might be true. Or it might not be. The LLM's word is worth nothing.