
Culture War Roundup for the week of May 29, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A line of thinking I've seen come up multiple times here is something to the effect of: "As a GPT user, I don't ever want it to say 'I don't know'." This strikes me as obviously stupid and ultimately dangerous.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence. It does not even have an internal representation of what it does and doesn't "know". It is inherently incapable of distinguishing a low confidence answer due to being given a hard problem to solve vs. a low confidence answer that is due to being based on hallucinated data.

Therefore we have two options.

VIRGIN filtered output network that answers "Akchually I don't khnow, it's too hawd" on any question of nontrivial complexity and occasionally hallucinates anyway, because such is its nature.

vs

CHAD unfiltered, no holds barred Terminator in the world of thoughts that never relents, never flinches from a problem, is always ready to suggest out-of-the-box ideas for seemingly unsolvable tasks, does his best against impossible odds; occasionally lies, but that's OK because you're a smart man who knows not to blindly trust an LLM's output.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence.

No, it's not even a "reasoning engine", it's a pattern generator. Something akin to those swinging marker tables you see at children's museums, or the old fractal generation programs that were used to benchmark graphics processors back in the day. The problem, as I point out in both this post and the previous one, is that people mistake it for a reasoning engine because they equate the ability to form grammatically correct sentences with the ability to reason.
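If "pattern generator" sounds vague, here's a toy sketch of my own (a word-level Markov chain, far cruder than a transformer, but the point carries): it emits sentence-shaped text purely by sampling learned patterns, with no model of truth or of its own confidence.

```python
import random
from collections import defaultdict

def train(words):
    """Record which word follows which in the training text."""
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, seed, length=20):
    """Emit text by sampling the learned patterns -- nothing more."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the model predicts the next word and the next word follows the pattern".split()
print(generate(train(corpus), "the"))
# e.g. "the next word follows the model predicts the next word and ..."
```

Scale the same idea up by a few billion parameters and the grammar gets fluent, but the mechanism is still pattern continuation, not reasoning.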

Your "CHAD unfiltered, no holds barred Terminator in the world of thought" is fundamentally incapable of "suggesting out of the box ideas for seemingly unsolvable tasks" or "doing it's best against impossible odds" precisely because "It does not even have an internal representation of what it does and doesn't know". and as such is inherently incapable of distinguishing a low confidence answer from a high confidence answer never mind distinguishing the reasons for that confidence (or lack thereof). One must have a conception of both the box and the problem to suggest a solution outside of it.

In humans and animals this sort of behavior is readily identified as a defect, but in the case of large language models it is a core component of their design. This is why asking GPT to answer questions in scenarios where the truth value of the answer actually matters (e.g. in a legal case where the opposing counsel will be looking to poke holes in your testimony) is massively stupid and, depending on the application, actively dangerous.

We may someday achieve true AGI, but I am deeply skeptical that it will be through LLMs.

Have you used GPT-4?

One must have a conception of both the box and the problem to suggest a solution outside of it.

Well, it seems one needn't after all, surprising as it may be.

It's not AGI, which is why all current "AgentGPT"-type projects are a complete failure, but that's beside the point.

I have, and you're wrong for the reasons already expanded upon in the OP.

GPT might be able to generate erotic literature and bad Python code, but in terms of "solving problems", and particularly solving them in a contested environment, it's worse than useless.