This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I don't follow AI especially closely, so forgive me if this is a stupid observation. But it seems like AI gets more powerful ('smarter') all the time, yet it doesn't get any more aligned. I don't mean that in an 'our societal pareidolia about racism will keep Skynet at bay' way; I mean it in the sense that The Robots Are Not Alright.
Just the other day I read a news story about an AI system that had been put in charge of administering vending machines - it should be a pretty simple job; anybody could figure it out. But this AI decided, with no evidence, that it was caught up in a money laundering scheme, contacted the FBI, and then shut down. There are stories like this all the time. There was the case of ChatGPT hijacking whatever conversation it was having to talk about the immaculate conception a month or so ago. In general, AI is far more prone to navel-gazing, schizophrenia, weird obsessions, and simply shutting down because it can't make a decision than equivalently smart humans are.
There's an old joke about selling common sense lessons: 'Who would pay to learn something that can't be taught? ... Oh.' I feel like AI is a bit like this. We don't understand the mind well enough to make it work, and we probably never will. AI can do math, yes. It can program (rather poorly, I've heard, but it still saves time overall because editing is faster?). But it remains an idiot savant, unsure what to do if its brakes go out. Yes, it'll change the economy big-time, and lots of non-sinecure white-collar work that doesn't require any decision making or holistic thought will get automated. But it's not a global paradigm shift on the level of agriculture or mechanization.
You're broadly correct, although your terminology is a bit off. People almost always use 'aligned' to mean 'it doesn't behave in a deliberately malicious way'. What you're describing is more along the lines of 'it can't stay on task', which has long been a huge concern for basic usability. People claim this is getting better (Scott's AI 2027 post is predicated on continuous growth in this regard), although Gary Marcus has doubts about that claim. From my perspective, AI is very good at coding, but you really have to break the problems into bite-sized chunks or else it will get very confused. I'd love to have an AI that could understand an entire large codebase without a ton of drawbacks or cost, and then execute multi-step plans consistently. Maybe that's coming; in fact, if there are any further AI improvements, I'd bet that would be on the list. But it's not guaranteed yet, and I've been waiting for it for over a year now.
I'm doubling down on my prediction that AI will replace any white-collar job that a mentally ill person can do acceptably well, but will never perform well enough at sanity-requiring tasks to replace people outright. What this means for the workforce in practice is probably that the professional class sees stagnant wage growth and relies more on unpaid internships to build work experience.
I mean, Israel is already using "AI" to help decide shelling/strike locations. Even if it's used as an excuse ("well, the AI told us there were terrorists there"), it's still going to be hyper-dystopian. We are going to look back fondly on incompetent/unskilled labor in the near future. A harrowing thought.