Culture War Roundup for the week of March 4, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Just some quick thoughts on the future of the internet. In short, I expect the way we use the web and social media to change quite dramatically over the next 3-5 years, as a result of the growing sophistication of AI assistants combined with a new deluge of AI spam, agitprop, and clickbait content hitting the big socials. Specifically, I'd guess most people will have an AI assistant fielding their queries via API calls to Reddit, TikTok, Twitter, etc. and creating a personalised stream of content that filters out ads, spam, phishing, and (depending on users' tastes) clickbait and AI-generated lust-provoking images. The result will be a bit like an old RSS feed, but one curated on the user's behalf rather than assembled by them directly, and obviously packed with multimedia and social content. As the big social networks start to make progressively more of their money from API charges to AI assistant apps and have fewer high-value native users, they'll have less incentive to police spambots themselves, which will create a feedback loop that makes the sites basically uninhabitable without AI curation.
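To make the curation idea concrete, here's a minimal sketch of what such an assistant's filtering layer might look like. Everything here is hypothetical: the `Post` type, the `curate` function, and the keyword-based `looks_like_spam` stub are illustrations only; a real assistant would pull posts from platform APIs and score them with an LLM classifier rather than a keyword list.

```python
# Hypothetical sketch of an AI assistant curating a personalised feed.
# A real version would fetch posts via platform APIs and classify them
# with an LLM; the keyword stub below just stands in for that classifier.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Stand-in heuristics for what an LLM-based spam classifier would learn.
SPAM_MARKERS = ("buy now", "click here", "hot singles")

def looks_like_spam(post: Post) -> bool:
    """Stub classifier: flag posts containing obvious spam phrases."""
    text = post.text.lower()
    return any(marker in text for marker in SPAM_MARKERS)

def curate(posts: list[Post], muted: set[str]) -> list[Post]:
    """Build the personalised stream: drop muted authors and spam."""
    return [p for p in posts if p.author not in muted and not looks_like_spam(p)]

feed = [
    Post("alice", "Long thread on the economics of API pricing."),
    Post("spambot42", "Click here for hot singles in your area!"),
    Post("bob", "New RSS reader release notes."),
]
print([p.author for p in curate(feed, muted=set())])  # → ['alice', 'bob']
```

The interesting design question is where the classifier runs: on-device (private, but limited) or in the assistant vendor's cloud, which just relocates the curation power from the platforms to the assistant providers.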

One result of this is that Google is kind of screwed, because these days people use it mainly for navigation rather than exploratory search (eg you use it to search Reddit, Twitter, or Wikipedia, or find your way to previously-visited articles or websites when you can’t remember the exact URL). But AI assistants will handle navigation and site-specific queries, and even exploratory search will be behind the scenes, meaning Google Ads will get progressively less and less exposure to human eyeballs. This is why they urgently need to make Gemini a success, because their current business model won’t exist in the medium-term.

All of this feels incredibly predictable to me given the combination of AI assistants and spambots both getting much better, but I'm curious what others think, and also what the consequences of this new internet landscape will be for society and politics.

I doubt there will be widespread adoption in the next 3-5 years. People galactically overhyped chatbots as the effective advent of AGI, but they were more of an iterative step like any other. A useful one, to be sure, but not the immediate transformation of all aspects of human existence that some have claimed.

Search results were already something of a mess from SEO slop-factories gaming the system so aggressively. Chatbots will lower the cost of producing that stuff, so we'll probably see somewhat more of it, but I doubt it'll be that much worse than what could already be done a few years ago by paying some ESL third-worlder rock-bottom prices to churn it out. I also doubt that AI-powered RSS feeds are going to be the wave of the future. Search results aren't great, but you can usually find what you're looking for if you enter the right query (for most things, that means appending "reddit" to the end).

The replies to a lot of tweets with over 10k likes are filled with LLM-generated "helpful" spam replies, and those spam replies, as far as I can tell, get hundreds of likes from actual users. A few years ago the replies to top posts were much better than they are today. Yes, LLMs can't do most things, but they can write low-context tweets and SEO spam slop at essentially zero marginal cost, and that's all you need for it to be a big problem.

I think people will either just ignore it, reading the tweets / watching the videos of the popular users they currently follow (say what you will about MrBeast, he's clearly intelligent and very good at optimizing for his targets) and ignoring the LLM spam replies / comments like they already do, or they'll eat up the slop and love it.

Soon AI will likely be much smarter, but by then we'll have bigger issues than higher-quality internet spam.

In case this problem seems more intractable than it actually is: if you just block/mute these accounts when you see them, after maybe five rounds the flood slows to a trickle. It's really not a lot of accounts doing it.