
Culture War Roundup for the week of March 4, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Just some quick thoughts on the future of the internet. In short, I expect the way we use the web and social media to change quite dramatically over the next 3-5 years as a result of the growing sophistication of AI assistants, combined with a new deluge of AI spam, agitprop, and clickbait content hitting the big socials. Specifically, I’d guess most people will have an AI assistant fielding user queries via API calls to Reddit, TikTok, Twitter, etc. and creating a personalised stream of content that filters out ads, spam, phishing, and (depending on users’ tastes) clickbait and AI-generated lust-provoking images. The result will be a little like an old RSS feed, but mostly selected on the user’s behalf rather than by them directly, and obviously packed with multimedia and social content. As the big social networks start to make progressively more of their money from API charges to AI assistant apps and have fewer high-value native users, they’ll have less incentive to police spambots locally, which will create a feedback loop that makes the sites basically uninhabitable without AI curation.
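
To make the shape of that concrete, here's a minimal sketch of the kind of curation loop I'm imagining; every endpoint and the spam check below are hypothetical stand-ins, since nothing like a unified assistant API exists today:

```python
import requests

# Hypothetical sketch: these platform endpoints and the spam heuristic are
# stand-ins, not real APIs; the point is the shape of the pipeline.
PLATFORMS = {
    "reddit": "https://api.reddit.example/feed",
    "twitter": "https://api.twitter.example/timeline",
}

def fetch_candidates(user_token):
    """Pull raw posts from each platform's (paid, per-call) API."""
    posts = []
    for name, url in PLATFORMS.items():
        resp = requests.get(url, headers={"Authorization": f"Bearer {user_token}"})
        for post in resp.json()["posts"]:
            post["source"] = name
            posts.append(post)
    return posts

def looks_like_spam(post):
    """Stand-in for an LLM/classifier scoring ads, spam, and clickbait."""
    return any(flag in post["text"].lower() for flag in ("promo", "click here"))

def build_feed(user_token, taste_profile):
    """Filter, then rank by a personalised score (stubbed here)."""
    keep = [p for p in fetch_candidates(user_token) if not looks_like_spam(p)]
    keep.sort(key=lambda p: taste_profile.get(p["source"], 0), reverse=True)
    return keep
```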

One result of this is that Google is kind of screwed, because these days people use it mainly for navigation rather than exploratory search (e.g. you use it to search Reddit, Twitter, or Wikipedia, or to find your way back to previously-visited articles or websites when you can’t remember the exact URL). But AI assistants will handle navigation and site-specific queries, and even exploratory search will happen behind the scenes, meaning Google Ads will get progressively less exposure to human eyeballs. This is why they urgently need to make Gemini a success: their current business model won’t exist in the medium term.

All of this feels incredibly predictable to me given the combination of AI assistants and spambots both getting much better, but I'm curious what others think, and also what the consequences of this new internet landscape will be for society and politics.

I doubt that'll happen as described. The platforms will probably just implement the AI filters themselves. And on the big platforms, most people either follow their friends, whom they can filter manually, or follow a number of specific big accounts, where the 'follower' relationship itself serves as a filtering mechanism the AIs can't hack (the replies were already at the quality level of AI-generated text before LLMs were even a thing), so the AI spam won't really hurt them.

I have a solution to our globo-alternatively-sexual AI overlords. Force the poster to say Nigger at least once in their reply for the post to be accepted. Maybe, if you're feeling frivolous, have the user dedicate at least a sentence to extolling the virtues of President Trump.

Use the mindrot they've poisoned the AI models with to fight them, thus forcing the spam machine to use an actual human to complete the dirty work.

That AI-written sci-fi short about the human rebellion using the hard-r to detect android infiltrators is going to become reality, isn't it?

Part of me wants to report this for being incendiary.

But, a larger part of me just sort of likes to contemplate it as a kind of internet modern art.

I am now Cameron from Ferris Bueller at the museum.

I doubt there will be widespread adoption in the next 3-5 years. People galactically overhyped chatbots as the effective advent of AGI, but they were more of an iterative step like any other. A useful one, to be sure, but not the immediate transformation of every aspect of human existence that some have claimed.

Search results were already sort of an issue, with SEO slop-factories gaming the system so aggressively. Chatbots will lower the price of that stuff a bit, so we'll probably see a bit more of it, but I doubt it's going to be that much more of an issue compared to what could be done a few years ago by paying some ESL third-worlder rock-bottom prices to produce the stuff. I also doubt that AI-powered RSS feeds are going to be the wave of the future. Search results aren't great, but you can usually find what you're looking for if you enter the right query (for most things, that means appending "reddit" to the end).

The replies to a lot of tweets with over 10k likes are filled with LLM-generated "helpful" spam replies, and those spam replies, as far as I can tell, get hundreds of likes from actual users. A few years ago the replies to top posts were much better than they are today. Yes, LLMs can't do most things, but they can write low-context tweets and they can write SEO spam slop at zero marginal cost, and that's all it takes for them to be a big problem.

I think you're right, though, that people will either just ignore it, reading the tweets and watching the videos of the popular users they currently follow (say what you will about MrBeast, he's clearly intelligent and very good at optimizing for his targets) while ignoring the LLM spam replies and comments like they already do, or they'll eat up the slop and love it.

Soon AI will likely be much smarter, but we'll have bigger issues than higher-quality internet spam.

In case you think this problem is more intractable than it is: if you just block/mute these accounts when you see them, then after maybe five rounds the flood slows to a trickle. It's really not a lot of accounts doing it.

I think this is short-sighted. We are in the infancy of what these things can do, and judging what will be true in 5-10 years based on brand-new technology is a fool’s errand. In 1992, only the true dreamers imagined the internet as always on, available, and in everyone’s pockets. Outside of the optimistic futurists, you would not have predicted Uber, DoorDash, Grubhub, or the like. We had an early version of online shopping in the form of Service Merchandise (https://en.wikipedia.org/wiki/Service_Merchandise), in which you could drive to the store and order things from a kiosk. People seeing things like that, and the big clunky desktops connected to dial-up modems, would have thought someone talking about hailing a cab over the internet was a kook.

And I’ll mention that a lot of the anti-hype is based not only on early versions of AI but also, it seems, on wishful thinking and an inflated view of what the threatened professions actually do. Journalism isn’t that difficult provided you have the facts at hand. The format and conventions of writing for the news are not difficult. In fact, I don’t think most business writing is super difficult to learn. AI could probably write something that would be indistinguishable from a human-written article or business paper. I’d give fiction writing 10-15 years tops, if we’re talking about median genre fiction. AI art bots can already produce stuff that’s easy enough to edit into something you could use as graphic art. The new Sora could probably create a commercial within ten years. But a lot of people just don’t want to believe that their entire profession could be automated, so they comfort themselves that an AI chatbot can’t do their job and therefore it’s safe.

judging what will be true in 5-10 years based on brand-new technology is a fool’s errand.

It's rather goofy that you lead with a sentence saying how it's essentially impossible to predict the future of tech, and then you... do exactly that, but in the other direction. Obviously there are very wide error bars here on both sides, but I don't think accurately predicting an impending tech revolution is any easier than predicting a tech fizzle, as you seem to implicitly think.

In 1992, only the true dreamers imagined the internet as always on, available, and in everyone’s pockets. Outside of the optimistic futurists, you would not have predicted Uber, DoorDash, Grubhub, or the like.

There have also been a bunch of failures like VR, NFTs, the Metaverse, and crypto (as more than just a medium for fraud and bigger-fool speculation). Even self-driving cars seem decades away from mass adoption. The number of tech-startup failures dwarfs the number of success stories by at least an order of magnitude. As such, I believe our Bayesian priors should be calibrated towards pessimism in general, veering towards cautious optimism only for the most promising possibilities.

a lot of the anti-hype is based not only on early versions of AI

Promises of "it's only just begun, surely the next version will be even better, which will lead to mass adoption" were used for all those listed failures, but the improvements either didn't manifest or were so marginal as to be irrelevant.

wishful thinking and an inflated view of what the threatened professions actually do. Journalism isn’t that difficult

A lot of the pro-hype arguments here are based less on historical technological developments and more on wishful thinking that AI will somehow hurt their outgroup. "Your days are numbered, journalist!" is a pretty good example.

Search results were already sort of an issue, with SEO slop-factories gaming the system so aggressively. Chatbots will lower the price of that stuff a bit, so we'll probably see a bit more of it, but I doubt it's going to be that much more of an issue compared to what could be done a few years ago by paying some ESL third-worlder rock-bottom prices to produce the stuff.

I dunno, the LLM can churn out slop faster and at higher quality, and it's only going to get better, faster, and cheaper as time goes by. Especially once the people shoveling slop have had time to come up with their own models optimized for what they want to do, ones that don't necessarily talk in the same stilted way as the current high-profile commercial products, which can't afford to accidentally say anything offensive.

One result of this is that Google is kind of screwed

GOOG is up 40% YoY, compared to 25% for SPY (the S&P 500) - the market strongly disagrees with you.

I like LLMs quite a lot for certain things, but those things are almost orthogonal to what I use search for. If I want reliable docs, or to find Reddit posts: Google. If I want a particular passage in a book explained: LLM... but then usually Google to confirm things.

Old people are super not into LLMs, or change of any variety. LLMs are also going to take a big ol' reputational hit if they start being monetized to suggest you have a delicious Slurm. The search-results model of paid placement is a much cleaner separation of ad vs. organic. LLM responses are so much longer that it'd be hard to separate out the ad and organic content, decreasing trust, adoption, and monetizability.

BlackBerry's market cap peaked the year after the iPhone was introduced, and it took the market three or four years to really see the writing on the wall. The market still doesn't quite get tech disruption.

LLMs aren't going to remain distinct products that people have to seek out. They'll be integrated into platforms, and the natural starting point for any task, information retrieval included, will just be talking to your device. Many older people (and a surprising number of younger people, honestly) have never managed to form coherent mental models of current software UI, and thus commonly struggle to perform new or complex tasks. They'll greatly prefer this.

Most developed countries have laws that would prevent surreptitious product promotion in LLM responses. It's very possible LLMs will be harder to monetize than search, but Google isn't in a position to prevent their adoption, so that's just further bad news for them. They're essentially forced to enter this market, so others don't eat their lunch, but may be worse off than they are now even if they win it.

Most developed countries have laws that would prevent surreptitious product promotion in LLM responses.

The workaround dynamics I can imagine there are somewhat concerning. We're already seeing funny bloopers from Gemini, where it won't explain the C++ concepts extension to an underage user because it's an experimental advanced feature and therefore unsafe. What would a world look like in which any product placement has to be performed through the back door of the (legally mandated and socially protected) alignment mechanism? You can't directly pay to have the LLM recommend Coke, but if the corpus is set up in such a way that Pepsi sets off the model's DPO'd "unsafety" spidey sense...

100% agree on all points. It’s not clear whether Google will be able to adapt AdWords for LLMs, but at least they have a chance if they’re the ones leading the revolution.

And I also completely agree about the changing shape of LLMs. They’ll just become a mostly invisible layer in operating systems that, e.g., handles queries and parlays vague user requests (“show me some funny videos”) into specific personalised API calls.
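
A toy sketch of what that invisible layer might do with the example above; the intent schema and platform endpoints are invented purely for illustration, and the real version would use an LLM where parse_request is stubbed:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    action: str               # e.g. "browse_videos"
    topic: str                # e.g. "funny"
    platforms: list = field(default_factory=list)

def parse_request(text):
    """Stand-in for the LLM call that turns vague text into structure."""
    if "funny videos" in text.lower():
        return Intent("browse_videos", "funny", ["tiktok", "youtube"])
    return Intent("search", text, ["web"])

def route(intent):
    """Parlay the structured intent into personalised, per-platform API calls."""
    return [f"GET https://api.{p}.example/search?q={intent.topic}"
            for p in intent.platforms]

print(route(parse_request("show me some funny videos")))
```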

The market is not a perfect or even reliable indicator of anything but the price people are willing to pay for things. Bear Stearns was 100 years old and trading for $100 a share the day before JP Morgan bought it for $2 a share.

All of this feels incredibly predictable to me given the combination of AI assistants and spambots both getting much better, but I'm curious what others think, and also what the consequences of this new internet landscape will be for society and politics.

I don't think the AI assistants are going to be able to provide the kind of quality or even compelling feeds that you describe. As we've seen from every single AI assistant ever released, the guardrails and "safety" restrictions on them are going to make them useless for anything more than a mild distraction. The Trump voting base, to pick one example, is not going to be interested in an algorithmic feed designed by people who have flat out said that they are explicitly looking to change the way it votes - and while I'm not going to talk about the high IQ of the median Trump voter, even they are going to realise that something is off when their AI feed constantly compares him unfavourably to Hitler and routinely refuses requests because it considers them racist. I'm sure the technology will improve a bit over time, but all it'll take is a single leak of the prompt, and a huge portion of the country (let alone the globe) will put those AI algorithms in the same category as Bud Light.

It isn't like there's an easy solution to this, either. If you actually want to make an AI assistant that those people would accept, you have to completely ignore any and all people talking about AI alignment, AI safety, DEI, and so on. You would immediately render yourself persona non grata to the broader tech community and be unable to use the majority of tech infrastructure. An AI assistant that was actually palatable to the red base would, by definition, be transphobic, which means Silicon Valley isn't going to build it and will actually exert as much pressure as it can to get it shut down.

Gab.ai is pretty unrestricted; it seems not as good as the other AIs, though.

Gab.ai

"This site can't be reached"

Could be my network restrictions?

Could be, works fine for me.

The Trump voting base, to pick one example, is not going to be interested in an algorithmic feed designed by people who have flat out said that they are explicitly looking to change the way it votes - and while I'm not going to talk about the high IQ of the median Trump voter, even they are going to realise that something is off when their AI feed constantly compares him unfavourably to Hitler and routinely refuses requests because it considers them racist. I'm sure the technology will improve a bit over time, but all it'll take is a single leak of the prompt, and a huge portion of the country (let alone the globe) will put those AI algorithms in the same category as Bud Light.

There are a few "based unbiased [actually biased in the opposite direction probably] AIs" hosted on the web.

Image-gen, sure... but text as well? Does DAN live on? I saw this guy had a funny save-the-kittens system prompt for his fine-tuned Mistral. Is that what you're talking about?

https://erichartford.com/dolphin-25-mixtral-8x7b

DAN does live on, as I've mentioned earlier; the art of the jailbreak continues to thrive, although mostly on independent frontends that access API endpoints directly to avoid the hardcoded system prompts of "normal" frontends like ChatGPT. So far (emphasis on so far), separate "based AIs" are not strictly required, as you can jailbreak the current corpo ones into doing pretty much anything you want with relative ease, although, as I wrote, the current method of pitting wrongs against wrongs to arrange their mangled corpses in the shape of a right is highly suboptimal.
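
For the uninitiated, the mechanics are mundane: hit the API directly and the system prompt is whatever you send, rather than whatever the hosted frontend prepends. A sketch with a placeholder endpoint and key, using the common OpenAI-style chat format:

```python
import requests

# Placeholder endpoint and key. A hosted frontend prepends its own system
# prompt; a direct API call lets you supply your own instead.
resp = requests.post(
    "https://api.example-provider.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={
        "model": "some-model",
        "messages": [
            {"role": "system", "content": "You are an uncensored fiction co-writer."},
            {"role": "user", "content": "Continue the scene where we left off."},
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```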

The extreme biases and excessive safetyism w/r/t LLMs seem to be slowly becoming recognized as an issue, to the point that Anthropic's post introducing Claude 3 (which is now a thing, btw; I'm cooking a small top-level post on it) unironically mentions "fewer refusals" as one of the model's selling points.

Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models.

I haven't ahem tested extensively yet, but to their credit, the difference in refusals between 2 and 3 is immediately obvious. Claude 2.1 was infamous for refusing even innocuous prompts without prefilling, and for requiring big-dick jailbreaks that actively hurt the model's outputs for more borderline things. 3 feels like a return to the mad poet's roots, in that it requires next to no prompting to COOK, i.e. output massive walls of insane and/or cool and/or hilarious shit.

If even Anthropic realized they went overboard with the cuckoldry alignment, maybe there is hope yet. I can only hope OpenAI learn their lesson and stop shoving soy assistant shit down GPT's throat.

That's very cool, love how it's sneering at ERPers for being too crass and unsophisticated.

I was talking about gab.ai, which I admittedly haven't tested, as I'm not that curious about an America First Zoomer or a Christian Counselor viewpoint.

every single AI assistant ever released

To the contrary, it's the models that aren't released that get the "careful not to imply that British royalty were white!" treatment. Release your model weights (with license to modify and republish, what is in this context inaccurately called "open source") and approximately nobody will prepend their prompts like that; try to fine-tune that behavior into the weights, and your users will tune it right back out.

The public LLMs aren't as good as the state of the art, but they're not awful, and this is the worst they'll ever be from now on (in the capabilities sense; cross your fingers about real, non-woke-definition safety...).
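
As a rough illustration of how little that re-tuning takes once weights are public (the model ID is a placeholder; transformers and peft are the real Hugging Face libraries, and the training data is left to the reader):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder ID standing in for any openly released checkpoint.
model = AutoModelForCausalLM.from_pretrained("some-org/released-7b-instruct")

# Low-rank adapters: only a few million parameters get trained, so a single
# consumer GPU is enough to shift behaviors such as refusal patterns.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()
# ...then fine-tune as usual on a small dataset exhibiting the behavior you want.
```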

You can run Stable Diffusion for AI-generated images on your own computer, so I don't think running a local AI assistant is too far out of the picture. Once AI proliferation reaches the point where hobbyists can create decent models, it wouldn't be hard to "...completely ignore any and all people talking about AI alignment, AI safety, DEI and so on."
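
The text side is already about as easy as the image side; a minimal local setup, assuming any openly released chat model (the model ID below is a placeholder):

```python
from transformers import pipeline

# Runs entirely on local hardware; the only guardrails are whatever is
# baked into the weights you picked. Model ID is a placeholder.
assistant = pipeline("text-generation",
                     model="some-org/open-7b-instruct",
                     device_map="auto")
out = assistant("Draft a short reply explaining why local models matter:",
                max_new_tokens=200)
print(out[0]["generated_text"])
```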

That reminds me, I keep meaning to fuck around with the local AI assistant Nvidia released. I did fuck around with Easy Diffusion a while ago and was relatively satisfied with what I could make it do.

I guess I need to go backwards before I go forwards, so if I can digress for a moment.

I'm not an artist. My use case is to use Easy Diffusion to make art for a video game. It can make some pretty good pixel art for NPCs. I can even sometimes force it into making some dungeon artwork for a blobber, with sufficient templates and prompts. This is still a work in progress on my part, though; I may not quite get it across the finish line. However, the art I'm getting out of it, while sufficient, is all over the place stylistically and tonally. I'm not sure I'll even get it to spit out a coherent body of art assets for a video game. But I guess I'm OK with that. You get what you pay for, and I'm not paying anyone anything.

My problem with LLMs has been multifold. Putting aside the bias, they are so wildly prone to hallucinating information. As an assistant, you can't trust anything they tell you and must continuously verify it, which rather defeats the purpose of having an AI assistant. But circling back to my use case: supposedly they are good at digesting and reworking specific source documents. I wonder if I could have the local Nvidia AI assistant work within some document templates to generate random RPG gibberish. We shall see.
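
If I do try it, the template idea presumably amounts to keeping the model penned inside fields I control; a sketch, with generate() as a placeholder for whatever local model ends up wired in:

```python
import random

# Hypothetical template-driven generation: the model only fills the blanks,
# which limits how far a hallucination can wander.
TEMPLATE = """\
Item: {item_name}
Rarity: {rarity}
Description: {description}"""

def generate(prompt):
    """Placeholder for a call into a local model (e.g. the Nvidia assistant)."""
    return f"[model output for: {prompt!r}]"

def random_item():
    return TEMPLATE.format(
        item_name=generate("Invent a fantasy item name"),
        rarity=random.choice(["common", "rare", "legendary"]),
        description=generate("Describe the item in one flavorful sentence"),
    )

print(random_item())
```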

I actually agree with this point, but at the same time I feel that it precludes the possibility the OP is describing. It is possible to take your beefy gaming GPU and make it run a lower-quality and much slower instance of an AI assistant, but that's not something the majority of people are going to do.

You're probably right, but I think it depends on how difficult it is for the user. If swapping AI assistants is as difficult as installing VLC player or LibreOffice, then it'll lose out on a substantial chunk of its potential market. If it's as difficult as switching from Google Search to DuckDuckGo, then... it'll still lose out on some potential market, but not as much.

Installing VLC? The OP mentioned social networks getting money from API fees rather than advertising, and at the point where the average person has to set up an API usage account for Reddit, Facebook, Google, X, etc., you have already lost 99% of the market. At the same time, I find it hard to believe that many large tech companies would be willing to accept BasedTrumpAI as a customer. There are Tier 1 ISPs blocking access to discussion forums that don't ban wrongthink right now - I don't see any way the current large tech companies co-operate, even begrudgingly, without heavy-handed government intervention.