This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
https://www.astralcodexten.com/p/ama-with-ai-futures-project-team
My opinion of Scott Alexander continues to crater. I don't know how much of this story is his or the collaborators', but there is a shocking level of naïveté about everything other than AI technical progress. Even there, I don't know enough about AI to comment.
My favorite part is the end where the Chinese AI sells out China, assists a grassroots Chinese pro-democracy group in effecting a coup, democratic elections are carried out, and everyone lives happily ever after.
Yeah, the geopolitics in that story are just cringingly bad fiction. (It's really weird that the "superforecasters" who wrote it don't really seem to understand how the world works?) And I'm guessing the main chart listing "AI Boyfriends" instead of "AI Girlfriends" is also part of Scott's masterwork - he does really like to virtue signal by swapping generic genders in the least sensible ways.
But the important part is the AI predictions, and I'll admit they put together a nice list of graphs and citations. However, I still feel like, with their destination already decided, they were just backfitting all the new data to the same old doomer predictions from years ago - terminal goals, deceptive alignment, etc. LLMs are meaningfully different than the reward-seeking recursive agents that we used to think would be the AI frontrunners, but this AI 2027 report could basically have come out in 2020 without changing any of the AI Safety language.
They have a single appendix in their "AI Goals Forecast" subsection that gives a "story" (their words!) about how LLMs may somehow revert to reward-seeking cognition. But it's not evidence-based, and it is the single most vital part of their 2027 prediction! Oh dear.
I mean, I see it. Women are much larger consumers of smut. Men already have porn; it's women that are craving the emotional connection which AI simulates. Especially when what they're looking for is unreal in the same sense that a lot of porn is unreal.
It'll go both ways for sure but I can absolutely see AI boyfriends being more popular.
Huh. I just thought it was obvious that the frontier of online smut would be male-driven, but now you've made me doubt. Curious to see what the stats actually are.
At the very least there will be less concern about the effects of AI-generated erotica for men versus for women. For instance, compare these two takes from the BBC, one focusing on AI for women, the other focusing on AI for men.
There's still quite a bit of concern in the male-focused article too, though. As usual, it's that weird 'anything a man wants to bang has agency' angle everyone takes with male-focused sexual entertainment technology, where the big worry is the technology feeling underappreciated.
My expectation is that yeah women will use it more (it's more imagination based) and we will eventually discover women have wayyyyyy darker fantasies than men when they think nobody is looking, and then we'll quietly drop the subject and get awkward if anyone brings it up.
One would have thought AI catgirl love interests would indeed be the majority, and yet here we are.
(rDrama is a goldmine of such stories, but that relies on a heavily qualified meaning of "gold").
AI-generated (male-orientated, visual) porn isn't itself intelligent and doesn't need to be.
Sure, but that's not AI girlfriends.
There is no reliable data that I know of, but reporting so far seems to indicate that it is indeed women who most use chatbots this way.
This idiot has no idea what China is like or how Chinese people actually feel about "pro-democracy" movements.
There’s also another ending where we all die/are reduced to slaves. Happy endings often sound fake/gay/cliche so maybe you’d like the more cynical version.
With a new tech, it's hard to even comprehend the direction things will take. But it's a well-regarded consensus that P(doom) is high and rising, so this is an effort to write a fictional story about how "doom" happens. If you can do better, it would be a great contribution to AI safety and alignment.
It's not a well-regarded consensus at all. AI is very likely to be used in malicious ways by the powers that be, and it is very likely to have second-order effects that will make society dumber and people less resilient, but none of it is going to happen in the way the AI safety movement predicts. AGI / ASI / whatever we're calling it today is unlikely to exist, not just in the next 5 years, but in the foreseeable future (I'll bet you money on this).
Care to elaborate? What kinds of things do you think are going to happen differently than the AI safety people think?
For starters: alignment is easy; instrumental convergence doesn't actually happen even in very smart models; and neuralese is a myth.
I agree that alignment is easy in the sense of getting models to understand what we want, but it's far from clear that it's easy in the sense of making models want the same thing. RL models reward hack all the time.
What on earth makes you think instrumental convergence "doesn't actually happen"? It happens all the time, e.g. by reward hacking or sycophancy! It's almost the definition of agency!
Neuralese is a myth? What is that supposed to mean? RL on soft tokens is an active area of research and will almost certainly always work better (in the sense of getting higher rewards) than using hard tokens everywhere.
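For anyone who hasn't seen the distinction spelled out, here's a toy sketch (purely illustrative; real "neuralese" proposals vary, and this is not any lab's actual code). A hard token collapses the model's output distribution to a single embedding; a soft token feeds back a probability-weighted mixture, which is differentiable and can carry information no single vocabulary item can:

```python
import numpy as np

# Toy setup: a tiny embedding table and one decoding step's logits.
# All numbers are made up; this only illustrates the hard/soft distinction.
rng = np.random.default_rng(0)
vocab, dim = 5, 4
E = rng.normal(size=(vocab, dim))   # embedding table, one row per token
logits = rng.normal(size=vocab)     # model's logits for the next token

# Hard token: argmax picks a single row. The choice is discrete, so no
# gradient flows through it, and all runner-up information is discarded.
hard = E[np.argmax(logits)]

# Soft token: feed back the probability-weighted mixture of embeddings.
# It is continuous and differentiable, and the mixture can encode
# information that no single vocabulary token can express.
probs = np.exp(logits) / np.exp(logits).sum()
soft = probs @ E

print("hard token embedding:", hard)
print("soft token embedding:", soft)
```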
Yeah it didn't help. I just try to remind myself that it's OK for someone to be great at some things and hilariously, hopelessly naive about others.
Is it possible that AGI happens soon, from LLMs? Sure, grudgingly, I guess. Is it likely? No. Science-fiction raving nonsense. (My favorite genre! Of fiction!)
Scott's claim that AGI not happening soon is implausible because too many things would have to go wrong is so epistemically offensive to me. The null hypothesis really can't be "exponential growth continues for another n doubling periods." C'mon.
I genuinely don't understand how you can say it's plausible to happen at all, but sci-fi nonsense to consider likely. By and large, probability is in the mind, and "sci-fi" is usually a claim about the content of a belief rather than the confidence held in it. It'd be like saying "It's possible that it happens soon, but it's raving sci-fi nonsense for you to be worried about it."
Plausible to happen at all: intelligence can be created - humans exist. It doesn't follow that it can be created from transistors, or LLMs, or soon - but these are all plausible, i.e. p > epsilon. They are all consistent with basic limits on physics and information theory afaik.
Science-fiction raving nonsense: but, there is absolutely insufficient reason to be confident they are going to happen in the next few years, or even the next few decades. Such beliefs are better grounded than religion, but unclear to me if closer to that or to hard science. They most resemble speculative science fiction, which has discussed AI for decades.
Probability is in the mind: I disagree. Probability is a concrete mathematical concept, used in many mundane contexts every day. Even the rat sense of the word ("90% confident that X") is reasonably concrete: a person (or process or LLM) with a well-calibrated (high correlation) relationship between stated probabilities and occurrence frequency should be trusted more on further probabilities.
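To make "well-calibrated" concrete, here's a minimal sketch (the forecasts are made up purely for illustration): bucket a forecaster's stated probabilities and compare each bucket's stated confidence to the observed frequency of the predicted events.

```python
from collections import defaultdict

# Hypothetical (stated probability, did it happen?) pairs, invented for
# illustration; a real check would use a forecaster's actual track record.
forecasts = [(0.9, True), (0.9, True), (0.9, False), (0.6, True),
             (0.6, False), (0.1, False), (0.1, False), (0.1, True)]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[p].append(happened)

# A well-calibrated forecaster's 90% bucket should come true ~90% of the time.
for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {freq:.0%} over {len(outcomes)} predictions")
```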
Out of interest, do you think that a mars base is sci-fi? It's been discussed in science fiction for a long time.
I think any predictions about the future that assume new technology are "science fiction" pretty much by definition of the genre, and will resemble it for the same reason: it's the same occupation. Sci-fi that isn't just space opera, i.e. "fantasy in space", is inherently just prognostication with plot. Note stuff like Star Trek predicting mobile phones, or Snow Crash predicting Google Earth: "if you could do it, you would, we just can't yet."
Some people need a refresher on sigmoid functions. I've thought this for a long time about singularity believers.
Early in the process it looks exponential and projecting forward a few periods gives implausible results. "At this rate of bacterial growth, the entire universe will be this bacteria in a few months" level of silliness. Obviously that doesn't happen and instead some physical constraint is reached and the system slows in growth and flattens off over time.
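To see how indistinguishable the two regimes are early on, here's a quick sketch (arbitrary parameters, purely illustrative): a logistic curve tracks its matching exponential almost exactly until it approaches the carrying capacity, then flattens off.

```python
import math

# Logistic growth L / (1 + exp(-k*(t - t0))) vs. the naive exponential
# extrapolation L * exp(k*(t - t0)). Parameters are arbitrary.
L, k, t0 = 1.0, 1.0, 10.0

def logistic(t):
    return L / (1 + math.exp(-k * (t - t0)))

for t in range(0, 21, 2):
    naive = L * math.exp(k * (t - t0))  # what "projecting forward" assumes
    print(f"t={t:2d}  logistic={logistic(t):.5f}  naive-exponential={naive:.5f}")

# Early on (t << t0) the two agree to several decimal places;
# by t=20 the naive extrapolation is off by a factor of ~22,000.
```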
"At this rate of growth, the entire lake will be this algae in a few days". "Ludicrous silliness!"
The point is we don't have a clue where the sigmoid will level off, and there doesn't seem to be a strong reason to think it'll level at the human norm, considering how different AI as a technology is from brains. To be clear, I can see reasons why it'll level below the human norm; lithography is a very different technology from brains, and it sure does look like the easily Moore-reachable performance for a desktop or even datacenter deployment will sigmoid out well below human brain scale. But note how that explanation has nothing to do with human brains as a reference point. If things go a bit differently and Moore keeps grinding for a few more turns, or we find some way to sidestep the limits of lithography, like a much cheaper fabrication process leading to very different kinds of deployment, or OpenAI goes all in on a dedicated megatraining run with a new continuous-learning approach that happens to work on the first, second, or third try (their deployed capacity is already around a human brain), then there's nothing stopping it from capping out well above human level.
What little technical discussion I caught was in "not even wrong" territory.
And everyone clapped and a man came up to me and handed me a crisp hundred dollar bill. That man's name? ChatGPTeinstein.
You're talking about this passage?
What's your objection? I think this paragraph makes clear that this isn't really an organic phenomenon; it's humans being memetically hacked by AI systems. We're long past the point in the story where they "are superhuman at everything, including persuasion, and have been integrated into their military and are giving advice to the government." And the Chinese AGI had been fully co-opted by the US AGI at that point, so it was serving US interests (as the paragraph above again makes clear).
I'd also flag that you're probably not the only (or even the main) audience for the story - it's aimed in large part at policy wonks in the US administration, and they care a lot about geopolitics and security issues. "Unaligned AGIs can sell out the country to foreign powers" is (perversely) a much easier sell to that audience than "Unaligned AGIs will kill everyone."
But there's no reason that the US and Chinese AI should agree to give the victory to "democracy" (which is a fake veneer over the true control of the world by the AIs) rather than "communism" (which would also be a fake veneer over the true control of the world by the AIs).
Indeed, why resolve everything in favour of the US rather than China, so long as all potential conflicts are resolved in order to keep the peace and not interrupt the control of the AIs? Maybe they could switch off every so often; DeepCent-2 wins for China this year, OpenMind wins for the US next year. It's The Culture in reality and if the human pets imagine they have any real say, it's so cute how they could almost be mistaken for sapient, isn't it?
It's a beautiful example of the Whig version of history, where Whiskey! Sexy! Democracy! are just so gosh-darn self-evidently better that naturally it all wins out in the end as the chosen system of totalitarian authoritarian global control by superintelligences manipulating humanity like puppets.
Er...
It's just dumb, and displays a gross ignorance/lack of understanding of algorithmic behavior.
And the efficacy of 'memetic hacking.'
Propaganda has been a tool for millennia. Tailored systemic propaganda has been a state practice for centuries. It still has yet to demonstrate the level of social/political control that advocates predict or require for other predictions.
This may, indeed, be an argument tailored to certain bureaucratic political interests... but a key lesson of the last decade of politics has been the increasingly clear limits of political propaganda in changing positions, as opposed to encouraging pre-existing biases. And in the US in particular, many of the policy makers most convinced of the value of systemic propaganda are also in the process of being replaced by the previous targets of systemic propaganda campaigns.
I feel like it's another one of those midwit bell-curve memes. The low-information take is that if you're going to peddle propaganda/bullshit, at least make it a Studio Ghibli meme. The "midwit" take is that, as very serious people thinking seriously about serious topics, you (the public) need to take our ideas very seriously. Meanwhile, the high-information take is that engaging seriously with propaganda/bullshit is a waste of time, but Studio Ghibli memes are fun.
If, as doglatine suggests, this is all propaganda targeted at US admin officials who, in exchange for backing the policies the AI doomers want implemented, want to hear that the USA will win out in the end over the dirty Commies, then it makes a lot more sense than the naive "of course democracy will blossom and even the Chinese AI will push it" fairytale ending.
I have to say that very much feels like a Disney fairy-tale ending. The good girl (here: the US) did her work (here: solved alignment) and gets rewarded (with the trivial reward of gaining global dominance), while the bad girl (here: the CCP) did not do her work and gets punished.
It seems to be targeting the median six-year-old, but perhaps there is some overlap with US policy wonks.
The way this story is actually going to turn out is that China, by not caring about alignment, is the first to summon ASI. Then the ASI is either aligned-by-default (in which case we will have more red and fewer stars on the flags when we settle the galaxy), or it is unaligned and will decide that it requires the atoms which make up our world for something else. There is no moral except "coordination failure is bad", but that is something you need a median ten-year-old to understand.
The ASIs engineer China to adopt democracy, but what does that even mean? The centralized AIs have already shown that they can manipulate the public in whatever way they want; does anyone expect them to stop their manipulations at that point? (Nor are these manipulations necessarily evil; they just come from the fact that if you have a lot of policy power and a lot of foresight, you can't help but notice the electoral consequences of your choices. Any decision branch which ends with "and then The People will vote an anti-AI party into power, and will proceed to settle policy the 19th-century way" will not be considered by even the most aligned AI. A fig leaf of voting (for what? A figurehead politician? A utility function?) does not change the fact that such a system would be much closer to China's vision of the state than to the vision of the founders.)
But if the best way to get the US to care about alignment is "unaligned AI is a national security risk", then whatever.
If he meant AI controlled technocratic shadow-totalitarianism, he should have said so. I agree it seems silly to think that democracy could exist in such a scenario. I wonder why he didn’t address this? But beyond not addressing it, he’s specifically saying democracy wins.
It sounds like something Peter Zeihan would say after accidentally ingesting DMT.