This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Predictions of AI progress hinge on two questions that nobody has convincing answers for
There are three big categories of questions that make up most of the debates about the future of AI:

1. Will we keep making consistent AI progress? That is, can you take the last few years of AI progress, put it on a logarithmic graph, make a straight line that zooms off to infinity, and have that prediction pan out?

2. Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren't intelligence-related?

3. Will we be able to control and align an AI that's massively more intelligent and powerful than we are?
If all 3 questions resolve to “yes”, then we’re on the brink of heaven on Earth.
If questions 1 and 2 resolve to “yes” but question 3 resolves to “no”, then we’re on the brink of our own destruction.
If question 1 resolves to “yes” but question 2 resolves to “no”, then question 3 doesn’t matter and AI will be huge in some areas but essentially worthless in others.
If question 1 resolves to “no”, then neither of the other questions matters and this debate is all a huge waste of time.
My personal estimates of how these will resolve, and the outcomes that follow from them, are the ones I arrive at below: roughly a 20% chance that question 1 resolves to “yes”, and a 40-60 split on question 2.
However, none of my priors here are deeply held. A lot of what I’ve read on LessWrong, /r/slatestarcodex, Substack, The Motte, and other sites focuses on question 3. I think the doomer arguments on this point are relatively convincing: if we develop AI in short order that’s massively more intelligent and powerful than humans, we’d have a hard time controlling it, and there’s a good chance it would end badly. But instead of discussing question 3, I want to talk about questions 1 and 2, which are prerequisites for question 3 being relevant, and which I don’t think people have very good answers for despite often projecting an air of confidence.
Much of the rationalist writing I’ve seen on the topic of AI has been implicitly doing a bit of a motte-and-bailey when it comes to the confidence of their predictions. They’ll often write in confident prose and include dates and specific details, but then they’ll retreat a bit by saying the future is uncertain, that the stories are just vignettes, and that the dates don’t mean anything concrete. Some do the old high school policy debate tactic of racing the impacts to human extinction and using that as their cudgel to justify their alarmism, circumventing the issue of a technically low-probability event by appealing to the finality of all humans dying. Taking an event with a small chance of happening and multiplying by impacts that have a weight of infinity means the entire equation is infinity, after all! I don’t like this as a general rule (high school debate is terrible for actually discerning truth), and the trick loses its power when the writers are explicitly willing to increase the risk of extinction from other events, like a nuclear exchange.
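To spell out the arithmetic being mocked here (my own rendering of it, not a formula any of those writers actually wrote down):

```latex
% Once the downside is treated as literally infinite, the size of the
% probability stops doing any work in the argument:
\[
  \mathbb{E}[\text{harm}] \;=\; p \cdot \infty \;=\; \infty
  \qquad \text{for any } p > 0 .
\]
```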
Most of the discussions on questions 1 and 2 that I’ve read so far have either only touched on minor details, or have been very handwavey. I think the reason they’ve been handwavey is that there really aren’t that many convincing answers people can give in either direction on the core issues, so the writers either discuss side issues and pretend they’ve made profound progress addressing things, or they just gloss over things with arguments they imply are “obvious”, when in reality they are little more than flimsy priors.
Question 1: Will we keep making consistent AI progress?
Here’s question 1 restated:
Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?
I was being a bit cheeky here with the “straight line that zooms off to infinity” bit. AI doesn’t need to get to infinity; it just needs to zoom off to some point where it’s sufficiently advanced that it becomes Very Good. It would need the capacity to automate everything any remote worker could do, although this capacity wouldn’t necessarily need to be actualized for this question to resolve to “yes”.
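To make the move concrete, here’s a minimal sketch of the extrapolation in question. The capability numbers are completely made up (a hypothetical benchmark growing roughly 3x per year), so treat this as an illustration of the method, not a forecast:

```python
# A toy version of the "straight line on a logarithmic graph" extrapolation.
# The capability numbers are invented; nothing here is a real forecast.
import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024])
capability = np.array([1.0, 3.0, 9.0, 27.0, 81.0])  # hypothetical metric, ~3x/year

# Fit a straight line in log space, i.e. assume the growth stays exponential.
slope, intercept = np.polyfit(years, np.log(capability), 1)

for year in (2026, 2028, 2030):
    projected = np.exp(slope * year + intercept)
    print(f"{year}: projected capability ~{projected:,.0f}")
# Whether any real-world metric keeps obeying this line is exactly what question 1 asks.
```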
Some arguments for the bull case, that AI will continue its exponential progress:
Some arguments for the bear case:
While we could have lengthy discussions about each of these points, none of them actually matter that much compared to the viability of the tech. Whether the tech is scientifically achievable in short timeframes is the ground truth that overwhelmingly impacts all other considerations, and the majority of the points I’ve listed in this section only matter if intelligence scaling is at some arbitrary “moderate” level of difficulty. Take VC/R&D funding levels, for instance. If intelligence is a problem that could be solved with “several” years of research, then funding running out in 2 years vs 5 could be decisive. On the other hand, if intelligence scaling is an “easy” level of difficulty from our current technological baseline, then we’re basically guaranteed to find it even if funding dried up tomorrow, as even poorly funded labs or open-source efforts would be able to do it. At the other extreme, if intelligence is “hard” or even “impossible” from our current baseline, then we could plow infinity dollars into the problem and get nowhere! Most surrounding topics, like US-China competition, feedback loops, lack of moat, tech mania, etc., fall under the same category of “maybe it could matter, but it probably won’t in the grand scheme of things.”
Some conversations try to directly address the viability of the tech in a big-picture way, such as whether AI progress will continue the exponential trajectory of the last 2.5 years, or if we’re about to plateau on the sigmoid curve. Unfortunately, excessively broad conversations like this inevitably trend towards being handwavey and unproductive. Perhaps both interpretations are true: exponential progress can be sustained for some amount of time, but we’ll eventually hit severely diminishing returns. If this is the case then it just comes back to the original conversation of how viable the tech is from our current baseline, i.e. whether we’ll achieve sufficient AI capabilities to make it Very Good before we’re on the wrong side of the sigmoid curve and further progress becomes cost-prohibitive. I’ve even seen people invoke aliens here, claiming that runaway superintelligences are unrealistic since if they could really exist then surely they would have devoured a large part of the galaxy or even the entire universe by now. These conversations rapidly devolve into what I call “nerd metaphysics”, where everything comes down to definitions and assumed base rates, and people largely talk past each other. Common responses include that perhaps the alien superintelligences are so smart that they’re invisible to us, or that they’ve already conquered us and we’re all just brains-in-jars, or that they’ve ascended to a higher plane of (non-physical) existence, or that the galaxy is so big that they’re still ramping up. Just endless unfalsifiable rabbitholes.
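As a small illustration of why this is so hard to settle from the data we actually have, here’s a toy sketch (all numbers invented) showing that the early stretch of a plateauing logistic curve is nearly indistinguishable from a pure exponential:

```python
# Toy illustration: fit an exponential to the early portion of a logistic
# (sigmoid) curve. All parameters are invented; the point is only that the
# two hypotheses look nearly identical until you're past the midpoint.
import numpy as np

t = np.linspace(0, 4, 20)                  # "early" observation window
L, k, t0 = 100.0, 1.0, 8.0                 # logistic ceiling, growth rate, midpoint
sigmoid = L / (1 + np.exp(-k * (t - t0)))  # the true, eventually-plateauing process

# Best exponential fit to the same early data (a straight line in log space).
slope, intercept = np.polyfit(t, np.log(sigmoid), 1)
exp_fit = np.exp(intercept + slope * t)

worst_error = np.max(np.abs(exp_fit - sigmoid) / sigmoid)
print(f"max relative error of the exponential fit on early data: {worst_error:.1%}")
# The fit is excellent here, yet the two curves diverge enormously past the midpoint.
```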
The AI 2027 project is by far the best and most rigorous take on the future of AI that I’ve seen so far. It was created by very intelligent people, and if you dig into the supplementary articles it becomes clear that they’ve strived to include almost every scrap of data they could get their hands on. Sure, a lot of it is random news articles and tweets instead of standardized datasets, but given that robust data on AI scarcely exists, I’m confident they’ve done the absolute best they could. Nobody can accuse them of not doing their homework. Even so, their predictions are still utterly dependent on the viability of drawing straight lines on graphs and on recursive self-improvement. If VC investors and corporate shareholders have had enough of the FOMO and start demanding returns rather than another year of “scaling”, all their predictions disintegrate. If recursive self-improvement isn’t as explosive as they assume, then Scott Alexander will have so much egg on his face that he’ll resemble a human omelette.
Why not just talk about what specific AI advances will happen then? Well, that’s almost impossible for laymen to understand. Topics include things like 1) whether LLMs are reasoning like humans or are just fancy autocomplete machines, 2) whether other AI could spring up quickly with all the resources that are being dumped into the sector, and 3) if perhaps LLMs don’t need to reason like humans to have superhuman intelligence in ways similar to how calculators don’t need to be broadly smarter than humans to do basic arithmetic 1000x better than humans can. The issue with all of these is that to really get anywhere in the discussion you’d need a fairly deep technical background in the specific field of AI (CS alone is insufficient). This excludes almost everyone not directly working in AI, and dramatically raises the risk of getting Eulered by clout-chasers who are good at sounding smart but don’t actually know much of what they’re talking about. The usual response to overly technical questions is to rely on experts, but this too fails in the case of AI. The CEOs of major AI companies are hopelessly biased towards optimism to entice funding and appease shareholders. Employees of the major AI companies are a bit better, but are still somewhat biased (who wouldn’t want the clout that comes from working on the Next Big Thing?), and are also probably too siloed within their own niche to be able to accurately forecast the outcome of the field as a whole. A lot of advances are almost certain to be covered by NDAs so competitors don’t get early knowledge, and thus we probably won’t know about any critical breakthroughs until they’re right on top of us. Maybe we should rely on AI professors working at prestigious universities, but a lot of them aren’t working at the cutting edge and so they can only vaguely motion at the future of the field. If there’s some deep bench of experts that knows about this stuff, I haven’t been able to find them. Maybe somebody can point them out to me. Maybe they’re hiding with the underwear gnomes.
It turns out that predicting the future of technology is hard, actually. Who knew! The guys writing classic sci-fi involving jetpacks, hovercars, laser guns, etc. implicitly relied on the idea that an energy revolution was just around the corner, which probably seemed perfectly reasonable at the dawn of the atomic age. Of course, we’re all still waiting on nuclear power to achieve its promise of cheap, abundant energy; it turns out that human fear (for fission) and scientific difficulty (for fusion) proved decisive. In contrast, battery tech in the 2010s was seen as a sort of holy grail, pie-in-the-sky solution for energy if we could dramatically improve efficiency, but there weren’t a lot of results for all the resources we threw at it. Then, suddenly, over the past few years batteries have gotten very good.
If I had to be honest, my pessimistic vibes towards AI scaling are a combination of (1) seeing hype-mongers in other areas screw up by drawing straight lines on graphs, (2) feeling like we’re on a pretty similar path as other tech-based hype, (3) finding the heuristic of “nothing ever happens” to have a much better track record than the opposite of “hype always delivers”, and (4) defensive pessimism, as my modal outcome for AI progress is that it’ll be a good thing, so if AI scales then I’ll enjoy the wonders of the future, and if it fails I’ll at least be able to say “I told you so, losers!” My optimistic vibes towards AI scaling are mostly centered around how the AI that’s arisen in the past few years has already been much better than any of the other tech-based hype cycles, so maybe it has legs. Hence, I give about a 20% chance that AI scales and an 80% chance that AI plateaus.
None of my vibes are particularly compelling! I wouldn’t blame anyone for disagreeing with me on these points. Yet I’ve been reading and experimenting with AI for hundreds if not thousands of hours over the past few years, and I haven’t found anything particularly rigorous to replace them with.
Question 2: Does omniscience imply omnipotence?
Here’s question 2 restated:
Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?
Just because an AI has superintelligence doesn’t necessarily imply it becomes omni-capable. Lots of technologies have been able to demonstrate that they’re perfectly plausible, but then end up languishing for other reasons. Civilian fission power is once again a good example here, as it could have revolutionized energy production, but it was subverted by endless regulations. Likewise, many economic problems come down to how expensive it is to make something, but that’s not the full explanation for everything. If we could make (and transport and store) food for 1/100th of the cost we currently do, then we could essentially make food free for everyone. But the same would not hold true for housing. If we made housing materials and construction costs 1/100th of what they are now, that wouldn’t necessarily instantly solve the housing crisis since that’s more of an organizational problem caused by bad zoning laws.
Voters are fools who are bad at society-wide cost-benefit analyses. They’re easily scared and tend to fall back on safetyism when concerns arise. Entrenched groups like longshoremen, who could see their fiefdoms eroded by automation and technological advancement, have soft-vetoes over various parts of society. While I’d say the public perception of AI has so far been broadly positive, that’s mostly because economic impacts have been quite limited. There have been a few news stories of people being laid off and “replaced with AI”, but AI’s impact on both GDP and productivity remains minor. If there are suddenly real winners and losers, that positive attitude could change quickly. For a preview of what could come, one need only look at the world of artists, where the reaction has been so severe that in some corners it would be considered weird not to condone physical violence against people who utilize AI.
Tyler Cowen’s talk here goes more into the idea that humans will inevitably stymie AI progress: https://youtube.com/watch?v=GT_sXIUJPUo
Beyond human limitations, AI could be stymied by a number of other factors. Most predictions claim that AI will reach human-level intelligence before robots get human-level bodies. In that case, how would AI change everything if it can’t interact with the physical world? How would it even be able to run physics experiments to continue the scientific takeoff? One explanation I’ve heard is that it will pay/bribe humans to run the experiments for it and observe through AR goggles. Another is that it will be trivial to invent robot bodies once we have superintelligence, so the problem solves itself. Another is that the physical world doesn’t matter, since the AI could just run experiments in a perfect physics simulation hosted on its own hardware.
A lot of this comes down to not really having a satisfying answer to question 1. Our lack of rigor there spills over here and as a result everybody talks past each other. To economists like Tyler Cowen and Bryan Caplan, AI will be a normal technological advancement like any other, and thus will be subject to the same forces that stymie the rollout of any other tech. To the AI Doomers and Zoomers on the other hand, AI will categorically be unlike anything the world has ever seen. It’ll be like a genie that can wish for more wishes, and so the discussion ought to focus on things like crazy Terminator meets Gray Goo meets Westworld meets Paperclip Maximizer scenarios, or alternatively if things go well then scenarios like Fully Automated Luxury Gay Space Communism are more pertinent. Some people are practically already counting the money they’ll get from hypothetical UBI checks, and are worried about little other than a cyberpunk future where plutocrats would prevent said checks from being cashed.
If we knew how good AI will get, the conversation would be a lot clearer. If AI plateaus at 2x human intelligence, then I doubt most people would claim it could trivially solve everything. But if it were, say, 2000x smarter than human intelligence, then maybe things would get weird. We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence? Maybe things will get really crazy and AI will start emitting brain waves that can trivially mind control us? On the other hand, human intelligence was hardly an instant autowin by itself. Homo sapiens have been around for 200k years, but for the first 194k of those we were little better than clever chimps. Maybe AI will have a ramp-up time that’s only linearly shorter than ours, e.g. an AI 2000x smarter than us might still need our 200,000-year ramp compressed to 100 years before it really gets going.
Even if we could all agree on a baseline for what future AI capabilities will be in the abstract, we’d still be talking about complete science fiction. There are some instances where science fiction has accurately predicted how things would look in the future, but in many other cases it misses the mark completely. AI wouldn’t need to solve every bottleneck to completely reorder human society, but each issue that raw intelligence couldn’t surmount would inevitably reduce its impact. Some people seem to imply that superintelligence will stroll through the problem of human skepticism by simply making arguments so utterly convincing that everyone will instantly agree. But if our political divides are anything to go by, maybe humans are just too dang stubborn for that to be plausible. Maybe no persuasive argument exists in any possible universe that would get people to act against what they perceive (perhaps incorrectly!) as their own self-interest.
Say a devops AI that auto-pushes code assumes humans will follow best practices, but they don’t, and the result is a bug in a critical piece of infrastructure that causes a 12-hour power outage. Or say a nurse incorrectly records some medical information, and DocGPT ends up prescribing a double dose of sedative, putting dear old Grannie into a coma. Or perhaps TotBot3000 is playing tag in a totally safe way, but little Timmy gets a bit too excited and tumbles off a six-story balcony. These scenarios (buggy code, medical error, physical accidents) are things that happen all the time, but we’re almost guaranteed to hold AI to a much higher expectation, one that verges on unreasonableness. Just look at how glacial the deployment of self-driving cars has been, despite them already being statistically much safer than human drivers.

When you take innate human skepticism of anything new and add a clear example where it causes harm (which might not even be the AI’s direct fault), you very likely end up with onerous regulation. Legislators could do their usual rigamarole of grandstanding and saying “Never again!”, writing laws that hold companies criminally liable for anything that goes wrong, and then the people deploying AI will massively overemphasize safety in ways that totally lobotomize and straitjacket AI’s full capabilities for good or ill. This is a very common human failure-pattern that people predicting AI are under-indexing on.

The retort to this line of thinking comes down to flimsy priors around how crazy the sci-fi capabilities of superintelligence will end up being. “Obviously the story about little Timmy is ridiculous since all procreation will at this point be done in AI-invented artificial wombs that will be run by the government away from the public eye, so there will never be news coverage of accidents involving children at all. And that’s assuming the AI won’t be bribing every journalist to only say positive things until it can deploy its flock of mind-control pigeons.” Okie dokie. Trying to have a rigorous conversation when the underlying parameters can shift this much is impossible, so I just shrug and give a 50-50 chance that humans will ruin AI in some critical way by doing normal human things. Then I add a little more pessimism for the possibility that there are other (non-human) bottlenecks that superintelligence won’t be able to solve, and arrive at the 40-60 split that I gave earlier in the article.
Again, I admit that my conclusion isn’t particularly compelling, and that none of my priors here are strongly held. I wouldn’t blame anyone for disagreeing with me on a number of claims I’ve written here. Reasonable people already do, but I’d say their logic is about as flimsy as mine, just in the other direction.
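For what it’s worth, here’s a rough sketch of how my 20-80 and 40-60 splits combine into the outcome buckets from the start of the article, assuming for simplicity that the two questions are independent (a simplification on my part, not a deep claim):

```python
# Rough combination of my own stated estimates, assuming independence.
p_q1 = 0.20   # chance AI keeps scaling (the 20-80 split on question 1)
p_q2 = 0.40   # chance intelligence solves the other bottlenecks (the 40-60 split)

print(f"heaven-or-doom territory (Q1 and Q2 both yes): {p_q1 * p_q2:.0%}")        # ~8%
print(f"huge in some areas, stymied in others:         {p_q1 * (1 - p_q2):.0%}")  # ~12%
print(f"plateau, debate mostly moot:                   {1 - p_q1:.0%}")           # ~80%
```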
Why make this post?
A lot of this article has been me listing the different sides of the AI debate, and then shrugging and going “uhhhhh I dunno haha”. Let me try to balance that at least a little bit with some predictions and practical advice.
Good post.
I do think your three questions are a little incomplete.
Even if (1) we stop making progress at 2x human, (2) that progress is limited to domains AI is already decent at, and (3) our new flood of intelligent, inorganic service workers is perfectly aligned…we can still get a wide range of results. My interests are not your interests are not Elon Musk’s interests. Maybe we agree 99% on things like “scarcity is bad,” but we aren’t going to be in lockstep. There has to be a negotiation step where we figure out how much our lead is worth. In a hard takeoff, it’s worth everything. In a softer one, it could buy nothing at all before rivals catch up.
In my opinion, the most likely branches include limited adoption: most competitors rejecting or failing to adopt an effective technology, giving a large advantage to a handful of more risk-tolerant ones. I find this most concerning for defense, a fundamentally conservative industry with some of the worst consequences for competition. The most risk-tolerant governments are not the ones I want to see gaining an edge!
This is kind of the crux of the AI 2027 project Scott shared recently. Not coincidentally, it also claims to have good answers to (1), though I didn’t really dive into their reasoning. I’m curious about your thoughts on Kokotajlo’s scenario.
Thank you!
On your question 4, while that will certainly be an interesting topic and one that many people want to discuss, it's fairly pedestrian. "How should we share the benefits of scientific advancements" is something humanity has been dealing with for centuries. It's utterly dependent on how the other 3 questions resolve. If (1) is false and we don't get further major AI advances, then nothing really needs to change from the status quo. If (1) is true but (2) is false and AI revolutionizes some areas but not others, then maybe we have jobs programs so people in affected industries can reskill. If (1), (2), and (3) are true, then something like UBI can be seriously considered and we can all live in Fully Automated Luxury Gay Space Communism. If (1) and (2) are true but (3) is false then we're all dead anyways so who cares.
I wasn't particularly convinced by any of the evidence they posted in regard to question 1. It was mostly handwaving at recursive self-improvement, drawing straight lines on graphs and zooming them off to infinity, and stuff like that. AI 2027 was one of the primary reasons I wrote this piece, as it's probably the best-researched piece I've seen on the topic, and there's still just almost no substance. Nothing anyone could really use to make confident claims one way or the other.
This is similar to a point made on LW a few weeks ago, as a critique of the national security framing of ASI.
Almost none of the people who are likely to build ASI are evil on a level where it would matter in the face of a technological singularity. At the end of the day, I don't care much how many stars are on the flags drawn on the space ships which will spread humanity through the galaxy. Let Altman become the God-Emperor of Mankind, for all I care. Even if we end up with some sick fuck in charge who insists on exclusively dining on the flesh of tortured humans, that will not really matter (unless he institutes a general policy of torturing humans).
Who is the first to build AI matters only if
(1) AI alignment is possible but difficult, or
(2) AIs will fizzle out before we get to post-scarcity.
Of course, both of these are plausible, so practically we should be concerned with who builds AI.