This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
When will the AI penny drop?
I returned from lunch to find that a gray morning had given way to a beautiful spring afternoon in the City, the sun shining on courtyard flowers and through the pints of the insurance men standing outside the pub, who still start drinking at midday. I walked into the office, past the receptionists and security staff, then went up to our floor, past the back office, the HR team who sit near us, our friendly sysadmin, my analysts, my associate, my own boss. I sent some emails to a client, to our lawyers, to theirs, and called our small graphics team, who design the pitchbook and prospectus graphics for roadshows in Adobe whatever. I spoke to our team secretary about some flights and a hotel meeting room in a few weeks. I reviewed a bad model and fired off some pls fixes. I called our health insurance provider and spoke to a surprisingly nice woman about some extra information they need for a claim.
And I thought to myself: can it really be that all this is about to end, not in the steady process envisioned by a prescient few a decade ago, but in an all-encompassing crescendo that will soon overwhelm us all? I walk around now like a tourist in the world I have lived in my whole life, appreciating every strange interaction with another worker, the hum of commerce, the flow of labor. Even the commute has taken on a strange new meaning, because I know it might be over so soon.
All of these jobs, including my own, can be automated with current-generation AI agents and some relatively minor additional work (much of which can itself be done by AI). Next-generation agents (already in testing at leading labs) will be able to take screen and keystroke recordings (plus audio from calls where applicable) of, say, 20 people performing a niche white-collar role over a few weeks and almost immediately learn to do it as well or better. This job destruction is only part of the puzzle, though, because as these roles go, so do tens of millions of other middlemen, from recruiters and consultants and HR and accountants to the millions employed at SaaS providers that build tools - like Salesforce, Trello, even Microsoft with Office - which will soon be largely or entirely redundant because whole workflows will be replaced by AI. The friction facilitators of technical modernity, from CRMs to emails to dashboards to spreadsheets to cloud document storage, will be mostly valueless. Adobe alone, which those coworkers use to photoshop cute little cover images for M&A pitchbooks, is worth $173bn and yet has surely been rendered worthless, in the last couple of weeks alone, by new multimodal LLMs that allow for precise image generation and editing by prompt[1]. With them will come an almighty economic crash that will affect every business from residential property management to plumbing, automobiles to restaurants. Like the old cartoon trope, it feels like we have run off the cliff but have yet to look down.
It was announced yesterday that employment in the securities industry on Wall Street hit a 30-year high (I suspect that that is ‘since records began’, but if not I suppose it coincides with the final end of open outcry trading). I wonder what that figure will be just a few years from now. This was a great bonus season (albeit mostly in trading), perhaps the last great one. My coworker spent the evening speaking to students at his old high school about careers in finance; students are being prepared for jobs that will not exist, a world that will not exist, by the time they graduate.
Walking through the city I feel a strange sense of foreboding, of a liminal time. Perhaps it is self-induced; I have spent much of the past six months obsessed by 1911 to 1914, the final years of the long 19th century, by Mann and Zweig and Proust. The German writer Florian Illies wrote a work of pop history called “1913: The Year Before the Storm”. Most of it has nothing to do with the coming war or the arms race; it is a portrait (in many ways) of peace and mundanity, of quiet progress, of sports tournaments and scientific advancement and banal artistic introspection, of what felt like a rational and evolutionary march toward modernity, tempered by a faint dread, the kind you feel when you see flowers on their last good day. You know what will happen and yet are no less able to stop it than those who are comfortably oblivious.
In recent months I have spoken to almost all the smartest people I know about the coming crisis. Most are still largely oblivious: “new jobs will be created”, “this will just make humans more productive”, “people said the same thing about the internet in the 90s”, and - of course - “it’s not real creativity”. A few - some quants, the smarter portfolio managers, a couple of VCs who realize that every pitch is from a company that wants to automate one business while relying for revenue on every other industry supposedly retaining just the same need for people, and therefore for middleman SaaS contracts, as it does today - realize what is coming and can talk about little else.
Many who never before expressed any fear or doubt about the future of capitalism have begun what can only be described as prepping: buying land in remote corners of Europe and North America where they have family connections (or sometimes none at all), buying crypto as a hedge rather than an investment, investigating residency in Switzerland, and researching which countries are likely to adapt fastest to an automated age in which service-industry exports are liable to collapse (wealthy; domestic manufacturing; energy resources or nuclear power; reasonably low population density; mostly domestic food production; some natural resources; a political system capable of quick adaptation). America is blessed with many of these, but its size, its political divisions and regional, ethnic and cultural tensions, plus an ingrained, highly individualistic culture mean it will struggle, at least for a time. A gay Japanese friend who previously swore he would never return to his homeland on account of the homophobia he had experienced there has started pouring huge money into his family’s ancestral village, and told me directly that he expects some kind of large-scale economic and social collapse, driven by AI, to force him home soon.
Unfortunately Britain - where manufacturing has been largely outsourced, where most food and much fuel must be imported, and which is heavily reliant on exactly the professional services that will be automated first - seems likely to face one of the harshest transitions. A Scottish portfolio manager, probably in his 40s, told me of the compound he is building on one of the remote islands off Scotland’s west coast. He grew up in Edinburgh, but was considering contributing a large amount of money towards some church repairs and the renovation of a beloved local store or pub of some kind, to endear himself to the community in case he needed it. I presume similar preparations are being made in big-tech money, where I know far fewer people than others here do. I have made a few smaller preparations of my own, although what started as ‘just in case’ now occupies an ever greater place in my imagination.
For almost ten years we have discussed politics and society on this forum. Now events, at last, seem about to overwhelm us. It is unclear whether AGI will entrench, reshape or collapse existing power structures, whether it will freeze or accelerate the culture war. Much depends on who exactly is in power when things happen, and on whether the tools that create chaos (like those causing mass unemployment) arrive much before the tools that create order (mass autonomous police drone fleets, ubiquitous VR dopamine at negligible cost). It is also a twist of fate that so many involved in AI research were themselves loosely involved in the Silicon Valley circles that spawned the rationalist movement, and eventually, through that and Scott, this place. For a long time there was truth in the old internet adage that “nothing ever happens”. I think it will be hard to say the same five years from now.
[1] Some part of me wants to resign and short the big SaaS firms that are going to crash first, but I’ve always been a bad gambler (and am lucky enough, mostly, to know it).
I have to wonder whether people like you who post stuff like this about AI (my past self included) have actually used these models to do anything other than write code or analyze large datasets. AI cannot convincingly do anything that can be described as "humanities": the art, writing, and music it produces can best be described as slop. The AI assistants companies put on phone calls and websites instead of real customer service are terrible, and AI for fact-checking/research just seems to be a worse version of Google (despite Google's best efforts to destroy itself). Maybe I'm blind, but I just don't see the incoming collapse you seem to be worried about (although I do believe we are going to have a collapse, for different reasons).
It's unfortunate how strongly the chat interface has caught on over completion-style interfaces. The single most useful LLM tool I use on a daily basis is Copilot. It's not useful because it's always right; it's useful because it's sometimes right, and when it's right, it's right in about a second. When it's wrong, it's also wrong in about a second, and my brain goes "no, that's wrong because X, Y, Z, it should be such-and-such instead", and then I can just write the correct thing. But the important thing is that Copilot does not break my flow, while tabbing over to a chat interface takes me out of it.
I see no particular reason that a copilot for writing couldn't exist, but as far as I can tell it doesn't (unless you count something janky like loom).
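For what it's worth, the plumbing for one mostly exists already. Here's a minimal sketch of what a completion-style writing copilot could look like, assuming OpenAI's legacy (non-chat) completions endpoint; the model name, sampling parameters, and stop sequence are illustrative assumptions, not recommendations:

```python
# Minimal sketch of a completion-style writing copilot: given the text so
# far, fetch a short continuation to display as a greyed-out suggestion.
# Model choice and parameters here are assumptions; any non-chat,
# prompt-in/text-out completion API would work the same way.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_continuation(text_so_far: str, max_tokens: int = 24) -> str:
    """Return a short continuation, Copilot-style: fast, cheap, disposable."""
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # a completion-tuned (non-chat) model
        prompt=text_so_far,
        max_tokens=max_tokens,
        temperature=0.7,
        stop=["\n\n"],                   # don't run past the paragraph
    )
    return response.choices[0].text

draft = "The main problem with chat interfaces for writing is that they "
print(draft + suggest_continuation(draft))
```

The point, as with code completion, is that a suggestion costs about a second either way, so a wrong one is cheap to discard.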
But yeah, LLMs are great at the "babble" part of "babble-and-prune". And then, instead of leveraging that, we for whatever reason decided that the way we want to use these things is to train them to imitate professionals in a chat room, who are writing with a completely different process (having access to tools which they use before responding, editing their writing before hitting "send", etc.).
The "customer service AIs are terrible" thing is I think mostly a separate thing where customer service is a cost center and their goal is usually to make you go away without too much blowback to the business. AI makes it worse, though, because the executives trust an AI CS agent even less than they would trust a low-wage human in that position, and so will give that agent even fewer tools to actually solve your problem. I think the lack of trust makes sense, too, since you're not hiring a bunch of AI CS agents you can fire if they mess up consistently, you're "hiring" a bunch of instances of one agent, so any exploitability is repeatable.
All that said, I expect that for the near future LLMs will be more of a complement than a replacement for humans. But that's not as inspiring a goal for the most ambitious AI researchers, so I think they tend to cluster at companies whose stated goal is replacing humans. And over the much longer term it does seem unlikely that humans sit at an optimal ability-to-do-useful-things-per-unit-energy point. So looking at the immediate evidence, we see the top AI researchers going all-in on replacing humans; over the long term, human replacement seems inevitable; and so it's easy to infer "oh, the thing that will make humans obsolete is the thing that all these people talking about human obsolescence are working on".
I think it's entirely possible that humans are far more optimized for real-world-relevant computation than computers will ever be. Our neurons make use of quantum tunneling for computation in a way that classical computers can't replicate. Of course quantum computers could be a solution to this, but the engineering problems seem incredibly challenging. There's also evolution: our brains have been honed by 4 billion years of natural selection. Maybe that selection hasn't targeted the exact kinds of processes we want AI to do, but there has certainly been selection for some combination of efficient communication and accurate pattern recognition. I'm not convinced we can engineer better than that.
The human brain may always be more efficient on a per-watt basis, but that doesn’t really matter when we can generate and capture extraordinary amounts of energy.
Energy infrastructure is brittle, static and vulnerable to attack in a way that the lone infantryman isn't. It matters.
Do you expect that to remain true as the price of solar panels continues to drop? A human brain only takes about 20 watts to run. If we can get within a factor of 10 of that, that's 200 watts. Currently that's a few square meters of solar panels costing a couple thousand dollars, and a few dozen kilos of battery packs, also costing a couple thousand dollars. It's not as robust as a lone infantryman, but it's already quite a lot cheaper, and the price is continuing to drop.
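To make that arithmetic explicit, here's a rough sketch; the capacity factor, panel output, and prices below are my own assumptions rather than sourced figures:

```python
# Back-of-envelope for a solar-powered "brain equivalent". Every constant
# here is an assumption for illustration, not a sourced figure.
BRAIN_WATTS = 20            # rough human brain power draw
EFFICIENCY_GAP = 10         # suppose hardware gets within 10x of the brain
TARGET_WATTS = BRAIN_WATTS * EFFICIENCY_GAP          # 200 W continuous

PANEL_WATTS_PER_M2 = 200    # typical nameplate output per square meter
CAPACITY_FACTOR = 0.2       # fraction of nameplate delivered, day-averaged
PANEL_COST_PER_WATT = 1.0   # assumed installed cost, $ per nameplate watt
BATTERY_COST_PER_KWH = 300  # assumed pack cost
AUTONOMY_HOURS = 24         # ride through one sunless day

nameplate_watts = TARGET_WATTS / CAPACITY_FACTOR     # 1,000 W
area_m2 = nameplate_watts / PANEL_WATTS_PER_M2       # ~5 m^2
panel_cost = nameplate_watts * PANEL_COST_PER_WATT   # ~$1,000
battery_kwh = TARGET_WATTS * AUTONOMY_HOURS / 1000   # 4.8 kWh
battery_cost = battery_kwh * BATTERY_COST_PER_KWH    # ~$1,440

print(f"{area_m2:.0f} m^2 of panels, ~${panel_cost + battery_cost:,.0f} all-in")
```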
That said, solar panels require quite a lot of sensitive and stationary infrastructure to make, so I could see the argument that the ability to fabricate them would not last long in any large-scale conflict.
The industry required to make all these doodads just becomes the target. Unless you're dealing with something fully autonomous to the degree that it carries its own reproduction, you're not gonna beat life in a survival contest.
That said, I don't really expect portable energy generation to be efficient enough in the near future to matter in the way you're thinking. Moreover, this totally glosses over maintenance, which is a huge weakness of any high-tech implement in terms of logistics.
About 6 m² of panels at STC, probably more like 12-18 m² realistically (2.4-3.6 kW), plus at least 10-15 kWh of batteries. The math gets brutal for critical-uptime off-grid solar, but some people have more than that on an RV these days. So it's not really presenting a much larger target than a road-mobile human would be (at least one with the comms and computer gear needed to do a similar job).
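For concreteness, here is the arithmetic behind those figures for a 200 W continuous load; the capacity factors are my own illustrative guesses:

```python
# Sizing sketch for the figures above; capacity factors are illustrative.
LOAD_W = 200        # continuous load to support (the "brain equivalent")
STC_W_PER_M2 = 200  # nameplate panel output per m^2 at standard test conditions

def panel_area_m2(capacity_factor: float) -> float:
    """Panel area needed to average LOAD_W at a given real-world capacity factor."""
    nameplate_w = LOAD_W / capacity_factor
    return nameplate_w / STC_W_PER_M2

print(f"optimistic (17% CF): {panel_area_m2(0.17):.0f} m^2")   # ~6 m^2
print(f"pessimistic (6% CF): {panel_area_m2(0.06):.0f} m^2")   # ~17 m^2

# Battery autonomy: how long a 10-15 kWh pack carries the load with no sun
for kwh in (10, 15):
    print(f"{kwh} kWh -> {kwh * 1000 / LOAD_W:.0f} h without sun")  # 50-75 h
```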
And the machine brain is always going to be vastly more optimized for multitasking than a human.
I dunno, some of the ways I can think of to bring down a transformer station or a concrete-hulled building involve violent forces that would, in fact, be similarly capable of reducing a lone infantryman to a bloody pulp.
You're probably thinking of explosives of some kind, but you're thinking about terminal ballistics instead of the delivery mechanism and other factors.
A man in khakis with a shovel can move out of the way of bombardment, use cover to hide and dig himself fortifications, all of which mitigates the use of artillery and ballistic missiles.
Static buildings that house infrastructure have no such advantage and require active defense forces to survive threats. They're sitting ducks.
I'm not pulling this analysis out of my ass, mind you; this is what you'll find in modern whitepapers on high-intensity warfare, which recommend against relying on anything that requires a complex supply chain, because everybody expects most complex infrastructure (sats, power grids, etc.) to be destroyed early and high-tech weapons to become either useless or prized reserves that won't be doing the bulk of the fighting.
Do you have a source on the quantum tunneling thing? That strikes me as wildly implausible.
Roger Penrose has been beating this drum since the 1990s and hasn't managed to convince many other people, but he is a Nobel laureate now, so I guess he's a pretty high-profile advocate. The way he argues for this stuff feels more like a cope for preserving some sort of transcendental, irreducible aura around human mathematical thinking than empirically solid neuroscience, though.
Relevant paper: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.110.024402
Relevant other links: https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/, https://www.biorxiv.org/content/10.1101/2020.04.23.057927v1.full, https://www.rintrah.nl/a-universe-fine-tuned-for-biological-intelligence/
My read on that paper is that it offers a theoretical model under which myelin sheaths could generate entangled photons, without any direct experimental evidence that this actually happens in real neurons.
I might find this study convincing if it were presented alongside an experiment where, e.g., scientists slowly removed the insulating myelin coating from a single long nerve cell in a worm and watched what happened to the timing of signals across the brain. I'd expect the signals between distant parts of the brain not to stay synchronized as the myelin sheath degrades. If there were a sudden drop-off in synchronization at a specific thickness, rather than a gradual decline as the insulation thins, it might suggest quantum entanglement effects rather than just classical changes in electrical conductivity.
In the absence of any empirical evidence like that, I don't find this paper convincing.
I also don't think the paper's authors were trying to convince readers that this is a thing that does happen in real neurons, just that further study is warranted.
This is highly speculative, and a light-year away from being a consensus position in computational neuroscience. It's in the big if true category, and far from being confirmed as true and meaningful.
It is trivially true that human cognition requires quantum mechanics. So does everything else. It is far from established that you need to explicitly model it at that detail to get perfectly usable higher level representations that ignore such detail.
The brain is well optimized for what's possible for a kilo and change of proteins and fats in a skull at 37.8 °C, reliant on electrochemical signaling and a very unreliable clock for synchronization.
That is nowhere near optimal when you can have more space and volume, while working with designs biology can't reach. We can use copper cable and spin up nuclear power plants.
I recall @FaulSname himself has a deep dive on the topic.
That is a very generous answer to something that seems a lot more like complete gibberish. That a single neural structure with known classical functions may, under their crude (the authors' own word) theoretical model, produce entangled photons is the only real claim in that article. Even granting it, the leap from there to neurons communicating via such photons in any way would be absurd. Using the entanglement to communicate is straight-up impossible.
You are also replying to someone who can't differentiate between tunneling and entanglement, so that's a strong sign of complete woo as well.
You're correct that I'm being generous. Expecting a system as macroscopic and noisy as the brain to rely on quantum effects that go away if you look at them wrong is a stretch. I wouldn't say it's impossible, just very, very unlikely. It's the kind of thing you could present at a neuroscience conference without being kicked out, but everyone would just shake their heads and tut the whole time.
If this were true, then entering an MRI would almost certainly do crazy things to your subjective conscious experience. Quantum coherence holding up to a tesla-strength field? Never heard of that; at most it's incredibly subtle and hard to distinguish from people being suggestible (transcranial magnetic stimulation does do real things to the brain). Even the brain in its default state is close to the worst-case scenario for quantum-only effects with macroscopic consequences.
And even if the brain did something funky, that's little reason to assume that it's a feature relevant to modeling it. As you've mentioned, there's a well behaved classical model. We already know that we can simulate biological neurons ~perfectly with their ML counterparts.