It may not be a universally accepted truth, but it is a scientific truth. We're a sexually dimorphic species. There are plenty of tests which easily tell the two groups apart with 99.99% accuracy, and if you're MtF you'd sure as hell better inform your doctor of that fact rather than acting like you're just a normal woman.
Joe Blow down the street thinks he's Napoleon. So, it's not a "universally accepted truth" that he's not Napoleon. And maybe he gets violent if you don't affirm his Napoleonness in person, so there are cases where feeding his delusion is the path of least resistance. There's a "fundamental values conflict" there. But it remains an objective truth that he's not Napoleon.
I agree with you, but I'll note that our entire legal system seems to be based on "one weird trick"s, all the way down. That's how they got a felony conviction against Trump for a misdemeanor whose statute of limitations had expired. Unfortunately if the system really wants to get you, they will. I don't know how to fix it, but at the very least let's keep calling it out wherever we see it.
Er, but "man" and "woman" really do have an objective scientific meaning, unlike "relative", which is a social convention. (Note that it would be equally incorrect to say "an in-law is your blood relative".) So I don't agree with your analogies; saying "trans women are women" is just an incorrect statement of fact, rather than describing social conventions.
That said, I do think your framing of transness as a social status is reasonable. If we were simply allowed to say someone was "living as the other sex", rather than the Orwellian thought control that the ideologues insist on, I think it wouldn't be nearly as controversial.
Yes, the whole theoretical point of academic tests is to be an objective measure of the capacity of students. Because when you go out and get a real job, you have to actually be able to do that job. If these remedial courses aren't necessary for being a psychiatrist, then there should be a path to becoming a practicing psychiatrist that doesn't require them. If they ARE necessary, then lightening the requirements because, gosh, you can't satisfy the requirements but really want to graduate ends up causing harm later on in life.
This is exactly @jeroboam's point - you say "AI is a junior engineer" as if that's some sort of insult, rather than unbelievably friggin' miraculous. In 2020, predicting "in 2025, AI will be able to code as well as a junior engineer" would have singled you out as a ridiculous sci-fi AI optimist. If we could only attach generators to the AI goalposts as they zoom into the distance, it would help pay for some of the training power costs... :)
It's weird and a surprise that current AI functions differently enough from us that it's gone superhuman in some ways and remains subhuman in others. We'd all thought that AGI would be unmistakable when it arrived, but the reality seems to be much fuzzier than that. Still, we're living in amazing times.
Let me join the chorus of voices enthusiastically agreeing with you about how jobs are already bullshit. I've never been quite sure whether this maximally cynical view is true, but it sure feels true. One white-collar worker today has 10x the power to, well, do stuff that one had 100 years ago, but somehow we keep finding things for them to do. And so Elon can fire 80% of Twitter staff, and "somehow" Twitter continues to function normally.
With that said, I worry that this is a metastable state. Witness how thoroughly the zeitgeist of work changed after COVID - all of a sudden, in my (bullshit white-collar) industry, it's just the norm to WFH and maybe grudgingly come in to the office for a few hours 3 days a week. Prior to 2020, it was very hard to get a company to agree to let you WFH even one day a week, because they knew you'd probably spend the time much less productively. Again, "somehow" the real work that was out there still seems to get done.
If AI makes it more and more obvious that office work is now mostly just adult daycare, that lowers the transition energy even more. And we might just be waiting for another sudden societal shock to get us over that hump, and transition to a world where 95% of people are unemployed and this is considered normal. We're heading into uncharted waters.
Great post. But I'm pessimistic; Scott's posted about how EA is positively addicted to criticizing itself, but the trans movement is definitely not like that. You Shall Not Question the orthodox heterodoxy. People like Ziz may look ridiculous and act mentally deluded (dangerously so, in retrospect), but it wouldn't be "kind" to point that out!
When I go to rationalist meetups, I actually think declaring myself to be a creationist would be met more warmly than declaring that biology is real and there's no such thing as a "female brain in a male body". (Hell, I bet people would be enthused at getting to argue with a creationist.) Because of this, I have no way to know whether 10% or 90% of the people around me are reasonable and won't declare me an Enemy of the People for saying unfashionable true things. If it really is 90% ... well, maybe there's hope. We'd just need a phase change where it becomes common knowledge that most people are anti-communist gender-realists.
Sorry, this is just tired philosobabble, which I have no patience for. All the biological ways to define man and woman agree in >99% of cases, and agree with what humans instinctively know, too. If you want to pretend that obvious things aren't obvious for the sake of your political goals, I'm not going to play along. That's anti-intelligence.
Uh, what do you mean we don't have self-driving cars? I took two driverless Waymo rides last week, navigating the nasty, twisting streets of SF. It drove just fine. Maybe you could argue it's not cost-effective yet, or that there are still regulatory hurdles, but I think what you meant is that the tech doesn't work. And that's clearly false.
Also, I'm a programmer and productively using ChatGPT at work, so I'd say the score so far is Magusoflight 0, my lying eyes 2.
I just can’t take these people seriously. They’re almost going out of their way to be easy for any real authoritarian government to round up, by being obvious about their identity.
LARPing is fun. They believe that they believe they're bravely resisting a dictatorship. But their actions make it clear that, at some level, they know there's no actual danger.
I consider it similar to climate activists who believe that they believe that the future of human civilization depends on cutting CO2 emissions to zero. And who also oppose nuclear power, because ick.
Eh, I guess I'm incoherent then. I generally do use people's preferred pronouns in person; it's polite, and not every moment of your life needs to be spent fighting political battles. Caitlyn Jenner's put a lot of effort into living as a woman, and isn't a bad actor, and has passed some poorly-defined tipping point where I'm ok with calling her a her. I just don't want it to be mandatory. I want it to be ok to disagree on who's a "valid" trans person. I absolutely don't want Stalinist revision of history/Wikipedia to pretend that Bruce Jenner never existed. And in the appropriate discussions I want to be free to point out that it's all just paying lip service to a fantasy. "XXX isn't a real woman" is a true statement that I should be allowed to say; but I generally wouldn't, any more than I'd point out that "YYY is ugly".
I've had the "modern AI is mind-blowing" argument quite a few times here (I see you participated in this one), and I'm not really in a good state to argue cogently right now. But you did ask nicely, so I'll offer more of my perspective.
LLMs have their problems: You can get them to say stupidly wrong things sometimes. They "hallucinate" (a term I consider inaccurate, but it's stuck). They have no sense of embodied physics. The multimodal ones can't really "see" images the way we do. Mind you, just saying "gotcha" for things we're good at and they're not cuts both ways. I can't multiply 6 digit numbers in my head. Most humans can't even spell "definately" right.
But the one thing that LLMs really excel at? They genuinely comprehend language. To mirror what you said, I "do not understand" how people can have a full conversation with a modern chatbot and still think it's just parroting digested text. (It makes me suspect that many people here, um, don't try things for themselves.) You can't fake comprehension for long; real-world conversations are too rich to shortcut with statistical tricks. If I mention "Freddie Mercury teaching a class of narwhals to sing", it doesn't reply "ERROR. CONCEPT NOT FOUND." Instead there is some pattern in its billion-dimensional space that somehow fuzzily represents and works with that new concept, just like in my brain.
That already strikes me as a rather General form of Intelligence! LLMs are so much more flexible than any kind of AI we've had before. Stockfish is great at Chess. AlphaGo is great at Go. Claude is bad at Pokemon. And yet, the vital difference is that there is some feature in Claude's brain that knows it's playing Pokemon. (Important note: I'm not suggesting Claude is conscious. It almost certainly isn't.) There's work to do to scale that up to economically useful jobs (and beating the Elite Four), but it's mainly "hone this existing tool" work, not "discover a new fundamental kind of intelligence" work.
The clueless people who made Last Wish really messed up. They were supposed to make a soulless by-the-numbers sequel to a forgettable spinoff of an overrated series. Instead they made one of the best animated films in years, better than anything Pixar's done since Coco. I sure hope somebody got fired for that.
It seems like the big AI companies are deathly terrified of releasing anything new at all, and are happy to just sit around for months or even years on shiny new tech, waiting for someone else to do it first.
Surprised you didn't mention Sora here. The Sora demo reel blew everyone's minds ... but then OpenAI sat on it for months, and by the time they actually released a small user version of it, there were viable video generation alternatives out there. As much as it annoys me, though, I don't entirely blame them. Releasing an insufficiently-safecrippled video generator might be a company-ending mistake in today's culture, and that part isn't their fault.
As a member of the grubby gross masses who Cannot Be Trusted with AI tech, I've been pretty heartened that, thus far, all you need to do to finally get access to these tools has been to wait a year for them to hit open source. Then you'll just need to ignore the NEW shiny thing that you Can't Be Trusted with. (It's like with videogames - playing everything a year behind, when it's on sale or free - and patched - is so much cheaper than buying every new game at release...)
They do not have a cognitive architecture that resembles human neurology. In terms of memory, they have a short-term memory and a long-term one, but the two are entirely separate, with no intermediate stage outside of the training phase. The closest human analogue would be a neurological defect that prevents the consolidation of long-term memory.
Insofar as any analogy is really going to help us understand how LLMs think, I still think this is a little off. I don't believe their context window really behaves in the same way as "short-term memory" does for us. When I'm thinking about a problem, I can send impressions and abstract concepts swirling around in my mind - whereas an LLM can only output more words for the next pass of the token predictor. If we somehow allowed the context window to consist of full embeddings rather than mere tokens, then I'd believe there was more of a short-term thought process going on.
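To make that concrete, here's a toy sketch of the "context of embeddings rather than tokens" idea. It uses the real Hugging Face transformers `inputs_embeds` argument, but the model choice (GPT-2) and the "append the hidden state" step are purely my illustrative assumptions, not anyone's actual architecture:

```python
# Toy sketch: carry a raw hidden-state vector in the context instead of
# collapsing it to a discrete token. inputs_embeds is a real transformers
# argument; the architecture itself is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # example model
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The cat sat on the", return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)           # (1, seq_len, hidden)

# Normal generation: sample a discrete token id, append it, repeat.
# Hypothetical variant: append the final hidden state directly, so the
# "short-term memory" keeps the full vector instead of a single word.
with torch.no_grad():
    out = model(inputs_embeds=embeds, output_hidden_states=True)
thought = out.hidden_states[-1][:, -1:, :]           # last layer, last slot
embeds = torch.cat([embeds, thought], dim=1)         # un-collapsed context
```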
I've heard LLM thinking described as "reflex", and that seems very accurate to me, since there's no intent and only a few brief layers of abstract thought (ie, embedding transformations) behind the words it produces. Because it's a simulated brain, we can read its thoughts and, quantum-magically, pick the word that it would be least surprised to see next (just like smurf how your brain kind of needle scratches at the word "smurf" there). What's unexpected, of course - what totally threw me for a loop back when GPT3 and then ChatGPT shocked us all - is that this "reflex" performs so much better than what we humans could manage with a similar handicap.
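And "the word it would be least surprised to see next" isn't a metaphor; with any open-weights model you can literally read that distribution off. A minimal sketch, again assuming transformers and GPT-2 as a stand-in (my choices, nothing specific to the models discussed here):

```python
# Read off the model's "surprise": the probability it assigns to every
# possible next token, given the context so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Freddie Mercury taught a class of narwhals to",
             return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1) # distribution over next token
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(i)!r}: {p:.3f}")     # the least "surprising" words
```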
The real belief I've updated over the last couple of years is that language is easier than we thought, and we're not particularly good at it. It's too new for humans to really have evolved our brains for it; maybe it just happened that a brain that hunts really really well is also pretty good at picking up language as a side hobby. For decades we thought an AI passing the Turing test, and then understanding the world well enough to participate in human civilization, would require a similar level of complexity to our brain. In reality, it actually seems to require many orders of magnitude less. (And I strongly suspect that running the LLM next-token prediction algorithm is not a very efficient way to create a neural net that can communicate with us - it's just the only way we've discovered so far.)
Listen, I did not intentionally trap those Sims in their living room. The placement of the stove was an innocent mistake. That fire could have happened anywhere! A terrible tragedy.
You know, sometimes pools just accidentally lose their exit. Common engineering mishap. My sincere condolences to those affected.
Aren't we supposed to be convincing the upcoming ASI that we're worth keeping alive?
The problem is it's not AT ALL the Bean that we see in Ender's Game. You can tell because his personality changes drastically when he has conversations with Ender (since those were already canon). The book tries to excuse it as him "feeling nervous around Ender", but that's incredibly weak. Similarly, the only reason all his manipulations of Ender (and his backseat role) have to be so subtle and behind-the-scenes is to stay compatible with the original narrative; there's no good in-universe explanation.
Orson Scott Card just thought up a neat new OC and shoehorned him into the original story, and it shows. And I hate how completely it invalidates Ender's choices. But hey, that new character does come into his own in the sequels, at least, when he's not undermining a previously-written story.
There are plenty of tasks (e.g. speaking multiple languages) where ChatGPT exceeds the top human, too. Given how much cherrypicking the "AI is overhyped" people do, it really seems like we've actually redefined AGI to "can exceed the top human at EVERY task", which is kind of ridiculous. There's a reasonable argument that even lowly ChatGPT 3.0 was our first encounter with "general" AI, after all. You can have "general" intelligence and still, you know, fail at things. See: humans.
While I am 100% on board the Google hate train, I think this particular criticism is unfair. I believe what's happening here is just a limitation of current-gen multimodal LLMs - you have to lose fidelity in order to express a detailed image as a sequence of a few hundred tokens. Imagine having, say, 10 minutes to describe a person's photograph to an artist. Would that artist then be able to take your description and perfectly recreate the person's face? Doubtful; humans are HIGHLY specialized to detect minute details in faces.
Diffusion-based image generators have a lot of detail, but no real understanding of what the prompt text means. LLMs, by contrast, perfectly understand the text, but aren't capable of "seeing" (or generating) the image at the same fidelity as your eyes. So right now I think there's an unavoidable tradeoff. I expect this to vanish as we scale LLMs up further, but faces will probably be one of the last things to fall.
I wonder if, this year, there'll be workflows like: use an LLM to turn a detailed description of a scene into a picture, and then use inpainting with a diffusion model and a reference photo to fix the details...?
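For the curious, a minimal sketch of what stage two could look like with the (real) diffusers inpainting API. The checkpoint name and file paths are placeholders I made up, and actually conditioning on a reference photo would need extra machinery (e.g. an IP-Adapter) beyond this:

```python
# Stage 2 of the hypothetical workflow: regenerate a masked region (say,
# a face) of the LLM-generated scene with a diffusion inpainting model.
import torch
from diffusers import AutoPipelineForInpainting
from PIL import Image

pipe = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

scene = Image.open("llm_scene.png")   # stage 1 output (placeholder path)
mask = Image.open("face_mask.png")    # white = region to regenerate

fixed = pipe(
    prompt="a photorealistic human face, natural lighting",
    image=scene,
    mask_image=mask,
).images[0]
fixed.save("scene_fixed.png")
```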
I'd say a steelmanning of the Yuddite view is this: "Yes, we along with everyone else did not predict that LLMs could be so powerful. They do not fit our model of an agentic recursive neural net that runs on reward signals, and even a superintelligent LLM is likely to super-understand and do what its creator wants (which is still a risk, but of a different kind). However, it would be a mistake to extrapolate from these last few years where LLMs are ahead in the AI race and assume that this will continue indefinitely. It is still possible that agentic AIs will once again surpass predictive models in the short-to-mid-term future, so there is still risk of FOOM and we need to keep studying them."
I've spoken with some doomers who have this level of intellectual humility. I can't imagine seeing it from Yudkowsky himself, sadly.
There's also the concern of what kind of suffering a post-singularity society can theoretically enable; it might go far, far beyond what anyone on Earth has experienced so far (in the same way that a rocket flying to the moon goes farther than a human jumping). Is a Universe where 99.999% of beings live sublime experiences but the other 0.001% end up in Ultrahell one that morally should exist?
Waymo has a lot of data, and claims a 60-80% reduction in accidents per mile for self-driving cars. You should take that with a grain of salt, of course, but I think there are people holding them to a decent reporting standard. The real point is that even being 5x safer (which is what the top of that range, an 80% reduction, works out to) might not be enough for the public. Same with having an AI parse regulations/laws...
Fantastic post, thanks! Lots of stuff in there that I can agree with, though I'm a lot more optimistic than you. Those 3 questions are well stated and help to clarify points of disagreement, but (as always) reality probably doesn't factor so cleanly.
I really think almost all the meat lies in Question 1. You're joking a little with the "line goes to infinity" argument, but I think almost everyone reasonable agrees that near-future AI will plateau somehow; the world of difference is in where it plateaus. If it goes to ASI (say, 10x smarter than a human or better), then fine, we can argue about questions 2 and 3 (though I know this is where doomers love spending their time). Admittedly, it IS kind of wild that this is a tech where we can seriously talk about singularity and extinction as potential outcomes with actual percentage probabilities. That certainly didn't happen with the cotton gin.
There's just so much space between "as important as the smartphone" -> "as important as the internet" (which I am pretty convinced is the baseline, given current AI capabilities) -> "as important as the industrial revolution" -> "transcending physical needs". I think there's a real motte/bailey in effect, where skeptics will say "current AIs suck and will never get good enough to replace even 10% of human intellectual labour" (bailey), but when challenged with data and benchmarks, will retreat to "AIs becoming gods is sci-fi nonsense" (motte). And I think you're mixing the two somewhat, talking about AIs just becoming Very Good in the same paragraph as superintelligences consuming galaxies.
I'm not even certain assigning percentages to predictions like this really makes much sense, but just based on my interactions with LLMs, my good understanding of the tech behind them, and my experience using them at work, here are my thoughts on what the world looks like in 2030:
- 2%: LLMs really turn out to be overhyped, attempts at getting useful work out of them have sputtered out, I have egg all over my face.
- 18%: ChatGPT o3 turns out to be roughly at the plateau of LLM intelligence. Open-Source has caught up, the models are all 1000x cheaper to use due to chip improvements, but hallucinations and lack of common sense are still a fundamental flaw in how the LLM algorithms work. LLMs are the next Google - humans can't imagine doing stuff without a better-than-Star-Trek encyclopedic assistant available to them at all times.
- 30%: LLMs plateau at roughly human-level reasoning and superhuman knowledge. A huge amount of work at companies is being done by LLMs (or whatever their descendant is called), but humans remain employed. The work the humans do is even more bullshit than the current status quo, but society is still structured around humans "pretending to work" and is slow to change. This is the result of "Nothing Ever Happens" colliding with a transformative technology. It really sucks for people who don't get the useless college credentials to get in the door to the useless jobs, though.
- 40%: LLMs are just better than humans. We're in the middle of a massive realignment of almost all industries; most companies have catastrophically downsized their white-collar jobs, and embodied robots/self-driving cars are doing a great deal of blue-collar work too. A historically unprecedented number of humans are unemployable, economically useless. UBI is the biggest political issue in the world. But at least entertainment will be insanely abundant, with Hollywood-level movies and AAA-level videogames being as easy to make as Royal Road novels are now.
- 9.5%: AI recursively accelerates AI research without hitting engineering bottlenecks (a la "AI 2027"), ASI is the new reality for us. The singularity is either here or visibly coming. Might be utopian, might be dystopian, but it's inescapable.
- 0.5%: Yudkowsky turns out to be right (mostly by accident, because LLMs resemble the AI in his writings about as closely as they resemble Asimov's robots). We're all dead.
We shield kids from a lot of complicated real-world things that could affect them. 4-year-olds can have degenerative diseases. Or be sexually abused. Both are much more common than being "intersex" (unless you allow for the much more expansive definitions touted by activists for activist reasons). So I guess schools should have mandatory picture books showing a little kid dying in agony, while their sister gets played with by their uncle, right? So that these kids can be "at peace" with it?
...Of course not. Indoctrination is the only reason people are pushing for teaching kids about intersex medical conditions. Kids inherently know that biological sex is real, and can tell the difference between men and women. Undoing that knowledge requires concerted effort, and the younger you start, the better.