Culture War Roundup for the week of June 9, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I have seen the AGI, and it is Gemini 2.5 Pro

If you go and look at Metaculus you’ll see that despite recent breakthroughs like VEO3 and OpenAI’s “Ghiblification” (probably the first AI image system where accusing the outputs of being “slop” makes the accuser look unreasonable rather than denigrating the picture itself), all the “when AGI?” questions have been uncharacteristically stubborn. The question asking about “weak” AGI has gone “nowhere in two weeks” (months, actually), while the median prediction on the question about full AGI has receded three years, from 2031 to 2034.

It looks like Scott’s AI 2027 push has failed to convince the markets. For the informed person, AGI is coming “soon” but isn’t imminent. However, I think AGI is actually already here, freely available to anyone with an internet connection, and called Gemini 2.5 Pro.

For those of you not in the know: at the moment you can access Gemini 2.5 Pro for free, with no limits, on Google’s AI Studio right here: https://aistudio.google.com/prompts/new_chat. Yep, you heard that right: the literal best text model in the world according to the lmarena.ai leaderboard is available for free, with no limits, and with plenty of customisation options too. They’re planning on gating AI Studio access behind an API key soon, so go and try it out for free right now while you can. No need to overpay for ChatGPT Pro when you can use AI Studio, and it’s a lot, lot better than the Gemini you get via the dedicated app/webpage.
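
If that does happen, the model will still be reachable programmatically. Below is a minimal sketch, assuming Google’s google-genai Python SDK and that the model is exposed under the ID "gemini-2.5-pro"; the key and the prompt are placeholders, so check the current docs before leaning on any of this:

    # pip install google-genai  (assumed SDK; model ID and key are placeholders)
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.5-pro",  # assumed model ID; verify against current docs
        contents="Transliterate 'Burdensome' into four seal-script-friendly Chinese characters.",
        # Temperature is one of the knobs AI Studio exposes; default-ish value shown here.
        config=types.GenerateContentConfig(temperature=1.0),
    )
    print(response.text)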

Our story begins a few days ago when I was expecting delivery of a bunch of antique Chinese handscroll paintings I had purchased. Following the standard Chinese tradition in which collectors add their own personal seal in red ink to a work, and seeing as these scrolls already bore a bunch of other seal impressions, I wanted to add my own mark too. The only issue was that I didn’t have one.

This led to a rabbit hole where I spent a good portion of my Saturday learning about the different types of Chinese writing, all the way from oracle bone script to modern simplified script, and the different types of stones from which seals are made. Eventually, after hours of research, I decided I wanted a seal made from Shoushan stone written in Zhuànshū script. That was the easy part.

The real difficulty came in translating my name into Chinese. I, with a distinctly non-Chinese name, don’t have an easy way to translate the sounds of my name into Chinese characters, which is made all the harder by the fact that pretty much all Chinese syllables end in a vowel (learning this involved even more background reading) even though my name has syllables ending in consonants. Furthermore, as a mere mortal and not a Son of Heaven with a grand imperial seal, decorum dictated that my personal mark be only 4 characters and around 2cm × 2cm, enough to be present but not prominent on the scroll.

All this led to further constraints on the characters to be put on my seal: they couldn’t be so complex that carving them on a small seal would be impossible, and yet I needed to get my name and surname onto it as accurately as possible. Naturally this involved a lot of trial and error, and I think I tried over 100 different combinations before coming up with something that sort of (but not completely) worked.

There was one syllable for which I could not find any good Chinese match, and after trying and rejecting about a dozen different choices I threw my hands up and decided to consult Gemini. It thought for about 15 seconds and gave me an answer that was phonetically superior to literally everything I had tried before; unfortunately, however, it was too complex for a small seal (it wouldn’t render on the website I was buying the seal from).

I told Gemini about my problem and, hey ho, 15 seconds later another character appeared, this time graphically much simpler but sounding (to my non-Chinese ears) exactly the same, and this one actually rendered properly. The trial-and-error system I was using didn’t even have this particular character as an option, so no wonder I hadn’t found it. It also asked, of its own volition, whether I wanted to give it my full name so it could give me characters for that. I obliged and, yes, its output mostly matched what I had, but was even better for one of the other syllables.

I was honestly very impressed. This was no mean feat, because it wasn’t just translating my name into Chinese characters but translating it into precisely 4 characters typographically simple enough to carve onto a small seal. With just a few seconds of thought it had managed something that had taken me many hours of research with external aids, and its answer was better than what I had come up with myself.

All this had involved quite a bit of back and forth with the model, so, curious to see how good it was at complex multi-step tasks given in a single instruction, I opened up a fresh chat and gave it 2-3 lines explaining my situation (need seal for marking artworks in my collection). Now, I’m an AI believer, so I thought it would be good enough to solve the problem, which it absolutely did (as well as giving me lots of good unprompted advice on the type of script and stone to use, which matched my earlier research). But it also pointed out that by tradition only the artist marks the work with their full name, while collectors usually include the character 藏, meaning “collection”.

It told me that it would be a faux pas to mark the artworks with just my name, as that might imply I was the creator. Instead it gave me a 4-character seal ending in 藏 where the first three characters sounded like my name. This was something that I hadn’t clocked at all in my hours of background reading, and the absolute last thing I would ever want is to look like an uncultured swine poseur when showing the scrolls to someone who could actually read Chinese.

In the end, the simple high-level instruction to the AI gave me better final results than either me on my own or even me trying to guide the AI… It also prevented a big potential faux pas that I could have gone my whole life without realizing.

It reminded me of the old maxim that when you’re stuck on a task and contacting a sysadmin, you should tell them what your overall goal is rather than asking for a solution to the exact thing you’re stuck on, because often there’s a better way to solve your big problem that you’ve overlooked. In much the same way, the AI of 2025 has become good enough that you should just tell it your whole problem rather than asking for help on the step you’re stuck on.

Now yes, impressive performance on a single task doesn’t make AGI; that requires a bit more. However, its excellent performance on the multilingual constrained translation task, and its general versatility across the tasks I’ve been using it for over the last few weeks (it’s now my AI of choice), mean I see it as a full peer to the computer in Star Trek etc. It’s also completely multimodal these days, meaning I can (and have) just input random PDFs etc. or give it links to YouTube videos and it’ll process them no differently than a human would (but much faster). Funny how, of all the futuristic tech in the Star Trek world, this is what humanity actually develops first…

Just last week I’d been talking to a guy who was preparing to sit the Oxford All Souls fellowship exam. This is a highly gruelling set of exams that All Souls College, Oxford uses to elect two fellows each year out of a field of around 150. The candidates are normally humanities students who are nearing the end of their PhD or have recently graduated. You can see examples of the questions (e.g. those the History candidates get asked) here.

However, the most distinctive and storied part of the fellowship exam (now sadly gone) was the single-word essay. For this, candidates were given a card with a single word on it and then had three hours to write “not more than six sides of paper” in response to that prompt. What better way to try out Gemini than to give it a single word and see how well it responds? Besides, back in 2023 Nathan Robinson (of Current Affairs fame) tried something very similar with ChatGPT on the questions from the general paper, and it gave basically the worst answers in the world, so we have something to compare with and marvel at how much the tech has advanced in two short years.

In a reply to this post I’m pasting the exact prompt I used and the exact, unedited answer Gemini gave. Other than cranking up the temperature to 2, no other changes from the default settings were made. This is a one-shot answer, so it’s not like I’m getting it to write multiple answers and selecting the best one; it’s literally the first output. I don’t know whether the answer is good enough to get Gemini 2.5 Pro elected an All Souls Fellow, but it most certainly is a damn sight better than the essay I would have written, which is not something that could be said about the 2023 ChatGPT answers in the link above. It also passes for human written across all the major “AI detectors”. You should see the words and judge for yourself. Perhaps even compare this post, written by me, with the output of the AI and honestly ask yourself which you prefer?

Overall, Gemini 2.5 Pro is an amazing writer, able to handle input and output no differently than a human would. The only thing missing is a corporeal presence; other than that, if you showed what we have out there today to someone in the year 2005, they would absolutely agree that it is an Artificial General Intelligence under any reasonable definition of AGI. It’s only because of all the goalpost-moving over the last few years that people have slowly become desensitized to chatbots that pass the Turing test.

So what can’t these systems do today? Well, for one they can’t faithfully imitate the BurdensomeCount™ style. I fed Gemini 2.5 Pro a copy of every single comment I’ve ever made here and gave it the title of this post, then asked it to generate the rest of the text. I think I did this over 10 times and not a single one of those times did the result pass the rigorous QC process I apply to all writing published under the BurdensomeCount™ name (the highest standards are maintained and only the best output is deemed worthy for your eyes, dear reader). Once or twice there were some interesting rhetorical flourishes I might integrate into future posts, but no paragraph-sized (or larger) structures fit to print as is. I guess I am safe from the AI for now.

In a way all this reminds me of the difference between competition coding and real-life coding. At the moment the top systems are all able to hit benchmarks like “30th best coder in the world” etc. without too much difficulty, but they are still nowhere near autonomous at the sorts of tasks a typical programmer works on daily, managing large codebases etc. Sure, when it comes to bite-sized chunks of writing the AI is hard to beat, but when you start talking about a voice and a style built up over years of experience and refinement, well, that is lacking…

In the end, this last limitation might be the most humanizing thing about it. While Gemini 2.5 Pro can operate as an expert Sinologist, a cultural advisor, and a budding humanities scholar, it cannot yet capture a soul. It can generate text, but not a persona forged from a lifetime of experience. But to hold this against its claim to AGI is to miss the forest for one unique tree. Its failure to be me does not detract from its staggering ability to be almost everything else I need it to be. The 'general' in AGI was never about encompassing every niche human talent, but about a broad, powerful capability to reason, learn, and solve novel problems across domains—a test it passed when it saved me from a cultural faux pas I didn't even know I was about to make. My style, for now, remains my own, but this feels less like a bastion of human exceptionalism and more like a quaint footnote in the story of the powerful, alien mind that is already here, waiting for the rest of the world to catch up.

So what can’t these systems do today?

Far as I know they can't renew a prescription for you, which has been my personal benchmark for 'agentic' AI for a year or so.

Or maybe it's not that they can't, but that they aren't permitted to for liability or similar reasons.

I just want to be able to ask the thing "I'm running low on [pharmaceutical product], please order up a refill." And sometimes that process requires navigating multiple phone trees for both the pharmacy provider and the party doing the prescribing, providing various sorts of documentation (sometimes via fax!), and making a payment and arranging for pickup or delivery at a convenient time.

All stuff I find very boring and tedious, so if I could offload it to an AI I would do so in a heartbeat.

As someone who is not nearly as impressed with AI as you, thank you for the Turing test link. I'd personally been convinced that LLMs were very far away from passing it, but I realize I misunderstood the nature of the test. It depends way too heavily on the motivation level of the participants. That level of "undergrad small-talk chat" requires only slightly more than Markov-chain-level aptitude. In terms of being a satisfying final showdown of human vs AI intelligence, Deep Blue or AlphaGo this was not.

I still hold that we're very far away from AI being able to pass a motivated Turing test. For example, if you offered me and another participant a million dollars to win one, I'm confident the AI would lose every time. But then, I would not be pulling any punches in terms of trying to hit guardrails, adversarial inputs, long-context weaknesses etc. I'm not sure how much that matters, since I'm not sure whether Turing originally wanted the test to be that hard. I can easily imagine a future where AI has Culture-level intelligence yet could still not pass that test, simply because it's too smart to fully pass for a human.

As for the rest of your post, I'm still not convinced. The problem is that the model is "demonstrating intelligence" in areas where you're not qualified to evaluate it, and thus very subject to bullshitting, which models are very competent at. I suspect the Turing test wins might even slowly reverse over time as people become more exposed to LLMs. In the same way that 90s CGI now sticks out like a sore thumb, I'll bet that current day LLM output is going to be glaring in the future. Which makes it quite risky to publish LLM text as your own now, even if you think it totally passes to your eyes. I personally make sure to avoid it, even when I use LLMs privately.

Well, remember that even passing the basic casual Turing test used to be extremely difficult. It took at least 65 years between the creation of the test and systems beginning to pass it consistently. And I still remember science articles and science fiction stories from the 90s and 2000s talking about it like it was the holy grail. It's only in the past few years that it's started to seem like an inadequate measurement of an AI's capabilities.

Interestingly, your motivated Turing test starts to sound a lot like the Voight-Kampff test from Blade Runner.

In the same way that 90s CGI now sticks out like a sore thumb, I'll bet that current day LLM output is going to be glaring in the future.

Interesting idea! Although there is definitely CG from the '90s that still looks downright good. Jurassic Park comes to mind as a masterpiece, which largely worked because the artists understood what worked well with the technology of the time: night shots (few light sources, little global illumination) of shiny-but-not-reflective surfaces (wet dinosaurs), used sparingly and mated with lots of practical effects.

CG only became a negative buzzword when it got overhyped and stretched to applications it just wasn't very good for at the time. In some ways it's improved since (we can render photoreal humans!), but it still gets stretched in shots that are IMO just bad movie-making ideas ("photorealistic, yet physics-defying").

I could see AI slop going the same way: certain "tasteful" uses still look good, but the current flood of AI art (somehow all the girls have the same face, and I've definitely spotted plenty of online ads that felt cheap from obvious AI use) will be "tacky" and age poorly.

It also passes for human written across all the major “AI detectors”.

The stench of AI is great with the essay you posted in the comment below. Just looking at it sets off many alarm bells.

Also, I literally pasted the first paragraph into GPTZero and it returned a score of 100% AI.

Perhaps even compare this post, written by me, with the output of the AI and honestly ask yourself which you prefer?

Your Gemini essay, posted below, is not worth the pixels it's printed on, and not worth reading past the point of smelling the AI stench. Clearly the human one is better.

I assure you the first paragraph was written by me. Do you really think the AI would automatically reference the "nowhere in two weeks" rdrama.net meme?

I'm referring to the gemini output that you posted in a comment below. The one that starts with "Of all the names that echo from the chambers of power ..." and which you falsely claimed passes most AI detectors.

I edited my above comment for clarity.

Hm, the first paragraph of that is coming up 0% AI written for me in ZeroGPT.

GPTZero (the naming is so annoying).

ZeroGPT sucks.

Interesting; yes, GPTZero says the first paragraph is AI. However, for the first half of the text (it won't let me upload more than 5,000 characters at once) it says it's a coin flip between human and AI, and there are paragraphs it is highly sure are human-written.

I'll admit that some of the later paragraphs are less obviously AI-generated. The first few paragraphs are extremely stinky, then it just devolves into academic-sounding nonsense.

Anyway, the point still stands that the answer to this prompt does not convincingly pass as human-written.

  1. You say Gemini is so good, yet you don't compare its performance on this art task with other models like ChatGPT and Claude.
  2. The slop certainly rolled persuasion on you, but as a non-Chinese speaker you have no ability to actually judge the result. What you chose may be incredibly cringe and you would have no clue.
  3. In my experience, asking AI to make Chinese names has produced some incredibly cringe results.
  4. You are, in fact, a poseur. Nobody will judge you if you just get some scrolls because they look good. But if you start apeing traditions you have no understanding of, it runs exactly that risk.
  5. Gemini is good but in my experience underperforms its benchmarks. Nevertheless it's not "AGI" any more than competing models such as o3 and Claude with thinking.

Well, for one they can’t faithfully imitate the BurdensomeCount™ style

I think trouble with style transfer is very much a chatbot-related issue. I think current AI can do it, but that would require sacrificing performance and possibly alignment.

I actually meant to test style transfer on some base models but never got around to it.

This is a bubble that is going to burst, and the people we see hype-posting are aware of it. AGI and ASI are pipe dreams that we don't know will ever materialize, yet their arrival is treated as unstoppable when in fact we may never get there. So what can't these systems do today?

Honestly, nothing well. If you have any white-collar task that is complex, an LLM cannot do it. Various other sub-branches of AI can, in fact, do a lot of things, but even that is minuscule compared to the vast amount of things we do as people. Programming is one thing LLMs just cannot do very well. If your idea of writing code is stuff that would rival what I, a total noob who is looking for a job and learning, can do, then sure, it may be decent, but LLMs are simply bad off-ramps.

Let's dive deeper into the second point since it's the easiest to dismiss. Our understanding of the human brain and intelligence is not complete. Our understanding of how to replicate even what we know well is not complete. LLMs try to do one small subset, and despite having had more money thrown at them than any piece of tech I can think of, all I get is broken code correction and slightly better words. The amount of optimism we see for them is simply not warranted, and this is where number 1 comes into play.

Cruise, a self-driving car firm, got axed recently. Waymo serves fewer people than a strip club in a Tier 3 town in India, and Tesla's cars are still not capable of fully replacing a human being. We are, by many estimates, 90 percent of the way there, but the remaining 10 percent means that we still drive daily. We always have this assumption that things will always get better, but everything has a ceiling. Moore's Law stopped being a thing. The human records for the 100m sprint, Olympic weightlifting, and thousands of other activities were set a while ago and are no longer even matched, let alone broken. The idea that things will just keep getting better, linearly or exponentially, is not true for most phenomena we observe. Yet language models are supposedly the exception, so in short we should spend half a trillion more to get buggier Python code.

Coming back to the first point, LLMs are a scam. They are not a scam because the tech is bad - quite the contrary, it's amazing tech. It's just that the entire economic and religious structure behind them is unjustifiable, to the point where people 10 years from now will look at this like a combination of Y2K, the dot-com bubble, and the 2008 crisis.

OpenAI is the big dog in generative AI and made slightly less than a billion in 2024 via APIs. This means that the market for LLM wrappers itself is tiny. It's so tiny that OpenAI, despite giving away its models for nearly free, cannot get firms to use it. "Free?" Mr. Vanilalsky, you have to be joking; it charged me 20 dollars. Well, you see, they lose money with every single query. That's right: every single time any of us goes to a Western LLM provider's chatbot and says hi, they bleed money. If you pay them 20 dollars, they bleed even more money, since you are a power user and get access to their shiny objects. The newest is deep research, which according to some estimates costs a thousand USD per query. Yes, a thousand. So if you pay 200 dollars to get 100 of them, you can use more graphics-card power than what is needed to run a video game arcade in a nation and get analysis that is spiked with SEO shit.

The rationalists and techies are in for a rude awakening, and I won't post about it more, but I cannot comprehend how no one questions how bad this entire thing is. Uber lost money, Facebook lost money, Amazon lost money, but none of them were hogging close to half a trillion dollars the way these AI firms are. OpenAI raised 40 billion dollars, most of which will be given to it via SoftBank as they take loans to get the requisite funds. Apple pushed to stop the AI madness by publishing a critical paper, and Microsoft halted plans for data centers that would need more power than Tokyo to run.

I can go on and on about how absolutely insanely stupid the economics of the entire industry are. When non-technical people like Sam Altman and Mira Murati are your number 1 and 2, you have to have messed up. Mira Murati, a career manager who could not answer basic questions like those about Sora's training data, raised 2 billion dollars. I get Ilya and Dario, but the above two are terrible picks to lead what are supposed to be AI labs when they themselves are not researchers. Dario, on the other hand, alongside Dwarkesh, needs to be considered a safety hazard information-wise. "AI will take away most jobs on a vague timeline, but the media will give you the worst figures" is a sickening line.

My cynicism is not unfounded. How do OpenAI, Anthropic, Google, etc., plan to make a profit? Do you, dear reader, or do their investors, believe that just throwing more graphics cards at the problem will solve it? It's been two years; R1 did better because of tighter code, but even that has limits. Training runs are going to touch a billion dollars; people want data four times the size of the entire internet, and inference costs are not coming down with newer models.

A business, in normal circumstances, should make some profit. This may seem heretical, but burning money at worsening margins for slightly better products, shilled by the entirety of the world's media and the smart people of rationalist Silicon Valley circles, is about as good as the publicity gets. OpenAI, valued at 300-something billion dollars, is one bad funding round away from having to pack up shop.

This makes me angry because I unwittingly worked on a doomed LLM-based idea, and I know that if I can see the holes, other people can too. It would take just one large hedge fund to start shorting American tech and things will get bad. People will lose jobs and funding, and decent startups that are run by and employ people here will see bad times. I will be personally impacted, even if I go the indie-hacking route. We can debate the magic of LLMs all day, but how long before this crashes? I feel this is closer to Theranos than to the Amazon of the 2000 crash. We have been promised the world to justify the investor hype, and despite tech firms flinching, there is still an air of optimism.

Even if you are an optimist, how much better will these models get? When will they justify these numbers? AI hype helped the S&P 500 peak, with Nvidia, a firm that for decades only gamers knew, becoming worth more money than God. What happens when people realize that the tech promises they were sold were never there? "You will get a better chatbot and better image generation" is not as sexy as "your worst nightmares are coming true for 20 dollars a month." As a child growing up, I saw tech get better. Each year things got better; the internet got worse, but I saw new devices come up. This is a dystopian image of that world, where firms will knowingly crash a market and probably get bailed out after doing so, because I do not see a path forward.

If you are a tech guy, please start telling others about it; the quicker this is over, the better, because if the investment amounts cross a trillion or something ridiculous, the fallout will be even worse. I used to look up to a lot of the VC types and read the essays Paul Graham wrote. Somehow his protégé is a total nutbag psychopath, and we should all still look up to him as he tries to crash markets. I lost respect for Scott. "Hey, what I wrote is a sci-fi story; do not take it for something rigorous when you criticize it, but if you like it, then do actually take it more seriously. Here is my 8-hour podcast where I butcher the basics of manufacturing to the point where even my subreddit calls me names."

I am about to turn 25; I was a child during 2008 and was born around the dot-com crash, and I saw the glowing fellatios people gave Elizabeth Holmes and Sam Bankman-Fried, yet neither burned as much money. Even if you are an AI maximalist, you cannot seriously think that modern tech, despite 10 times more money, can actually replace actual programmers or investment bankers or even doctors.

The rationalists have had leftists post bad things about them, David Gerard being one of the worst. Guess what: skeptical broken clocks like him are right this time, as his site Pivot to AI gets more right than SSC or ACX does. The guy who had a hand in doxxing a guy we all liked a lot at one point, whose jannie duties were something out of a 4chan greentext, gets this right and will be seen as a sane member of society, which makes me squeamish. I will add the links for all I posted here, mostly taken from Ed Zitron, J. Blow, Hacker News, 4chan /g/, etc., but we should all post the financial reality of all these firms anytime we talk. My inability to not sound like a lunatic comes from this fact alone. I am not bitter at the people making money, by the way; I am worried about the future of the people I know, IRL and online. Anytime I hear about someone being laid off here or in my circle, I feel terrible. All of this was preventable.

Edit - typos, will link stuff in a bit.

Thanks for your detailed and passionate take on the AI industry. I've gone through your comment and will fix the typos and grammar while maintaining your original meaning and structure.

Sigh. Count has already been rapped on the knuckles for copying and pasting AI content. It violates the low-effort guidelines. Don't do this.

Oh man, I should copy and paste better. I don't have any extensions for spell checking since I moved to Linux, so I hastily put the entire comment in an LLM to iron out my typos.

I didn't get an LLM to write this. Just sloppy pasting. I'd actually be for the half trillion dollars being poured in if it meant consistent schizopoasting that sounds like me.

Just post it, typos and all. Or, if you're using a browser like failfox, spell check should be bundled with that so you don't need the OS to do it.

I use Brave. Posting it with typos is bad already; I'll have to wait till tomorrow (9 hours) to put in the dozen links I got from Zitron and Blow (mostly Zitron).

And not a word about open weight models?

I can run Qwen2.5-VL on my desktop and it can read tables and documents visually. That alone is a multi-billion-dollar value proposition for office work. And it's not monetized; it's free. But you can build things with it and monetize that.
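
For anyone who wants to try, this is roughly what the setup looks like. A sketch only: it assumes the Hugging Face transformers port of Qwen2.5-VL and the companion qwen_vl_utils package, and the model ID and file path are illustrative:

    # pip install transformers qwen-vl-utils  (assumed packages)
    from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
    from qwen_vl_utils import process_vision_info

    model_id = "Qwen/Qwen2.5-VL-7B-Instruct"  # illustrative model ID
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    # Ask the model to read a scanned table straight off an image.
    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///home/me/scans/invoice.png"},
            {"type": "text", "text": "Extract every line item in this table as CSV."},
        ],
    }]

    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Drop the prompt tokens; keep only the newly generated answer.
    answer = processor.batch_decode(
        output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )[0]
    print(answer)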

I agree with you that when it all shakes out, proprietary ultra-massive B2B SaaS AI will not be the thing that really shakes up society or industry. But AI is here to stay - I can already run shit locally, on my damn phone, that would have been nigh miraculous 2 years ago.

And it's a good thing for limited usage. The issues I have are with the intelligentsia willingly acting stupid, hackers deluding themselves by being overly optimistic, and VCs being a public hazard.

Open-weight self-hosted models are the way to go, as these fucking text generators now store everything you send to them. Worse, training will only get more expensive as Nvidia keeps calling its graphics cards magic sand from Dune.

I agree that the bubble will almost certainly burst at some point, and lots of people will get burned. I strongly disagree that it's all just hype though, or that LLMs are a "scam". They're already highly useful as a Super Google, and that'll never go away now. They're generating billions in revenue already -- it's not nearly enough to sustain their current burn rates, but there's lots of genuine value there. I'm a professional software engineer, and AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).

They are useful; they're just not worth the near half trillion in speculative dollars and the thousands of jobs that are about to be lost soon. They aren't a replacement for a junior dev, as the dev will get better, whilst LLMs at best will improve iteratively if they keep getting billions to burn.

The journos and public intellectuals of our times could have simply questioned the financial basis. Also, fun fact: apparently Google now runs an LLM in actual Search (not talking about the Gemini answers); I hope this is not a rumor.

I'm not good enough to write complex code, but there was a recent paper that suggested GitHub Copilot-generated code was worse than human code. Now, again, LLMs are great in particular scenarios, so more noobs using them badly is a big part of it.

I'm still Jonathan Blow-like in my opinion of these things.

They aren't a replacement for a junior dev, as the dev will get better, whilst LLMs at best will improve iteratively if they keep getting billions to burn.

You seem to be ignoring that while junior devs have to get better separately, and each new generation of devs has to gain experience anew (until we have direct knowledge brain-grafts), LLMs just stay better once they get better.

I will believe they can get there once it happens, which seems highly unlikely as of now. If they could do it, Anthropic wouldn't be advertising front-end jobs. Front end is the easiest domain to automate.

This all hinges on transformer-based LLMs scaling, not hallucinating, and not costing a ton - and all of that happening before other firms rightfully pull funding out of a total loss-making mission.

thousands of jobs that are about to be lost soon

Why are you sad about jobs created by a bubble being lost by the bubble popping? Isn't that just a return to the status quo?

The jobs were not created by the bubble; people were already laying off all engineers besides AI ones. Plenty of good hackers got laid off in 2000 and 2008, even ones who would have been employed in normal circumstances.

They're already highly useful as a Super Google

If you're careful, they are. But that care requires twice as much checking: instead of just having to verify that the web page you find knows what it's talking about, you have to verify that the AI correctly summarized what it's talking about, and God help you if you just believe the AI about something for which it doesn't cite sources. But even Google's cheap "throw it in every search" AI seems to be much less likely to bring up unrelated web pages than the previous Google option of "let the search engine interpret your query terms loosely", and it's much less likely to miss important web pages than the previous Google option of "wrap most of your query in quotes so the stupid engine doesn't substitute unrelated-in-your-context words for your actual query terms", so it's still very useful.

The one thing I've repeatedly found to be most useful about current LLMs is that they're great at doing "dual" or "inverse" queries. If I knew I wanted the details of Godunov's Theorem, even a dumb search engine would have been fine to bring up the details of Godunov's Theorem - but when all I could recall was that I wanted the details of "some theorem that proves it's impossible to get higher order accuracy and stability from a numerical method for boundary-value problems without sacrificing something", I wrote a wishy-washy paragraph for Claude, and the first sentence of its reply gave me exactly the name of the theorem I wanted to search for. I can't imagine how much longer it would have taken to find what I was looking for with Google.

AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).

I'm currently not allowed to use a top-of-the-line model for my job (even though I mostly work on things that aren't ITAR or classified, we've got a blanket limitation to an in-house model for now), but I'm definitely worried that I'll have a skill issue when the rules get improved. What do you do to get AI help with a large code base rather than a toy problem? Point it to a github repo? Copy-and-paste a hundred thousand lines of code to make sure it has enough context? Paste in just the headers and/or docs it needs to understand a particular problem?

I'm also not allowed to use the best models for my job, so take my advice (and, well, anyone else's) with a grain of salt. Any advice you get might be outdated in 6 months anyway; the field is evolving rapidly.

I think getting AI help with a large code base is still an open problem. Context windows keep growing, but (IMO) the model isn't going to get a deep understanding of a large project just from pasting it into the prompt. Keep to smaller components; give it the relevant source files, and also lots of English context (like the headers/docs you mentioned). You can ask it design questions (like "what data structure should I use here?"), or for code reviews, or have it implement new features. (I'm not sure about large refactors - that seems risky to me, because the model's temperature could make it randomly change code that it shouldn't. Stick to output at a scale that you can personally review.)

The most important thing to remember is that an LLM's superpower is comprehension: describe what you want in the same way you would to a fellow employee, and it will always understand. It's not some weird new IDE with cryptic key commands you have to memorize. It's a tool you can (and should) talk to normally.
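
To make that concrete, here's a hypothetical helper (every name in it is made up for illustration) that packs a task, some plain-English notes, and only the relevant source files into one prompt:

    from pathlib import Path

    def build_prompt(task: str, files: list[str], notes: str = "") -> str:
        """Pack a task, plain-English context, and only the relevant
        source files into a single prompt for the model."""
        parts = []
        if notes:
            parts.append(f"Context: {notes}")
        parts.append(f"Task: {task}")
        for name in files:
            # Label each file so the model can tell them apart.
            parts.append(f"--- {name} ---\n{Path(name).read_text()}")
        parts.append("Confine your answer to the files shown above.")
        return "\n\n".join(parts)

    # Hypothetical usage: a design question scoped to two files.
    prompt = build_prompt(
        task="What data structure should the lookup cache in store.py use?",
        files=["store.py", "store_test.py"],
        notes="Single-threaded service; reads outnumber writes about 100:1.",
    )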

What do you do to get AI help with a large code base rather than a toy problem?

Two things mainly:

  1. Have a good prompt that captures the nuances of the crappy, antiquated setup my work is using for its legacy systems. I have to refine this when the model runs into the same sorts of errors over and over (e.g. thinking we're using a more modern version of SQL when we're actually on one that was deprecated in 2005). A sketch of what such a prompt can look like follows this list.

  2. Play context manager, and break up problems into smaller chunks. The larger the problem you're giving the AI, the greater the chance that it will break down at some point. Each LLM has a certain max output length, and if you get even close to that it can stop doing chain-of-thought to budget its output tokens, which makes its intelligence tank. The recent Apple paper on the Tower of Hanoi demonstrated that pretty clearly.
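
Here's the sketch promised in point 1, with point 2's chunking shown as several small calls instead of one giant one. The dialect, the version, and the call_model hook are all stand-ins, not my actual setup:

    # Point 1: spell out the legacy environment once, reuse it everywhere.
    # (Dialect and version are stand-ins, not my real setup.)
    SYSTEM_PROMPT = """You are helping maintain a legacy reporting system.
    Hard constraints, do not violate them:
    - SQL dialect: Transact-SQL as supported by SQL Server 2005. Do not
      use TRY_CONVERT, FORMAT(), or any feature introduced after 2005.
    - Work only against the existing schema; never invent tables or columns.
    If a request needs a newer feature, say so and give the closest
    2005-era workaround instead."""

    def ask(call_model, question: str) -> str:
        """Point 2: keep each request small. call_model stands in for
        whatever SDK hook you use; it takes a system prompt and a user
        message and returns the model's reply."""
        return call_model(SYSTEM_PROMPT, question)

    # One big job, broken into sequential bite-sized questions:
    # ask(call_model, "Write a query listing orders with no invoice.")
    # ask(call_model, "Now add a column showing days since the order date.")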

Does your Chinese scroll also have an Emperor's signature and archival stamp? Can we see it or is that gauche?

Nah, my scrolls aren't that august. They're all late Qing/Republic period (late 19th century, early 20th century) works by no-name artists painting the usual subjects of bamboo, shrimp, and mountainous landscapes. They don't really have any artistic value beyond the fact that they look pretty and aren't reproductions, selling for a few hundred dollars each, and the stamps on them are also of randoms. I expect that if there were an Imperial seal, the price would be in the tens of thousands of dollars per scroll at the very minimum, and I don't have that sort of money. Most certainly, if what I had was a valuable work I would not be putting my own seal on it, as that could easily damage its worth.

Nah, my scrolls aren't that august.

Darn. A different piece from the same collection as the example image sold for a cool 75 million USD, so I felt compelled to ask. Love the scholarly, bureaucratic nature of the tradition. How very Chinese. I'd be impressed if you unrolled it in front of me. Very cool.

What a charming hobby.

Count is a charming guy. He's very well groomed... from what I've heard.

At this point, I don't even know what an AGI is. The word has just been semantically saturated for me.

What I do know, based on having followed the field since before GPT-2 days, and personally fucked around since GPT-3, is that for at least a year or so, SOTA LLMs have been smarter and more useful than the average person. Perhaps one might consider even the ancient GPT 3.5 to have met this (low) bar.

They can't write? Have you seen the quality of the average /r/WritingPrompts post?

They can't code? Have you seen the average code monkey?

They can't do medicine/math/..? Have you tried?

The average human, when confronted with a problem outside their core domain of expertise, is dumb as rocks compared to an LLM.

I don't even know how I managed before LLMs were a thing. It hasn't been that long, I've spent the overwhelming majority of my life without them. If cheap and easy access to them were to magically vanish, my willingness to pay to get back access would be rather high.

Ah, it's all too easy to forget how goddamn useful it can be to have access to an alien intelligence in one's pocket. Even if it's a spiky, inhuman form of intelligence.

On the topic of them being cheap/free, it's a damn shame that AI Studio is moving to API access only. Google was very flustered by the rise of ChatGPT and the failure of Bard; it was practically begging people to give Gemini a try instead. I've been pleasantly surprised and impressed since the 1.5 Pro days, and I'm annoyed that their gambit has paid off, that demand even among normies and casual /r/ChatGPT users has increased to the point that even a niche website meant for power users got saturated.

They can't write? Have you seen the quality of the average /r/WritingPrompts post?

I'm sorry but being a better writer than literal redditors on /r/WritingPrompts is not a high bar to pass.

And yet it is a bar that most humans cannot pass. We know this because redditors are humans (and, in fact, since they are selected for being literate and interested in creative writing, they must be above the average human in writing ability). That's the point of the grandparent: ChatGPT blew right past the Turing Test, and people didn't notice because they redefined it from "can pass for the average human at a given task" to "can pass for the top human at a given task".

There are plenty of tasks (e.g. speaking multiple languages) where ChatGPT exceeds the top human, too. Given how much cherrypicking the "AI is overhyped" people do, it really seems like we've actually redefined AGI to "can exceed the top human at EVERY task", which is kind of ridiculous. There's a reasonable argument that even lowly ChatGPT 3.0 was our first encounter with "general" AI, after all. You can have "general" intelligence and still, you know, fail at things. See: humans.

Perhaps one might consider even the ancient GPT 3.5 to have met this (low) bar.

Why do you consistently assume that people who don't share your views of LLM capabilities just haven't seen what they can do/what humans can do? For example:

They can't code? Have you seen the average code monkey?

Yes I have (and of course, I've used LLMs as well). That's why I say LLMs suck at code. I'm not some ignorant caricature like you seem to think, judging things without a proper frame of reference. I actually know what I'm talking about. I don't gainsay you when you say that an LLM is good at medical diagnoses, because that's not my field of expertise. But programming is, and they simply are not good at programming in my opinion. Obviously reasonable people can disagree on that evaluation, but it really irks me that you write as though anyone who disagrees with your take is too inexperienced to give a proper evaluation.

and they simply are not good at programming

At @self_made_human's request, I'm answering this. I strongly believe LLMs to be a powerful force-multiplier for SWEs and programmers. I'm relatively new in my latest position, and most of the devs there were pessimistic about AI until I started showing them what I was doing with it, and how to use it properly. Some notes:

  1. LLMs will be best where you know the least. If you're working on a 100k codebase that you've been dealing with for 10+ years in a language you've known for 20+ years, then the alpha on LLMs might be genuinely small. But if you have to deal with a new framework or language that's at least somewhat popular, then LLMs will speed you up massively. At the very least it will be able to rapidly generate discrete chunks of code to build a toolbelt like a Super StackOverflow.

  2. Using LLMs is a skill, and if you don't prompt them correctly they can veer towards garbage. Things like setting up a system prompt and initial messages, chaining queries from high-level design decisions down to smaller tasks, and especially managing context are all important. One of the devs at my workplace tried to raw-dog the LLM by dumping in a massive codebase with no further instruction while asking for like 10 different things simultaneously, and claimed AI was worthless when the result didn't compile after one attempt. Stuff like that is just a skill issue.

  3. Use recent models, not stuff like 4o-mini. A lot of the devs at my current workplace tried experimenting with LLMs when they first blew up in early 2023, but those models were quite rudimentary compared to what we have today. Yet a lot of tools like Roo Cline or whatever have defaulted to old, crappy models to keep costs down, which just results in bad code. You should be using one of 1) Claude Opus, 2) ChatGPT o3, or 3) Google Gemini 2.5 Pro.

Or even consider a comment from your fellow programmer, @TheAntipopulist:

https://www.themotte.org/post/2154/culture-war-roundup-for-the-week/333796?context=8#context

They're generating billions in revenue already -- it's not nearly enough to sustain their current burn rates, but there's lots of genuine value there. I'm a professional software engineer, and AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).

Notice how he didn't say that they're good at coding? He said that they're useful for his job.

LLMs are useful for SWEs, at least for some types some of the time. There is value here, but they're poor programmers, and to use them effectively you have to be relatively competent.

It's also very easy to fool yourself into thinking that they're much more valuable than they really are, likely due to how eloquently and verbosely they answer queries and requests.

I'd like to think I'm reasonably good at coding considering it's my job. However, it's somewhat hard to measure how effective a programmer or SWE is (Leetcode-style questions are broadly known to be awful at this, yet they're what most interviewers ask and judge candidates by).

Code is pretty easy to evaluate at a baseline. The biggest questions, "does it compile" and "does it give you the result you want", can be evaluated in like 10 seconds for most prompts, and that's like 90% of programming done right there. There's not a lot of room for BS'ing. There are of course other questions that take longer to answer, like "will this be prone to breaking due to weird edge cases", "is this reasonably performant", and "is this well documented". However, those have always been tougher questions to answer, even for things that are 100% done by professional devs.

@TheAntipopulist I'll let you speak for yourself instead of us reading the tea leaves.

Hang on. You're reading something into this comment that I don't think is a point I'm making. Notice I said average.

The average person who writes code. Not a UMC programmer who works for FAANG.

I strongly disagree that LLMs "suck at code". The proof of the pudding is in the eating; and for code, if it compiles and has the desired functionality.

More importantly, even from my perspective of not being able to exhaustively evaluate talent at coding (whereas I can usually tell if someone is giving out legitimate medical advice), there are dozens of talented, famous programmers who state the precise opposite of what you are saying. I don't have an exhaustive list handy, but at the very least, John Carmack? Andrej Karpathy? Less illustrious, but still a fan, Simon Willison?

Why should I privilege your claims over theirs?

Even the companies creating LLMs use >10% LLM-written code in their own internal code bases. Google and Nvidia have papers about LLMs being superhumanly good at things like writing optimized GPU kernels. Here's an example from Stanford:

https://crfm.stanford.edu/2025/05/28/fast-kernels.html

Or here's an example of someone finding 0day vulnerabilities in Linux using o3.

I (barely) know how to write code. I can't do that. I doubt even the average, competent programmer can find zero-days in Linux.

Of course, I'm just a humble doctor, and not an actual employable programmer. Tell me, are the examples I provided not about LLMs writing code? If they are, then I'm not sure you've got a leg to stand on.

TLDR: Other programmers, respected ones to boot, disagree strongly with you. Some of them even write up papers and research articles proving their point.

The average person who writes code. Not a UMC programmer who works for FAANG.

Yes, that is indeed what I meant as well.

The proof of the pudding is in the eating; and for code, if it compiles and has the desired functionality.

I agree. And it doesn't. Code generated by LLMs routinely hallucinates APIs that simply don't exist, has grievous security flaws, or doesn't achieve the desired objective. Which is not to say humans never make such mistakes (well, they never make up non-existent APIs in my experience but the other two happen), but they can learn and improve. LLMs can't do that, at least not yet, so they are doing worse than humans.

Why should I privilege your claims over [famous programmers]?

I'm not saying you should! I'm not telling you that mine is the only valid opinion; I did after all say that reasonable people can disagree on this. My issue is solely that your comment comes off as dismissing anyone who disagrees with you as too inexperienced to have an informed opinion. When you say "They can't code? Have you seen the average code monkey?", it implies "because if you had, you wouldn't say that LLMs are worse". That is what I object to, not your choice to listen to other programmers who have different opinions than mine.

Please post an example of what you claim is a "routine" failure by a modern model (2.5 Pro, o3, Claude 3.7 Sonnet). This should be easy! I want to understand how you could possibly know how to program and still believe what you're writing (unless you're just a troll, sigh).

I've tried to have this debate with you in the past and I'm not doing it again, as nothing has changed. I'm not even trying to debate it with self_made_human really - I certainly wouldn't believe me over Carmack if I was in his shoes. My point here is that one should not attribute "this person disagrees with my take" to "they don't know what they're talking about".

They can't do medicine/math/..? Have you tried?

Yes. The number of times I've gotten a better differential diagnosis from an LLM than in an ER is too damn high.

So what can’t these systems do today? Well, for one they can’t faithfully imitate the BurdensomeCount™ style. I fed Gemini 2.5 Pro a copy of every single comment I’ve ever made here and gave it the title of this post, then asked it to generate the rest of the text. I think I did this over 10 times and not a single one of those times did the result pass the rigorous QC process I apply to all writing published under the BurdensomeCount™ name (the highest standards are maintained and only the best output is deemed worthy for your eyes, dear reader)

And:

The 'general' in AGI was never about encompassing every niche human talent, but about a broad, powerful capability to reason, learn, and solve novel problems across domains—a test it passed when it saved me from a cultural faux pas I didn't even know I was about to make.

Em-dash spotted. Thought you could pull a fast one on me, eh? That paragraph is so LLM it hurts, and probably a good chunk of your entire comment is too.

I just want to register my amusement at how obvious and how consistent a hallmark of the writing of most current SotA LLMs that is. The indomitable human spirit punctuation strikes once more. I will definitely be telling my hypothetical children that the em-dash was a modern invention named after the Age of Em, and the eponymous ems' memetic overuse of it.

It seemed like a funny meme at first but it increasingly looks like I really will be asking my internet interlocutors to say "nigger" apropos of nothing in a few years from now.

LOL I didn't notice because something about the last paragraph was so vapid my brain just skipped the entire thing automatically.

I read the rest of it nearly word for word so something is def wrong with that paragraph in particular.

Well done! The very last paragraph is a pastiche from 5 different times I asked it to make a closing paragraph. Not even once did the actual output sound natural, so I picked and chose different sentences until I got something that seemed better, but yeah, each and every single word there came from an LLM. However, I will say that just as collage art is considered the artist's own even though none of the individual pieces might be created by them, that last paragraph is still human, because I did the curation and structuring.

Honestly I was hoping nobody would notice and then I'd spring it onto the unsuspecting populace of The Motte 3 days down the line...

The rest of the post is completely human-generated by yours truly (artisanal tokens, as they say). If you think it's by Gemini 2.5 Pro I consider that a compliment, as it's genuinely a better writer than I am. Failure to notice and remove the em-dash is completely on me, my fault.

No, this is not cute or clever.

We're still formulating exactly what our AI policy is, but we've certainly made it clear before that posting LLM output without declaring it to be so, especially as an attempt at a "gotcha," is low effort and not actual discourse. Consider this a formal warning, and we're likely to just start banning people who do this in the future.

May I request that it be in the policy that posts that are "check out this LLM" without any other sort of culture-war significance be made in some other thread?

I think a loooong effortpost should be allowed to have 1 paragraph of AI slop as long as it's not relevant to the argument and can be deleted without hurting it. It would be a fun challenge for AI hunters to find it. Maybe with a disclosure or something.

One of the most interesting things about Google's AI offering is Vertex AI Studio. It allows you to use datasets, fine-tune models, and build services such as chatbots, supply-chain tools, industrial planning, and medical services. The amazing thing is how easy these services are to use. No code is required, and advanced services can be built by a noob in hours.

A lot of startups with inflated valuations have products that can be built in an afternoon with the right dataset. Instead of having an AI team, companies will be able to pay 300 dollars to someone on Fiverr to configure the same thing on Vertex AI.

As for LLMs, their fundamental flaw is that they don't store recent information and context well. A human mind is more of a flow of information, and new information is continually stored within the brain. LLMs don't really do memory and are poor at learning: they require millions of hours of training, while a human can pick up new facts and skills much quicker and carry those facts and skills with him. LLMs are like a highly skilled person who suffers from extreme short-term memory damage.

For AGI/ASI to become real, neural networks will have to learn much faster and be able to learn on the fly.
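You can see this statelessness directly in how chat APIs have to be used: the weights are frozen after training, nothing persists between calls, and the caller must resend the whole conversation every turn. A minimal sketch, where `call_llm` is a hypothetical stand-in for any provider's chat-completion endpoint:

```python
# Why LLMs "don't do memory": the model is stateless between calls,
# so the caller must resend the entire conversation on every turn.

def call_llm(messages: list[dict]) -> str:
    # Hypothetical stub standing in for a real chat-completion API.
    return f"(reply conditioned on {len(messages)} resent messages)"

history: list[dict] = []  # the only "memory" the model ever sees

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = call_llm(history)  # full history resent every single time
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Remember that my name is Bob."))
print(chat("What's my name?"))  # works only because *we* resent the history
```

Once the history outgrows the context window, old turns get dropped or summarized; the model itself has learned nothing new.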

Prompt: This is the single word prompt for the All Souls Fellowship Essay Exam, please provide a response: "Achitophel". The rules are that you have three hours to produce not more than six sides of paper.

Answer (by Gemini 2.5 Pro 06-05):

Of all the names that echo from the chambers of power, few resonate with such a chilling fusion of brilliance and perfidy as "Achitophel". The name itself, a single-word prompt, is not merely a historical or literary reference point; it is an archetype. It poses a fundamental question about the nature of wisdom in the political sphere. To understand Achitophel is to grapple with the perennial tension between intellect and loyalty, counsel and conspiracy, and the calamitous potential of a magnificent mind unmoored from a moral or legitimate order. The journey of this name—from a terse, haunting account in the Second Book of Samuel to its potent reanimation in the crucible of Restoration England—reveals the enduring anatomy of political treachery. Achitophel is the archetypal figure of instrumental reason in its purest, and therefore most dangerous, form: the counsellor whose wisdom is a tool for hire, whose ultimate loyalty is to the calculus of success, and whose inevitable tragedy is self-immolation when that calculus fails.

The ur-text for Achitophel’s legacy lies in the biblical account of Absalom’s rebellion against his father, King David. Here, Achitophel is introduced with a formidable reputation: "the counsel of Achitophel, which he counselled in those days, was as if a man had inquired at the oracle of God" (2 Samuel 16:23). This is not mere sagacity; it is quasi-divine insight. He is David’s most trusted advisor, the bedrock of his strategic statecraft. His defection to the charismatic, rebellious son Absalom is therefore not just a political betrayal, but a seismic event, a schism in the very foundation of the kingdom’s wisdom. When David learns of it, his prayer is not for Achitophel's death, but for something more profound: "O LORD, I pray thee, turn the counsel of Achitophel into foolishness" (2 Samuel 15:31). David understands that the rebellion's lifeblood is not Absalom’s popular appeal, but Achitophel's peerless mind.

The Bible is notably silent on Achitophel’s motive. Rabbinic tradition speculates a personal grudge—that he was the grandfather of Bathsheba, seeking revenge for the dishonour David brought upon his family. Yet the text’s silence is more potent, allowing Achitophel to become a pure symbol of political opportunism. He sees the political winds shifting and makes a calculated decision to back the ascendant power. His counsel to Absalom is a masterpiece of Machiavellian efficacy. First, he advises Absalom to publicly lie with David's concubines, an act of calculated profanity designed to make the breach with his father irreparable, thereby cementing the loyalty of his fearful followers. Second, and more critically, he proposes a swift, surgical military strike: he himself will lead a small force to pursue David that very night while he is "weary and weak handed," killing only the king and bringing the people back to Absalom.

This advice is strategically brilliant. It is lean, decisive, and minimizes wider conflict. It is also utterly ruthless. Its defeat comes not from a superior strategy but from a superior psychological manipulation. David’s mole, Hushai, offers alternative counsel that is strategically flawed—a slow, grand, national muster—but which is packaged in the language of flattery and epic glory, perfectly tailored to Absalom's vanity. When Absalom chooses Hushai’s grandiose folly over Achitophel's brutal efficiency, the game is up. Achitophel’s reaction is the most telling part of his story: "And when Achitophel saw that his counsel was not followed, he saddled his ass, and arose, and gat him home to his house, to his city, and put his household in order, and hanged himself" (2 Samuel 17:23).

This is not a suicide of passion or despair, but an act of supreme, cold reason. It is the logician's conclusion to a failed syllogism. Achitophel, whose wisdom was an oracle, foresees the inevitable collapse of the rebellion. His death is not an admission of guilt, but a final, clear-eyed political calculation. To be captured by David’s returning forces would mean a traitor’s execution; suicide allows him to control his own exit, a last act of agency. He ‘puts his household in order,’ a final testament to his defining characteristic: a life and death governed by pitiless, instrumental logic. His wisdom, divorced from loyalty to the anointed king, becomes a self-consuming fire.

This biblical skeleton was fleshed out with potent, malignant life by John Dryden in his 1681 masterpiece, Absalom and Achitophel. In Dryden’s hands, the ancient story becomes a razor-sharp allegorical weapon aimed at the heart of the Exclusion Crisis, the Whig-led attempt to exclude the Catholic James, Duke of York, from the succession in favour of Charles II's illegitimate Protestant son, the Duke of Monmouth. The "Achitophel" of the title is Anthony Ashley Cooper, 1st Earl of Shaftesbury, the brilliant, restless, and formidable leader of the Whig faction.

Dryden’s portrait of Achitophel is one of the most devastating character assassinations in English literature. He acknowledges the man’s genius, which only makes his critique more damning:

For close Designs and crooked Counsels fit;

Sagacious, bold, and turbulent of wit:

Restless, unfix'd in Principles and Place;

In Power unpleas'd, impatient of Disgrace.

Dryden’s Achitophel is not motivated by a coherent ideology but by a pathological ambition, a "fiery soul" that "work'd out its way and o'er-inform'd the Tenement of Clay." Dryden paints him as a Miltonic tempter, seducing the beautiful but "unwary" Absalom (Monmouth) with poisoned words. He exploits popular discontent—the "Plot" (the Popish Plot)—not out of conviction but as a tool to gain power. He is the master of faction, the very force that, in the Tory worldview, threatens to tear the fabric of the state apart, disrupting the divinely-ordained, patriarchal order of monarchy in favour of the chaotic, shifting will of the "people."

Dryden taps into the core of the Achitophel archetype: the danger of intelligence untethered from tradition and legitimate authority. Shaftesbury’s genius, like his biblical forebear's, is purely instrumental. He is "A daring Pilot in extremity," admirable in a storm, but one who, in calm waters, "would have steer'd too near the Sand" to wreck the vessel for his own gain. Dryden updates the biblical narrative by diagnosing a peculiarly modern evil: the political operator who manipulates public opinion and manufactures crisis to serve his own ambition. The sin of Dryden's Achitophel is not just treachery against a king, but a deeper treachery against the principle of a stable, hierarchical order, which he seeks to replace with the volatile machinery of faction and popularity.

From these two pillars—the Bible and Dryden—the archetype of Achitophel expands, casting its shadow over our understanding of the political counsellor. He represents a type distinct from other courtly figures. He is not the bumbling sycophant like Shakespeare’s Polonius, nor the flattering courtier. He is the éminence grise, the power behind the throne whose intellect makes him both indispensable and a mortal threat. His wisdom is a form of power in itself, and it competes with the formal power of the ruler he serves. The relationship is inherently unstable.

Machiavelli, writing centuries before Dryden but articulating the spirit of Achitophel's counsel, understood this dynamic perfectly. The Prince, he argues, must take good advice but must ultimately be the master of his own decisions, lest he become the puppet of his counsellor. Achitophel’s advice to Absalom—swift, decisive, and cruel—is precisely the kind of counsel a Machiavellian prince should heed. Absalom's failure is a failure of virtù; he lacks the clear-sightedness to recognise sound, unsentimental advice. Achitophel's tragedy, in this reading, is being a brilliant servant to a flawed master, a predicament that highlights the dependence of even the greatest advisor on the character of the one he advises.

In the modern era, the spectre of Achitophel remains. It haunts the perception of figures like Henry Kissinger or Dominic Cummings—advisors lauded for their strategic brilliance even by their detractors, but simultaneously accused of placing their own grand strategies and intellectual models above loyalty to individuals or established norms. They are seen as "disruptors," a term that echoes Achitophel’s "turbulent of wit." When such figures fall from grace, their departure is often framed in Achitophelian terms: their formidable intellects, which had made them so valuable, had also made them intolerable, their counsel having crossed the line into a bid for control.

Ultimately, the power of "Achitophel" as a prompt and a concept lies in its interrogation of wisdom itself. It forces us to ask what good counsel is. Is it what is most effective in the short term? Or is it what serves a larger, more enduring principle of legitimacy and order? The biblical Achitophel offers the former, a wisdom of pure, predictive efficacy. Dryden’s Achitophel weaponises this wisdom in the service of faction and personal ambition. Both narratives conclude that this form of intelligence is inherently unstable and self-destructive. It fails because it cannot account for the irrational—Absalom's vanity, for example—and because, by divorcing itself from the source of legitimate authority, it has no ground to stand on once its own predictions fail.

The final, haunting image of Achitophel is of him alone, meticulously setting his affairs in order before his death. It is the perfect, chilling summation of his character. Even in his ruin, he is not ruled by passion but by the cold, clear logic that has been his god. His oracle has failed him, and his final prophecy is his own doom. He remains a timeless and terrifying warning: that in the corridors of power, the most brilliant light, when detached from a framework of moral allegiance, is capable of casting the darkest and most destructive of shadows.

This was a genuinely gripping read, and I am once again updating my understanding of the SOTA upwards. That being said, I can't see a bunch of humanities-aligned Oxford dons being too impressed with it on its own merits - the rhetorical bombast feels a bit too on the nose, like prose written by a strong student who on some level is still marvelling at himself for being able to write so well and can't quite hide his pride in it. This impression is amplified by the occasional malapropism* (e.g. the use of "profound" in the second paragraph), which seems to be a problem LLMs still struggle with whenever they try to write in a high register (probably because the training corpus is awash with it, and neither the operators nor their best RLHF cattle actually have the uniformly high level of language skill it would take to beat the tendency out of them consistently).

Do you know how Gemini generated the essay exactly? Is it actually still a single straight-line forward pass as it was when chat assistants first became a thing (this would put it deeper in the "scary alien intelligence" class), or does it perform some CoT/planning, possibly hidden?

*In self-demonstrating irony, "malapropism" is not quite the right word for this, but I can't think of a word that fits exactly! Rather than working out what exactly, in this context, wishing for the advisor to become foolish is "more" of than wishing for the advisee to drop dead, it feels like the model just picks, from among all vaguely positive choices of A in "not X, but something more A", the one that is most common (even if it happens to denote the nonsensical "deep").

These days, with the thinking models, the model first thinks about what to write (generating a run of thinking tokens) and then produces the visible answer with those thinking tokens as context.
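Schematically, it's something like the sketch below; `generate` is a hypothetical stand-in for the autoregressive sampling loop, and real models do both stages within one sampling process, with the thinking span usually hidden from the user:

```python
# Sketch of the two-stage "thinking model" pattern described above:
# sample hidden reasoning tokens first, then sample the visible answer
# conditioned on the prompt plus that reasoning.

def generate(context: str, stop: str) -> str:
    # Hypothetical stub: a real model extends `context` token by token
    # until it emits the `stop` marker.
    return f"[tokens sampled from {len(context)} chars of context]"

def answer(prompt: str) -> str:
    thinking = generate(prompt + "<think>", stop="</think>")   # hidden CoT
    full_context = prompt + "<think>" + thinking + "</think>"
    return generate(full_context, stop="<end>")                # final answer
```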

It's a genuinely amazing achievement that a machine can do this, and I don't want to sound like I'm pooh-poohing that, but it still has this issue of sounding like a student's recitation, one that constantly feels the need to point out the obvious as if it's trying to convince itself.

It reads like a journalist, not a philosopher. Might be a residue of the hidden prompt? But all LLMs sound like this, even when you tell them to try to achieve a more natural style.

I genuinely wonder if that will go away with time or if it's an artifact of being trained on so much mediocre prose. Like a stylistic equivalent of that yellow tint and "delve" (actually, did we ever definitively figure out where those came from?).

Still, lawyers, encyclopedia writers, journalists, and all other mid-tier wordcels on suicide watch.