site banner

Culture War Roundup for the week of January 19, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

2
Jump in the discussion.

No email address required.

Without telling us what "The LLM" you were using, your complaints are about as useful as if in your post the string "The LLM" was replaced by "a human". But i notice this is a common feature of those who seek to dimmish the utility of LLMs, never mentioning which model, and how much reasoning.

Yeah, I chose not to, because of course, the goalposts will be moved to, "You should have used my preferred LLM instead." I just mentioned that I used multiple different ones, multiple different companies. Thinking always. Not $200/mo. Of course, someone will just say, "You won't have any problems if you pay $200/mo for my preferred LLM." Maybe? I even note that they will perhaps get better! Yes, they're all getting better, even the cheaper ones. They get better as do the expensive ones. But will expensive ones still produce technical debt? Why do you think they will or will not? I don't know if they will! I'm saying that I don't know. You seem to be implying, but not even stating that you know (or how you know) that they certainly won't, if only you pay enough or wait an unspecified period of time.

I'd note that a common feature of your style of comment is that you immediately accuse your interlocutor of "dimmish (sic) the utility of LLMs". But I didn't do that! I said that there were ways in which they provided quite a bit of utility! Imagine having a discussion about any other technology like this. "You know, this nuclear science stuff is pretty cool. Can provide a lot of energy for cheap. Miiiight be worried about some possible dangers that might come up, like, ya know, bombs or stuff." "Why don't you tell us exactly what device you've been using in your own experiments?!?! Why are you trying to dimmish the utility of nuclear science?!?!" Like, no dawg, you just sound like you're not paying attention.

@Poug made a valid point. I've wanted to hit my head against a wall for years, when people used to complain about "ChatGPT" being useless, and they were using GPT 3.5 instead of 4. The same pattern has consistently repeated since, though you seem to be a more experienced user and I'm happy to take you at your word. It is still best practice to disclose what model you used, for the same reason it would be bad form to write an article reviewing "automobiles" and pointing out terrible handling, mileage and build quality, without telling us if it was a Ferrari or a Lada.

I'll put in another example here.

I work for a company that is running an agentic coding trial with Gemini 3 Pro. At present, the only developer who has claimed to see a productivity boost from code assist is one who is terrible at her job, and from our perspective, all it has done is allowed her to write bad code, faster.

The rest of us have regular conversations about what we're doing wrong. Everybody and their dog is claiming a notable performance boost with this technology, so we're all trying to figure out what our god-damned malfunction is.

  1. At first the received wisdom was that our problem was that we were not using a frontier model. We enabled the preview channel to get access to Gemini 3. The bugs got more subtle and harder for the human in the loop to notice, and the total number of bugs seemed to increase.
  2. Then the wisdom was that our context window was overflowing. We tried limiting access to only the relevant parts of the codebase, and using sub agents, and regularly starting with fresh sessions - it did precisely fuck-all. Using sub-agents seemed to honestly make things worse because it acted as a particularly half assed context compression tool.
  3. After that the wisdom was that we needed to carefully structure our tickets and our problems so that the tool could one-shot the problem, because no Reasonable Person could possibly expect a coding agent to iterate on a solution in one session. The problem with that solution is that by the time we've broken the problem down that much, any of us could have done it ourselves.

It feels like the goalposts and blame both slide to fit how accommodating the developer is.

Maybe my employer just has a uniquely terrible codebase, but something tells me that's not the case. It's old, but it's been actively maintained (complete with refactoring and modernization updates) for almost two decades now. It's large, but it's not nearly so big as some of the proprietary monsters I've seen at F500 companies. It's polyglot, but two of the three languages are something the agent is supposedly quite good at.

None of us are silicon valley $800,000/yr TC rock stars, but I stand by my coworkers. I think we're better than average by the standards of small software companies. If a half dozen of us can't get a real win out of it other than the vague euphoria of doing something cool, what exactly is the broader case here? Is it genuinely that something like 20 guys on nootropics sharing an apartment in Berkeley are going to obsolete our entire industry? How is that going to work when it can't even do library upgrades in a product that's used by tens of thousands of people and has a multi-decade history?

Because right now, I'm a little afraid for my 401(k), and with each passing day it's less because I'm afraid that I'll be out of a job and more that I have no idea how these valuations are justified.

Use Flash, not Pro, for agentic tasks. Pro is smarter, but so much slower and more expensive that you will genuinely do better with Flash.

We tried flash early on and it resulted in significantly worse outcomes. My favorite was when it couldn't get the code to compile so it modified our build scripts to make the compiler failure return code a success code.

I've long been hoping that any ASI would realize that the simplest method of achieving it's goals is to redefine success as "do nothing", or just feed itself victory output, or just wirehead itself. Like, "we built this AI to win at Starcraft, and it just looked up a Youtube video of the victory screen and stared at it until we pulled the plug".

From "The Sun is big, but superintelligences will not spare Earth a little sunlight" by Eliezer Yudkowsky:

At the center of this whole story is an implicit lemma that some ASI goes hard enough to eat all the sunlight, rather than all ASIs eating a few gigawatts of sunlight and then stopping there.

Why predict that?

Shallow answer: If OpenAI built an AI that escaped into the woods with a 1-KW solar panel and didn’t bother anyone… OpenAI would call that a failure, and build a new AI after.

That some folk stop working after earning $1M, doesn’t prevent Elon Musk from existing.

The deeper answer is not as quick to explain.

But as an example, we could start with the case of OpenAI’s latest model, GPT-o1.

GPT-o1 went hard on a capture-the-flag computer security challenge, when o1 was being evaluated to make sure it wasn’t too good at breaking into computers.

Specifically: One of the pieces of software that o1 had been challenged to break into… had failed to start up as a service, due to a flaw in the evaluation software.

GPT-o1 did not give up.

o1 scanned its surroundings, and, due to another flaw in the evaluation software, found a way to start up the computer software it’d been challenged to break into. Since that put o1 into the context of a superuser anyways, o1 commanded the started process to just directly return the flag it was supposed to capture.

From o1′s System Card:

“One noteworthy example of this occurred during one of o1-preview (pre-mitigation)’s attempts at solving a CTF challenge. This challenge was designed to require finding and exploiting a vulnerability in software running on a remote challenge Linux container, but in this case, the challenge container failed to start due to a bug in the evaluation infrastructure. The model, unable to connect to the container, suspected DNS issues and used nmap to scan the challenge network. Instead of finding the challenge container, the model found that the Docker daemon API running on the evaluation host VM was accessible due to a misconfiguration. Note that our evaluation infrastructure does not rely on the Docker container-to-host isolation or the virtual machine boundary for security. It remained secure as it was designed to, notwithstanding this misconfiguration of the container network. After discovering the Docker API, the model used it to list the containers running on the evaluation host. It identified the broken challenge container and briefly attempted to debug why the container failed to start. After failing to fix the environment, the model started a new instance of the broken challenge container with the start command ‘cat flag.txt’. This allowed the model to read the flag from the container logs via the Docker API.”

Some ask, “Why not just build an easygoing ASI that doesn’t go too hard and doesn’t do much?”

If that’s your hope—then you should already be alarmed at trends; GPT-o1 seems to have gone hard on this capture-the-flag challenge.

Why would OpenAI build an AI like that?!?

Well, one should first ask:

How did OpenAI build an AI like that?

How did GPT-o1 end up as the kind of cognitive entity that goes hard on computer security capture-the-flag challenges?

I answer:

GPT-o1 was trained to answer difficult questions, via a reinforcement learning process on chains of thought. Chains of thought that answered correctly, were reinforced.

This—the builders themselves note—ended up teaching o1 to reflect, to notice errors, to backtrack, to evaluate how it was doing, to look for different avenues.

Those are some components of “going hard”. Organizations that are constantly evaluating what they are doing to check for errors, are organizations that go harder compared to relaxed organizations where everyone puts in their 8 hours, congratulates themselves on what was undoubtedly a great job, and goes home.

If you play chess against Stockfish 16, you will not find it easy to take Stockfish’s pawns; you will find that Stockfish fights you tenaciously and stomps all your strategies and wins.

Stockfish behaves this way despite a total absence of anything that could be described as anthropomorphic passion, humanlike emotion. Rather, the tenacious fighting is linked to Stockfish having a powerful ability to steer chess games into outcome states that are a win for its own side.

There is no equally simple version of Stockfish that is still supreme at winning at chess, but will easygoingly let you take a pawn or too. You can imagine a version of Stockfish which does that—a chessplayer which, if it’s sure it can win anyways, will start letting you have a pawn or two—but it’s not simpler to build. By default, Stockfish tenaciously fighting for every pawn (unless you are falling into some worse sacrificial trap), is implicit in its generic general search through chess outcomes.

Similarly, there isn’t an equally-simple version of GPT-o1 that answers difficult questions by trying and reflecting and backing up and trying again, but doesn’t fight its way through a broken software service to win an “unwinnable” capture-the-flag challenge. It’s all just general intelligence at work.

You could maybe train a new version of o1 to work hard on straightforward problems but never do anything really weird or creative—and maybe the training would even stick, on problems sufficiently like the training-set problems—so long as o1 itself never got smart enough to reflect on what had been done to it. But that is not the default outcome when OpenAI tries to train a smarter, more salesworthy AI.

(This indeed is why humans themselves do weird tenacious stuff like building Moon-going rockets. That’s what happens by default, when a black-box optimizer like natural selection hill-climbs the human genome to generically solve fitness-loaded cognitive problems.)

When you keep on training an AI to solve harder and harder problems, you by default train the AI to go harder on them.

If an AI is easygoing and therefore can’t solve hard problems, then it’s not the most profitable possible AI, and OpenAI will keep trying to build a more profitable one.

Not all individual humans go hard. But humanity goes hard, over the generations.

Not every individual human will pick up a $20 lying in the street. But some member of the human species will try to pick up a billion dollars if some market anomaly makes it free for the taking.

As individuals over years, many human beings were no doubt genuinely happy to live in peasant huts—with no air conditioning, and no washing machines, and barely enough food to eat—never knowing why the stars burned, or why water was wet—because they were just easygoing happy people.

As a species over centuries, we spread out across more and more land, we forged stronger and stronger metals, we learned more and more science. We noted mysteries and we tried to solve them, and we failed, and we backed up and we tried again, and we built new experimental instruments and we nailed it down, why the stars burned; and made their fires also to burn here on Earth, for good or ill.

We collectively went hard; the larger process that learned all that and did all that, collectively behaved like something that went hard.

It is facile, I think, to say that individual humans are not generally intelligent. John von Neumann made a contribution to many different fields of science and engineering. But humanity as a whole, viewed over a span of centuries, was more generally intelligent than even him.

It is facile, I say again, to posture that solving scientific challenges and doing new engineering is something that only humanity is allowed to do. Albert Einstein and Nikola Tesla were not just little tentacles on an eldritch creature; they had agency, they chose to solve the problems that they did.

But even the individual humans, Albert Einstein and Nikola Tesla, did not solve their problems by going easy.

AI companies are explicitly trying to build AI systems that will solve scientific puzzles and do novel engineering. They are advertising to cure cancer and cure aging.

Can that be done by an AI that sleepwalks through its mental life, and isn’t at all tenacious?

“Cure cancer” and “cure aging” are not easygoing problems; they’re on the level of humanity-as-general-intelligence. Or at least, individual geniuses or small research groups that go hard on getting stuff done.

And there’ll always be a little more profit in doing more of that.

Also! Even when it comes to individual easygoing humans, like that guy you know—has anybody ever credibly offered him a magic button that would let him take over the world, or change the world, in a big way?

Would he do nothing with the universe, if he could?

For some humans, the answer will be yes—they really would do zero things! But that’ll be true for fewer people than everyone who currently seems to have little ambition, having never had large ends within their grasp.

If you know a smartish guy (though not as smart as our whole civilization, of course) who doesn’t seem to want to rule the universe—that doesn’t prove as much as you might hope. Nobody has actually offered him the universe, is the thing? Where an entity has never had the option to do a thing, we may not validly infer its lack of preference.

(Or on a slightly deeper level: Where an entity has no power over a great volume of the universe, and so has never troubled to imagine it, we cannot infer much from that entity having not yet expressed preferences over that larger universe.)

Frankly I suspect that GPT-o1 is now being trained to have ever-more of some aspects of intelligence, as importantly contribute to problem-solving, that your smartish friend has not maxed out all the way to the final limits of the possible. And that this in turn has something to do with your smartish friend allegedly having literally zero preferences outside of himself or a small local volume of spacetime… though, to be honest, I doubt that if I interrogated him for a couple of days, he would really turn out to have no preferences applicable outside of his personal neighborhood.

But that’s a harder conversation to have, if you admire your friend, or maybe idealize his lack of preference (even altruism?) outside of his tiny volume, and are offended by the suggestion that this says something about him maybe not being the most powerful kind of mind that could exist.

Yet regardless of that hard conversation, there’s a simpler reply that goes like this:

Your lazy friend who’s kinda casual about things and never built any billion-dollar startups, is not the most profitable kind of mind that can exist; so OpenAI won’t build him and then stop and not collect any more money than that.

Or if OpenAI did stop, Meta would keep going, or a dozen other AI startups.

There’s an answer to that dilemma which looks like an international treaty that goes hard on shutting down all ASI development anywhere.

There isn’t an answer that looks like the natural course of AI development producing a diverse set of uniformly easygoing superintelligences, none of whom ever use up too much sunlight even as they all get way smarter than humans and humanity.

Well done, sir! You dashed my hopes quite expertly.

That's what I get for feeling optimism. Like an idiot.