Culture War Roundup for the week of March 30, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I don't know if I would call myself an AI sceptic, but I haven't seen a huge win from agentic coding in my professional life.

I work in Java - LLMs seem to do better with python or typescript.

I work on a legacy codebase - LLMs tend to do better with greenfield.

I work on a large codebase that is architected as a monolith - in my experience, the odds of an LLM shitting the bed begin to rise after about 15,000 lines and approach 100% after about a million lines.

I work on a codebase that has a surprising amount of non-CRUD code. LLMs get confused by that - especially when it's similar to stuff on GitHub, but not identical.

Quite a few of our customers operate in a regulated industry, and LLMs absolutely make shit up about regulatory compliance right now.

Overall, I don't think that my job is at risk, but I do have some concerns that somebody might vibe up a competitor that can eat enough of our customer base to knock us out of profitability. After the Delve fiasco, they might just straight up lie about compliance and temporarily capture some of the regulated customers as well.


Moving on from my personal experience, I have two acquaintances who are deep in agentic mania right now.

The first is not a professional programmer, but he has always wanted to use programming to achieve specific goals in his personal hobbies. Claude Code has been an absolute godsend for him. He's writing things that don't need to scale, don't really need to perform, and have no real consequences for incorrectness. From his perspective, the God-Machine is here to electro-immanentize the cyber-eschaton and the techno-rapture is nigh. The "ha ha you're gonna be out of a job and die in a gutter ha ha"-coded jibes I've gotten from him have started to wear on me. It's a perfect example of "the agent is only bad at things where I have personal expertise" playing out right in front of my eyes.

The second is a professional programmer, and his employer is going all in on agentic coding. They're actively tracking how many tokens each person is burning and actually using AI detectors in reverse to make sure that PRs are sufficiently crammed full of AI code. They're having huge problems because the agents are getting stuck in endless loops, unable to figure out how to write code that passes their pre-existing automated test suites. In the end, they're actually considerably less productive, but line go up. He's stuck with it, so he's desperately trying to make things not suck. He's convinced that there must be some secret sauce that makes the agents write quality code and not descend into iterative schizophrenia every time they encounter a ticket that's more complex than "change the color of this CSS class". He's spent dozens of hours of his own time trying to figure it out, and he had, until recently, been absolutely convinced he could make it work. Just one more bit of prompting - just a few more custom skills and it would do what all the boosters promised. He finally broke down recently and had a full-blown crisis after he looked at Steve Yegge's Gas Town build pipeline. The damned thing basically never passes. Steve Yegge, the guy who is both highly technical and absolutely sold on the future of agentic coding, can't consistently get this shit to work. At that point my acquaintance called it all nonsense and gave up. He's doing the absolute bare minimum at work that he needs to do in order not to get fired, and he's waiting for the tool chain to stabilize.

I'm not really sure where I'm going with this, but the three different experiences are interesting.

actually using AI detectors in reverse to make sure that PRs are sufficiently crammed full of AI code

Holy hell, I'm hoping my employer doesn't try this. They're already tracking our AI usage pretty thoroughly.

Their star performer hasn't actually generated a deliverable yet. He spent over $7,000 in API usage in one month doing something something training agents for HR something something. He doesn't even have a document outlining the plan.