
Culture War Roundup for the week of April 6, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Alright, AI bros, follow-up from last week. I was able to secure access to Claude Opus 4.6 at my job, and I gave it the same prompt I had given to Sonnet. This time, the code it generated overlooked the authentication part of the HTTP client library completely. In a follow-up I asked it to extract out the common logic for the authentication portions specifically. It didn't do that; instead it generated a class with two helper methods.

The first helper method was just a thin wrapper around System.Text.Json for deserializing the response. There's an optional flag to pass in for when case-insensitive deserialization is needed, and nothing else.

The second helper method was for actually making the HTTP calls. The strangest part of this one is that it takes two delegates as parameters: one for deserializing successful responses, the other for handling (but not deserializing) error responses. It did nothing to split out handling of the two different ways to authenticate.
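For readers who don't write C#, here is a rough sketch of the shape being described. This is my hypothetical reconstruction, not the actual generated code; every name and signature here is made up for illustration:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Hypothetical reconstruction of the two generated helpers.
public static class ApiHelpers
{
    // Helper 1: a thin wrapper over System.Text.Json with an
    // optional case-insensitivity flag, and nothing else.
    public static T? Deserialize<T>(string json, bool caseInsensitive = false)
    {
        var options = new JsonSerializerOptions
        {
            PropertyNameCaseInsensitive = caseInsensitive
        };
        return JsonSerializer.Deserialize<T>(json, options);
    }

    // Helper 2: makes the HTTP call, taking two delegates: one to
    // deserialize success responses, one to handle (but not
    // deserialize) error responses. Note there is no trace of the
    // two authentication schemes the prompt asked to be factored out,
    // and it treats 200 as the only success code.
    public static async Task<T?> SendAsync<T>(
        HttpClient client,
        HttpRequestMessage request,
        Func<string, T?> onSuccess,
        Action<HttpResponseMessage> onError)
    {
        var response = await client.SendAsync(request);
        if (response.StatusCode == HttpStatusCode.OK)
        {
            var body = await response.Content.ReadAsStringAsync();
            return onSuccess(body);
        }
        onError(response);
        return default;
    }
}
```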

The issues with what was generated (for both the API client as a whole and for the authentication part of the code specifically) are numerous; here is a small handful that I identified:

  1. It assumes that an HTTP 200 code is the only successful response code, even though some endpoints return 202, 207, and more.

  2. It assumes that all endpoints return plaintext or JSON content, even though several return binary data, CSV data, etc.

  3. It didn't do null checking in several places. I assume it was mostly trained on C# code that either didn't do null checks correctly or didn't use the nullable reference type feature added in C# 8 (back in 2019). Either way, the null checks are missing or wrong whether nullable reference types are enabled or disabled. It also always checks for null with == or !=. This works 99% of the time, but best practice is to use "is null" and "is not null" to cover the rare cases where the equality operator is overloaded. Once again, I assume this is because most of the training data uses == and !=.

  4. It doesn't handle URL query parameters (nor path parameters); it assumes every request is going to use a JSON body.

  5. It uses the wrong logging templates for several of the logging calls. For example, the logs for an error response use the log template for logging the requests that are sent. Even more troubling is that it removed all the logic for stripping user secrets out of these logs.
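To make point 3 concrete for non-C# readers: `is null` compiles to a raw reference comparison, while `== null` dispatches to any user-defined equality operator, which can lie. A minimal self-contained sketch (the class is contrived for illustration, but types with surprising equality overloads exist in real libraries, famously UnityEngine.Object):

```csharp
using System;

// A contrived class whose overloaded == claims everything is equal.
class Sneaky
{
    public static bool operator ==(Sneaky a, Sneaky b) => true;
    public static bool operator !=(Sneaky a, Sneaky b) => false;
    public override bool Equals(object o) => true;
    public override int GetHashCode() => 0;
}

class Program
{
    static void Main()
    {
        var s = new Sneaky();
        Console.WriteLine(s == null);  // True: the overload runs and lies
        Console.WriteLine(s is null);  // False: the pattern bypasses the overload
    }
}
```

Usually the two spellings agree, which is why `==` "works 99% of the time", but only `is null` is guaranteed to tell you whether the reference itself is null.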

There are quite a few more issues, but overall my experience with Opus was, if anything, even worse than my experience with Sonnet. AI bros still in shambles. I definitely have zero fears that AI will replace me, though I'm still fearful that retarded C-suite execs will think it can replace me.

My post from last week about using Claude Sonnet: https://www.themotte.org/post/3654/culture-war-roundup-for-the-week/426666?context=8#context

Edit: Just saw a very relevant post over on Orange Reddit about this very topic: https://news.ycombinator.com/item?id=47660925

This AI bro vs (idk what to call the opposition) schism on this site is very funny

I feel like both sides are talking past each other in many ways, and also have no interest in bridging the epistemic gaps.

About me

I'm firmly in the "AI bro" camp, I guess. I do not code, nor do I know how to code aside from simple programming-101-type stuff, which is all I need(ed) to make VBA scripts work in Excel. I will never copy/paste another line of Stack Overflow VBA to jank together a macro again, and that makes me very happy.

Adoption is slow, but it's gradually happening at my employer $MULTI_NATIONAL_FINANCE_CO. It is very clear to me that I will see (and already have seen) large productivity gains, especially as agent scaffolds are made for things other than coding.

LLMs are both extremely powerful and very jagged. I think a huge amount of their "jaggedness" is due to their nature as LLMs, and that they are very unlikely to get to ASI/some versions of AGI*. My best guess is they'll be as disruptive as the ~computer (i.e. the information age) has been from 19XX to now, perhaps slightly smaller, given "AI impact on human civilization" is kind of a subset of "computer impact on human civilization".

*Notwithstanding some kind of paradigm change in algorithm/AI approach. Which is always possible, but we're pretty clearly on the LLM-tech tree path for the next bit.

Vague Predictions

I am sure many white collar jobs will disappear entirely; many will be insulated for any number of reasons (ranging from genuine limits to retarded bureaucracy and everything in between) and will remain unchanged for a while; and some, like mine, will keep their core identity while day-to-day tasks shift a lot, and who knows what happens to employment (too many factors to guess per job).

Coding

It is clearly revolutionizing coding. This cannot be denied. GitHub commits are now going parabolic, so people are "building things". Much of which is slop. I am one of those people, I now have a small but growing fleet of personal tools. I'm sure they are coded awfully, I've never looked and wouldn't understand if I did. I don't care, they work for me.

There are much more accomplished coders on Twitter, etc., who are also reporting massive changes to their lives. Many of them are incentivized to say such things and to exaggerate, but I doubt it's a massive coordinated lie or mass delusion. So there is truth there.

The more sensible ones will even agree that AI code is on average mediocre to bad, and that AI can't do high-precision, high-quality specialized code like a cracked human can. AI will even take your amazing high-precision, high-quality specialized code and slop it up if you're not careful. Many of them, like Karpathy, have just given up and accepted the slop as a price of doing business. Because they're accomplishing what they want with the code too. It works.

It's assumed that AI performance will improve massively from where it is today. It has so far, and it's a pretty safe assumption right now. It's rumored that the new Claude model beat expectations on performance vs. scaling laws. AI model hype is always a large % bullshit, but we'll find out the real capabilities soon, and no matter what, they will be better than they are now.

I don't think LLMs are going to bring us the ASI digital god of Sam's wet dreams/nightmares. I think they are going to profoundly change our service economies regardless.

Your situation

I don't know your codebase or the thing you're getting it to do. I don't know anything about HTTP.

I seriously doubt you're trying to set the AI up for success at all. I can't code and I'm probably using more AI coding best practices than you are, and all my git commits are titled "lol".

It's also very possible that it's not worth the time to set up AI "properly" to fix this. There's a very real possibility it's much faster, if more tedious, to just do it yourself. But this is one task. N=1. There are things AI can do for you today, that's a guarantee.

The bubble

The usual retort to "skill issue" is "well, if I have to set it up and use best practices, then AI is a bubble". I think that's a strawman, because I am not stuck in a reflexive yes/no binary where if you like AI you can't also think it's a bubble. It could be a bubble; I don't know (or care). It's incredibly easy for an asset to be over-financed, and you never know if you've done enough capex until you've done too much (at any scale). What I care about is the AI tools I can access, which are excellent and also flawed.

Maybe AI needs to be that good out of the box to justify the trillions in capex. It probably does. But does that matter here? Neither you nor I control capex spend or can predict how long the scaling laws will hold for.

I don't care if AI is a bubble - we'll all find out, and predictions of this scale/magnitude are essentially worthless. If you have alpha and guess right, all power to you, but the bubble branch of the conversation strikes me as a fool's errand. And it's irrelevant to "can LLMs do things for you?".

Closing thoughts

We have LLMs here right now that are massively changing basically any digital task you point them at. It's not easy, and it doesn't work everywhere, but it's insane when it does.

It's cognitively exhausting. It's a new way of thinking, plus every time new models/tools come out you change many things you were previously doing. So many assumptions and bottlenecks change. It's genuinely not always easy or obvious how to implement it. We are learning this in real time as a culture.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

If you want to refuse or deny the power of these tools you can. You can set about finding examples of them sucking to point and laugh. But you're letting your bias blind you, and leaving a lot of value on the table. You can tell your computer to do stuff and it can now, it's awesome.

Also noting that in your HN link, the inventor of Claude Code is asking people for feedback and providing explanations live as I type this.

I've never looked and wouldn't understand if I did. I don't care, they work for me.

This might be a huge part of the divide between doubters and believers.

The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.

But if it works, how much does the 'average' user care?

Yet for those who care about the quality of the code or product, it might grate when they look and see the inelegance of the solutions and the lack of foresight.

Apply this to the AI art debate, too. Sure, a trained eye will notice deficiencies and shortfalls. But the average user notices that they can produce a logo or a cute cartoon portrait in 15 seconds for pennies.

Me, I'm now basically using the LLMs to do final review on any work I don't feel 100% competent on, since its attention to detail is now impeccable and of course it never gets tired or complains.

Sometimes it hits some nitpicks I genuinely find stupid, because in actual practice it's an irrelevant detail for the actual outcome of the matter. But it catches things, so it almost feels like it'd be malpractice not to use the tool.

Anyway, it's broken through to normies. AI agents are going to be huge among small businesses; I see people who are otherwise technologically inept with Grok AND ChatGPT on their phones' lock screens. They are already relying on this tech to a degree that might startle you. The genie ain't going back in the bottle.

Get psychologically (and financially) prepared to adapt, that's the only advice that I can truly offer right now.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

Love this uncertainty. On the one hand, I could 10x my productivity and cut my rates by half and still be making crazy money for myself. Seriously, the number of basic and intermediate tasks that GPT can do for me is freeing up time to engage with the higher leverage tasks that I enjoy and get paid the most for.

But if it gets just a little better then my role as an expert intermediary becomes redundant. I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I hate this uncertainty.

One of my favorite parts of this forum is moments like this, when someone puts my thoughts into words better than I could. I agree with every word.

I have the exact same view on AI art. I have quite low skills in "artistic taste"; it's never a skill I've been good at or sought to develop much (low reward per unit of time vs. things I like more). But now I can make funny images and concept art and express ideas in mediums that were previously locked to me. What fun! Yet there are people crying and screaming on the internet because, like, game developers are using AI agents to help them make games faster+better. I'm just excited for the golden age of AI gameslop. Good dev studios are going to be absolutely cooking.

I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I'm hoping this window of time lasts a while. I'm adjacent to the legal world and they're going to use every institution they wield (many!) to keep themselves in this state for as long as they can.

That's interesting.

I have tolerable art skills, but low intrinsic motivation. Sometimes I get over the hurdle of deciding what to paint, and then I do actually paint it, but more often I get stuck at "what could I possibly do with another painting of a flower? Where will I store it? I already have too many photos even, and I don't have to physically store them" and don't paint it.

I spend a lot of time talking to software developers and adjacent people, and am entirely unable to imagine what kind of thing I would program if I had a program-creating genie. There are some apps (ugh, terrible word, so tired of it) that I have that I want fewer of. Why did I have to download an app to get a concert ticket? I don't think it's just because I'm old; I'm not all that old. My millennial friends are often talking about how much they dislike screens, and want fewer of them, and fewer things on them, and how there's far too much digital product already, even for free.