
Culture War Roundup for the week of April 6, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This AI bro vs (idk what to call the opposition) schism on this site is very funny

I feel like both sides are talking past each other in many ways, and also have no interest in bridging the epistemic gaps.

About me

I'm firmly in the "AI bro" camp, I guess. I do not code, nor do I know how to code beyond simple programming-101 stuff, which is all I need(ed) to make VBA scripts work in Excel. I will never copy/paste another line of Stack Overflow VBA to jank together a macro again, and that makes me very happy.

Adoption is slow, but it's gradually happening at my employer $MULTI_NATIONAL_FINANCE_CO. It is very clear to me that I will see (and already have seen) large productivity gains, especially as agent scaffolds are made for things other than coding.

LLMs are both extremely powerful and very jagged. I think a huge amount of their "jaggedness" is due to their nature as LLMs, and that they are very unlikely to get to ASI/some versions of AGI*. My best guess is they'll be as disruptive as the ~computer (i.e. the information age) was from 19XX-now, perhaps slightly smaller, given that "AI impact on human civilization" is kind of a subset of "computer impact on human civilization".

*Notwithstanding some kind of paradigm change in algorithm/AI approach. Which is always possible, but we're pretty clearly on the LLM-tech tree path for the next bit.

Vague Predictions

I am sure many white collar jobs will disappear entirely; many will be insulated for any number of reasons (ranging from genuine limits to retarded bureaucracy and everything in between) and will remain unchanged for a while; and some, like mine, will keep their core identity while day-to-day tasks shift a lot, and who knows what happens to employment (too many factors to guess per job).

Coding

It is clearly revolutionizing coding. This cannot be denied. GitHub commits are now going parabolic, so people are "building things". Much of which is slop. I am one of those people, I now have a small but growing fleet of personal tools. I'm sure they are coded awfully, I've never looked and wouldn't understand if I did. I don't care, they work for me.

There are much more accomplished coders on twitter, etc., who are also reporting massive changes to their lives. Many of them are incentivized to say such things and to exaggerate, but I doubt it's a massive coordinated lie or mass delusion. So there is truth there.

The more sensible ones will even agree that AI code is on average mediocre to bad, and AI can't do high precision high quality specialized code like a cracked human can. AI will even take your amazing high precision high quality specialized code and slop it if you're not careful. Many of them, like Karpathy, have just given up and accepted the slop as a price of doing business. Because they're accomplishing what they want with the code too. It works.

It's assumed that AI performance will improve massively from where it is today. It has so far, so it's a pretty safe assumption right now. It's rumored that the new Claude model beat expectations on performance vs. scaling laws. AI model hype is always a large % bullshit, but we'll find out the real capabilities soon, and no matter what, they will be better than they are now.

I don't think LLMs are going to bring us the ASI digital god of Sam's wet dreams/nightmares. I think they are going to profoundly change our service economies regardless.

Your situation

I don't know your codebase or the thing you're getting it to do. I don't know anything about HTTP.

I seriously doubt you're trying to set the AI up for success at all. I can't code and I'm probably using more AI coding best practices than you are, and all my git commits are titled "lol".

It's also very possible that it's not worth the time to set up AI "properly" to fix this. There's a very real possibility it's much faster, if more tedious, to just do it yourself. But this is one task. N=1. There are things AI can do for you today, that's a guarantee.

The bubble

The usual retort to "skill issue" is "well, if I have to set it up and use best practices, then AI is a bubble". I think that's a strawman, because I am not stuck in a reflexive yes/no binary where if you like AI you can't also think it's a bubble. It could be a bubble, I don't know (or care). It's incredibly easy for an asset to be over-financed, and you never know if you've done enough capex until you do too much (at any scale). What I care about is the AI tools I can access, which are excellent and also flawed.

Maybe AI needs to be that good out of the box to justify the trillions in capex. It probably does. But does that matter here? Neither you nor I control capex spend or can predict how long the scaling laws will hold for.

I don't care if AI is a bubble - we'll all find out and predictions of this scale/magnitude are essentially worthless. If you have alpha and guess right, all power to you, but the bubble conversational branch strikes me as a fool's errand. And it's irrelevant to "can LLMs do things for you?".

Closing thoughts

We have LLMs here right now that are massively changing basically any digital task you point them at. It's not easy, and it doesn't work everywhere, but it's insane when it does.

It's cognitively exhausting. It's a new way of thinking, plus every time new models/tools come out you change many things you were previously doing. So many assumptions and bottlenecks change. It's genuinely not always easy or obvious how to implement it. We are learning this in real time as a culture.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

If you want to refuse or deny the power of these tools, you can. You can set about finding examples of them sucking to point and laugh at. But you're letting your bias blind you, and leaving a lot of value on the table. You can now tell your computer to do stuff, and it does it. It's awesome.

Also noting that in your HN link, the creator of Claude Code is asking people for feedback and providing explanations live as I type this.

I've never looked and wouldn't understand if I did. I don't care, they work for me.

This might be a huge part of the divide between doubters and believers.

The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.

But if it works, how much does the 'average' user care?

Yet for those who care about the quality of the code or product, it might grate when they look and see the inelegance of the solutions and the lack of foresight.

Apply this to the AI art debate, too. Sure, a trained eye will notice deficiencies and shortfalls. But the average user notices that they can produce a logo or a cute cartoon portrait in 15 seconds for pennies.

Me, I'm now basically using the LLMs to do final review on any work I don't feel 100% competent on, since its attention to detail is now impeccable and of course it never gets tired or complains.

Sometimes it hits some nitpicks I genuinely find stupid, because in actual practice it's an irrelevant detail for the actual outcome of the matter. But it catches things, so it almost feels like it'd be malpractice not to use the tool.

Anyway, it's broken through to normies. AI agents are going to be huge among small businesses; I see people who are otherwise technologically inept with Grok AND ChatGPT on their phones' lock screens. They are already relying on this tech to a degree that might startle you. The genie ain't going back in the bottle.

Get psychologically (and financially) prepared to adapt, that's the only advice that I can truly offer right now.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

Love this uncertainty. On the one hand, I could 10x my productivity and cut my rates by half and still be making crazy money for myself. Seriously, the number of basic and intermediate tasks that GPT can do for me is freeing up time to engage with the higher leverage tasks that I enjoy and get paid the most for.

But if it gets just a little better then my role as an expert intermediary becomes redundant. I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I hate this uncertainty.

One of my favorite parts of this forum is moments like this, when someone puts my thoughts into words better than I could. I agree with every word.

I have the exact same view on AI art. I have quite low skill in "artistic taste"; it's never a skill I've been good at or sought to develop much (low reward per unit of time vs. things I like more). But now I can make funny images and concept art and express ideas in mediums that were previously locked to me. What fun! Yet there are people crying and screaming on the internet because game developers are using AI agents to help them make games faster and better. I'm just excited for the golden age of AI gameslop. Good dev studios are going to be absolutely cooking.

I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I'm hoping this window of time lasts a while. I'm adjacent to the legal world and they're going to use every institution they wield (many!) to keep themselves in this state for as long as they can.

I mean, there's no way that the legal profession doesn't outlaw AI use in law the moment it becomes a threat to their jobs, right? Lots of law makers are lawyers, and I don't think they are above using the levers of power to make sure their profession can't be replaced.

I'm not sure how they'll catch attorneys who are careful about the end products they're filing.

You might see attorneys staying suspiciously effective despite juggling large caseloads, making surprisingly adept legal arguments in their briefs while their performance at a live hearing is lacklustre.

But yeah, it'll be banned from any client- or public-facing roles to a large extent.

AI use by attorneys will get lots of attention for job market and ethics reasons, but the courts are 100% unprepared for the day when pro se litigants start filing piles of plausible-sounding briefs in their traffic ticket/misdemeanor/family court cases.

They're already doing it in low-stakes civil cases.

Ask me how I know.

The UK NIMBY community was one of the first groups to take up legal AI with Objector. Every significant planning application now receives multiple lengthy AI-generated objections stating every plausible legally valid reason for rejecting it.

The UK is handed access to the demigod machine and immediately wishes for all development to grind to an absolute halt. They've created the Saul Goodman protocol.

On the other hand, if the legal system grinds to a halt when every single participant is able to level every single possible legal argument that might return their desired result, that's a good sign that things need to be streamlined and reformed, and an incentive to do so.

On the YIMBY side, it should in theory make it much easier to get permitting and to anticipate all likely objections.

Seriously, if the legal system is slanted in even the tiniest way towards either side in a given type of case, the ability to exploit every possible legal advantage will result in hugely asymmetric outcomes for the 'favored' side. Might make it clear precisely where the scales of justice need some re-balancing.

And I'd hope there are cases where the use of LLMs facilitates mediation that obviates the need for litigation entirely.
