
Culture War Roundup for the week of April 6, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This AI bro vs (idk what to call the opposition) schism on this site is very funny

I feel like both sides are talking past each other in many ways, and also have no interest in bridging the epistemic gaps.

About me

I'm firmly in the "AI bro" camp, I guess. I do not code, nor do I know how to code aside from simple programming 101 type stuff, which is all I need(ed) to make VBA scripts work in Excel. I will never copy/paste another line of Stack Overflow VBA to jank together a macro again, and that makes me very happy.
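For flavor, the kind of "programming 101" spreadsheet chore in question might look something like this. A hypothetical sketch in Python, using the stdlib csv module as a stand-in for the Excel/VBA macros described; the function name and data are invented for illustration, not from any real workflow:

```python
import csv
import io

def summarize_by_category(csv_text):
    """Group rows by 'category' and total their 'amount' column --
    the sort of chore a janked-together VBA macro used to handle."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["category"]] = totals.get(row["category"], 0.0) + float(row["amount"])
    return totals

data = "category,amount\nrent,100\nfood,20\nrent,50\n"
print(summarize_by_category(data))  # {'rent': 150.0, 'food': 20.0}
```

Trivial for a programmer, but exactly the kind of glue code that non-coders used to assemble from Stack Overflow snippets and can now just ask an LLM for.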

Adoption is slow, but it's gradually happening at my employer $MULTI_NATIONAL_FINANCE_CO. It is very clear to me that I will see (and already have seen) large productivity gains, especially as agent scaffolds are made for things other than coding.

LLMs are both extremely powerful and very jagged. I think a huge amount of their "jaggedness" is due to their nature as LLMs, and that they are very unlikely to get to ASI/some versions of AGI*. My best guess is they'll be as disruptive as the ~computer (i.e. the information age) has been from 19XX to now, perhaps slightly smaller, given "AI impact on human civilization" is kind of a subset of "computer impact on human civilization".

*Notwithstanding some kind of paradigm change in algorithm/AI approach. Which is always possible, but we're pretty clearly on the LLM-tech tree path for the next bit.

Vague Predictions

I am sure many white collar jobs will disappear entirely; many will be insulated for any number of reasons (ranging from genuine limits to retarded bureaucracy and everything in between) and will remain unchanged for a while; and some, like mine, will keep their core identity while day-to-day tasks shift a lot. Who knows what happens to employment (too many factors to guess per job).

Coding

It is clearly revolutionizing coding. This cannot be denied. GitHub commits are now going parabolic, so people are "building things". Much of which is slop. I am one of those people, I now have a small but growing fleet of personal tools. I'm sure they are coded awfully, I've never looked and wouldn't understand if I did. I don't care, they work for me.

There are far more accomplished coders on Twitter, etc., who are also reporting massive changes to their lives. Many of them are incentivized to say such things and to exaggerate, but I doubt it's a massive coordinated lie or mass delusion. So there is truth there.

The more sensible ones will even agree that AI code is on average mediocre to bad, and AI can't do high precision high quality specialized code like a cracked human can. AI will even take your amazing high precision high quality specialized code and slop it if you're not careful. Many of them, like Karpathy, have just given up and accepted the slop as a price of doing business. Because they're accomplishing what they want with the code too. It works.

It's assumed that AI performance will improve massively from where it is today. It has so far, and it's a pretty safe assumption right now. It's rumored that the new Claude model beat expectations on performance vs scaling laws. AI model hype is always a large % bullshit, but we'll find out the real capabilities soon, and no matter what, they will be better than they are now.

I don't think LLMs are going to bring us the ASI digital god of Sam's wet dreams/nightmares. I think they are going to profoundly change our service economies regardless.

Your situation

I don't know your codebase or the thing you're getting it to do. I don't know anything about HTTP.

I seriously doubt you're trying to set the AI up for success at all. I can't code and I'm probably using more AI coding best practices than you are, and all my git commits are titled "lol".

It's also very possible that it's not worth the time to set up AI "properly" to fix this. There's a very real possibility it's much faster, if more tedious, to just do it yourself. But this is one task. N=1. There are things AI can do for you today, that's a guarantee.

The bubble

The usual retort to "skill issue" is "well, if I have to set it up and use best practices, then AI is a bubble". I think that's a strawman, because I am not stuck in a reflexive yes/no binary where if you like AI you can't also think it's a bubble. It could be a bubble, I don't know (or care). It's incredibly easy for an asset to be over-financed, and you never know if you've done enough capex until you do too much (at any scale). What I care about is the AI tools I can access, which are excellent and also flawed.

Maybe AI needs to be that good out of the box to justify the trillions in capex. It probably does. But does that matter here? Neither you nor I control capex spend or can predict how long the scaling laws will hold for.

I don't care if AI is a bubble - we'll all find out, and predictions of this scale/magnitude are essentially worthless. If you have alpha and guess right, all power to you, but the bubble branch of the conversation strikes me as a fool's errand. And it's irrelevant to "can LLMs do things for you?"

Closing thoughts

We have LLMs here right now that are massively changing basically any digital task you point them at. It's not easy, and it doesn't work everywhere, but it's insane when it does.

It's cognitively exhausting. It's a new way of thinking, plus every time new models/tools come out you change many things you were previously doing. So many assumptions and bottlenecks change. It's genuinely not always easy or obvious how to implement it. We are learning this in real time as a culture.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

If you want to refuse or deny the power of these tools, you can. You can set about finding examples of them sucking to point and laugh at. But you're letting your bias blind you, and leaving a lot of value on the table. You can tell your computer to do stuff and it can now; it's awesome.

Also noting that in your HN link, the inventor of Claude Code is asking people for feedback/providing explanations live as I type this.

I've never looked and wouldn't understand if I did. I don't care, they work for me.

This might be a huge part of the divide between doubters and believers.

The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.

But if it works, how much does the 'average' user care?

Yet for those who care about the quality of the code or product, it might grate when they look and see the inelegance of the solutions and the lack of foresight.

Apply this to the AI art debate, too. Sure, a trained eye will notice deficiencies and shortfalls. But the average user notices that they can produce a logo or a cute cartoon portrait in 15 seconds for pennies.

Me, I'm now basically using the LLMs to do final review on any work I don't feel 100% competent on, since its attention to detail is now impeccable and of course it never gets tired or complains.

Sometimes it hits some nitpicks I genuinely find stupid, because in actual practice it's an irrelevant detail for the actual outcome of the matter. But it catches things, so it almost feels like it'd be malpractice not to use the tool.

Anyway, it's broken through to normies. AI agents are going to be huge among small businesses. I see people who are otherwise technologically inept with Grok AND ChatGPT on their phones' lock screens. They are already relying on this tech to a degree that might startle you. Genie ain't going back in the bottle.

Get psychologically (and financially) prepared to adapt, that's the only advice that I can truly offer right now.

It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).

Love this uncertainty. On the one hand, I could 10x my productivity and cut my rates by half and still be making crazy money for myself. Seriously, the number of basic and intermediate tasks that GPT can do for me is freeing up time to engage with the higher leverage tasks that I enjoy and get paid the most for.

But if it gets just a little better then my role as an expert intermediary becomes redundant. I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I hate this uncertainty.

I hate this uncertainty.

I've always been an anxious person, worried for the future, etc. I've basically given up with AI; the world has gotten so ridiculous it's just funny.

I have no control, everything is going to change. Everything has changed a lot already in my lifetime. I'm just gonna ride it out, I had my friends over for a BBQ last night. Trying to do more of that this year.

hell yeah brother.

The thing about singularity-like situations is that reliable prediction becomes impossible. Although technically I don't have to predict with real accuracy, just better than 90+% of the population. Beat the masses and you'll do alright, provided we aren't all killed. You can fret about this, or you can let go and focus on the tiny parcel of territory in the vastness of probability-space that you have any influence over.

In my most primal moments, I sometimes think I should literally just locate the most physically enticing female I can attract (and compromise on everything else because what else matters if AGI hits?), liquidate most of my assets except like $100k kept in the S&P, and shack up in my house to have gratuitous amounts of sex, get all my groceries delivered, and just fuck around with AI art generators and see if I can make a bit of money off them before whatever comes next washes over us.

But man, it turns out somebody still has to do the hard work of keeping civilization turning so we can keep the lights on until we can finish the silicon god (or the false idol). Those data centers and nuclear plants won't build themselves. Yet.

I despise people who do that stupid "permanent underclass" posting, specifically to drive anxiety without any actionable outlet.

I had my friends over for a BBQ last night. Trying to do more of that this year.

Strong recommend. I've focused on keeping the friendships I have as strong as possible. Say "yes" to more social invites than you used to. As long as the activities don't kill you before we reach utopia, why spend this exciting time hunched over a desk or lying in bed doomscrolling?

One of my favorite parts of this forum is moments like this, when someone puts my thoughts into words better than I could. I agree with every word.

I have the exact same view on AI art. I have quite low skill in "artistic taste"; it's never been a skill I've been good at or sought to develop much (low reward per unit of time vs. things I like more). But now I can make funny images and concept art and express ideas in mediums that were previously locked to me. What fun! Yet there are people crying and screaming on the internet because, like, game developers are using AI agents to help them make games faster+better. I'm just excited for the golden age of AI gameslop. Good dev studios are going to be absolutely cooking.

I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.

I'm hoping this window of time lasts a while. I'm adjacent to the legal world and they're going to use every institution they wield (many!) to keep themselves in this state for as long as they can.

I mean, there's no way that the legal profession doesn't outlaw AI use in law the moment it becomes a threat to their jobs, right? Lots of law makers are lawyers, and I don't think they are above using the levers of power to make sure their profession can't be replaced.

I'm not sure how they'll catch attorneys who are careful about the end products they're filing.

You might see attorneys staying suspiciously effective despite juggling large caseloads, making surprisingly adept legal arguments in their briefs while their performance at a live hearing is lacklustre.

But yeah, it'll be banned from any client- or public-facing roles to a large extent.

AI use by attorneys will get lots of attention for job market and ethics reasons, but the courts are 100% unprepared for the day when pro se litigants start filing piles of plausible-sounding briefs in their traffic ticket/misdemeanor/family court cases.

They're already doing it in low-stakes Civil cases.

Ask me how I know.

The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.

But if it works, how much does the 'average' user care?

In my experience, the average user starts to care right around the same time that their credit card number and mother's maiden name end up for sale to the highest bidder.

No one is going to vibe code their own SaaS to replace Salesforce et al.

Salesforce and the other huge boys with giant moats will enjoy higher labor efficiency. They may experience serious pain as higher competition pressures margins, but that's hard to predict.

Mid-cap software will knife fight each other over margins as competitors grow like weeds.

Small-cap/VC/PE idek lol, really excited to watch this space.

I'm super curious to see what happens when a given VC can invest in 5x as many startups per unit of capital. I assume startups will scale faster. Do VCs stretch themselves thin with more companies in a portfolio? Do funds get bigger or smaller? Are there more or fewer actual VCs? Is it easier or harder to get a VC fund going?

That last bit is the most interesting part to me.

Right now, my understanding is that VC is extremely hard to get because a handful of AI darlings have sucked all the air out of the room. If they IPO soon, VCs should theoretically have freed up capital to deploy as the OpenAIs/Anthropics of the world start to show a return.

If I believe the argument, then it should result in a much larger number of smaller investments, since labor is ostensibly the biggest cost of software startups and that cost should plummet.

I don't think the normies are THAT far along that they'd trust it with their financial information.

But not too far out, either.

They might trust a Vibe-coded website, though.

As I understand it, any website taking customers' financial information will usually use a third party's software rather than roll their own.

If Paypal et al. are vibe coding without regard to security we are in for some pain.

Block is vibe coding now.