
Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The psychic cost of AI is already here

Ugh, another AI post.

Today Block laid off about half of its 10k employees for AI reasons and the stock soared. Was this PR cover for shedding bloat? Maybe. Elon famously slimmed down Dorsey’s crazy bloated Twitter, no AI cover story needed.

But still, the market loved this and will demand more.

Earlier this week IBM stock dived on news of an Anthropic COBOL skill. Was this premature doom? Maybe. But still.

Let’s put aside whether AI will destroy white collar work on a short time horizon. Whatever the outcome, the idea that it very much might is already mainstream. It’s in the water. How much is the impending fear already shaping decisions? How much psychological weight is it already causing? How much will it accelerate?

Certainly people are already changing career plans, college plans, savings strategies, family planning, etc. and it will only get much worse. New broadly available opportunities in AI are not going to open up faster than the fear of AI disruption will spread; we are already in a spiral.

Like many in these spaces, I’ve been worried for a while now, but now it’s going mainstream and will cause aggregate changes in behavior which will have their own effects on society and the economy regardless of the first order effects of AI disruption.

As a minor example, my wife has wanted to move for a few years now. Unfortunately, we’re chained to a 3% mortgage without enough income to achieve escape velocity beyond moving sideways to pay more. We’re finally in a spot this year where we could be a little indulgent and justify moving into a house the right size for a young family of 7, even if it means taking on a less favorable mortgage rate.

But I can’t imagine compounding that risk with AI disruption. The music could stop and never start again. Our marriage is good, but my resistance causes its own minor stress. How many marriages that aren’t so good break down over things like this?

How many people don’t get married altogether, etc.

Regardless of whether Covid was just a flu, the real-world response to the perceived threat was transformative. Regardless of whether AI is just a fad…

and the stock soared

30% of those gains have already evaporated and we're only 3 hours past market open. Maybe wait a little and see how the market prices this in with more time to consider.

[conflict of interest disclaimer: I have a modest short position in XYZ (Block) as of a couple hours ago]

You're missing the point of my post. I am emphatically not making any prediction about AI's first-order effect on anything, including the market, much less an individual company's stock price. I am pointing to examples where it's already producing mainstream headlines, and (although early) seems to be quickening and broadening in its reach. I am suggesting that, despite the realities, the costs of the narrative are surely already affecting real people.

Do you not think there are CEOs out there who are updating on these stories? Hiring managers who update or hesitate on this news? White collar workers who rethink their savings plans?

My whole point is not the effects of AI on the economy, which remains to be seen, but the effects of the anticipation of those effects on society, which is here today.

We don't have to agree on the CFR of Covid to note that schools are already closing, business trips are already being cancelled, toilet paper is already running out. 'Maybe wait a little and see how the CDC responds' misunderstands a point that holds regardless of whether this deepens and whether it's overblown: the effects are here, starting today.

You yourself gave an example of making a financial decision based on this news.

Oh I agree that the effects are here, if that was the point you were making. I didn't realize that was controversial - even in my more mainstream bubbles (non-tech friends and family) people have been freaking out for about a year (in my tech bubbles they've been low-key freaking out since AlphaGo and high-key freaking out since GPT-2).

I do agree that mainstream society is not sufficiently pricing in the magnitude of the coming changes. I think that the tech bubble is doing a better job of estimating the magnitude of the changes, but frequently getting the sign wrong in terms of the anticipated effect on any particular metric.

As a software engineer at a company pushing AI use pretty heavily, this whole thing is crazy-making. If nothing else, AI has some of the people who are the best at branding on its side. At least on the implementation side that I've done with Copilot, an "agent" and a "skill" are just markdown files. Their documentation is very clear about this.

The idea with a "skill" is that there's some repeatable task you might want an AI to do, and you hit on a particularly effective prompt that gets it to do the thing. You codify that prompt in a markdown file in a special directory. Then when you ask your more general session to do a thing, it can look in that directory for applicable skills, and if it thinks one is relevant it will inject its contents as context.

"Agents" are similar. I think it's been known for a long time that if you prompt the AI a specific way ("You are a software engineer proficient in ...") it can perform better at certain tasks. Agents work on this principle. As best I can tell, the use of an agent can either be selected by the user, or your general AI might select one based on criteria similar to a skill. It then starts a sub-session where the contents of the agent markdown are injected as a kind of pre-prompt before your actual prompt.

So when you hear Anthropic has created a skill or agent or whatever that can do X you should mentally replace that with "wrote a markdown file." "Anthropic published a new skill that makes AI good at COBOL" == "Anthropic published a markdown file that, when injected in a session, makes an AI good at COBOL." Of course, things start sounding more insane. "Tech security stocks dropped on news Anthropic wrote a markdown file." "IBM dropped on news Anthropic wrote a markdown file."

Today Block laid off about half of its 10k employees for AI reasons and the stock soared.

I was there. AMA

For the completely ignorant, such as myself, what is Block and why should I care? Also, were they financially over-extended and this is just a way of reducing headcount and costs, but wrapping it up in "no no, we're not firing anyone because we can't afford to pay them, we're replacing them with our sexy new AI!"?

Block is most well known for making the Square payment terminals and Cash App. The company overall is profitable but has been experiencing poor growth, leading to dissatisfaction among investors.

I'm seeing some reports that the company is also slashing raises and non-salary compensation like equity grants. Is that true?

No. They want to keep those who are left rather than attrit more.

Did you feel like the employees there are/were heavily using AI in their regular jobs to become more efficient now? Do they have agentic AIs that can totally replace some people's jobs?

In my admittedly biased opinion, employees who used a lot of AI shipped a lot of shitty slop code, while not actually producing that many more PRs overall.

I love chatbots but I hate agents though.

I love chatbots but I hate agents though.

That seems to be where I and a lot of my coworkers are landing these days. The chatbot interface is like a portal into an alternate reality where StackOverflow actually tries to be helpful.

That's... uncannily close to my experience. No more must I try to find a forgotten answer to how to do something in Excel; I just ask the chatbot!

But where will you get your recommended dose of crazy autistic neckbeard condescension?

Will the company become more effective and profitable now?

Maybe. But not because of AI.

IBM is down about 5% on the week. Painful but not five alarm fire.

No, I don’t think it’s a fire alarm. I’m not making a claim about the actual immediate effect of AI, but about the perception and how it is going mainstream. A few weeks with 1-2 headlines a week, and I think we get some level of self-fulfilling prophecy as the aggregate behaviors of worried citizens have a material effect.

That’s fair. People see the headlines but don’t see the recovery.

The AI bubble is going to pop this year. Private equity no longer has enough money to continue to fund massively unprofitable OpenAI and Anthropic, or even the NVIDIA chip glut. A lot of these layoffs are either performative attempts to raise stock prices, or cutting fat that would have been cut a long time ago even if AI wasn't a thing. As long as you don't have all your savings in AI companies (or aren't over-invested in index funds), and can avoid getting fired in the next 6-10 months, I think you will be okay.

Further reading

Yup. A way to downsize aggressively ahead of the possible next recession/possible attempted realignment away from the USD before it hits, in a way that increases hype valuation.

Maybe AI happens and they fire everyone, maybe it doesn't and they are better positioned to ride out the next catastrophe, maybe everything stays the same and they take a 3% haircut rehiring everyone. Win/Win/Eh, it's fine from their position.

"Oh? What's that? You Work For Wages Producing Goods Or Providing Services, And You Need Money To Live? Have you considered either owning assets or fucking dying instead, you broke ass bitch?"

To be clear, my thesis is not about whether this is a bubble or not, but rather the fact that already today, the fear is in the water, taking a mental toll on people and shaping decision making. Even if it pops in a few months: 1. that’s enough time to compound effects, and 2. does the shift in perspective just disappear?

Consider AI videos. That is certainly going to dissuade many people from going to Hollywood or from seeing filmmaking as a career. Even if AI video capability stops right here and never achieves full verisimilitude, it is a bus stopped right on the edge of a cliff, and it will have a psychic toll.

Ahh I see. This is a much trickier problem. I think your concerns are very valid.

As Lizzardspawn says, the financial implications for a handful of AI companies will mean little to the actual deployment of current AI technology. The bubble bursting does not mean that LLMs simply go away, just as the dotcom bubble did not result in everyone switching back to fax machines and physical mail.

These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time.

This is outright wrong [1]. It’s trivial to run an LLM deterministically. Just do greedy decoding (or beam search, etc.) The fact that the author doesn’t know something so basic about how to use LLMs now makes me doubt that he knows enough to predict what they will or will not be able to do in the future, so I stopped reading here.


[1] Barring nitpicks about subtle non-determinism due to hardware differences or software environment differences across machines. Also, LLMs (particularly older ones) can be highly sensitive to specific changes in their inputs, but this has nothing to do with LLMs being probabilistic.
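A minimal sketch of the claim, with a toy stand-in for a model's forward pass (`toy_logits` is hypothetical, not any real model API). Greedy decoding just takes the argmax token at every step, so there is no randomness anywhere in the loop:

```python
import random

def greedy_decode(logits_fn, prompt_ids, steps):
    """Greedy decoding: always append the argmax token, never sample.

    Given the same logits function and the same prompt, this returns the
    identical token sequence on every run -- determinism is trivial."""
    ids = list(prompt_ids)
    for _ in range(steps):
        logits = logits_fn(ids)
        ids.append(max(range(len(logits)), key=logits.__getitem__))
    return ids

def toy_logits(ids):
    """Hypothetical stand-in for a model forward pass over a 16-token
    vocabulary: an arbitrary but deterministic function of its input."""
    rng = random.Random(sum(ids))
    return [rng.gauss(0, 1) for _ in range(16)]

run1 = greedy_decode(toy_logits, [1, 2, 3], steps=5)
run2 = greedy_decode(toy_logits, [1, 2, 3], steps=5)
assert run1 == run2  # same prompt in, same continuation out, every time
```

Sampling-based decoding only adds randomness on top of this deterministic core, and even that is reproducible with a fixed seed (modulo the hardware caveats in the footnote).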

I think you can trivially make an LLM deterministic in the technical, narrow sense that for exactly the same input you get exactly the same output. Just initialize the pseudo-random number generator deterministically.

However, where LLMs differ from most classical deterministic algorithms is that they are not stable: a small change in the input might result in a big change in the output.

Suppose I have a list of strings I want to sort lexicographically. If I use std::sort (and stick to ASCII), I can expect to get reasonable results every single time. If instead I give the task to a neural network, such as a human, I will get some significantly non-zero error rate. If I use an LLM, I would also expect an elevated error rate. Of course, both the LLM and the human might also refuse to work with certain strings, e.g. racial slurs.

Generally, nobody uses neural networks to solve problems which are easily solvable by classical algorithms, teaching aside. But there are a lot of problems where we do not have nice classical algorithms, such as safely driving a car through the city or translating a text or building a website from informal specifications. So we accept the possibility of failure and hand them out to LLMs or grad students.

However, where LLMs differ from most classical deterministic algorithms is that they are not stable: a small change in the input might result in a big change in the output.

This is not unique to LLMs. It happens to pretty much any algorithm that feeds its outputs back into its inputs without converging. Probably the simplest example: if you take a polynomial of degree 3 or higher, use Newton's method to find the complex roots, and plot which root was found as a function of the initial value, you get a fractal (Newton's fractal) rather than a smooth diagram. There's a great 3blue1brown video on this, actually.
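A concrete instance for f(z) = z³ − 1 (standard math, nothing library-specific assumed): the iteration is fully deterministic, yet colouring a grid of starting points by the index this function returns reproduces the Newton fractal, with the three basins tangled so tightly that nearby starting points can settle on different roots.

```python
import cmath

# The three cube roots of unity: the roots of f(z) = z^3 - 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, steps=60):
    """Run Newton's method on f(z) = z^3 - 1 from starting point z and
    return the index of the root the iterate ends up nearest to."""
    for _ in range(steps):
        z = z - (z**3 - 1) / (3 * z**2)
    return min(range(3), key=lambda k: abs(z - ROOTS[k]))

# The Newton map commutes with rotation by 120 degrees, so symmetric
# starting points land on symmetric roots:
assert [newton_basin(2 * r) for r in ROOTS] == [0, 1, 2]

# The negative real axis threads between the two complex-root basins,
# yet real starting points still converge to the real root z = 1:
assert newton_basin(-1) == 0
```

Evaluating `newton_basin` over a dense grid of complex starting points and colouring by the returned index is exactly how the fractal plots in the 3blue1brown video are generated.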

But yeah, that generalizes to a surprising number of iterative processes (e.g. neural net training)

That is ... irrelevant. The underlying technology works, and works surprisingly well. And with white collar labor costs as they are, even a $500 monthly subscription gives good value. The demand from people for the tools is there. They got used to AI assistants. And distilled Chinese models are quite cheap and good enough-ish too.

Certainly people are already changing career plans, college plans, savings strategies, family planning, etc. and it will only get much worse. New broadly available opportunities in AI are not going to open up faster than the fear of AI disruption will spread; we are already in a spiral.

Co-workers at my current job ended up pivoting away from software engineering, particularly because they saw this coming from miles away. They are aiming for cybersecurity. I studied software engineering in undergrad, and while I didn't pivot away from it because of AI specifically, I wouldn't be surprised if a lot of tech majors did because of the AI boom. With that being said, I'm of the opinion that the "AI will take our jobs" schtick is slightly overstated: technology has always replaced jobs, that's how it always goes. New jobs will arise. People forget that most people were working in agriculture before the industrial revolution; all those farmers didn't just stop working, they found the newly produced jobs elsewhere in the economy (it's actually part of the reason urbanization has increased so much!). I don't think we need to worry all that much until we have actual JARVIS/Cortana-level computers running around with Terminator robotics.

Technology has always replaced jobs, that's how it always goes. New jobs will arise.

I would argue that this time, it is different from the industrialization or the computer revolution.

The computer revolution was the first time the machines came for work which had previously required intelligence. In the niches where they were good, they totally crushed humans. Before electronics, "computer" was a human job. Today, I can waste more multiplications playing a video game for an hour than humanity performed in total in 1900.

On the other hand, electronics also came with very sharp limitations. A human who might have worked as a computer in 1900 still had skills which the machines did not have, and could thus be running Excel in 1995.

This time around, it is much less clear that the median human will still have any intellectual comparative advantage over the machines. Heck, even the median STEM PhD might not find employment for their brain in 2035, any more than anyone found employment for their multiplication ability in 2000.

So your "new jobs" which will arise might well be serving as the biodrones of an AI: wear AR goggles and simply follow instructions. Walk to the indicated rack. Unplug the indicated network cable. Plug it back in at the indicated port. Drink exactly 50ml to avoid failure from dehydration without requiring more than the minimum of bathroom breaks. An exciting day at work for the most qualified biodrones might be when they are used to replace the CPU in a machine.

I don't really disagree that this is how the arc of progress is turning, but it does seem a bit ridiculous to worry about what your job is going to be if AI attains intellectual supremacy over humans.

It seems to me that there are really only two possible paths forward: either AI remains jagged in capability like current LLMs and the standard economic arguments about technology hold, or we develop an AGI that represents a perfect labor substitute (it seems hard to believe that an intelligence-complete AGI could not develop sufficiently advanced robotics), and every economic and political assumption grounding society made under the premise that humans are required for production starts collapsing.

With that being said, I'm of the opinion that the "AI will take our jobs" schtick is slightly overstated

Maybe so; my post is explicitly agnostic to this. I am noting the toll of the social perception that is here today. Let’s assume it’s more than slightly overstated and straight FUD.

Still, right now articles in the MSM are openly pondering the possibility and whether there will be massive economic fallout, “influencers” have viral doom posts, AI leaders go on popular podcasts and speculate about doom, and then we see real-world shakeups at least being attributed to AI job displacement.

This is all happening today whether or not it’s based on hype, and my point is that this is going to affect decisions and cause aggregate mental and systemic stress in the immediate term, whether or not the tech pans out.

Honestly I feel like security is gonna get nuked by AI before engineering. It's one thing to poke holes in something, and a completely different thing to build something new. You can have AI agents be a red team that never rests and constantly looks up new CVEs. The agents can look at all of the code being written and flag potential flaws. And AI is already pretty good at reverse engineering and pen testing.

The thing with AI slop is that low quality code introduces debt that accumulates, and you eventually end up with something brittle and unworkable. Security has no such problem: you simply poke holes everywhere and tell people what to do to avoid having them.

It feels like basically the same dynamic as software engineering. It's a force multiplier for more senior staff who have good intuition about the problem space and can use agents as an army of extremely fast but error-prone interns. Cybersecurity is a very diverse field as well - the guy who sits at a desk watching a dashboard is probably screwed, while experienced vulnerability researchers are having fun being more productive than ever, plus with a whole new set of poorly secured targets in the form of vibe-coded projects.

Was this PR cover for shedding bloat? Maybe

Given how far their stock is down from the peak, and how this cut puts them closer to 2019 headcount than not, I can't help but feel like this entire thing is "AI washing", for lack of a better term

I think so too, but how many AI washes before it really gets into people's heads?

Could be a lot of things. It definitely sounds a lot better to say that they made massive layoffs because of AI than because "we had too many useless employees doing nothing" or "our stock was way down so we had to try something drastic." But given that they're a fairly mature software service company, I can actually see them being a prime use case for AI making their employees more efficient.