
Culture War Roundup for the week of June 30, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

So, is AI coming for the programmer jobs? There's a news story in my country about Microsoft seeking redundancies globally, which probably means chopping jobs here as well, and one paragraph mentions AI:

Microsoft employs around 4,000 people in Ireland, with a further 2,000 people employed at its subsidiary, LinkedIn, which has a base in Dublin.

The cuts are to be implemented across several divisions and geographical offices, according to the Seattle Times, reporting from Microsoft’s global headquarters.

The tech giant has said that the layoffs are part of a restructuring effort.

Microsoft CEO Satya Nadella recently disclosed that up to a third of programming at the tech company is now done by AI, with a higher percentage likely.

However, the current cuts are thought to be aimed across several job categories, including sales and middle-management.

...The company has previously said that cuts would involve "streamlining the organisation, eliminating management layers”, with no further detail on the sectors to be targeted, other than that it intended to shrink expenses in “R&D, marketing, general and administrative” divisions.

Granted, that seems to be trimming jobs across management and admin rather than software engineers, but the little nugget about "up to a third of programming is now done by AI" does seem to be a straw in the wind. Yes? No? Just means they're not hiring new junior staff?

AI is not sufficiently advanced to replace senior programmers yet. It's also not making them so much more productive that orgs could forgo hiring junior developers.

AI is the current hype bubble, so every executive who wants to be thought of as "forward-looking" will broadly gesture at it, and news orgs are champing at the bit for any scrap of a story that AI is replacing workers. In this environment you should dramatically downgrade your estimate that much of anyone is being replaced, and your null hypothesis should be that any stories alleging it are dressing up normal corporate moves as "AI induced". Don't deviate from that understanding unless the evidence is very high-quality.

Probably not.

AI is on the part of the hype curve where it will get included in press releases for any and all reasons. If the news is bad, AI will be mentioned as a mitigation strategy. If it’s good, it becomes a growth plan.

My company had a little mini-reorg recently. It also consisted of shuffling some matrix management, and it also gave lip service to new AI tools. I hope no one expects a defense contractor to lead the charge in adopting AI-driven requirements.

Same here. Do you know why the reorgs happen so often? It's exhausting.

I think it's largely maneuverings associated with the corporate game of thrones.

I mean, that's a bog-standard way to look like Important Things are being done. In my far too many years at $current_employer, I have seen it countless times. When my previous boss said that we were no longer playing musical offices reorganizing our workspaces, my reaction was ROFLMFAO, GLWT, and (after a several-month grace period) to make sure to give him shit every time he brought up the next move we had to do, which I know he appreciated. In fact, I have watched said employer literally move different offices to the other side of the building and then back again less than two years later, and that has happened two separate times with different sets of offices. And it's not just my employer. Behold, Azure AD has become Entra! Azure Purview and Compliance have merged into the new Purview! Use the new-and-improved Exchange Online admin (except for all of those things that live in the old Exchange Online admin, which still lives on several years later). It never ceases. SMDH.

I have a private theory that reorgs are the company-level analogue to how human bodies evolved to raise their temperature as an immune response. When you can cleanly identify and resolve a dysfunction you do that, but when you can't... when all you have is a lingering sense of dread... you can stagnate, and let your corporate DNA die out, or you can generate a lot of "heat" and hope any entrenched dysfunctions eventually die off. No individual corporate T-cell knows what they're doing-- they're just thinking about advancing their careers and how shitty the coffee is. But the behavior gets reinforced by so many selection pressures that they conform to it anyway, as part of a larger system that they can interact with but never fully comprehend.

(This feeds into my whole conspiracy theory about how the stock market is already a meaningfully superhuman artificial intelligence, but that's another discussion.)

Administrators and organizers need to generate bullet points for when the boss asks what they accomplished this quarter.

Unfortunately, I think that there are two layers of nonsense compounding on one another in the article.

  1. The reporter isn't specific about exactly where and how the cuts are being made, much less the reasoning behind them.
  2. Satya Nadella saying "1/3 of code is written by AI" is a nothingburger all its own.

First, the obvious question is "what kind of code?" Does he mean boilerplate stuff that, before LLMs, was handled mostly by copying previous projects and re-using the basics? Does he mean config files and deployment scripts for infrastructure? This is very much still code, but not in the user-facing, self-contained full-product sense.

Looking deeper, the next questions are "so what?" and "how much code can AI actually write?" I am reminded of the classic The Mythical Man-Month. Writing code isn't a linear function; 1.5x the inputs does not yield 1.5x the outputs. Effort in a codebase also follows a Pareto or power-law distribution: you spend 80% of your time on 20% (or less!) of the codebase. Much like the hard part of writing is editing, the real slog in coding comes in debugging and, later, refactoring. Shitting out shitty but "hey it works" code is easy.

Every mid-to-senior-level developer, data scientist, and ML engineer I've had discussions with arrives at more or less the same conclusion: AI is really handy, right now, for discrete problems. It's a massive time saver. It's actually extra handy for writing tests. In the not-so-distant future, it will probably be able to do some real systems engineering work.

But it can't replace all the devs because, at some point, using more LLMs in your development will actually cause the project to take longer (again, see The Mythical Man-Month). If you look at the "thinking" output of chain-of-thought models, you can see how they flirt with recursion loops: the model tells itself to think about x but also to make sure it considers y too and, oh yeah, definitely make sure z is in there too. And that's for simple chat-based prompts. If you have an LLM read a detailed system design plan and then hit the "do it" button, my worry isn't that it would output broken, internally inconsistent code, but that it would never actually output anything functional. Instead, I imagine millions of lines of incomplete functions with a lot of extraneous documentation and the wholesale swapping in and out of design patterns. Spaghetti code, but without even a "fuck it, it works" level of functionality.
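
Brooks's arithmetic is worth spelling out, because it applies whether the contributors are humans or LLM agents: the number of pairwise channels that have to stay mutually consistent grows quadratically with headcount, while output doesn't. A trivial illustration:

```python
# Brooks's law, roughly: n contributors (human or LLM) imply
# n*(n-1)/2 pairwise channels that must stay mutually consistent,
# so coordination overhead grows quadratically while output doesn't.
for n in (2, 5, 10, 20):
    print(f"{n} contributors -> {n * (n - 1) // 2} channels")
```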

I would be very surprised if Microsoft, a massive and bloated multinational corporation, switches to having one third of its programming done by AI in the near future. I don’t think it’s that nimble. I suspect the figure is a guesstimate to sound impressive and that the layoffs are Twitter-style bloat removal / offshoring.

It can simultaneously be true that Microsoft is replacing workers with AI, slimming down a bloated management apparatus, and hiring more cheap H-1Bs to watch over the AIs or for internal factional reasons.

I may be in the minority here, but I still believe that AI is a big, big thing and advancing fast. No plateau, just steady growth and new avenues to advance down. Two recent papers:

https://arxiv.org/abs/2506.24119

Using SPIRAL, self-play on zero-sum games produces reasoning capabilities that transfer broadly. Training Qwen3-4B-Base on Kuhn Poker alone achieves 8.6% improvement on math and 8.4% on general reasoning, outperforming SFT on 25,000 expert game trajectories. Analysis reveals that this transfer occurs through three cognitive patterns: systematic decomposition, expected value calculation, and case-by-case analysis. Multi-game training (TicTacToe, Kuhn Poker, Simple Negotiation) further enhances performance as each game develops distinct reasoning strengths. Applying SPIRAL to a strong reasoning model (DeepSeek-R1-Distill-Qwen-7B) can still lead to 2.0% average improvement. These results demonstrate that zero-sum games naturally develop transferable reasoning capabilities, highlighting a promising direction for autonomous reasoning development.
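To make the setup concrete, here is a toy sketch of the core idea: one shared policy plays both seats of a zero-sum game (Kuhn Poker) and is updated from nothing but the game outcome. Everything below is my own illustration, not the paper's code; SPIRAL trains an LLM with a proper RL stack, whereas this uses a tabular softmax policy and plain REINFORCE.

```python
import math
import random
from collections import defaultdict

# Infoset (my card, betting history) -> action logits.
# Action 0 = check/fold ("p"), action 1 = bet/call ("b").
logits = defaultdict(lambda: [0.0, 0.0])

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    return [v / sum(e) for v in e]

TERMINAL = {"pp", "bp", "bb", "pbp", "pbb"}  # all ways a Kuhn hand can end

def payoff(history, cards):
    """Chips won by player 0 at a terminal history (zero-sum)."""
    if history == "bp":
        return 1                     # P0 bet, P1 folded
    if history == "pbp":
        return -1                    # P0 checked, P1 bet, P0 folded
    high = 1 if cards[0] > cards[1] else -1
    return high if history == "pp" else 2 * high  # showdown

def play_episode():
    cards = random.sample([0, 1, 2], 2)  # J, Q, K encoded as 0, 1, 2
    history, trajectory = "", []
    while history not in TERMINAL:
        player = len(history) % 2
        key = (cards[player], history)
        probs = softmax(logits[key])
        action = random.choices([0, 1], weights=probs)[0]
        trajectory.append((player, key, action, probs))
        history += "pb"[action]
    return payoff(history, cards), trajectory

LR = 0.05
for _ in range(20_000):
    reward_p0, trajectory = play_episode()
    for player, key, action, probs in trajectory:
        reward = reward_p0 if player == 0 else -reward_p0  # same policy, opposed rewards
        for a in (0, 1):  # REINFORCE gradient for a softmax policy
            logits[key][a] += LR * reward * ((a == action) - probs[a])
```

The interesting claim in the paper isn't that this kind of loop learns poker (it does, slowly); it's that when the "policy" is an LLM, the reasoning habits the game forces on it transfer to math benchmarks.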

https://arxiv.org/abs/2505.03335

We propose a new RLVR paradigm called Absolute Zero, in which a single model learns to propose tasks that maximize its own learning progress and improves reasoning by solving them, without relying on any external data. Under this paradigm, we introduce the Absolute Zero Reasoner (AZR), a system that self-evolves its training curriculum and reasoning ability by using a code executor to both validate proposed code reasoning tasks and verify answers, serving as a unified source of verifiable reward to guide open-ended yet grounded learning. Despite being trained entirely without external data, AZR achieves overall SOTA performance on coding and mathematical reasoning tasks, outperforming existing zero-setting models that rely on tens of thousands of in-domain human-curated examples. Furthermore, we demonstrate that AZR can be effectively applied across different model scales and is compatible with various model classes.
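
Stripped to its skeleton, the loop is: the model proposes a (program, input) task, a code executor runs the program to establish ground truth, the model then tries to predict the output, and the match/mismatch is the only reward signal. Here's a toy version of my own, with random stubs standing in for the single LLM that plays both roles; only the executor-as-reward structure is faithful to the paper.

```python
import random

def run_program(src: str, x: int) -> int:
    """Code executor: the single verifiable source of ground truth."""
    scope = {}
    exec(src, scope)  # fine for these toy programs; never exec untrusted code
    return scope["f"](x)

# Stand-in proposer. In AZR the model itself writes these programs,
# steering toward tasks at the edge of its own ability.
PROGRAMS = [
    "def f(x): return x + 1",
    "def f(x): return x * 2",
    "def f(x): return x * x",
]

def propose_task():
    return random.choice(PROGRAMS), random.randint(0, 9)

# Stand-in solver. In AZR this is the same model predicting the output.
def solve_task(src, x):
    return random.randint(0, 81)

for step in range(5):
    src, x = propose_task()
    truth = run_program(src, x)               # executor validates the task
    guess = solve_task(src, x)
    reward = 1.0 if guess == truth else 0.0   # executor verifies the answer
    print(f"{src!r} x={x} truth={truth} guess={guess} reward={reward}")
```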

Two major papers on fully autonomous learning: via self-play on simple games and via self-proposed coding tasks, respectively. Admittedly both are small-scale, using small models and little compute; the former used maybe $1,000 worth of compute when the big boys have billions to spend. However, recursive AI self-improvement isn't science fiction. It's eminently possible, and presumably it's being done right now on a large scale.

Are people really going to be better at programming than a machine with thousands, millions of years of experience? Surely not, just as they're not better at chess or Go or StarCraft.

Almost all of these laid-off employees will be replaced by H-1Bs (Microsoft put in for over 6,000 in the first two quarters of this year), as well as by the previously announced hiring in India.

I’m not sure where AI comes in but they certainly aren’t replacing their laid-off workers with AI unless AI stands for “another Indian”.

Doing a job to a similar standard for 30% of the price, or even doing the same job to an 80% standard for 30% of the price, is a perfectly valid component of an individual's merit for almost any job. Microsoft choosing to recognise and reward this doesn't reflect badly on either them or the person doing the job for cheap; the only person to boo here is the one who wants to extract economic rents by artificially restricting competition.

Given that the 30% wages essentially work due to relative purchasing power and/or arbitrage between a Third World childhood and a First World adulthood, isn't this global laissez-faire approach basically poison for long-term economic growth?

If it becomes widely accepted that economic growth means an increased quality of life here and now, but that the window of opportunity only lasts maybe 1.5 generations before your (grand)children are priced out of the global market, that seems to make growth and laissez-faire economics a much tougher sell.

I’m not convinced that most people make decisions on that timescale.

It’s the kind of sentiment that convinced communists the world revolution was coming any…minute…now.

It's only poison for the disproportionate economic growth of your country relative to the other countries of the world. It's poison for a selfish, country-level view of the planet, yes, but that is not a bad thing, much like how pesticide is poison for aphids but is very much a good thing. If you support policies that lead to long-term global growth instead of merely localized growth, then this laissez-faire approach isn't bad at all; in fact, it's the fastest way to get global GDP growing as fast as possible.

No, I mean that it's going to make it much harder to get any democratic buy-in if people who have already experienced growth think that you're dooming them to decline, if people in the third world understand that they're going to get at best two generations of growth and then decline, etc.

I.e. it’s poison for the idea of economic growth, which up until now was mostly regarded positively.

Why would people in the third world only get two generations of growth and then decline? They catch up to the West, and then the great vehicle carries all of humanity forward towards higher and higher levels of economic welfare together. It's not too much to expect rich developed countries to become economic Bodhisattvas; they will always be at least as far along the path of enlightenment luxury gay space communism as everyone else.

If one wishes to escape the cycle of Samsara boom and bust there is no other alternative.

I don’t think it’s the case that, under the near-100% global fluidity you seem to be arguing for, the west will continue to remain ahead. That is, I don’t necessarily see why economic growth should be sticky under conditions of high global fluidity.

At the moment, Britain (say) is in relative decline. Because we once had a very large market and because companies serving that have mostly been staffed by British people for various reasons, that decline has been slow. Say, Toyota sets up a car factory in Sunderland to build cars for the British market; that factory is mostly staffed by British people by virtue of being in Britain and because of various employment laws, meaning that a decent number of British people are earning a decent salary, meaning that the market for Toyota cars in Britain is still decently sized, etc. But the decline is still present because (among other things) Britain is expensive and therefore British workers require global-market-beating salaries to live well.

Under conditions of maximal global liquidity, I would expect to see accelerated growth and decline, with some countries entering into the India/Taiwan/China/Japan role of ‘cheap country where multinationals can get decent work for low prices’ and other countries declining to that point or past that point and waiting for their time to get back in the spotlight.

(Countries might decline past the ‘spotlight point’ because factories etc. benefit from synergy and investment tends to cluster, so even if several countries have favourable economic conditions only one of them might win the prize at any given time.)

In short, to my mind, the maximally fluid world looks like it would accelerate boom and bust for any given country (or its native population) rather than lead to ratchet growth spread globally. I think it would be hard to get public support for that - first world countries wouldn’t want to sign up for accelerated decline from their current position, and third world countries want to be able to protect their economic growth once they have it.

Yeah, I get the impression they want to reduce headcount and salary costs, so the admin and management and sales will get the hit rather than the programmers, but there may well be an eye on "do we really need to hire some kid out of college at a high salary when we can just get our pet AI/five Indian coders to do it for us cheaper?"

"up to a third of programming is now done by AI" does seem to be a straw in the wind. Yes? No? Just means they're not hiring new junior staff?

It is a bit like saying a third of programming is done by Hibernate and Spring. Even if true, it doesn't mean what you think it means.

It's possible, but I'm skeptical -- AI isn't as bad as people say, but I don't think it's quite there yet, and more critically there's a massive space for additional programmer output -- and a lot of this stuff is happening at the same time that Microsoft is demanding vast increases in cheaper workers.

Cheap labor doesn't entirely explain it. There are thousands of Mexican, Brazilian, Eastern European, African, and Filipino developers they could hire. And even for Indian hires, they could keep them in India and build offices for them there.

I still notice my confusion at the endless push toward the indiaification of the USA by these groups. Do we really want to bring over all of these people who will effectively never integrate because they won't have to?

Having native English speakers is actually a huge plus when they have to interface with American workers. Indian English isn't perfect, but you get used to it fast; it's much, much harder when it comes to other countries. Eastern Europe is probably the second place to look, and it's a relatively popular outsourcing destination for smaller players, but I think the countries are just too small for the big players to want to get involved. If you go in there and hire 20,000 devs, that's going to screw with the job market for the entire country.

If anyone thinks ChatGPT is ready to replace programmers then just like... ask it to build some software for you. Enough to run a sustainable business. It's ready to be an employee, ok then, go employ it. That's free money for you that's just sitting there for the taking.

LLMs have this uncanny-valley problem where the more capable they become, the more ambitious the tasks I give them, and the more of my time they ultimately waste, because I go down a rabbit hole chasing a solution that can't work and don't figure that out for hours. Meanwhile the model confidently tells me this is how you do it, right up until the error is undeniable.

I feel like I was more productive with them a year ago than I am today.

It also astonishes me how relatively good they are at coding but kind of bad at everything else?

Ask it to walk you through plot points of a popular sci-fi book you're reading and it hallucinates left and right. It has likely been trained on the full text and also ten thousand book reports and reviews and it still can't keep its shit together.

I feel like I was more productive with them a year ago than I am today.

I don't think this is just you, or even a mystery. I've noticed the same thing, and when I was talking to a friend about it he came up with what I think is an excellent theory.

Through about mid-2024 (this is a rough timeline), the major AI companies were focused entirely on model performance, broadly defined. The idea was that whoever could "break out" with the absolute best model would capture a $1 trillion+ market. Then, as open-source and/or cheaper models began to not only keep up with the Big Boys but, depending on how you evaluate them, actually surpass some of them, the realization dawned on OpenAI, Anthropic, and Google: model performance is a race to commoditization. Commodity products can't sustain the valuations and growth ambitions of companies with tens of billions in investment.

What's happening now is that they're all re-using their tried-and-true playbook: build products for customer engagement. The models from the big AI firms today, I believe, are developed to maximize engagement instead of maximal performance. I don't mean that they intentionally dumb them down or force them to produce knowingly inaccurate responses. I think it's more in the structure of the response. Take software development, for instance. A response nowadays to "how do I design an API for my database" comes out as a nice, concise little five-step plan. The LLM will conclude by saying "let me know which section you want to dive into first!" It all feels so "on rails." You think, "shit, this might be pretty easy," and you start to whip something up. Flash forward several hours and ... well, you said it.

My memory seems to tell me that asking that same API question last year would've produced a fairly technical blueprint for designing APIs in general. I would've looked at it and thought, "okay, that helps, but it looks like this is still going to be work." And, here's the important part, I may have then gone to a different website to research good API design. I would've disengaged with the LLM.

It's no surprise to me that a lot of the recent hype cycle has been "LLMs are replacing Google as the primary means of interacting with information on the internet." Google's cash comes from the fact that most people don't even navigate directly to the URL they're interested in; they pop open Google, type "nytimes", and hit go. It is actually "the front page of the internet" (sorry, Reddit). If you get that same situation with OpenAI/Anthropic/Gemini, where people start at those chatbots every time they want to do anything on the internet, it will support the user growth and engagement numbers that might be able to justify the valuations of these companies (although I have some serious doubts about their unit economics).

I directed Sonnet to make some software. It's doable. It works. Gets revenue. Not sustainable as yet though.

I think it's a decent employee and very cost-efficient, albeit not perfect.

Is this personal software you build and sell on your own, or is it part of a corporate / small-biz code base?

The former.

Congrats! That "zero to one" of actually getting the damn thing out to customers is the hardest part.

I worry about the ladder effect, in that developers will be pulling the ladder up behind themselves.

Say you need a low-level coder to help support a more experienced software developer. You might just tell the developer to use AI instead of hiring a kid out of college. AI will be better than 80% of kids out of college, after all.

But AI can't do what that Software Developer does, and perhaps it never will. Ten years later, you have seasoned developers retiring and who is there to replace them? All the kids with CS degrees had to turn to menial labor, and no one got the experience needed to take over the Software Developer's position.

But if you're a future-oriented company that thinks long term, and you say, "I'll hire these CS people so they get trained," you are at a disadvantage against your competitors for years, and there's no guarantee that the guy you hired will stick with you after the job market for seasoned developers tightens.

Ten years later, you have seasoned developers retiring and who is there to replace them?

I imagine the presumption is "by the time the old warhorses retire, we will have developed AI that is even better than they ever were, so we'll just go on pulling ourselves up by our bootlaces".

I think, no matter what, there will need to be someone who is held accountable for the actions of AI: a human who can be jailed, fined, or fired if something goes wrong. But will that person be in a position to actually tell whether the AI is producing a bad product if they never gained the "on-the-ground" skills that people earn through practice?

I'm unlikely to be fined, jailed, or fired even if I write some seriously fucked up code. The CEO may be fired, and the company may be sued, but neither of those entities knows what my day to day looks like.

(From your link)

"It highlights the dangers of engineer overconfidence[2]: 428  after the engineers dismissed user-end reports, leading to severe consequences. "

This is AI-coding in a nutshell.

AI coding is neither necessary nor sufficient for engineers to dismiss end user concerns. I've seen this sort of thing going on for years in big companies, though fortunately not for anything life critical.

Sorry, I was unclear. I was agreeing with you. Furthermore, I was saying that vibe-coding / AI coding often falls into exactly the trap I quoted.

Got to ask: were they already reducing headcount?

Yeah, my cynical view is that large corporations would've done layoffs either way, as they often do during a downturn or after overhiring (as happened during COVID). In my mind, AI is just a convenient excuse. While I do think it improves productivity somewhat, the layoffs would've come regardless.

but the little nugget about "up to a third of programming is now done by AI" does seem to be a straw in the wind. Yes? No? Just means they're not hiring new junior staff?

Jevons paradox applies to programmers: make code cheaper to produce, and demand for software can grow enough that total demand for programmers rises rather than falls.