This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

At @self_made_human's request, I'm answering this. I strongly believe LLMs to be a powerful force-multiplier for SWEs and programmers. I'm relatively new in my latest position, and most of the devs there were pessimistic about AI until I started showing them what I was doing with it, and how to use it properly. Some notes:
LLMs will be best where you know the least. If you're working on a 100k-line codebase that you've been dealing with for 10+ years in a language you've known for 20+ years, then the alpha on LLMs might be genuinely small. But if you have to deal with a new framework or language that's at least somewhat popular, then LLMs will speed you up massively. At the very least, they can rapidly generate discrete chunks of code on demand, functioning as a kind of Super Stack Overflow.
Using LLMs is a skill, and if you don't prompt them correctly the output can veer towards garbage. Things like setting up a system prompt and initial messages, chaining queries from high-level design decisions down to smaller tasks, and especially managing context are all worth learning (see the sketch after these notes). One of the devs at my workplace tried to raw-dog the LLM by dumping in a massive codebase with no further instruction while asking for about 10 different things simultaneously, and claimed AI was worthless when the result didn't compile on the first attempt. Stuff like that is just a skill issue.
Use recent models, not stuff like 4o-mini. A lot of the devs at my current workplace tried experimenting with LLMs when they first blew up in early 2023, but those models were quite rudimentary compared to what we have today. Meanwhile, a lot of tools like Roo Cline or whatever have defaulted to old, crappy models to keep costs down, which just results in bad code. You should be using one of 1) Claude Opus, 2) ChatGPT o3, or 3) Google Gemini 2.5 Pro.
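To make the prompting point above concrete, here's a minimal sketch of chaining queries under a persistent system prompt, assuming the Anthropic Python SDK. The model name, the codebase details, and the prompts themselves are all placeholders, not a prescription.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM = (
    "You are assisting on a mid-sized C# backend service. "
    "Target .NET 6, prefer small focused changes, and ask before adding dependencies."
)

history = []

def ask(prompt):
    """Send one step of the chain, keeping earlier steps in the context window."""
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder; use whatever recent model you have access to
        max_tokens=2048,
        system=SYSTEM,
        messages=history,
    )
    answer = reply.content[0].text
    history.append({"role": "assistant", "content": answer})
    return answer

# High-level design first, then progressively smaller, concrete tasks.
ask("Sketch a retry policy for our flaky payment client. Interfaces only, no bodies.")
ask("Now implement just the exponential-backoff helper from that sketch.")
ask("Write xUnit tests for the backoff helper: happy path plus two edge cases.")
```

The point of the structure is that each step inherits the context of the previous one, instead of one giant dump that asks for ten things at once.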
Speaking from my own experience with literal top-of-class LLMs.
LLMs are good for getting overviews of public, popular, highly documented technical systems. They can meaningfully reduce ramp-up time there. But for most jobs that's not a significant share of the overall work. I'd estimate ramp-up time to be a modest fixed cost that is already effectively ameliorated by existing resources like Stack Overflow. So maybe a 2x speed-up on 2% of overall working time.
They are also good for writing repetitive boilerplate. Copy/paste features are cool and helpful. This takes maybe 1% of my overall working time. I just don’t wind up repeating myself that much.
They can be good for getting code coverage, but that does not equate to good testing. I can elaborate if needed, but figuring out which system properties are most likely to need explicit coverage is an art that requires a high-level perspective that an LLM will not have for the majority of serious projects. This is around 10% of my job.
For lesser-known or internal APIs (common at larger companies), the LLM will hallucinate at extraordinary rates. This is around 5% of my job.
For anything technical, like refactoring class hierarchies, the LLM will get way out of its depth and is likely to produce gibberish. This is around 4% of my job.
It simply will not understand the larger requirements of a project, and what would make one solution valid and another invalid. This is about 15% of my job as it relates to code, and maybe 8% as it relates to design specifications, and 20% as it relates to talking with other people about said requirements.
The rest of my job is code review and progress updates, which maybe could be automated but which feels a little cheap to do. So I stand to save about 2% of my working time with AI, which is pretty marginal. And on my team, you can’t tell any meaningful difference in output between the people who use AI and the ones who don’t, which ties into my general assertion that it’s just not that helpful.
Then again, I’m a backend engineer in a pretty gritty ecosystem, so maybe this isn’t true for other software roles.
If there's one place I doubt AI will improve much in the near future, it's stakeholder management. That's why I think even if AI becomes an astronomically better coder than the average SWE, SWEs could just rebrand as AI whisperers and translate the nuances of a manager's human-speak into AI prompts. Maybe it'll get there eventually, but we're still a good ways off from non-technical people being able to use AI to get any software they want without massive issues arising. The higher up in the org you are, the bigger a share of your job stakeholder management becomes. I think we agree on this point overall.
On less well-known systems and APIs, I think the hallucination issue is more of a skill issue (within reason, I'm not making an accusation here). I'm translating a bunch of SQR (a niche language you've probably never heard of) queries to an antiquated version of TSQL right now, and the AI indeed hallucinates every now and then, but it's in predictable ways that can be solved with the right system prompts. E.g. sometimes it will put semicolons at the end of every line thinking it's in a more modern version of SQL, and I have to tell it not to do that, which is somewhat annoying, but simply writing a system prompt that has that information cuts down that issue by 99%. It's similar for unknown APIs -- if the AI is struggling, giving it a bit of context usually resolves those problems from what I've seen. Perhaps if you're working in a large org with mountains of bespoke stuff, then giving an AI all that context would just overwhelm it, but aside from that issue I've still found AI to be very helpful even in more niche topics.
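To illustrate, here's a minimal sketch of the kind of dialect-pinning system prompt I'm describing, again assuming the Anthropic Python SDK. The target version and the specific rules are made up for the example and would need to match your actual environment.

```python
import anthropic

client = anthropic.Anthropic()

# The rules below are invented for illustration; the point is that pinning the
# dialect up front stops the model from "helpfully" modernizing the output.
DIALECT_RULES = """You are translating SQR report queries into T-SQL for an old version of SQL Server.
Hard constraints:
- Do NOT terminate statements with semicolons.
- Do NOT use modern syntax such as OFFSET/FETCH, TRY_CONVERT, or FORMAT.
- Preserve the original column aliases and join order.
If a construct has no equivalent in this dialect, say so instead of guessing."""

def translate(sqr_snippet):
    """Translate one SQR query under the dialect constraints above."""
    reply = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder model name
        max_tokens=2048,
        system=DIALECT_RULES,
        messages=[{"role": "user", "content": "Translate this SQR query:\n" + sqr_snippet}],
    )
    return reply.content[0].text
```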
On the time saved, you might want to be on the lookout for the 'dark leisure' theory applying to some folks, while for others the time savings from using AI might be eaten up somewhat by learning to use the AI in the first place. I agree that the productivity boost hasn't been astronomical like some people claim, but I think it will increase over time as models improve, people get more skilled with AI, and people using AI to slack off get found out.
Haha, I really, really don’t think there’s any dark leisure here. None of the best performers rest much at all, and I talk with them pretty openly about their habits. Plus, our direct manager is bullish on AI and got the most enthusiastic guy on the team to do an AI demo a few weeks back. Using AI as a force multiplier would get you a raise, not more work.
The more I have to babysit the LLM, the less time-efficient it is for me. I don’t know what everyone’s experience is, but typing out code (even SQL) is just not that time consuming. I know, logically, what I want to happen, and so I write the statements that correspond to that behavior. Reading code for validity, rewriting it to make it more elegant and obviously correct, that takes more of my time, and LLM output is (like a junior dev) unreliable enough that I have to read deeply for (unlike a junior dev) no chance of it improving future output. Plus, the code I write tends to be different enough that the prospect of reprompting the LLM repeatedly is pretty unpleasant.
That said, I absolutely use it for Bash, which is arcane and unfamiliar to me. I still have to go through the slow process of validating its suggestions and rewriting pieces to make them more proper, but the way you perform simple logical actions in Bash is so far outside my wheelhouse that getting pointed in the right direction is valuable. So if you’re in a position where you’re doing more regular and rote work with particularly obnoxious but well-documented languages, it makes sense we’d have different opinions and experiences.