This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I commented recently about my personal experience using LLMs for work-related math stuff. I found that it wasn't great at giving me a whole proof (or really, much of a part of a proof) without error, but it helped me with some idea generation and pointed me to tools that I wasn't familiar with. To be fair, I haven't yet gotten access to any of the ones that are supposed to be hooked up to automated theorem provers, so maybe they'll work better. (I've signed up for one, but their system wasn't working at the time; starting this post prompted me to try again, and I was able to get in; maybe I'll find time to really test it soon.)
I guess I'd just like to report some experience with LLMs for other computer stuff. I had an extremely minor issue with one of my PCs. I wondered if LLMs could help. Through the course of this, I tried using multiple different LLMs.
The good is that it did have some good ideas for how to get started, and for possible causes of the issue. I may have caused a bit of a false start off the bat, because rather than really consider the multiple ideas that it gave me, I thought, "Yeah, I could totally see X being the problem; maybe I should just do that." It was easy for me to think that I could just do the likely fix; it's normally an easy thing to do, and there's zero harm if it wasn't actually the cause of the problem. However, it turned out that my specific system has a surprisingly stupid design, and it was going to be a much greater pain to do. So, in the meantime, I resigned myself to hoping that it was one of the other root causes suggested by the LLM; I'd come back to the first idea later if I could confirm that it really was the culprit.
The extra good is that, in hindsight, I am very sure that it was, indeed, one of the other root causes. So thankfully, I didn't waste too much time on the false start. However, once I began to implement my preferred fix, something strange was going wrong.
This is where we get into the bad. In diagnosing what was going wrong with the attempted fix, it got allllll into stuff that was actually pretty low probability: it suggested permissions issues; it suggested problems with registry entries. A couple of them were low risk, and at the time seemed like they could plausibly be related, and I did mess with a couple of things. Others were the ugly. No, Mr. Bot, I am not going to just delete that registry value (especially after I did a little non-LLM side research on what that registry value actually does).[1]
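(For the curious, that kind of side research can be done without touching anything. Here's a minimal read-only sketch in Python, standard library only; the key path and value name are made-up placeholders, not the actual value from my story:)

```python
# Read-only peek at a registry value before anyone (human or bot) deletes it.
# Windows-only. The key path and value name below are hypothetical.
import winreg

KEY_PATH = r"SOFTWARE\ExampleVendor\ExampleApp"  # hypothetical
VALUE_NAME = "ExampleSetting"                    # hypothetical

# KEY_READ means nothing can be modified through this handle.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_READ) as key:
    data, value_type = winreg.QueryValueEx(key, VALUE_NAME)
    print(f"{VALUE_NAME} = {data!r} (registry type {value_type})")
```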
In the end, when I told it that I was balking on doing what it wanted me to do, it suggested that I could, in the meantime, do one of the standard procedures in a different way. Of course, it thought that doing this would just be a step toward me ultimately having to delete that registry value. But I figured trying this alternate procedure at the very least couldn't hurt, and indeed, it helped by giving me an actual error code!
The LLM thankfully helped me decode it (likely faster than a Google search), which allowed me to adjust my fix. This was actually the key step, after which I was able to understand what I think was going on and manage later hiccups. Unfortunately, the LLM didn't grasp this. It was still set on, "Great! Now you're ready to delete registry values!" Sigh.
After I adjusted my fix, I was able to get another (unrelated) error code from another step in the process. This time, I actually tried a Google search for the error code first, and it came up empty, but the LLM told me exactly what it was (and it made sense), which was very nice and convenient. One final adjustment, and I think I have it working just fine.
The only remaining bad point is that the LLM still didn't realize that we'd fixed the problem! It was still all, "...and now you're ready to delete stuff in the registry!!!" I told it multiple times that the broken thing that had motivated its registry-deletion idea was no longer broken. Didn't matter; it really wanted to nuke that thing.
It all still leaves me quite conflicted. It was great at idea generation and decoding error messages. But man, does it leave me scared to think about all the people who are just giving LLMs free rein to take actual actions on their computer. I focused here on the registry key issue, but there were more things along the way that it came up with that left me thinking, "...no, I'm pretty sure I don't want to mess with that unless I've got a lot more information and confidence about what's going on." If I had just said, "Go fix this, Ralph Wiggum,"[2] who knows what sort of bollocks it would have done to my system. This worries me, because I hear all these people talking about how great it is that they can just tell their LLM to go change whatever it thinks is necessary to fix whatever problem on their computer... and they really think they're rapidly approaching a world (if they're not already there) where they'll be happy to give it full access to just do anything to it.
It also dovetails with the worries about vibe coding. Forget about changing some OS settings; they're actually choosing to run arbitrary code on their system that was generated by an LLM. Yes, some folks do rock-solid sandboxing, but let's be honest: if you're making anything that you or anyone else is going to actually use, it's not going to stay in a sandbox for long. I listened to a podcast this week where one of the hosts, mid-show, was like, "Yeah, I had this LLM make this program. I'm gonna have it add email functionality." And he just did it, live on air. Sandbox? Schmandbox. It now sends emails. What's it actually doing along the way? Who knows? He didn't check any of the code; of course he didn't. He wanted to see it send an email while he was still live.
"Technical debt" is the phrase that went through my mind in thinking about these experiences together. Yes, I was poking around at permissions/registry; sometimes, those things genuinely just get messed up. I've had experiences where my permissions have just gotten borked for completely unknown reasons; sometimes, I've been able to fix them; sometimes, stuff like that happens and you get to the point of, "This thing has been running a long time, and who knows what the long history of stuff has been, when this or that may have gotten corrupted; better to just wipe the OS and install clean." The term is more traditionally used with coding, when stuff has just gotten glommed on, piece by piece, and at some point, it's better to just throw it all away and invest in a clean slate rather than continuing to maintain the old mess. You can glom on email functionality to your vibe code in a few sentences and about twenty minutes. You don't need to think about whether that may be accruing technical debt.
Maybe the LLMs will keep getting better, and it'll be even easier to clean-slate stuff in the future, so the pain of accumulating technical debt won't be as bad. But man, I can't help but think that a lot of people are unknowingly setting themselves up, both in their systems and in their vibe code: that one day, they'll just say, "This is broken; I don't know why; it's a mess of stuff that LLMs have glommed onto it over years; just go fix it, Ralph," and it will just do wackier and wackier stuff to their system/code that is already so whacked out that it just doesn't fit the mold of the data the LLM was trained on.
1 - FTR, it was actually super relevant to be at least looking around in the registry, and doing so helped me understand what was going on.
2 - For those who haven't heard yet, this is the name for a technique where you tell the LLM to do something, and you set up a loop to repeatedly prompt it to keep working and doing stuff "until it's DONE".
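(A bare-bones sketch of what that loop amounts to; ralph_loop, run_agent, and the fake agent below are all names I made up for illustration, not any real tool's API:)

```python
# The "Ralph" loop: keep re-prompting the agent until it claims it's DONE.
from typing import Callable

def ralph_loop(run_agent: Callable[[str], str], task: str, max_turns: int = 50) -> str:
    """Prompt the agent over and over until it says DONE (or we give up)."""
    reply = run_agent(task)
    turns = 1
    while "DONE" not in reply and turns < max_turns:
        reply = run_agent("Keep working on the task. Reply DONE when it is finished.")
        turns += 1
    return reply

# Toy demo: a fake "agent" that finishes on its third prompt.
canned = iter(["working...", "still working...", "DONE"])
print(ralph_loop(lambda prompt: next(canned), "Go fix this, Ralph."))
```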
I think there are three things going on here, all with the same (somewhat inconvenient) solution:
The answer to all three problems is just to start a new session frequently and copy only the relevant and correct details into the new chat. It can be a pain if you're in the middle of something, but it gives the best results.
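As a sketch of what that looks like in practice (send_chat here is a hypothetical stand-in for whatever client or API you use; the role/content message shape is just the common chat format):

```python
# Fresh-session habit: carry forward only hand-picked, known-correct details
# instead of the whole meandering chat history.
from typing import Callable, Dict, List

Message = Dict[str, str]

def fresh_session(send_chat: Callable[[List[Message]], str],
                  relevant_facts: List[str],
                  question: str) -> str:
    summary = "Known-good context:\n" + "\n".join(f"- {f}" for f in relevant_facts)
    # Deliberately start from scratch: no prior turns, no dead ends, no
    # half-remembered wrong guesses from the old conversation.
    messages = [{"role": "user", "content": f"{summary}\n\n{question}"}]
    return send_chat(messages)

# Toy demo with a fake backend that just reports what it was handed.
print(fresh_session(lambda msgs: f"(model saw {len(msgs)} message(s))",
                    ["The error code from step 3 was 0x80070005",
                     "Folder permissions were already verified correct"],
                    "Given only the above, what should I try next?"))
```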
This advice is... somewhat redolent of good coding practices, I think; encapsulation and abstraction, at least. If you break a problem into smaller parts and keep the boundaries between those parts strict, it's easier for both humans and LLMs to conceptualize the totality of what they need at any given time. Ideally, structuring a project this way will result not just in better LLM performance but in more maintainable code too.
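(A toy illustration of the kind of strict boundary I mean; all names invented for the example:)

```python
# One narrow, testable seam for turning raw error text into structured data.
# Whoever (or whatever) edits the rest of the program never needs to hold
# the parsing details in their head at the same time, and vice versa.
from dataclasses import dataclass

@dataclass
class ErrorReport:
    code: int
    message: str

def parse_error(raw: str) -> ErrorReport:
    """The only place error text is interpreted; keep the boundary strict."""
    code_text, _, message = raw.partition(":")
    return ErrorReport(code=int(code_text, 16), message=message.strip())

print(parse_error("0x80070005: Access is denied"))
```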
On the other side: having an LLM write code at all (rather than, say, directly making system calls) is already a big step towards legibility (and thus maintainability). Such a system (the LLM making the system calls itself) is obviously insane, but it's perfectly possible for your program to be a particular internal state of an LLM. For that matter, it's perfectly possible (and indeed ubiquitous) for your 'program' to be the internal state of a human mind. By analogy, 'human vibe coding' is telling the human to design a set of legible policies rather than using their own judgment directly, which does actually have the expected advantages of consistency, comprehensibility, and interoperability.
I guess the takeaway is that we should look to normal management strategy for clues on how to manage LLMs, which might be obvious.
* This one, at least, I think is mainly a training issue: most RLHF/DPO is done on single-turn responses.
I've never encountered these issues, and I've had LLMs help me diagnose tons of computer issues and fix them. I pay for Claude Max. I've used Claude for coding and it has always delivered workable code, b/c I still use the ol' fashioned software development life cycle, i.e. plan, code, test, and iterate. I test all the code I can (same as I do as a product manager with my human coders). I force Claude to explain new code and how it works, and to add it to the documentation, which I review. Occasionally I do code reviews where I force Claude to make diagrams of how the code is interacting, and then either add that to the documentation or ask Claude to revise the code. Again... all stuff I've had to do as a PM with human coders, b/c I'm not a great coder, and left to their own devices human coders will go further off the range than even the dumbest LLM.
I think the difference is that as soon as I encounter something unhelpful or hallucinated, I simply start a new chat with a summary of what I've tried so far. I never compact context; I simply start a new chat.
LLMs should be compared with customer service agents... if an agent were unhelpful, would you stay on the line, or just hang up and call a new one? I hang up until I get someone helpful... LLM sessions should be used similarly.
Frankly, whenever I hear about these problems with LLMs, I just think you have to treat it like a person... would you, as an engineering manager, just let your coders go out and code and never check in on them again? Would you continue using an unhelpful human agent instead of someone who was helpful? Would you just let some random person control your life?
Seems pretty simple to me...
@P-Necromancer I think I'd like to bundle these two, as they're getting at a similar thing.
I agree with what you both say. Plenty of humans will come up with ridiculous things to do, or even just things that might make sense but have problems, and if you're not supervising them appropriately, they may just go and do them. But that's, like, the essence of technical debt?
For the example of fixing some OS issue, imagine I didn't really have any technical knowledge of how things work (say, I don't really even know what the registry is unless a tech/LLM tells me something about it). Maybe I'd take my computer to a human tech. Could even be a corporate IT guy. Perhaps, knowing that I don't have a clue, I just give it to him: "Here's my problem; please fix it, RalphRufus." Who knows what he'll get up to? What stuff he'll mess with along the way. Things he'll try just because, and then maybe leave it in a changed state, even though it didn't progress toward a solution to the actual problem. This cruft can build up. After years of having this corporate IT guy and that corporate IT guy and the other corporate IT guy just doing who knows what, maybe at some point things get bizarre enough that the next one says, "Dude, stuff is wild here; we probably should just wipe it and clean install."
That makes sense, and it's utterly routine in the world with humans. I hear my wife tell me about weird stuff that's broken on her work computer... and even weirder stuff that whatever IT guy she talked to did. She doesn't have a clue what's going on. I get it.
I also agree that, as of right now,[1] the best is when you know enough about what's going on that you can get it to explain things and are able to then understand it yourself. Get it to document things fully, provide a suite of tests, have a back-and-forth. It can provide tons of utility![2]
...but, if you genuinely lack enough knowledge to be a competent participant in that back-and-forth, it still may let you "just do stuff". There can still be tons of utility here, as it may still get things right a lot, and folks who have had some problem that they've wanted to fix for ages, could never get time with a competent human, and certainly couldn't figure it out on their own will be able to fix many of those problems, and it will be wonderful. It may also, occasionally, along the way, build up technical debt.
Note that I'm not saying that this is some unique problem that is fundamentally different from dealing with humans. Instead, I'm now conceptualizing it in the same way that I conceptualize human-driven technical debt. I think that dovetails well with both of your descriptions. If there is a downside, it's probably that many folks who wouldn't have ever tried to fix that OS problem or make that code will now do it, and they might be building up technical debt while they're also accumulating utility. They may choose to do it a lot, and they may jump into it with both eyes shut. This may still be the right choice! They may still get more utility from all the wins than they lose from either discrete bad events or built-up cruft.
This is a conflict, a tension, which is why I said that I was, indeed, conflicted. I am still neither an "LLM good" nor an "LLM bad" person.
1 - I continue to take no position on the question of to what extent future progress will render this concern de minimis.
2 - To briefly respond to the 'shouldn't you just hang up on a human customer service agent who you can tell is going to be unhelpful' question: yes. Absolutely. I didn't bother with the specific issue of it getting hung up on deleting the registry value, because I was close enough to done that hearing it append its bad idea one more time wasn't important to me. I did mention that I used multiple LLMs, and that was part of it; I left out every twist and turn of the story, but yeah, I not only scrapped the prior context, I even jumped to different models. This is a useful skill to have when dealing with humans and LLMs. Even when dealing with some human professionals, my life changed long ago when I realized that I could grasp some understanding of what their "box" of the world was, and once I realized that my situation was outside of their "box", I just moved on from them. But the concern here is that you have to have just enough knowledge about the thing to be able to gauge where their box is, when you're outside of it, or when they're going off the rails. There are a lot of people who don't have that with humans, and they're not going to have that with the many, many more things that they're going to want to do with LLMs. I don't have that with all sorts of different humans or things that I might want to do with LLMs.
Managing context is kind of a new skill. I have figured out that at some point you have to start from scratch: just copy-paste the relevant context and delete the old chat.
LLMs are expertise multipliers: if you have expertise, they are extremely useful. And I think that people do keep the reins of their agents too loose.
Anyway, just out of curiosity: was it a paid tier of a first-tier model? And on technical debt: no matter how bad the situation, never do a clean-sheet design. You usually end up dealing with the same crap, plus you've wasted the time spent rewriting. There are exceptions of course, but usually the urge comes from the fact that writing code is easier than understanding code.
Without telling us which LLM you were using, your complaints are about as useful as if the string "the LLM" in your post were replaced by "a human". But I notice this is a common feature of those who seek to diminish the utility of LLMs: never mentioning which model, or how much reasoning.
Yeah, I chose not to, because of course the goalposts will be moved to, "You should have used my preferred LLM instead." I did mention that I used multiple different ones, from multiple different companies. Thinking always on. Not $200/mo. Of course, someone will just say, "You won't have any problems if you pay $200/mo for my preferred LLM." Maybe? I even note that they will perhaps get better! Yes, they're all getting better; the cheaper ones improve right along with the expensive ones. But will the expensive ones still produce technical debt? Why do you think they will or will not? I don't know if they will! I'm saying that I don't know. You seem to be implying, but not quite stating, that you know (and how you know) that they certainly won't, if only you pay enough or wait an unspecified period of time.
I'd note that a common feature of your style of comment is that you immediately accuse your interlocutor of seeking to "diminish the utility of LLMs". But I didn't do that! I said that there were ways in which they provided quite a bit of utility! Imagine having a discussion about any other technology like this. "You know, this nuclear science stuff is pretty cool. Can provide a lot of energy for cheap. Miiiight be worried about some possible dangers that might come up, like, ya know, bombs or stuff." "Why don't you tell us exactly what device you've been using in your own experiments?!?! Why are you trying to diminish the utility of nuclear science?!?!" Like, no dawg, you just sound like you're not paying attention.
@Poug made a valid point. For years I've wanted to hit my head against a wall when people complained about "ChatGPT" being useless while they were using GPT-3.5 instead of 4. The same pattern has consistently repeated since, though you seem to be a more experienced user and I'm happy to take you at your word. It is still best practice to disclose what model you used, for the same reason it would be bad form to write an article reviewing "automobiles" and pointing out terrible handling, mileage, and build quality without telling us whether it was a Ferrari or a Lada.
I'll put in another example here.
I work for a company that is running an agentic coding trial with Gemini 3 Pro. At present, the only developer who has claimed to see a productivity boost from code assist is one who is terrible at her job, and from our perspective, all it has done is allowed her to write bad code, faster.
The rest of us have regular conversations about what we're doing wrong. Everybody and their dog is claiming a notable performance boost with this technology, so we're all trying to figure out what our god-damned malfunction is.
It feels like the goalposts and blame both slide to fit how accommodating the developer is.
Maybe my employer just has a uniquely terrible codebase, but something tells me that's not the case. It's old, but it's been actively maintained (complete with refactoring and modernization updates) for almost two decades now. It's large, but it's not nearly so big as some of the proprietary monsters I've seen at F500 companies. It's polyglot, but two of the three languages are something the agent is supposedly quite good at.
None of us are Silicon Valley $800,000/yr TC rock stars, but I stand by my coworkers. I think we're better than average by the standards of small software companies. If a half-dozen of us can't get a real win out of it beyond the vague euphoria of doing something cool, what exactly is the broader case here? Is it genuinely that something like 20 guys on nootropics sharing an apartment in Berkeley are going to obsolete our entire industry? How is that going to work when it can't even do library upgrades in a product that's used by tens of thousands of people and has a multi-decade history?
Because right now, I'm a little afraid for my 401(k), and with each passing day it's less because I'm afraid that I'll be out of a job and more that I have no idea how these valuations are justified.
This is a viable criticism if someone is using a shitty ancient free model. The average paying ChatGPT customer, on 5.2 or whatever it is now, is getting a decent model, so their criticisms can't be as easily dismissed as they could have been a year ago.
Common problem these days: once an AI makes a mistake, it stubbornly continues along that path or, alternatively, goes schizo. Sometimes you just need to start a new chat.
I don't think that's a pattern peculiar to AI...
I'm banging my head off the desk (metaphorically) here at those examples: the paperclip AI won't have to be persuasive enough to talk its way out of the box; we will just happily hand it the keys, all our bank account details, and the deeds to the house, and wave it on its merry way.
It's not AI being smart that will be the problem, it's humans being stupid.