This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I commented recently about my personal experience using LLMs for work-related math stuff. I found that it wasn't great at giving me a whole proof (or really, much of a part of a proof) without error, but it helped me with some idea generation and pointing me to tools that I wasn't familiar with. To be fair, I haven't yet gotten access to any of the ones that are supposed to be hooked up to automated theorem provers, so maybe they'll work better (I've signed up for one, but their system wasn't working at the time; starting this post prompted me to try again, and I was able to get in; maybe I'll find time to really test it soon).
I guess I'd just like to report some experience with LLMs for other computer stuff. I had an extremely minor issue with one of my PCs and wondered if LLMs could help. Over the course of it, I tried multiple different LLMs.
The good is that it did have some solid ideas for how to get started, and for possible causes of the issue. I may have caused a bit of a false start right off the bat, because rather than really considering the multiple ideas it gave me, I thought, "Yeah, I could totally see X being the problem; maybe I should just do that." It was easy to think I could just apply the likely fix; it's normally an easy thing to do, and there's zero harm if it turns out not to be the cause. However, my specific system has a surprisingly stupid design, and the fix was going to be a much greater pain than usual. So I resigned myself to hoping, in the meantime, that it was one of the other root causes the LLM had suggested; I'd come back to the first idea later if I could confirm it really was the culprit.
The extra good is that, in hindsight, I am very sure that it was, indeed, one of the other root causes. So thankfully, I didn't waste too much time on the false start. However, once I began to implement my preferred fix, something strange started going wrong.
This is where we get into the bad. In diagnosing what was going wrong with the attempted fix, it got allllll into stuff that was actually pretty low probability: it suggested permissions issues, it suggested problems with registry entries. A couple of these were low risk and, at the time, seemed plausibly related, so I did mess with a couple of things. Others were the ugly. No, Mr. Bot, I am not going to just delete that registry value (especially after I did a little non-LLM side research on what that registry value actually does).1
In the end, when I told it that I was balking at doing what it wanted me to do, it suggested that I could, in the meantime, do one of the standard procedures in a different way. Of course, it thought that doing this would just be a step toward me ultimately having to delete that registry value. But I figured that trying this alternate procedure couldn't hurt, and indeed, it helped by giving me an actual error code!
The LLM thankfully helped me decode it (likely faster than a google search would have), which allowed me to adjust my fix. This was actually the key step, after which I was able to understand what I think was going on and manage the later hiccups. Unfortunately, the LLM didn't grasp this. It was still set on, "Great! Now you're ready to delete registry values!" Sigh.
After I adjusted my fix, I got another (unrelated) error code from a later step in the process. This time, I tried a google search for the error code first; it came up empty, but the LLM told me exactly what it was (and its answer made sense), which was very nice and convenient. One final adjustment, and I think I have everything working just fine.
The only remaining bad point is that the LLM still didn't realize we'd fixed the problem! It was still all, "...and now you're ready to delete stuff in the registry!!!" I told it multiple times that the broken thing motivating the registry deletion was no longer broken. Didn't matter; it really wanted to nuke that value.
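(An aside for anyone in a similar standoff: if you ever do decide to let the bot win an argument like that, the cautious pattern is to read the value and export the key first, so the deletion is reversible. A minimal sketch, assuming Windows, Python's built-in winreg module, and a hypothetical key path:)

```python
import subprocess
import winreg

# Hypothetical key/value; substitute whatever the LLM is eyeing.
KEY_PATH = r"SOFTWARE\ExampleVendor\ExampleApp"
VALUE_NAME = "ExampleValue"

# Export the whole key to a .reg file first; restoring it later is just
# a double-click on the backup file.
subprocess.run(
    ["reg", "export", rf"HKCU\{KEY_PATH}", "backup.reg", "/y"],
    check=True,
)

# Read the value so you know exactly what you'd be deleting.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
    print(f"{VALUE_NAME} = {value!r} (type {value_type})")
```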
It all still leaves me quite conflicted. It was great for idea generation and for decoding error messages. But man, does it leave me scared to think about all the people who are just giving LLMs free rein to take actual actions on their computers. I focused here on the registry issue, but there were more things it came up with along the way that left me thinking, "...no, I'm pretty sure I don't want to mess with that unless I've got a lot more information and confidence about what's going on." If I had just said, "Go fix this, Ralph Wiggum2," who knows what sort of bollocks it would have done to my system. This worries me, because I hear all these people talking about how great it is that they can just tell their LLM to change whatever it thinks is necessary to fix whatever problem on their computer... and they really think they're rapidly approaching a world (if they're not already there) where they'll be happy to give it full access to do anything at all.
It also dovetails with the worries about vibe coding. Forget about changing some OS settings; these folks are actually choosing to run arbitrary, LLM-generated code on their systems. Yes, some people do rock-solid sandboxing, but let's be honest: if you're making anything that you or anyone else is actually going to use, it's not going to stay in a sandbox for long. I listened to a podcast this week where one of the hosts, mid-show, was like, "Yeah, I had this LLM make this program. I'm gonna have it add email functionality." And he just did it, live on air. Sandbox? Schmandbox. It now sends emails. What's it actually doing along the way? Who knows? He didn't check any of the code; of course he didn't. He wanted to see it send an email while he was still live.
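For what it's worth, the rock-solid sandboxing doesn't have to be elaborate. Here's a minimal sketch of one common approach (not necessarily what those folks do), assuming Docker is installed and the LLM's output has been saved to a hypothetical local file:

```python
import subprocess
from pathlib import Path

# Hypothetical filename: the LLM-generated code, saved for review first.
script = Path("llm_generated.py")

# Run it in a throwaway container: no network, read-only filesystem,
# capped memory. "It now sends emails" can't happen silently in here.
subprocess.run(
    [
        "docker", "run", "--rm",
        "--network=none",
        "--read-only",
        "--memory=256m",
        "-v", f"{script.resolve()}:/app/script.py:ro",
        "python:3.12-slim",
        "python", "/app/script.py",
    ],
    check=False,
)
```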
"Technical debt" is the phrase that went through my mind in thinking about these experiences together. Yes, I was poking around at permissions/registry; sometimes, those things genuinely just get messed up. I've had experiences where my permissions have just gotten borked for completely unknown reasons; sometimes, I've been able to fix them; sometimes, stuff like that happens and you get to the point of, "This thing has been running a long time, and who knows what the long history of stuff has been, when this or that may have gotten corrupted; better to just wipe the OS and install clean." The term is more traditionally used with coding, when stuff has just gotten glommed on, piece by piece, and at some point, it's better to just throw it all away and invest in a clean slate rather than continuing to maintain the old mess. You can glom on email functionality to your vibe code in a few sentences and about twenty minutes. You don't need to think about whether that may be accruing technical debt.
Maybe the LLMs will keep getting better, and it'll be even easier to clean-slate stuff in the future, so the pain of accumulating technical debt won't be as bad. But man, I can't help but think that a lot of people are unknowingly setting themselves up, both in their systems and in their vibe code. That one day, they'll just say, "This is broken; I don't know why; it's a mess of stuff that LLMs have glommed onto it over the years; just go fix it, Ralph," and it will do wackier and wackier stuff to a system/codebase that is already so whacked out that it no longer fits the mold of the LLM's training data.
1 - FTR, it was actually super relevant to be at least looking around in the registry, and doing so helped me understand what was going on.
2 - For those who haven't heard yet, this is the name of a technique where you tell the LLM to do something and set up a loop that repeatedly prompts it to keep working "until it's DONE".
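For the curious, the whole technique is just a few lines. A sketch of the pattern, where ask_llm is a hypothetical placeholder for whatever model/agent call you actually use, not a real API:

```python
def ask_llm(prompt: str, history: list[str]) -> str:
    """Hypothetical placeholder for a real LLM/agent call."""
    raise NotImplementedError

# The "Ralph Wiggum" loop: keep re-prompting until the model says DONE.
history: list[str] = []
reply = ask_llm("Go fix this.", history)
while "DONE" not in reply:
    history.append(reply)
    reply = ask_llm("Keep working. Say DONE when it's done.", history)
```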
I think there are three things going on here, all with the same (somewhat inconvenient) solution.
The answer to all three problems is just to start a new session frequently and copy only the relevant and correct details into the new chat. It can be a pain if you're in the middle of something, but it gives the best results.
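If you want to do that distillation semi-automatically, it might look something like this sketch (ask_llm again being a hypothetical placeholder, not a real API): have the old session summarize itself, then seed the fresh session with only that summary.

```python
def ask_llm(prompt: str, history: list[str]) -> str:
    """Hypothetical placeholder for a real LLM call."""
    raise NotImplementedError

old_history: list[str] = []  # the long, cluttered session

# Distill: keep only confirmed facts and the current state of the fix.
summary = ask_llm(
    "Summarize only the verified facts and the current fix-in-progress. "
    "Omit every hypothesis we've already ruled out.",
    old_history,
)

# Fresh start: the new session sees the distilled context and nothing else.
new_history: list[str] = [summary]
```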
This is... somewhat redolent of good coding practices, I think; encapsulation and abstraction, at least. If you break a problem into smaller parts and keep the boundaries between those parts strict, it's easier for both humans and LLMs to conceptualize the totality of what they need at any given time. Ideally, structuring a project this way will not just result in better LLM performance but in more maintainable code too.
On the other side: having an LLM write code at all (rather than, say, having it make system calls directly) is already a big step towards legibility (and thus maintainability). It's perfectly possible for your program to be a particular internal state of an LLM; such a system is obviously insane, but for that matter, it's perfectly possible (and indeed ubiquitous) for your 'program' to be the internal state of a human mind. By analogy, 'human vibe coding' is telling the human to design a set of legible policies rather than using their own judgment directly, which does actually have the expected advantages of consistency, comprehensibility, and interoperability.
I guess the takeaway is that we should look to normal management strategy for clues on how to manage LLMs, which might be obvious.
* This at least I think is mainly a training issue: most RLHF/DPO is done on single-turn responses.