
Culture War Roundup for the week of June 9, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


They're already highly useful as a Super Google

If you're careful, they are. But that care requires twice as much checking: instead of just having to verify that the web page you find knows what it's talking about, you also have to verify that the AI correctly summarized it, and God help you if you just believe the AI about something for which it doesn't cite sources. Still, even Google's cheap "throw it in every search" AI seems much less likely to bring up unrelated web pages than the old Google option of "let the search engine interpret your query terms loosely", and much less likely to miss important web pages than the old Google option of "wrap most of your query in quotes so the stupid engine doesn't substitute unrelated-in-your-context words for your actual query terms". So it's still very useful.

The one thing I've repeatedly found most useful about current LLMs is that they're great at "dual" or "inverse" queries. If I had known I wanted the details of Godunov's Theorem, even a dumb search engine would have been fine for bringing them up. But all I could recall was that I wanted "some theorem that proves it's impossible to get higher order accuracy and stability from a numerical method for boundary-value problems without sacrificing something" - no name, no precise statement. I wrote a wishy-washy paragraph for Claude, and the first sentence of its reply gave me exactly the name of the theorem I wanted to search for. I can't imagine how much longer it would have taken to find that with Google.

AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).

I'm currently not allowed to use a top-of-the-line model for my job (even though I mostly work on things that aren't ITAR or classified, we've got a blanket limitation to an in-house model for now), but I'm definitely worried that I'll have a skill issue when the rules get improved. What do you do to get AI help with a large code base rather than a toy problem? Point it at a GitHub repo? Copy-and-paste a hundred thousand lines of code to make sure it has enough context? Paste in just the headers and/or docs it needs to understand a particular problem?

Use an AI-integrated IDE like Cursor or Windsurf (now bought by OpenAI, sigh).

Your query looks like ‘I have an error that looks like [pasted error text] and I think it’s being caused by @Object1 not being destroyed properly during garbage collection’.

The IDE gives the codebase structure to the model, which pulls up the object you mentioned, its headers, etc., then searches the repo for where it’s used, then…

But I don’t think I’ve ever worked on a codebase that you would consider large, and of course this only works for a monorepo.

I'm also not allowed to use the best models for my job, so take my advice (and, well, anyone else's) with a grain of salt. Any advice you get might be outdated in 6 months anyway; the field is evolving rapidly.

I think getting AI help with a large code base is still an open problem. Context windows keep growing, but (IMO) the model isn't going to get a deep understanding of a large project just from pasting it into the prompt. Keep to smaller components; give it the relevant source files, and also lots of English context (like the headers/docs you mentioned). You can ask it design questions (like "what data structure should I use here?"), or for code reviews, or have it implement new features. (I'm not sure about large refactors - that seems risky to me, because the model's temperature could make it randomly change code that it shouldn't. Stick to output at a scale that you can personally review.)
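
For a concrete picture, here's a minimal sketch of what "small component plus English context" can look like in practice - assuming you're calling a model through the OpenAI Python SDK, with the file names, numbers, and model name all made-up placeholders rather than anything from a real project:

```python
# Minimal sketch: feed the model one component's source plus English context,
# then ask a scoped design question. File names and model name are placeholders.
from pathlib import Path

from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Only the files relevant to this one component, not the whole repo.
relevant_files = ["cache/lru_cache.py", "cache/eviction_policy.py"]
code_context = "\n\n".join(
    f"### {name}\n{Path(name).read_text()}" for name in relevant_files
)

# Plain-English context the source files alone wouldn't give the model.
english_context = (
    "This is the caching layer of a larger service. Entries are about 1 KB, "
    "we expect roughly a million lookups per hour, and eviction currently "
    "scans the whole dict."
)

question = "What data structure should I use here to make eviction O(1)?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you actually have access to
    messages=[
        {"role": "system",
         "content": "You are reviewing one small, self-contained component of a larger codebase."},
        {"role": "user",
         "content": f"{english_context}\n\n{code_context}\n\n{question}"},
    ],
)
print(response.choices[0].message.content)
```

The point isn't the specific API; it's that the model sees one self-contained component plus a plain-English description of how it's used, rather than the whole repo.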

The most important thing to remember is that an LLM's superpower is comprehension: describe what you want the same way you would to a fellow employee, and it will usually understand. It's not some weird new IDE with cryptic key commands you have to memorize. It's a tool you can (and should) talk to normally.

What do you do to get AI help with a large code base rather than a toy problem?

Two things mainly:

  1. Have a good prompt that captures the nuances of the crappy, antiquated setup my workplace uses for its legacy systems. I have to refine this when it runs into the same sorts of errors over and over (e.g. assuming we're using a newer version of SQL when we're actually on one that was deprecated in 2005).

  2. Play context manager, and break problems up into smaller chunks. The larger the problem you're getting the AI to do, the greater the chance that it will break down at some point. Each LLM has a certain max output length, and if you get even close to that it can stop doing chain-of-thought to budget its output tokens, which makes its intelligence tank. The recent Apple paper on the Tower of Hanoi demonstrated that pretty clearly. A rough sketch of both points is below.
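
Something like this - a minimal sketch assuming the OpenAI Python SDK, with the SQL dialect, table names, and model name all hypothetical stand-ins for whatever your legacy setup actually is:

```python
# Minimal sketch of (1) a reusable prompt that pins the legacy environment and
# (2) sending one small chunk of work per request instead of the whole job.
# The SQL dialect, table names, and model name are hypothetical placeholders.
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()

# Point 1: pin down the legacy environment, and refine this whenever the
# model keeps making the same wrong assumption about it.
SYSTEM_PROMPT = (
    "You write SQL for a legacy system. Assume SQL Server 2000 syntax only: "
    "no CTEs, no window functions, no TRY/CATCH. If a feature isn't available "
    "in that version, say so and show a workaround."
)

# Point 2: one table per request keeps each reply well below the model's
# output limit, instead of asking for the whole migration in one go.
tables = ["orders", "order_lines", "customers"]  # hypothetical table names

for table in tables:
    task = (
        f"Write a script that archives rows older than 7 years from {table} "
        f"into {table}_archive, in batches of 10000 rows."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    print(f"-- {table}\n{response.choices[0].message.content}\n")
```

The reusable system prompt is point 1; the loop that asks for one table at a time instead of "rewrite all our cleanup scripts" is point 2.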