
Culture War Roundup for the week of June 9, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


At this point, I don't even know what an AGI is. The word has just been semantically saturated for me.

What I do know, based on having followed the field since before the GPT-2 days and having personally fucked around since GPT-3, is that for at least a year or so, SOTA LLMs have been smarter and more useful than the average person. Perhaps one might consider even the ancient GPT-3.5 to have met this (low) bar.

They can't write? Have you seen the quality of the average /r/WritingPrompts post?

They can't code? Have you seen the average code monkey?

They can't do medicine/math/..? Have you tried?

The average human, when confronted with a problem outside their core domain of expertise, is dumb as rocks compared to an LLM.

I don't even know how I managed before LLMs were a thing. It hasn't been that long; I've spent the overwhelming majority of my life without them. If cheap and easy access to them were to magically vanish, my willingness to pay to get it back would be rather high.

Ah, it's all too easy to forget how goddamn useful it can be to have access to an alien intelligence in one's pocket. Even if it's a spiky, inhuman form of intelligence.

On the topic of them being cheap/free, it's a damn shame that AI Studio is moving to API access only. Google was very flustered by the rise of ChatGPT and the failure of Bard; it was practically begging people to give Gemini a try instead. I've been pleasantly surprised and impressed since the 1.5 Pro days, and I'm annoyed that their gambit has paid off: demand, even among normies and casual /r/ChatGPT users, has grown to the point that even a niche website meant for power users got saturated.

Why do you consistently assume that people who don't share your views of LLM capabilities just haven't seen what they can do/what humans can do? For example:

They can't code? Have you seen the average code monkey?

Yes I have (and of course, I've used LLMs as well). That's why I say LLMs suck at code. I'm not some ignorant caricature like you seem to think, judging things without having a proper frame of reference for them. I actually know what I'm talking about. I don't gainsay you when you say that an LLM is good at medical diagnoses, because that's not my field of expertise. But programming is, and in my opinion they simply are not good at programming. Obviously reasonable people can disagree on that evaluation, but it really irks me that you write as though anyone who disagrees with your take is too inexperienced to give a proper evaluation.

I join the chorus of people who don't quite understand what your problem is with LLMs. What kind of code do you write? The tools are at the point where I can give them a picture of a screen I want along with some API endpoints, and they reliably spit out immediately functioning React code. I can then ask for the middleware code for those endpoints and finally ask for a sproc for the database component. It's possible you hover high above us React monkeys and barely even consider that programming, but surely you understand that's the level at which at least half of all programmers operate? I had Copilot do all these things today; I know that it can do these things. So where is the disconnect? It's truly possible there's some higher plane of coding us uninspired 9-to-5 paycheck Andys can only obliquely perceive, and that this is your standard for being able to program, but it'd be nice if you could just say that to resolve the confusion.
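To make that concrete, here's a minimal sketch of the shape of work I mean. Everything in it (the /api/orders endpoint, the Order fields, the usp_GetOrders sproc name) is invented for illustration, not output from any particular model. First the React side:

```typescript
// React side: a component that fetches from an endpoint and renders it.
// All names here are hypothetical, purely to illustrate the CRUD shape.
import { useEffect, useState } from "react";

type Order = { id: number; customer: string; total: number };

export function OrderList() {
  const [orders, setOrders] = useState<Order[]>([]);

  useEffect(() => {
    // Fetch the list once on mount from the (hypothetical) endpoint
    fetch("/api/orders")
      .then((res) => res.json())
      .then(setOrders);
  }, []);

  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id}>
          {o.customer}: ${o.total.toFixed(2)}
        </li>
      ))}
    </ul>
  );
}
```

And the middleware behind it, assuming SQL Server via the mssql package since sprocs came up:

```typescript
// Middleware side: an Express handler that delegates to a stored procedure.
// "usp_GetOrders" and the connection string env var are hypothetical.
import express from "express";
import sql from "mssql";

const app = express();

app.get("/api/orders", async (_req, res) => {
  const pool = await sql.connect(process.env.DB_CONN!);
  const result = await pool.request().execute("usp_GetOrders");
  res.json(result.recordset);
});

app.listen(3000);
```

Nothing clever in any of that, but plumbing like this is the bulk of the day job for a lot of us, and the tools produce it reliably.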

As someone who's been working in the field of machine learning since 2012 and generally agrees with @SubstantialFrivolity's assessment, I think what we are looking at here is a bifurcation in opinion between people looking for "bouba" solutions and those looking for "kiki" solutions.

If you're a high-school student or literature major with zero background in computer science looking to build a website or develop baby's first mobile app, LLM-generated code is a complete game changer. Literally the best thing since sliced bread. (The OP's and @self_made_human's comments reflect this.)

If you're a decently competent programmer at a big tech firm, LLMs are at best a mild productivity booster. (See @kky's comments below)

If you're a decently competent programmer working in an industry where things like accuracy, precision, and security are core concerns, LLMs start to look anti-productive: in the time you spend messing around with prompts, checking the LLM's work, and correcting its errors, you could've easily done the work yourself.

Finally, if you're one of those dark wizards working in FORTRAN or some proprietary machine language because this is Sparta IBM/Nvidia/TSMC and the compute must flow, you're skeptical of the claim that an LLM can write code that would compile at all.

If you're a high-school student or literature major with zero background in computer science looking to build a website or develop baby's first mobile app, LLM-generated code is a complete game changer. Literally the best thing since sliced bread.

You have to contend with the fact that like 95+% of employed programmers are at this level for this whole thing to click into place. It can write full-stack CRUD code easily and consistently. Five years ago you could have walked into any bank in any of the top 20 major cities in the United States with the coding ability of o3 and some basic soft skills and been earning six figures within five years. I know this to be the case; I've trained and hired these people.

If you're a decently competent programmer working in an industry where things like accuracy, precision, and security are core concerns, LLMs start to look anti-productive: in the time you spend messing around with prompts, checking the LLM's work, and correcting its errors, you could've easily done the work yourself.

I did allude to there being a level of programming that requires seeing through the matrix, but in SF's post, and in most situations where I've heard this critique, that's not really the case. They're just using it for writing config files, which are annoying because they pull together a bunch of confusing contexts and interface with proprietary systems you basically have to learn from institutional knowledge. That's the thing LLMs are worst at. Infrastructure and configuration are the two things most programmers hate most, because they're not the fulfilling code parts. But AI is good at the fulfilling code parts for the same reason people like doing them.

In time LLMs will be baked into the infrastructure parts too, because it really is just a matter of context and standardization. It's not a capabilities problem, just a situation where context is split across different systems.

Finally, if you're one of those dark wizards working in FORTRAN or some proprietary machine language because this is Sparta IBM/Nvidia/TSMC and the compute must flow, you're skeptical of the claim that an LLM can write code that would compile at all.

If anything this is reversed: it can write FORTRAN fine. What it probably can't do is cope with the proprietary, hacked-together nonsense installations put together in the 80s by people working in a time when patterns came on printed paper and you might collaborate on standards once a year at a conference if you were an all-star. But that's not the bot's fault. This is the kind of thinking that is impressed by calculators because it doesn't properly understand what's hard about some things.

I feel like I'm taking crazy pills here. No one's examples of how it can't write code are about it writing code. It's all config files and vague evals. No one is talking about its ability to write code. It's all DevOps stuff.

This is the kind of thinking that is impressed by calculators because it doesn't properly understand what's hard about some things.

Ironically I considered saying almost this exact thing in my above comment, but scratched it out as too antagonistic.

The high-school students and literature majors are impressed by LLMs' ability to write code because they do not know enough about coding to know which parts are easy and which parts are hard.

Writing something that looks like netcode and maybe even compiles/runs is easy. (All you need is a socket, a for loop, a few if statements, a return case, and you're done.) Writing netcode that is stable, functional, and secure enough to pass muster in the banking industry is hard. This is what I was gesturing towards with the "bouba" vs "kiki" distinction. Banks are notoriously "prickly" about their code because banking (unlike most of what Facebook, Amazon, and Google do) is one of those industries where the accuracy and security of information are core concerns.
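To illustrate the gap, here's a minimal sketch of the easy version: a bare TCP echo server in TypeScript that would absolutely run. The port and the toy QUIT command are invented for the example; the comments flag what's missing before anyone in banking would sign off on it.

```typescript
// "Looks like netcode": a socket, a loop over incoming data, a few ifs,
// and a return case. Runnable as-is under Node, but note what's absent:
// no TLS, no authentication, no input validation, no backpressure
// handling, no timeouts, no audit logging. That's the hard 90%.
import * as net from "net";

const server = net.createServer((socket) => {
  socket.on("data", (chunk) => {
    if (chunk.toString().trim() === "QUIT") {
      socket.end("bye\n");
      return; // the "return case"
    }
    socket.write(chunk); // echo everything else back
  });
  socket.on("error", () => socket.destroy());
});

// Port number is arbitrary for the example.
server.listen(9000, () => console.log("listening on :9000"));
```

Getting from this to something stable, functional, and secure is where the actual work lives.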

Finally, which LLM are you using to write FORTRAN? Because after some brief experimentation, neither Gemini nor Claude is anywhere close.

What do you imagine is the ratio, just at banks, between people writing performant netcode and people writing CRUD apps? If you want to be elitist about it then be my guest, but it's a completely insane standard. Honestly, the people rolling out the internal LLM tooling almost certainly outnumber the people doing the work you're describing.

I do not think that expecting basic competency is an "insane standard" or even that elitist. Stop making excuses for sub-par work and answer the question.

Which LLM are you using to write FORTRAN?

What sort of problem did you ask it to solve?

In your effort to declare LLMs incapable programmers you're excluding 95%+ of the profession. Not literature majors, not high school students: professional programmers with CS and SE degrees. All I've been asking is for you to acknowledge that. If your standard is a quant on an HFT desk, then great for you. I'm sure you're an excellent programmer. You'll probably have a job for six months longer than me.
