This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Alright, AI bros, follow-up from last week. I was able to secure access to Claude Opus 4.6 at my job, and I gave it the same prompt that I had given to Sonnet. This time it completely overlooked the authentication part of the HTTP client library in what it generated. In a follow-up I asked it to extract out the common logic for the authentication portions specifically. It didn't do that; instead, it generated a class with two helper methods.
The first helper method was just a thin wrapper around System.Text.Json for deserializing the response. There's an optional flag to pass in when case-insensitive deserialization is needed, and nothing else.
The second helper method actually makes the HTTP calls. The strangest part of this one is that it takes two delegates as parameters: one for deserializing successful responses, the other for handling (but not deserializing) error responses. It did nothing to split out handling of the two different authentication methods.
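For readers who don't write C#, here is a rough sketch of the shape described above. All names and signatures are my reconstruction for illustration, not the actual generated code:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Illustrative reconstruction only; the real generated class looked different.
static class ApiHelpers
{
    // Helper 1: a thin wrapper over System.Text.Json with one optional flag.
    public static T Deserialize<T>(string json, bool caseInsensitive = false) =>
        JsonSerializer.Deserialize<T>(json, new JsonSerializerOptions
        {
            PropertyNameCaseInsensitive = caseInsensitive
        });

    // Helper 2: makes the HTTP call, taking two delegates: one to deserialize
    // success responses, one to handle (but not deserialize) error responses.
    // Note it hard-codes 200 as the only success code and does nothing to
    // split out the two authentication schemes.
    public static async Task<T> SendAsync<T>(
        HttpClient client,
        HttpRequestMessage request,
        Func<string, T> onSuccess,
        Action<HttpResponseMessage> onError)
    {
        var response = await client.SendAsync(request);
        if (response.StatusCode != HttpStatusCode.OK)
        {
            onError(response);
            return default;
        }
        return onSuccess(await response.Content.ReadAsStringAsync());
    }
}
```

That shape is workable for toy cases, but it bakes several of the problems below directly into the signature.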
The issues with what was generated (both for the API client as a whole and for the authentication code specifically) are numerous; here is a small handful that I identified:
- It assumes that HTTP 200 is the only successful response code, even though some endpoints return 202, 207, and more.
- It assumes that all endpoints return plaintext or JSON content, even though several return binary data, CSV data, etc.
- It skipped null checks in several places. I assume it was mostly trained on C# code that either didn't do null checks correctly and/or didn't use the nullable reference type feature added in C# 8 (back in 2019). Either way, the null checks are missing or wrong whether nullable reference types are enabled or not. It also always checks nulls with == or != null. That works 99% of the time, but best practice is to use "is null" and "is not null" for the rare cases where the equality operator is overloaded. Once again, I assume this is because most of the training data uses == and !=.
- It doesn't handle URL query parameters (or path parameters); it assumes everything will use a JSON body for the request.
- It uses the wrong logging templates for several of the logging calls. For example, the logs for an error response use the template for logging outgoing requests. Even more troubling, it removed all the logic for stripping user secrets out of these logs.
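On the "is null" point, for the non-C# readers: == can be hijacked by an operator overload, while "is null" always checks the reference itself. A contrived sketch (the Entity class and its sloppy overload are invented for this example):

```csharp
using System;

class Entity
{
    public int Id;

    // Sloppy equality overload (contrived): it treats an Entity whose
    // Id is 0 as equal to null.
    public static bool operator ==(Entity a, Entity b)
        => a?.Id == b?.Id || (b is null && a?.Id == 0);
    public static bool operator !=(Entity a, Entity b) => !(a == b);

    public override bool Equals(object o) => o is Entity other && other.Id == Id;
    public override int GetHashCode() => Id;
}

class Demo
{
    static void Main()
    {
        var e = new Entity { Id = 0 };
        Console.WriteLine(e == null); // True  -- the overload lies about null
        Console.WriteLine(e is null); // False -- checks the actual reference
    }
}
```

With a live object in hand, == reports null while "is null" correctly does not; that divergence is exactly why the pattern forms are the recommended null checks.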
There are quite a few more issues, but overall my experience with Opus was, if anything, even worse than my experience with Sonnet. AI bros still in shambles. I definitely have zero fears that AI will replace me, though I'm still definitely fearful that retarded C-suite execs will think it can.
My post from last week about using Claude Sonnet: https://www.themotte.org/post/3654/culture-war-roundup-for-the-week/426666?context=8#context
Edit: Just saw a very relevant post over on Orange Reddit about this very topic: https://news.ycombinator.com/item?id=47660925
This AI bro vs (idk what to call the opposition) schism on this site is very funny
I feel like both sides are talking past each other in many ways, and neither has any interest in bridging the epistemic gaps.
About me
I'm firmly in the "AI bro" camp, I guess. I don't code, nor do I know how to beyond simple programming-101 stuff, which is all I need(ed) to make VBA scripts work in Excel. I will never copy/paste another line of Stack Overflow VBA to jank together a macro again, and that makes me very happy.
Adoption is slow, but it's gradually happening at my employer $MULTI_NATIONAL_FINANCE_CO. It is very clear to me that I will see (and already have seen) large productivity gains, especially as agent scaffolds are made for things other than coding.
LLMs are both extremely powerful and very jagged. I think a huge amount of their "jaggedness" is due to their nature as LLMs, and that they are very unlikely to get to ASI/some versions of AGI*. My best guess is they'll be about as disruptive as the computer (i.e. the information age) has been from 19XX to now, perhaps slightly less, given that "AI impact on human civilization" is kind of a subset of "computer impact on human civilization".
*Notwithstanding some kind of paradigm change in algorithm/AI approach. Which is always possible, but we're pretty clearly on the LLM-tech tree path for the next bit.
Vague Predictions
I am sure many white collar jobs will disappear entirely; many will be insulated for any number of reasons (ranging from genuine limits to retarded bureaucracy and everything in between) and will remain unchanged for a while; and some, like mine, will keep their core identity while day-to-day tasks shift a lot, and who knows what happens to employment (too many factors to guess per job).
Coding
It is clearly revolutionizing coding. This cannot be denied. GitHub commits are now going parabolic, so people are "building things". Much of which is slop. I am one of those people, I now have a small but growing fleet of personal tools. I'm sure they are coded awfully, I've never looked and wouldn't understand if I did. I don't care, they work for me.
There are much more accomplished coders on Twitter, etc., who are also reporting massive changes to their lives. Many of them are incentivized to say such things and to exaggerate, but I doubt it's a massive coordinated lie or mass delusion. So there is truth there.
The more sensible ones will even agree that AI code is on average mediocre to bad, and that AI can't do high-precision, high-quality, specialized code like a cracked human can. AI will even take your amazing high-precision, high-quality, specialized code and slop it if you're not careful. Many of them, like Karpathy, have just given up and accepted the slop as a price of doing business, because they're accomplishing what they want with the code too. It works.
It's assumed that AI performance will improve massively from where it is today. It has so far, so that's a pretty safe assumption right now. It's rumored that the new Claude model beat expectations on performance vs. scaling laws. AI model hype is always a large % bullshit, but we'll find out the real capabilities soon, and no matter what, they will be better than they are now.
I don't think LLMs are going to bring us the ASI digital god of Sam's wet dreams/nightmares. I think they are going to profoundly change our service economies regardless.
Your situation
I don't know your codebase or the thing you're getting it to do. I don't know anything about HTTP.
I seriously doubt you're trying to set the AI up for success at all. I can't code and I'm probably using more AI coding best practices than you are, and all my git commits are titled "lol".
It's also very possible that it's not worth the time to set up AI "properly" to fix this. There's a very real possibility it's much faster, if more tedious, to just do it yourself. But this is one task. N=1. There are things AI can do for you today, that's a guarantee.
The bubble
The usual retort to "skill issue" is "well, if I have to set it up and use best practices, then AI is a bubble." I think that's a strawman, because I am not stuck in a reflexive yes/no binary where liking AI means you can't also think it's a bubble. It could be a bubble, I don't know (or care). It's incredibly easy for an asset to be over-financed, and you never know if you've done enough capex until you do too much (at any scale). What I care about is the AI tools I can access, which are excellent and also flawed.
Maybe AI needs to be that good out of the box to justify the trillions in capex. It probably does. But does that matter here? Neither you nor I control capex spend or can predict how long the scaling laws will hold for.
I don't care if AI is a bubble - we'll all find out and predictions of this scale/magnitude are essentially worthless. If you have alpha and guess right, all power to you, but the bubble conversational branch strikes me as a fool's errand. And it's irrelevant to "can LLMs do things for you?".
Closing thoughts
We have LLMs here right now that are massively changing basically any digital task you point them at. It's not easy, and it doesn't work everywhere, but it's insane when it does.
It's cognitively exhausting. It's a new way of thinking, and every time new models/tools come out you change many things you were previously doing. So many assumptions and bottlenecks change. It's genuinely not always easy or obvious how to implement it. We are learning this in real time as a culture.
It's so exciting, and I hope to soon quit my job at $MULTI_NATIONAL_FINANCE_CO to capture more of the value of my labor, which is about to increase a lot (probably lmao, could also go to 0).
If you want to refuse or deny the power of these tools, you can. You can set about finding examples of them sucking to point and laugh at. But you're letting your bias blind you and leaving a lot of value on the table. You can tell your computer to do stuff now, and it does it. It's awesome.
Also noting that in your HN link the inventor of Claude Code is asking people for feedback and providing explanations live as I type this.
This might be a huge part of the divide between doubters and believers.
The code coming back might be ugly, buggy, insecure, and probably completely impossible to scale.
But if it works, how much does the 'average' user care?
Yet for those who do care about the quality of the code or product, it might grate when they look and see the inelegance of the solutions and the lack of foresight.
Apply this to the AI art debate, too. Sure, a trained eye will notice deficiencies and shortfalls. But the average user notices that they can produce a logo or a cute cartoon portrait in 15 seconds for pennies.
Me, I'm now basically using the LLMs to do final review on any work I don't feel 100% competent on, since its attention to detail is now impeccable and of course it never gets tired or complains.
Sometimes it raises nitpicks I genuinely find stupid, because in practice the detail is irrelevant to the actual outcome of the matter. But it catches things, so it almost feels like it'd be malpractice not to use the tool.
Anyway, it's broken through to normies. AI agents are going to be huge among small businesses; I see people who are otherwise technologically inept with Grok AND ChatGPT on their phones' lock screens. They are already relying on this tech to a degree that might startle you. The genie ain't going back in the bottle.
Get psychologically (and financially) prepared to adapt, that's the only advice that I can truly offer right now.
Love this uncertainty. On the one hand, I could 10x my productivity and cut my rates by half and still be making crazy money for myself. Seriously, the number of basic and intermediate tasks that GPT can do for me is freeing up time to engage with the higher leverage tasks that I enjoy and get paid the most for.
But if it gets just a little better then my role as an expert intermediary becomes redundant. I myself become a wrapper for the LLM, I'm just giving the stamp of approval to outputs that are already 99% perfect, and getting paid to eat the blame if something does go wrong 1% of the time. And competition with other humans in this role will drive my marginal profit down to pennies.
I hate this uncertainty.
In my experience, the average user starts to care right around the same time that their credit card number and mother's maiden name end up for sale to the highest bidder.
No one is going to vibe-code their own SaaS to replace Salesforce et al.
Salesforce and the other huge boys with giant moats will enjoy higher labor efficiency. They may experience serious pain as higher competition pressures margins, but that's hard to predict.
Mid-cap software will knife fight each other over margins as competitors grow like weeds.
Small-cap/VC/PE idek lol, really excited to watch this space.
I'm super curious to see what happens when a given VC can invest in 5x as many startups per unit of $capital. I assume startups will scale faster. Do VCs stretch themselves thin with more companies in a portfolio? Do funds get bigger or smaller? Are there more or fewer actual VCs? Is it easier or harder to get a VC fund going?
That last bit is the most interesting part to me.
Right now, my understanding is that VC is extremely hard to get because a handful of AI darlings have sucked all the air out of the room. If they IPO soon, VCs should theoretically have freed up capital to deploy as the OpenAIs/Anthropics of the world start to show a return.
If I believe the argument, then it should result in a much larger number of smaller investments, since labor is ostensibly the biggest cost of software startups and that cost should plummet.
I don't think the normies are THAT far along that they'd trust it with their financial information.
But not too far out, either.
They might trust a vibe-coded website, though.
As I understand it any website taking customers' financial information will usually use a third party's software rather than roll their own.
If PayPal et al. are vibe coding without regard to security, we are in for some pain.
Block is vibe coding now