This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Alright, AI bros, follow-up from last week. I was able to secure access to Claude Opus 4.6 at my job, and I gave it the same prompt I had given to Sonnet. This time it completely overlooked the authentication part of the HTTP client library in what it generated. In a follow-up I asked it specifically to extract the common logic for the authentication portions. It didn't do that; instead it generated a class with two helper methods.
The first helper method was just a thin wrapper around System.Text.Json for deserializing the response. There's an optional flag to pass in for when case-insensitive deserialization is needed, and nothing else.
The second helper method actually makes the HTTP calls. The strangest part is that it takes two delegates as parameters: one for deserializing successful responses, the other for handling (but not deserializing) error responses. It did nothing to split out the handling of the two different ways to authenticate.
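The generated class looked roughly like this (a reconstruction from memory, so the names and signatures are illustrative, not the actual output):

```csharp
#nullable enable
using System;
using System.Net;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Roughly the shape of what Opus generated (names are illustrative).
public static class HttpHelpers
{
    // Helper 1: a thin wrapper around System.Text.Json with a single optional flag.
    public static T? Deserialize<T>(string json, bool caseInsensitive = false)
    {
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = caseInsensitive };
        return JsonSerializer.Deserialize<T>(json, options);
    }

    // Helper 2: makes the HTTP call, taking two delegates: one to deserialize
    // success responses, one to handle (but not deserialize) error responses.
    // Nothing in here distinguishes the two authentication schemes.
    public static async Task<T?> SendAsync<T>(
        HttpClient client,
        HttpRequestMessage request,
        Func<string, T?> onSuccess,
        Action<HttpResponseMessage> onError)
    {
        var response = await client.SendAsync(request);
        if (response.StatusCode == HttpStatusCode.OK) // assumes 200 is the only success code
        {
            var body = await response.Content.ReadAsStringAsync();
            return onSuccess(body);
        }

        onError(response);
        return default;
    }
}
```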
The issues with what was generated (for both the API client as a whole, and for the authentication part of the code specifically) are numerous; here is a small handful that I identified:
- It assumes that HTTP 200 is the only successful response code, even though some endpoints return 202, 207, and more (see the sketch after this list).
- It assumes that all endpoints return plaintext or JSON content, even though several return binary data, CSV data, etc.
- It didn't do null checking in several places. I assume it was mostly trained on C# code that either didn't do null checks correctly, and/or on code that doesn't use the nullable reference types feature added in C# 8 (back in 2019). Regardless, the null checks are missing or wrong whether nullable reference types are enabled or disabled. It also always checks for null with == or !=. That works 99% of the time, but best practice is to use "is null" and "is not null" for the rare cases where the equality operator is overloaded; once again, I assume this is because most of the training data uses == and != (again, see the sketch below).
- It doesn't handle URL query parameters (or path parameters); it assumes every request will use a JSON body.
- It uses the wrong logging templates for several of the logging calls. For example, the logs for an error response use the template for logging outgoing requests. Even more troubling, it removed all the logic for stripping user secrets out of these logs.
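To make the status-code and null-check points concrete, here's a minimal C# sketch of my own (not the generated code; ApiResult and its deliberately buggy operator overload are contrived for illustration):

```csharp
#nullable enable
using System.Net.Http;

// Contrived type with an overloaded equality operator, to show why
// 'is null' beats '== null'.
public class ApiResult
{
    public static bool operator ==(ApiResult? a, ApiResult? b) => true;  // buggy overload
    public static bool operator !=(ApiResult? a, ApiResult? b) => false;
    public override bool Equals(object? obj) => true;
    public override int GetHashCode() => 0;
}

public static class Checks
{
    public static bool IsMissing(ApiResult? value)
    {
        // 'value == null' would call the overloaded operator above and report
        // true even for a live instance; 'is null' always compiles to a plain
        // reference check and can't be hijacked by an overload.
        return value is null;
    }

    public static bool Succeeded(HttpResponseMessage response)
    {
        // Generated style was effectively (int)response.StatusCode == 200,
        // which wrongly routes 202, 207, and other 2xx codes down the error
        // path. Checking for any 2xx status instead:
        return response.IsSuccessStatusCode;
    }
}
```

The operator overload here is deliberately pathological, but it's exactly the 1% case the "is null" guidance exists for.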
There are quite a few more issues, but if anything my overall experience with Opus was even worse than with Sonnet. AI bros still in shambles. I have zero fear that AI will replace me, though I'm still fearful that retarded C-suite execs will think it can.
My post from last week about using Claude Sonnet: https://www.themotte.org/post/3654/culture-war-roundup-for-the-week/426666?context=8#context
Edit: Just saw a very relevant post over on Orange Reddit about this very topic: https://news.ycombinator.com/item?id=47660925
I have no idea why Claude Code is working so badly for you. I work at a FAANG-level company, and a huge amount of our code is written by Claude. Garry Tan is in AI psychosis, but Claude Code is easily the biggest productivity unlock in CS since I started my career.
A few recommendations:
- Use high or xhigh mode.
- I've noticed that the quality of the codebase plays a huge role in the model's ability to write effective code. For example, ideally all endpoints will already be typed; the model should not have to guess the request and response types (see the sketch after this list).
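To illustrate what "typed" buys you, here's a minimal sketch of my own (assuming an ASP.NET minimal-API stack; the endpoint and record names are hypothetical):

```csharp
using Microsoft.AspNetCore.Builder;

// Request and response shapes declared as records, so a model reading the
// code never has to guess what the endpoint accepts or returns.
public record CreateUserRequest(string Name, string Email);
public record CreateUserResponse(int Id, string Name);

public static class Program
{
    public static void Main(string[] args)
    {
        var app = WebApplication.CreateBuilder(args).Build();

        // The handler's signature documents both types explicitly.
        app.MapPost("/users", (CreateUserRequest req) =>
            new CreateUserResponse(1, req.Name));

        app.Run();
    }
}
```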
Unless there is a specific regression in Claude Code, I don't know why Claude failed at your task. It should have worked.
Also, if you're looking for a model that prioritizes meticulousness, I'd use Codex. Codex has a tendency to autistically cover all of your bases, which benefits the sort of problem you're working with (again, use it in high or xhigh mode).
I'm not @ChickenOverlord, but I'm also seeing unimpressive results. Maybe we can get to the bottom of it.
I've tried Claude (via Claude Code), Gemini (via Gemini CLI), and GPT (via codex).
- In all of them, I've used their equivalent of Claude.md/Agents.md to lay down ground rules for how we expect the agent to behave. Multiple people have taken multiple shots at this.
- We always use plan mode first.
- Our documentation is markdown in the same repository, so it should be useful and accessible.
- We're using Java, which is strongly typed, and all our endpoints carry additional OpenAPI annotations that should provide even more metadata.
- We're using a pretty basic bitch tech stack, but it's not Spring Boot. All three models regularly fight us on that fact.
- We have four levels of validation, each with its own entry point in the build scripts, all described in a readme.md at the root of the project: first a linter; second, unit tests and code coverage; third, a single end-to-end test; fourth, all end-to-end tests. We have instructed the models to use these validation targets to check their work.
Despite all this, we see common failure modes across all models we've tested.
I'm sure there are more, but these immediately come to mind. There are four of us trying to make these things work, and we all keep running into the same problems again and again. It's not just me - even people with dramatically different writing styles and thought processes are seeing the same thing. I feel like I'm taking crazy pills: a lot of people I know in real life are experiencing the same pain, but on the Internet it seems like I'm a huge outlier.
What's the disconnect here?
It works a lot better if you bend to the AI and use a stack it likes. Why this specific Java stack?
I hope that isn't a serious suggestion. A programming tool which only works if you use a particular framework is a shitty tool.