The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. This isn't intended as a 'containment thread', and any content that could go here could instead be posted in its own thread. You could post:
- Requests for advice and/or encouragement, on basically any topic and for any scale of problem.
- Updates to let us know how you are doing. This provides valuable feedback on past advice/encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.
- Advice. This can be in response to a request for advice, or just something that you think could be generally useful for many people here.
- Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general, I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people. (If you feel that shame might be an effective motivational tool, please discuss it here so we can form a group consensus on how to use it, rather than just trying it.)

Someone posted an article last week that I initially intended to respond to. It was throwing around a lot of breathless talk about "complete coding supremacy over humans", and don't get me wrong, it's not like it wasn't getting pushback, but it also wasn't getting rotten eggs thrown at it the way I would have expected. I had previously gathered the impression that AI coders were still at the level of an enthusiastic but sloppy apprentice.
They very much are. They still make syntax errors from time to time, such that the code won't even compile, let alone get the logic right. You have to carefully review all AI-generated code for mistakes (which negates the time savings), or you will get buggy code sooner or later.
My impression is that the SOTA agents (Claude Code or Codex) can now, if given a sufficiently detailed specification and sufficient tools to validate their work, produce code that fully satisfies that spec for the vast majority of moderately long coding tasks. This is a big jump in capabilities and obviously very powerful.
This does come with a lot of caveats that imo make the claims of 100% AI-written code misleading at best: the creation of the "sufficiently detailed" specs fed to the agent (which is famously difficult) and the validation of the generated code are still dependent on engineering skill, and one-shotted code will generally be of worse quality than code hand-crafted for the same purpose by a skilled engineer.
In the short term I think this is actually bullish for SWE: coding agent output is still bottlenecked by engineering skill, and I don't believe we've yet saturated the demand for software. In the mid-to-long term it's unclear whether there will still be value in having engineers in the loop, but imo end-to-end automation of software engineering is/requires AGI, so my job isn't high on my list of concerns in that scenario.
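To make the spec-plus-validation loop concrete, here's a toy Python sketch. Everything in it is hypothetical (the "candidates" just stand in for successive agent outputs; no real agent API is used): the point is only that the spec's test cases, not the generator, decide what gets accepted.

```python
# Toy sketch of the "detailed spec + validation tools" loop described above.
# The candidate functions stand in for successive agent-generated drafts.

def validate(candidate, spec_tests):
    """Run the spec's test cases against a candidate implementation."""
    try:
        return all(candidate(x) == expected for x, expected in spec_tests)
    except Exception:
        return False

def agent_loop(candidates, spec_tests, max_attempts=3):
    """Accept the first candidate that passes validation, else give up."""
    for _, candidate in zip(range(max_attempts), candidates):
        if validate(candidate, spec_tests):
            return candidate
    return None

# Spec: a function that squares its input, expressed as test cases.
spec_tests = [(2, 4), (-3, 9), (0, 0)]
# First "generated" draft is buggy; the second satisfies the spec.
candidates = [lambda x: x * 2, lambda x: x * x]
accepted = agent_loop(candidates, spec_tests)
print(accepted is candidates[1])  # True: the buggy draft was rejected
```

The validation harness does the real work here, which is the point: writing `spec_tests` precisely enough is exactly the "famously difficult" part that still needs an engineer.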
Not anymore. They're basically as good as a senior software engineer now, except they finish 100x faster. And never need to rest.
At this point anyone not using them is resisting out of inertia. Or fear. Well placed fear, perhaps.
Or because the tools don't work on the kinds of apps and problems those people do, and they detest people who insist on trying to gaslight and force them into using something you happen to like.
It's funny.
Every time I point out that I get subpar results, I'm told I'm holding it wrong.
Gemini 3 wouldn't even generate syntactically valid Java 100% of the time.
Opus 4.5 is better, but it still regularly insists that I'm using Spring Boot when I'm not, and no amount of "prompt engineering" or markdown files seems to fix that.
I may be incompetent, but right now it sure feels like I'm being gaslit.
Ah, but you’re not using FancyModel 4.97-q35-r2 so it’s No True AI.
This just sounds like you guys want people to care about your observations of obsolete models because ayy why should I gotta pay money to have an opinion?
If the free tools suck (and they do), it's pretty unreasonable to expect anyone to pay to see if the paid tool is good. That's a waste of money 9/10 times, and there's not a compelling reason to believe that LLMs are the 1 time in 10.
I've been trying the frontier models. My employer has actually paid real, honest-to-God money for them. We have an entire group of people, cutting across developers, marketing, sales, and management, trying to get value out of them.
So far the only group that is consistently seeing a productivity improvement is the team that deals with RFPs.
On the development side, there are areas where it is, in fact, uncannily good (e.g. converting between file formats), but the actual output we're seeing outside those cases can't yet justify the expense for us.
If only the fancy paid models are that great, then the AI corps are bad at advertising: they hand out free tokens for the shitty models instead of the good ones.
Claude Pro, or does it need to be Max?
I've been using AI for asking questions, researching basic info and summarizing it, boilerplate text, that kind of stuff. The free version has been good enough for the most part so far. We also want to save money, so I'm a bit reluctant to add a new expense to my list.