The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread' and any content which could go here could instead be posted in its own thread. You could post:
- Requests for advice and/or encouragement, on basically any topic and for any scale of problem.
- Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.
- Advice. This can be in response to a request for advice, or just something that you think could be generally useful for many people here.
- Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general, I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if you feel that shame might be an effective motivational tool, please discuss it here so we can form a group consensus on how to use it, rather than just trying it).

Well. I don't know what the rest of you do, but every elite dev I know who had been skeptical and unimpressed by AI agents has been converted in the last month.
Android apps. Firmware for wearables. UNIX TUI clients. Code analysis tools. Web apps. Flight simulators. Gaussian splatters. One guy writing a functional formula language for a network message bus. 3D games for PCs. A Signal clone that doesn't require phone numbers. Bots to run trading strategies.
If you're in a niche where this hasn't happened to you yet, bless. It's probably better for your mental health to not cross this threshold.
I'm not vouching for Cursor or Copilot or the general chat experience. But Claude Code and OpenAI Codex, within the last two weeks, running the latest best models, are what are scaring the shit out of me. Before this I was mostly in "meh, loses coherence too fast, maybe in 5 years they'll figure it out" mode. No. It's here now.
People were saying this back in December as well. Can you explain what differences you're seeing compared to three weeks ago that are indicative of a paradigm shift?
Originally, I had kind of given up on Claude Code a few months ago because it was wasting more time than it was saving me. It would lose the plot pretty quickly, even though my instructions and the goal were still well within the context window. I considered this an architectural limit of LLMs.
But as of the last two weeks, holy shit. Claude Code (and Codex) just grind away at problems. They don't lose the plot. They back out and try different approaches. They run micro-experiments to test assumptions. They'll run the tools with --help and --version and check the man page. They'll step through the code of the installed version of meson to see why the config file is not behaving the way it expects. I give them one simple prompt and they'll chug away for 15-30 minutes, just trying shit like an overly caffeinated engineer. They'll run builds, look at the errors, and fix them until the build is clean.
And again, they don't lose sight of the goal. It's amazing.
IMO a lot of this effect comes from the fact that you get far more out of agents if you already know what you're doing, and people assume the median person knows a lot more about their own niche than they actually do (see XKCD 2501). Right now you still need a lot of specialist knowledge to make the most of your agent.
Even in your original post, you need to have enough knowledge to understand when to use io_uring, when you should drop down to assembly, and understand when and why you might want a JIT for your app. It's not like my mother is doing any of that even if I give her access to Claude Code lol.
As I mentioned in my other post I am quite unsure whether the reverse centaur phase will last, but at the point where someone completely non-technical is outputting the same amount and quality of software as a SWE, requirements and all, I think this is already AGI and either everyone or no-one is cooked.
It's really not worth worrying about that scenario too much imo, at least from an employment perspective.
I'm not really convinced I need to be more than "experienced computer user with good taste" in the end. Probably not much more than a product manager.
I was mentioning io_uring or JITs, etc., as examples of extreme customizations the agents can do. But if you just tell it what you want and then complain "it's not fast enough, make it faster", it will, on its own, come up with and implement optimizations. The standard ones at first, like better algorithms or pre-computing stuff, but the optimizations will get increasingly sophisticated and extreme if you keep saying "make it faster".
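The "pre-computing stuff" class of optimization mentioned above can be as simple as trading per-call work for a table built once up front. A minimal sketch with made-up function names (not anything from the thread):

```python
# Naive version: does O(x) work on every single call.
def slow_score(x: int) -> int:
    return sum(i * i for i in range(x))  # stand-in for expensive work

# Precomputed version: build a prefix-sum table once, answer in O(1).
TABLE_SIZE = 1000
_score_table = [0] * TABLE_SIZE
for i in range(1, TABLE_SIZE):
    _score_table[i] = _score_table[i - 1] + (i - 1) ** 2

def fast_score(x: int) -> int:
    # Valid for 0 <= x < TABLE_SIZE; same result as slow_score(x).
    return _score_table[x]
```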
My specialized knowledge helps us not get painted into a corner as much, but does it matter when the paint dries instantly and it can repaint the entire house in minutes?
My biggest advantage for now may be that I can approximate in my head the theoretical limit for how fast something could physically run on the hardware, so I'll know when to stop saying "make it faster". I'll also know that when the coding agent says "I'm going to bake in hardware assumptions and weaken consistency models", it might be worth stopping as well.
But this edge won't last for too long.