
Wellness Wednesday for February 11, 2026

The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. This isn't intended as a 'containment thread', and anything that could go here could instead be posted in its own thread. You could post:

  • Requests for advice and/or encouragement, on basically any topic and at any scale of problem.

  • Updates to let us know how you are doing. This provides valuable feedback on past advice/encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.

  • Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.

  • Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general, I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if you feel that shame might be an effective motivational tool, please discuss it here so we can form a group consensus on how to use it, rather than just trying it).


What sort of work do you do that lets you create small apps like this in a couple of hours, with human attention as the only limiting factor?

Following up here because I'm also interested.

We haven't been seeing much value at work, but we're also a 2.5-million-line, polyglot legacy SaaS/on-premise hybrid application.

I think the key to getting good results is figuring out how to get a verifiable success/failure signal back into the LLM's inputs. If you've got an on-premise application, and as such no access to logs and the like from the customer, I expect the place you'll see the most value is a prompt that is approximately: "given [vague bug report from the user], look at the codebase and come up with a few informed hypotheses for what it could be, then, for each hypothesis (optionally including your own pet hypothesis, if you have one), iteratively create a script which would reproduce the bug on this local instance of the stack if the hypothesis were correct [details of local instance]".

As an added bonus, the code to repro a bug is hard to generate but easy to verify, and since generally nothing gets built on top of it, it doesn't really matter if the LLM chooses bad or weird abstractions.
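To make that loop concrete, here is a minimal sketch of the hypothesis-by-hypothesis repro cycle. The ask_llm() client and the local stack at localhost:8080 are stand-ins I've invented for illustration, not anything from the setup described above:

```python
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM/agent client you use.
    raise NotImplementedError("plug in your LLM client here")

def try_hypothesis(bug_report: str, hypothesis: str, max_attempts: int = 5) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        script = ask_llm(
            f"Bug report: {bug_report}\n"
            f"Hypothesis: {hypothesis}\n"
            f"Output of previous attempt, if any: {feedback}\n"
            "Write a Python script that exits 0 if and only if it reproduces "
            "the bug against the local stack at http://localhost:8080."
        )
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(script)
            path = f.name
        # The exit code of the generated script is the verifiable
        # success/failure signal that gets fed back into the next prompt.
        result = subprocess.run(["python", path], capture_output=True,
                                text=True, timeout=120)
        if result.returncode == 0:
            return True  # repro found under this hypothesis
        feedback = (result.stdout + result.stderr)[-2000:]
    return False
```

The important property is that the exit code of the generated script, not the model's own self-assessment, decides whether to keep iterating.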

I use Claude Code on a pretty big and complicated codebase. The main use so far has been making it possible to tackle the sort of things I would normally not have time to work on (e.g. a CLI tool to make and send packets with certain UDP multicast configurations; it wouldn't normally be worth the 1-2h I would have to put into it, but I can just tell Claude and it's ready in a couple of minutes), plus "explain this class/function/logic to me", "review the code I wrote for possible mistakes", "find me examples of this pattern in this repo", etc. It has been really excelling at these sorts of tasks.
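For flavor, a one-off packet sender like that can be tiny; here's a minimal sketch using only the Python standard library (the group, port, and TTL defaults are illustrative guesses, not the actual tool described above):

```python
import argparse
import socket

def main() -> None:
    p = argparse.ArgumentParser(description="Send a UDP multicast packet")
    p.add_argument("--group", default="239.1.1.1", help="multicast group address")
    p.add_argument("--port", type=int, default=5000)
    p.add_argument("--ttl", type=int, default=1, help="multicast TTL (hop limit)")
    p.add_argument("payload", help="message to send")
    args = p.parse_args()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL controls how many network hops the multicast packet may traverse.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, args.ttl)
    sock.sendto(args.payload.encode(), (args.group, args.port))
    sock.close()

if __name__ == "__main__":
    main()
```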

The biggest value we're finding is in data migrations for new customers. It's almost a perfect use case: every customer is unique and every migration is a one-off, so there's no real long-term maintenance concern, and the normal procedure for errors during a run is to Just Start Over, which means we don't suffer a downward quality spiral when the agent goes off the rails.
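That "Just Start Over" recovery procedure maps naturally onto an all-or-nothing transaction: run the whole one-off migration inside a single transaction so any failure rolls everything back and a rerun starts from a clean slate. A minimal sketch of the pattern, using SQLite and made-up table names as stand-ins for whatever the actual stack is:

```python
import sqlite3

def migrate(src_path: str, dst_path: str) -> None:
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        with dst:  # commits on success, rolls back on any exception
            dst.execute(
                "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, email TEXT)"
            )
            for row in src.execute("SELECT legacy_id, email_addr FROM old_customers"):
                # Per-row validation: fail loudly so the whole run aborts
                # rather than committing a half-migrated dataset.
                if "@" not in (row[1] or ""):
                    raise ValueError(f"bad email for legacy id {row[0]}: {row[1]!r}")
                dst.execute("INSERT INTO customers (id, email) VALUES (?, ?)", row)
    finally:
        src.close()
        dst.close()
```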