
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 3 users  
joined 2022 September 06 20:44:12 UTC
Verified Email

User ID: 884

What the fuck did I just watch?

$623/mo on food at home plus another $393/mo away from home. Even the <$15,000 income group spends $625/mo across all food, $416 of which is groceries, so I stand by $800/mo not being extravagant if you cook all your meals. There's probably a bit of room to trim the budget, but not much.

If you're eating most of your meals at home, that works out to about $3 per person per meal. You can eat reasonably well on that, and it's hardly exorbitant. I spend about twice that for a family of 3 (groceries are approximately free compared to rent and taxes, so why not optimize for quality rather than price?).
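Spelled out, the per-meal arithmetic (a quick sketch; the 3-person household, 30-day month, and 3 meals/day are my assumptions, not figures from the survey data):

```python
# Rough per-meal cost arithmetic.
# Assumed: 3-person household, 30-day month, 3 meals/day, all cooked at home.
monthly_grocery_budget = 800  # USD/mo, the figure defended above
household_size = 3
meals_per_day = 3
days_per_month = 30

person_meals = household_size * meals_per_day * days_per_month  # 270 person-meals
cost_per_person_meal = monthly_grocery_budget / person_meals

print(f"${cost_per_person_meal:.2f} per person per meal")  # ~$2.96
```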

I think the key to getting good results is figuring out how to get a verifiable success/failure signal back into the LLM's inputs. If you've got an on-premise application, and as such no access to customer logs, I expect the place you'll see the most value is a prompt that is approximately: "given [vague bug report from the user], look at the codebase and come up with a few informed hypotheses for what the cause could be (optionally adding 'and also my pet hypothesis of XYZ' if you have one), then, for each hypothesis, iteratively create a script which would reproduce the bug on this local instance of the stack if the hypothesis were correct [details of local instance]". A sketch of that loop is below.

As an added bonus, the code to repro a bug is hard to generate but easy to verify, and generally nothing is being built on top of it, so if the LLM chooses bad or weird abstractions, it doesn't really matter.
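A minimal sketch of that loop, assuming a generic `complete()` chat call and a local stack reachable at `http://localhost:8080` (the function names, the URL, and the attempt limit are all placeholders I made up, not a real API):

```python
import subprocess

def complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in whatever client you actually use."""
    raise NotImplementedError

def debug_report(bug_report: str, codebase_summary: str, max_attempts: int = 5):
    # Step 1: ask for a handful of informed hypotheses grounded in the codebase.
    hypotheses = complete(
        f"Given this bug report:\n{bug_report}\n\n"
        f"and this codebase:\n{codebase_summary}\n\n"
        "List a few informed hypotheses for the cause, one per line."
    ).splitlines()

    for hypothesis in hypotheses:
        feedback = ""
        # Step 2: iteratively generate a repro script. The script's exit code,
        # from running it against the local stack, is the verifiable
        # success/failure signal fed back into the LLM's inputs.
        for _ in range(max_attempts):
            script = complete(
                f"Hypothesis: {hypothesis}\n"
                f"Feedback from previous attempt: {feedback or 'none'}\n"
                "Write a standalone Python script that exits 0 if and only if "
                "it reproduces the bug on the local instance at "
                "http://localhost:8080."
            )
            result = subprocess.run(
                ["python", "-c", script],
                capture_output=True, text=True, timeout=120,
            )
            if result.returncode == 0:
                # Verified repro: hard to generate, easy to check.
                return hypothesis, script
            feedback = result.stdout + result.stderr
    return None, None
```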