faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 3 users  
joined 2022 September 06 20:44:12 UTC
Verified Email

User ID: 884

No bio...

... now you're just threatening me with a good time. Can we advocate that the states ignore the FDA too while we're here?

This sets up some pretty fucked incentives. Setting up fucked incentives has historically not gone well.

And the guy behind ClawdBot / MoltBook (or whatever it's called now) has openly discussed how his own deployment of ClawdBot was thinking and executing ahead of him.

I will point out that MoltBook had exposed its entire production database for both reads and writes to anyone who had an API key (paywalled link, HN discussion).

And this is fairly representative of my experience with AI code on substantial new projects as well. In the process of building something, whether it's something new or something legacy, the builder will need to make thousands of tiny decisions. For a human builder, the quality of those decisions will generally be quite tightly correlated with how difficult it is for a different human to make a good decision there, and so, for the most part, if you see signs of high-thoughtfulness polish in a few different parts of a human-built application, that usually means the human builder put at least some thought into all the parts of that application. Not so for "AI agents", though. One part might have a genuinely novel data structure which is a perfect fit for the needs of the project, and then another part might ship all your API keys to the client, or build a SQL query through string concatenation, or drop and recreate tables any time a schema migration needs to happen.
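
To make that last failure mode concrete, here's a minimal Python sketch of the string-concatenation problem (the users table and the sqlite3 backend are just stand-ins, not anything from the projects above): the concatenated query lets a crafted input rewrite the WHERE clause, while the parameterized version keeps the value out of the SQL text entirely.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

    def get_user_unsafe(name):
        # Anti-pattern: building the query by string concatenation.
        # A value like "x' OR '1'='1" changes the meaning of the query.
        query = "SELECT id, name FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def get_user_safe(name):
        # Parameterized query: the driver never splices the value into the SQL.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(get_user_unsafe("x' OR '1'='1"))  # returns every row in the table
    print(get_user_safe("x' OR '1'='1"))    # returns nothing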

That's not to say the "AI coding agent" tools are useless. I use them every day, and mostly on a janky legacy codebase at that. They're excellent for most tasks where success is difficult or time-consuming to achieve but easy to evaluate - and that's quite a lot of tasks. e.g.

  • Make an easy-to-understand regression test for a tricky bug: "User reports bug, expected behavior X, observed behavior Y. Here's the timestamped list of endpoints the user hit, all associated logs, and a local environment to play around in. Generate a hypothesis for what happened, then write a regression test which reproduces the bug by hitting the necessary subset of those endpoints in the correct order with plausible payloads. Iterate until you have reproduced the bug or falsified your hypothesis. If your hypothesis was falsified, generate a new hypothesis and try again up to 5 times. If your test successfully reproduces the bug, rewrite it with a focus on pedagogy - at each non-obvious step of setup, explain what that step of setup is doing and why it's necessary, and for each group of logically-connected assertions, group them together into an evocatively-named assert() method." (a sketch of what that pedagogical rewrite might look like follows this list)
  • Take a SQL query which returns a piece of information about one user by id and rewrite it to performantly return that information for all users in a list (also sketched after this list)
  • Review pull requests to identify which areas would really benefit from tests and don't currently have them
  • Review pull requests to identify obvious bugs
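
For the regression-test bullet, here's a purely illustrative sketch of the shape that final pedagogical rewrite might take - the endpoints, the coupon bug, and the FakeClient are all invented so the example can stand alone, and the test is supposed to fail, since reproducing the bug is the point.

    class FakeClient:
        """Stands in for a real test client; it re-applies the coupon on every
        checkout attempt, which is the (invented) bug under test."""
        def __init__(self):
            self.cart = {}

        def post_cart(self, items, coupon):
            self.cart = {"items": items, "coupon": coupon, "discounts_applied": 0}

        def post_checkout(self, retry):
            self.cart["discounts_applied"] += 1  # bug: applied again on retries
            return dict(self.cart)

    def assert_coupon_applied_exactly_once(order):
        # Group the logically-connected assertions behind one evocative name.
        assert order["discounts_applied"] == 1, order

    def test_checkout_retry_does_not_reapply_coupon():
        client = FakeClient()

        # Setup step 1: the bug only shows up when a coupon is already attached
        # to the cart before checkout, so attach one up front.
        client.post_cart(items=["sku-123"], coupon="SAVE10")

        # Setup step 2: the user's logs showed a failed checkout followed by an
        # immediate retry; replay both requests in that order.
        client.post_checkout(retry=False)
        order = client.post_checkout(retry=True)

        assert_coupon_applied_exactly_once(order)  # fails, reproducing the bug

    if __name__ == "__main__":
        test_checkout_retry_does_not_reapply_coupon()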

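And for the SQL-rewrite bullet, this is the shape of transformation I mean, sketched against a made-up orders table (the schema, column names, and function names are all hypothetical): instead of running the single-user query in a loop, fold the list into one IN (...) / GROUP BY query so the database gets hit once.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (user_id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, 10.0), (1, 2.5), (2, 7.0), (3, 1.0)])

    def total_spend_for_user(user_id):
        # Original shape: one round trip per user.
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        return row[0]

    def total_spend_for_users(user_ids):
        # Rewritten shape: one query for the whole list, grouped by user.
        # Only "?" placeholders are spliced in; the values stay parameterized.
        placeholders = ",".join("?" for _ in user_ids)
        rows = conn.execute(
            "SELECT user_id, SUM(total) FROM orders"
            " WHERE user_id IN (" + placeholders + ") GROUP BY user_id",
            user_ids,
        ).fetchall()
        totals = {uid: 0.0 for uid in user_ids}  # users with no orders get 0
        totals.update(dict(rows))
        return totals

    print([total_spend_for_user(u) for u in [1, 2, 3, 4]])  # N queries
    print(total_spend_for_users([1, 2, 3, 4]))              # 1 query
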
I think about this a lot, but I also catch myself thinking about how easy it must have been in the 90s to find alpha in X, and then realize that with the knowledge I have now it would be easy, but that obtaining that not-yet-common knowledge would have been much harder in the 90s. I'm sure that there's similar alpha available today if you know where to look for it, but if it was easy to find, it wouldn't be alpha.

Even Disney World has MBA'd itself into a place I would no longer remotely describe as the "happiest place on earth".

I actually went to Disneyland with my wife and daughter a couple of months ago, and I was shocked by how much it wasn't MBA'd. The tickets were cheaper (inflation-adjusted) than they were when I was a kid, and the food was decently good and not horribly expensive (~$20 / meal for decent bbq, with big enough portions that we only needed one full meal plus a few snacks during our entire time from park open to park close). There weren't really any of the rigged carnival games you see at other amusement parks, the ones optimized to make it seem like you just barely missed the big prize and should try One More Time, and the lines didn't shove ads in your face (again, unlike other amusement parks). Possibly I just went in with sufficiently low expectations that I was pleasantly surprised.