
The Motte is very very slow

Is it just me, or has The Motte lately been getting very slow to load and occasionally timing out?


It's not just you. Working on it. :)

(Should be better now, I'll let it sit for half an hour or so and see how it's going. More work may be needed.)

(Edit: Sigh, not solved. Still working on it.)

(This may now be fixed; I think we were just running on bad hardware.)

It's been kinda slow for a long, long time (intermittently for most of last year?).

Not a huge deal, because once it loads you usually don't have a problem, but sometimes it'd take 10 seconds to render, etc. Or so I remember.

Some of this might actually be fixed, or at least in a position where it can be fixed. We had a problem for a while where updating the software was impossible because it caused crashes, and large-scale fixes were sketchy because of how badly tested the code was. Thankfully, with the rise of AI, I can just point Claude at the codebase and say "write a ton of tests kthx" (and I have), and now it's a lot less questionable to do work on.

It actually found a few bugs while writing the tests, which was pretty funny.
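For flavor, this is the sort of edge-case test that such a pass tends to produce; the `page_count` helper and its off-by-one failure mode here are hypothetical, not taken from The Motte's actual codebase:

```python
import unittest

# Hypothetical example: a paginator like you'd find in forum code.
# A classic buggy version is `total_items // per_page + 1`, which
# reports 5 pages for 100 items at 25 per page instead of 4.
def page_count(total_items: int, per_page: int = 25) -> int:
    # Ceiling division, with a minimum of one page.
    return max(1, -(-total_items // per_page))

class TestPageCount(unittest.TestCase):
    def test_exact_multiple(self):
        # The case the buggy version gets wrong.
        self.assertEqual(page_count(100, 25), 4)

    def test_empty(self):
        self.assertEqual(page_count(0, 25), 1)

    def test_partial_last_page(self):
        self.assertEqual(page_count(101, 25), 5)
```

Writing boundary tests like these is tedious for humans but mechanical for an LLM, and it's exactly how latent off-by-one bugs get flushed out.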

Wow, that is truly amazing. And, hilariously, it's going to be a very powerful data point for our constant "are LLMs actually useful?" debates.

LLMs are useful if you check what they're saying.

Which is what makes them not useful. If you had an employee whose work you had to check every single time, you'd fire him. Why should a machine be held to a lower standard?

If you had an employee whose work you had to check every single time, you'd fire him.

Where I work, that would mean firing everybody - no work gets deployed without at least a second person reviewing and approving the proposed changes. That's a fairly common Quality Assurance practice everywhere, sometimes because an application is critical enough that human failure rates are intolerable, sometimes because a deployment is large enough that even the cost of a tolerable mistake multiplies out to be larger than the cost of double-checking to reduce mistake frequency.

AI currently doesn't count as a "second person" for us, but just as a review of human-written code typically takes much less time than writing it did, two reviews (the reviewer plus the "author") of AI-written code can go faster than hand-writing plus review. The last time I reviewed AI-assisted code, the "tell" that AI was used wasn't that there was anything wrong with the code, it was that the documentation was better-written than you generally get from a junior human developer. We apes tend to want to just write the fun stuff and shy away from the tedious stuff.

Why should a machine be held to a lower standard?

Do you know anyone who'll help e.g. write a C/C++ reader for a simple HDF5-based format for ... well, I think that was before we got a work ChatGPT account and I used a free AI that time, but call it $200/month for ChatGPT Pro? I'd never used that API before, and its docs weren't as clear or as voluminous as I'd have liked (damn it, everyone else shies away from the tedious stuff too), but while searching up and reading better tutorials would have taken an hour or so, double-checking the LLM output took five minutes.

If you had an employee whose work you had to check every single time, you'd fire him.

In most of the programming jobs I've been in, code reviews are considered mandatory for all programmers. Everyone's work is checked every single time, and yet we don't all get fired. Humans make mistakes, and we've set up systems to deal with that; so do computers pretending to be humans. Nothing out of the ordinary here.

There are a lot of cases where figuring out how to solve a problem is far more complicated than verifying the solution, and those are the cases LLMs are fantastic for.
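To make that asymmetry concrete, here's a toy Python sketch (subset sum, purely illustrative): finding a subset that hits a target is a brute-force search over up to 2^n candidates, while checking a proposed answer is a couple of lines anyone can review in seconds.

```python
from itertools import chain, combinations

def find_subset(nums: list[int], target: int):
    """Search: try every subset until one sums to the target.
    Exponential in len(nums); this is the hard direction."""
    for subset in chain.from_iterable(
        combinations(nums, r) for r in range(len(nums) + 1)
    ):
        if sum(subset) == target:
            return list(subset)
    return None

def verify_subset(nums: list[int], target: int, candidate: list[int]) -> bool:
    """Verify: confirm the candidate really is drawn from nums
    and really sums to the target. Linear time; the easy direction."""
    pool = list(nums)
    for x in candidate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(candidate) == target
```

The same shape holds for reviewing LLM output: you don't have to redo the search, only run the cheap check on whatever it hands you.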

Absolutely. Doing code reviews (even comprehensive line-by-line ones) is a lot less effort for me than writing the code in the first place.

This matches my experience- LLMs move the programming workload from writing to reviewing/mentoring.