
The Motte is very very slow

Is it just me, or has The Motte lately been getting very slow to load and occasionally timing out?


It's not just you. Working on it. :)

(Should be better now, I'll let it sit for half an hour or so and see how it's going. More work may be needed.)

(Edit: Sigh, not solved. Still working on it.)

(This may now be fixed; I think we were just running on bad hardware.)

It's been kinda slow for a long, long time (intermittently for most of last year?).

Not a huge deal, because once it loads you usually don't have a problem, but sometimes it'd take 10 seconds to render, etc. Or so I remember.

Some of this might actually be fixed, or at least be in a position where it can be fixed. We had a problem for a while where updating the software was impossible because it caused crashes, and large-scale fixes were sketchy due to how badly tested the code was. Thankfully, with the rise of AI, I can (and have) just point Claude at the codebase and say "write a ton of tests kthx", and now it's a lot less questionable to do work on.

It actually found a few bugs while writing the tests which was pretty funny.

Wow, that is truly amazing. And, hilariously, it's going to be a very powerful datapoint for our constant "are LLMs actually useful?" debates.

They are undoubtedly useful, especially in a domain like writing tests. If you are writing tests post facto, you have a very well defined problem with a constrained domain, and a massive corpus to consult in terms of style, strategy, etc. We live in a world where hundreds of millions (and I'm probably lowballing) of tests have been written, including tests for some of the hardest math and logic problems known to man. Every leetcode website and programming competition out there uses a suite of test cases to verify correctness of submissions.

Honestly writing software tests is almost the optimal use case for an LLM.
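To illustrate what "post-facto" test-writing looks like in practice: the existing code already defines the expected behavior, so each test just pins that behavior down. This is a minimal sketch with a made-up `slugify` function (not from The Motte's actual codebase), using Python's standard `unittest`:

```python
import unittest

# Hypothetical existing function, standing in for real project code.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Post-facto tests: each case records behavior the code already has,
# so later refactors can be checked against it.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```

The constrained shape of the problem is exactly why it suits an LLM: given the function, the expected outputs are mostly mechanical to derive.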

I write lots of tests at work, and it's all incredibly domain-specific; trying to use LLMs for it just leads to laughably incorrect test expectations. Hence I have trouble imagining it just working out of the box.

Makes sense, I think it all comes down to the corpus of public information available. If similar work can be scraped from one of the half billion repos on GitHub, it's probably great; otherwise it likely isn't. Most of my work is either web stuff, infrastructure as code, or business logic governed by public policy (rather than internal rules), so it's pretty ideal.

I imagine it would be significantly worse if I were still in global finance, but there we often had the same issue as the LLMs: lack of information, and often having to switch tasks while waiting a week for a request for some bespoke implementation details to go up the chain, over to IBM, and back.