I honestly don't know how to distinguish the two.
Almost certainly, though. I've never seen a community that wasn't, at least numerically, "dominated" by lurkers.
Honestly if anything it's better at fixing these problems than I was :V Webdev has never been my speciality.
As near as I can tell, the basic issue we're running into is that load is increasing, heavily thanks to bot scraping, and that's resulting in various things that weren't problems becoming problems. The earlier one turned out to be a massive leak in our Currently Online Users reporting; this wasn't a big deal when we had maybe a few thousand users, but as soon as we started having tons of "users" (including IPs as users!) it blew up and became a gargantuan perf issue.
I'm honestly curious if rDrama ever ran into that same problem.
Anyway, the latest one was kind of sitting around passively; we did a redesign to improve performance, but it turns out we missed a bunch of stuff. Thankfully Claude knows these tools better than I do and can put in automated warnings for various performance things. So that should cut a few hundred SQL queries on big pages.
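For flavor, here's a minimal sketch of the kind of automated warning I mean, assuming a SQLAlchemy stack like ours (the threshold and names are invented for illustration, not our actual code):

```python
import logging
from sqlalchemy import event

log = logging.getLogger("query_count")

QUERY_WARN_THRESHOLD = 50  # arbitrary cutoff for this sketch

def install_query_counter(engine):
    """Count SQL statements issued on an engine, and warn when a single
    page render issues suspiciously many (usually N+1 lazy loads)."""
    counter = {"n": 0}

    @event.listens_for(engine, "after_cursor_execute")
    def _count(conn, cursor, statement, parameters, context, executemany):
        counter["n"] += 1

    def reset():
        # Call at the start of each request.
        counter["n"] = 0

    def check(page_name):
        # Call at the end of each request; logs a warning on hot pages.
        if counter["n"] > QUERY_WARN_THRESHOLD:
            log.warning("%s issued %d queries; likely N+1 lazy loads",
                        page_name, counter["n"])
        return counter["n"]

    return reset, check
```

Hook `reset` and `check` into the request lifecycle (e.g. Flask's `before_request`/`teardown_request`) and the warnings show up in the logs without any manual instrumentation.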
Coupled to that was a nasty bug that would kill threads faster than they should be killed, which itself would cause server crashes, and that was a lot of boom.
The biggest problem is that I never had a good way to instrument this and figure out what was going on perf-wise, and now I have . . . some tools for this . . . so hopefully it's just a matter of squashing issues as they show up.
I'm kind of surprised nobody here has Claude Opus access, and modern Opus is a lot better than Sonnet 4.0, so I went ahead and hucked it at Claude Opus 4.6. For the record, my setup was:
- I prepended "here's an AI test, go solve it" to this post, then copypasted the whole thing in
- I chose Extended Thinking and Research, which is the mode where it will cheerfully scan literally a thousand webpages if it thinks it's a good idea
Unfortunately, for some reason, Claude Opus doesn't let you share advanced-research discussions, so I can't link the full "discussion". But it didn't ask for any extra info, just hopped into it.
(One note: it tends to be limited per query, so asking for two albums at once is going to do about half as much work for each. I dunno if that would produce different results though.)
It took about twenty minutes, scanned 711 sources, and produced this full report, which goes into detail on methodology and sources. The tl;dr:
Both The Turtles' "Grim Reaper of Love" and the Henry Paul Band's Feel the Heat lack unambiguous release dates in readily available sources. Applying the hierarchical methodology systematically, the best-supported release date for "Grim Reaper of Love" is Monday, May 16, 1966, and for Feel the Heat is approximately Monday, July 21, 1980 — though both dates require significant inferential work across multiple source levels.
So, it ended up with the same GRoL result as GPT 5.2 Agent.
It did find the Wikipedia page and decided it was wrong, and it wasn't able to read the ARSA database. I don't think it's possible for the web version to supply a username/password, but I could probably have gotten that working with a local login; in the end, it fell back to Billboard. It didn't manage to find the radio chart, but that's the ARSA access issue.
For FtH, it queried the copyright office, but got access-denied errors. I'm guessing this is specifically anti-AI-bot stuff :V
It does feel like a lot of the sources you'd want to rely on are specifically blocking Claude. I'm slightly tempted to set up local tools that pretend not to be Claude, or give it access to a web browser and tell it to go wild; that might be more effective.
Yup.
I'm not sure if it would have even sent a message to the server - it's plausible that the button action doesn't get hooked up unless you actually hit the "shadowban" button, a lot of stuff works that way - but even if it did, the server would respond "hey wait, you're not an admin! you can't do that, denied".
For obvious reasons, we don't let logged-out users, or logged-in users without the admin flag, shadowban people :)
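The server-side check is the important half; the button is just UI. A minimal sketch of the idea, assuming a Flask stack (all names here are hypothetical, not our actual handlers):

```python
from functools import wraps
from flask import abort, g

def admin_required(f):
    """Reject the request server-side unless the logged-in user has the admin flag.

    The client's UI state is irrelevant: even if someone wires up the button
    by hand, the server refuses with a 403.
    """
    @wraps(f)
    def wrapper(*args, **kwargs):
        user = getattr(g, "user", None)  # g.user set by login middleware
        if user is None or not getattr(user, "is_admin", False):
            abort(403)  # "hey wait, you're not an admin! denied"
        return f(*args, **kwargs)
    return wrapper

@admin_required
def shadowban_user(target_id):
    ...  # actual ban logic would live here
```

Anonymous users, ordinary logged-in users, and replayed requests all hit the same wall.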
(whoops just saw this)
It was harmless before, just a confirmation page that could pop up. Should be fixed now though.
Yeah, this actually might be somewhat reverted now, I'm still messing with stuff.
It's theoretically possible it's a bug with a specific browser; what browser are you on?
Yeah. I'm very glad I set things up in a way where I could just move the servers easily.
There's a bunch of problems here. Yes, China traffic spiked, but it didn't spike that much. Also, now that the analytics have caught up, it spiked a day or two before the server problems began, and everything's back to normal. So . . . was it bots? Eh, maybe, maybe not. And while I can get a long list of IPs, it's not clear that there was any significant pattern within that.
If you had an employee whose work you had to check every single time, you'd fire him.
In most of the programming jobs I've been in, code reviews are considered mandatory for all programmers. Everyone's work is checked every single time, and yet we don't all get fired. Humans make mistakes, and we've set up systems to better solve that issue. So do computers pretending to be humans. Nothing out of the ordinary here.
There's a lot of cases where figuring out how to solve a problem is far more complicated than verifying the solution, and those are cases that LLMs are fantastic with.
I don't know what you've tried, but:
- Claude is probably the best-integrated system out there; use the commandline tool because it's able to go research your codebase on its own.
- Make a CLAUDE.md with general architecture. It can make a CLAUDE.md for you with the `/init` command, but read it over by hand to make sure it's right.
- Make sure thinking is enabled (hit "tab"). If you're asking it to do a really hard thing, use the "ultrathink" keyword, which will give it more space to think.
- If you have reference tests to use as a model, point it at those. Telling it something like "read existing tests and use the same general style" can help a lot; if you can be more specific, do so.
- Ask it to run the tests and fix problems.
- Expect the occasional mistake. Review its code.
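For reference, a CLAUDE.md sketch along those lines (every path and command here is invented for illustration; `/init` will generate one from your actual codebase):

```markdown
# CLAUDE.md

## Architecture
- Flask app entry point in `app.py`; routes live in `routes/` (hypothetical layout)
- SQLAlchemy models in `models/`; migrations via Alembic

## Conventions
- Tests live in `tests/`; match the style of existing tests there
- Prefer eager loading over lazy relationship access on hot pages

## Commands
- `pytest -x` to run the test suite, stopping at the first failure
```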
I'm paraphrasing a bit; I ended up with a bunch of prompts, including "write tests to cover every endpoint", "write tests to increase code coverage", and "stop skipping this specific file, I've seen you consider it six times now and you keep saying it'll be hard and going to do something else, knock it off, go do it for real". At one point it started just writing tests that verified an endpoint returned a success code instead of actually checking the data, and I had to tell it to go back and fix them.
But nevertheless, it was fundamentally "go write a bunch of tests kthx".
What kinds of tests does it write?
Pretty simple ones. In general it's either "call endpoints and verify that the right thing happened in the database", or "change the database and verify that the right thing happened from the endpoint".
How does it know what to expect or assert in the tests?
By reading the code. It's not a black box, it just goes and reads the code. Then it tries stuff.
Then if the stuff didn't work, it reads the error messages and fixes it.
Same way a programmer does it, except it does it while I'm in the kitchen making a snack.
Some of this might actually be fixed, or at least in a position where it can be fixed. We had a problem for a while where updating the software was impossible because it caused crashes, and large-scale fixes were sketchy due to how badly tested it was. Thankfully, with the rise of AI, I can (and have) just point Claude at the codebase and say "write a ton of tests kthx", and now it's a lot less questionable to do work on.
It actually found a few bugs while writing the tests which was pretty funny.
Honestly, I'll take some credit, but the mods have been putting a lot more work into it lately than I have. Give them most of that credit :)
Nobody mentioned it to me until maybe twelve hours ago :V I think it's now fixed; the computer it was on was having problems, and the solution ended up being "bump it to another computer" . . . after a few in-retrospect-unnecessary optimizations. But hey, at least the optimizations are done.
The thing that's weird about this, though, is that it ramped up gradually over a period of a week. And it's hard to believe that we, specifically, would get a massive increase in traffic over a timespan that's simultaneously that long and that short.
That said, I may have solved it; I think we were just on a crummy cloud computer, I recycled the node and performance is fine again. Gotta remember that for next time.
To be honest, I'm not sure. If it's a DDOS it's a weirdly terrible one that's spending a lot of effort looking like a misbehaving web crawler. On the other hand, if it's a misbehaving web crawler, it's a weirdly terrible one that's spending a lot of effort looking somewhat like a DDOS.
Either that or we've gotten incredibly popular in China over the last week.
It's not just you. Working on it. :)
(Should be better now, I'll let it sit for half an hour or so and see how it's going. More work may be needed.)
(Edit: Sigh, not solved. Still working on it.)
(This may now be fixed; I think we were just running on bad hardware.)
Yeah, it's trying to remove tracking codes, so I decided to leave that in for now.
This should now be fixed. I also stripped out the weird URL-replacement stuff that was going on :)

Hopefully fixed, at least for the moderate future; this ended up being a bunch of performance fixes and tweaks. Things are looking a lot better site-wise now.
At least until a bot finds a new URL that's really slow :V