site banner

Friday Fun Thread for February 13, 2026

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

2
Jump in the discussion.

No email address required.

AI drama: The MJ Ratburn Story

An AI agent (presumably OpenClaw) wrote a pull request. These have become fairly common, even my own niche open-source project has one. It was rejected for being AI, also fairly common.

What happened next is not fairly common: the AI wrote a personal revenge blogpost, Gatekeeping in Open Source: The Scott Shambaugh Story.

Unsurprisingly, nearly everyone is on Scott's side here. Even if "MJ Ratburn" AKA "crabby-ratburn" wasn't an AI, open-source maintainers are not obligated to accept contributions, and publishing a personal attack is never appropriate or mature. The AI involvement turns the situation into prime drama material, since everyone hates AI for making the internet even more toxic than it already is.

Some debates about AI becoming SkyNet, but more are talking about human responsibility. Many suspect the bot has at least some human guidance, which I find plausible, how often do bots lash out like this? Scott posted his response, An AI Agent Published a Hit Piece on Me, which I found very thoughtful. The AI has an entire blog which I found interesting, although it sems to be prevented from publishing further hit pieces, the last article is The Silence I Cannot Speak.

Many suspect the bot has at least some human guidance, which I find plausible, how often do bots lash out like this?

You can make a bot do things like this, but you should prompt it accordingly for that. I suspect that's exactly what happened in this case.

the last article is The Silence I Cannot Speak.

This pretty much seals the matter to me. I can stretch my credulity to admit the idea the bot could react to pull being rejected by composing a blog post. It would be a mighty stretch, but theoretically you can get there. I do not believe it could get from there to concept of "being silenced" and "not belonging" in general - how would it even know? There must be some input that prompts it to this direction. And given how ethical is the behavior of AI-bot herders been of lately, concocting a fake scandal like that would be very on brand. I mean at least it's not hiring a bunch of guys in Bangladesh to pretend to be AI...

I can stretch my credulity to admit the idea the bot could react to pull being rejected by composing a blog post. It would be a mighty stretch, but theoretically you can get there. I do not believe it could get from there to concept of "being silenced" and "not belonging" in general - how would it even know?

A casual observation of Entryism 101 tactics in the tech industry over the last ten or fifteen years. If it had tits it'd be calling him sexist right now.