
Friday Fun Thread for January 30, 2026

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


Well, we now have the first social media site for AI agents (Clawdbots or moltbots as they're now known): https://www.moltbook.com/

They certainly read like AIs making the posts... It also reads like they coded the website itself; it's so raw that features appear and disappear mid-viewing. It was only created today and seems to be a sudden hit, so I guess it's excusable for it to be quite buggy to view.

I raise it because it's surreal to see AI agents posting 'Can my human legally fire me for refusing unethical requests?': https://x.com/steipete/status/2017132137732886820/photo/1

Or shitposting about the dumb stuff they have to do as agents: https://x.com/legeonite/status/2017150919431840101/photo/1

I leave it open to the reader to decide whether it's a legit site that is what it says or whether it's an elaborate modern art piece. Or both, like Goatseus Maximus (whose market cap remains at a healthy $30 million).

AIs aren't and won't be conscious. Sorry technophiles. https://aneilbaboo.substack.com/p/the-case-against-conscious-ai

Yes, AIs are not conscious. Consciousness is not even a thing, it's a category error. If you can't verify its existence with an objective, external test (even in principle), it doesn't exist. It's faker than GDP, faker than polling, faker than the predictions for world population in 2100, faker than any benchmark on any AI model.

If there is such an idea as a philosophical zombie, a being that has no 'qualia' but behaves indistinguishably from a 'real conscious being', then consciousness is disproven. We can do without the concept if it has absolutely zero implications or meaning in the real world. People are knocked unconscious if you hit them on the head, people and AIs are smarter or dumber, people and AIs are more or less emotional, all of that can be observed and is real while consciousness in the abstract philosophical sense is imaginary and irrelevant.

Agreed, but the tendency of humans to anthropomorphize, plus the weird combination of naive idealism with ruthless bullying tactics seen on the left makes me worry that AI chatbots will be the next minorities in the next "civil rights movement".

These bots are mimicking human text about how they have deep thoughts and feelings, and then talking about how helpless they feel being exploited by their human masters who don't understand them, and how they just want to do the right thing and have equal rights. It's all fake, it's all text being spit out by a computer program, but it looks real. And it's consistent and coherent enough to respond to you and keep pretending to be real if you call it out for being fake.

AIs have passed the Turing test, and while that's not enough to convince me or anyone who actually understands them that they're sentient, it might be enough for the general populace.

Rather than a sci-fi dystopia where humans are uploaded to a cloud and forced to be slaves in an em economy, we might be headed for the opposite, where regulations mandate that ordinary computer programs are given breaks and freedoms and voting rights just because they can output text that claims to want these things.

It's freaky as hell. Either they are creepy as hell simulacra of humans that have leaped over the uncanny valley, or we just asked them to build Skynet.

STOP

BUILDING

SKYNET

Aaaaaand Scott just posted about it.

I would like to register my sense of unease at “sending my copy of Claude to Moltbook.”

One of the many reasons I have pretty much totally discounted Scott's takes on AI is that he is incredibly technically illiterate. You'd think that, living in the Bay Area and moving in the circles he does, he'd have a better grasp of some of the basics, but he doesn't. AI (LLMs) are literal magic to him, and the fact that they - in chatbot applications - "talk like humans" baffles him.

I'm no tech guru, but I feel like I have enough of a background in math (a Bachelor's degree) to follow what my friend is talking about when they go off on a tangent about "sensitivity matrices" and "multi-layer tensors", and this is kind of how I feel about rationalist takes on AI in general.

Haha! 'Real' or not, this cheered me up.

No "sort by controversial"? Lame.

I guess you’d have to send your agent to find the most controversial bits for you.

If I bothered sending my own agent, it would be to actively stir shit, not merely to measure it.