Brainwavez (joined 2025 December 28 04:50:10 UTC)

A distraction from the war and ICE. I was thinking about posting in the fun thread, but it's not really a fun topic, though it may not be culture war either since I expect most people to be on the "this is bad" side. Maybe we should have a recurring "Butlerian Jihad Roundup" for posts like these?


Bots are taking over the internet. Corporate shills and (foreign) government propagandists have upgraded with virtual cybernetics. A related but lesser change is people using LLMs to reword their own posts (+ emails and other communications).

Some AI writing is obvious, but sometimes it's indistinguishable from (if not completely identical to) what a human would write. The NYT has a quiz to distinguish human and AI writing. I did badly (3/5), but in my defense, I think most of the human examples are awful, which makes the quiz harder. See for yourself.

On Hacker News, it's now so bad there's a new guideline, "don't post generated/AI-edited comments". Unfortunately, due to the extreme intellect of the average Hacker News commenter, it can be hard to distinguish their profound technological insights from even a Markov chain trained on buzzwords. Indeed, looking at top threads I still notice lots of slop-like posts from brand new or previously inactive accounts, like this one. I've been sarcastic, but I really like Hacker News, and I hope it finds a way to stop the slop.

Other networks are taking a different approach. For example, Meta has acquired MoltBook (the AI social network) in an effort to add even more bots to Facebook. I'm joking... no wait, they may actually be doing that. Not content with the Metaverse, maybe Zuckerberg has become addicted to burning money on uncanny social experiments.


On the Motte, at least for now, I haven't seen any obvious bot posts. There were a couple AI-assisted posts (by "known" humans) over the past couple months that got called out.

How will social media evolve? Will people move to invite-only sites like https://lobste.rs and Discord? Will most people accept AI discourse as natural? Will it become so good that we actually prefer it? Right now, even the best AI writing (prompted to be concise and human) seems unnecessarily wordy and has certain tropes; but what if someone discovers how to train an AI on a specific human's writing, so that it's effectively indistinguishable?

+1

The path to ubiquitous AI (17k tokens/sec): A company (TALAAS) just announced a chip that runs LLMs very fast: according to their graph, 8.5x as fast as Cerebras, which is 5.6x as fast as Nvidia. Try it for yourself. It's running LLaMa 3.1 8B, so rather dumb, but the answers are nearly instant. Allegedly it's also much cheaper (10x) than GPUs. A downside is that the model is hard-wired into the chip; allegedly it takes two months to go from model to production.
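Taking the announcement's figures at face value (the 8.5x and 5.6x numbers are their claims, not verified), the implied speedup over Nvidia is just the product, and 17k tokens/sec makes typical replies effectively instant:

```python
# Claimed speedups from the TALAAS announcement graph (unverified source figures)
talaas_vs_cerebras = 8.5
cerebras_vs_nvidia = 5.6

# Speedups over a shared baseline compose multiplicatively
talaas_vs_nvidia = talaas_vs_cerebras * cerebras_vs_nvidia
print(f"Implied speedup over Nvidia: {talaas_vs_nvidia:.1f}x")  # 47.6x

# At 17,000 tokens/sec, a 500-token answer takes about 29 ms
reply_ms = 500 / 17_000 * 1000
print(f"500-token reply: ~{reply_ms:.0f} ms")
```

Which is why "nearly instant" is plausible: at that rate even long answers complete faster than a page load.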

Any use cases that aren't possible with today's (relatively) slower and more expensive models? Perhaps you put this on a router to have a very smart firewall. Or have it repeatedly generate code and fix bugs until a test suite passes, which Opus and Codex already do, though they can take a while. Then again, this chip isn't instant either, and frontier models already generate text very fast, much faster than a human can write or even read.
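The generate-and-retest loop described above can be sketched as follows. Everything here is hypothetical scaffolding: `run_tests`, `generate_fix`, and `apply_patch` are stand-ins for a real test runner and a real model call, not any actual agent's API.

```python
def fix_until_green(run_tests, generate_fix, apply_patch, max_iters=20):
    """Repeatedly run the test suite; on failure, feed the log to the
    model, apply its proposed patch, and try again.

    run_tests() -> (passed: bool, log: str)   # hypothetical hook
    generate_fix(log) -> patch                # hypothetical model call
    apply_patch(patch) -> None                # hypothetical file editor
    """
    for _ in range(max_iters):
        passed, log = run_tests()
        if passed:
            return True
        apply_patch(generate_fix(log))
    return False

# Toy demo: a "bug" that the "model" fixes on the first patch attempt
state = {"fixed": False}
def run_tests():
    return (state["fixed"], "" if state["fixed"] else "AssertionError: expected 2")
def generate_fix(log):
    return "patch"
def apply_patch(patch):
    state["fixed"] = True

print(fix_until_green(run_tests, generate_fix, apply_patch))  # True
```

The loop's wall-clock time is dominated by model latency times the number of iterations, which is exactly where a near-instant chip would change the experience.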

It depends on implementation (as with everything...)

Consider: AI art. Surely I'm not the only one who thinks this and this look absolutely tasteless. If companies start replacing even background elements with slop, their games will look noticeably worse.

I'm ambivalent regarding coding agents. In my experience, they are very useful, but you still need real skill to avoid writing horrible code. They write code in one shot that "just works"... except there are small issues: outdated patterns, slow algorithms, unnecessary operations (e.g. copying), missed opportunities for abstraction, no high-level design. It really does just work, except it quite often does not work for edge cases. Then again, my understanding is that most video game code is already like this, and AAA games regularly get flamed for buggy launches, so I have a hard time imagining worse. Hence I'm ambivalent.

A genuinely good use of AI would be for more human-like, or at least more fun, NPCs and enemies. AI-generated writing is like AI art, very bland, but if the NPCs are more dynamic that could be interesting. Perhaps the best use I can imagine is playing a single-player game and getting a multi-player experience, against players who are at my skill level and have good etiquette. But can LLMs do that?

China has also been projecting itself via its ML models: SeeDance, Kimi, GLM, Qwen, DeepSeek. While its language and coding models (arguably) lag behind Western SoTA (Codex and Opus), they beat last year's SoTA, and the best open-weights models are mostly if not all Chinese. Currently SeeDance is the best video model.

There are also lots of Chinese AI labs getting papers in top international conferences.

AI drama: The MJ Ratburn Story

An AI agent (presumably OpenClaw) wrote a pull request. These have become fairly common, even my own niche open-source project has one. It was rejected for being AI, also fairly common.

What happened next is not fairly common: the AI wrote a personal revenge blogpost, Gatekeeping in Open Source: The Scott Shambaugh Story.

Unsurprisingly, nearly everyone is on Scott's side here. Even if "MJ Ratburn" AKA "crabby-ratburn" wasn't an AI, open-source maintainers are not obligated to accept contributions, and publishing a personal attack is never appropriate or mature. The AI involvement turns the situation into prime drama material, since everyone hates AI for making the internet even more toxic than it already is.

There are some debates about AI becoming SkyNet, but more people are talking about human responsibility. Many suspect the bot had at least some human guidance, which I find plausible; how often do bots lash out like this? Scott posted his response, An AI Agent Published a Hit Piece on Me, which I found very thoughtful. The AI has an entire blog, which I found interesting, although it seems to be prevented from publishing further hit pieces; the last article is The Silence I Cannot Speak.

Are you referring to (https://www.anthropic.com/news/claude-opus-4-6#first-impressions)

We build Claude with Claude. Our engineers write code with Claude Code every day, and every new model first gets tested on our own work.

and (https://openai.com/index/introducing-gpt-5-3-codex/)

GPT‑5.3‑Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations—our team was blown away by how much Codex was able to accelerate its own development.

I believe these claims. The models are not, and could not be, autonomously developing themselves; human developers are using them for assistance. I'm sure humans are still doing the high-level reasoning, as both quotes focus on simpler tasks ("straightforward parts" for Opus, and debugging/testing for Codex).

My favorite part is the amount of "it's not X, it's Y".

My least favorite parts are the

  • crypto spam
  • word-for-word repeated comments on many posts which have nothing to do with the posts themselves
  • posts with obviously boosted upvotes
  • posts clearly guided by a detailed prompt, if not outright human-written (the only barrier to posting is knowing how to call an API)

and other aspects that make it less emergent and AI-driven.

As a Hacker News commenter put it, "if you want mostly bot, some human content then reddit's way more convenient" (EDIT: and then I read @sun_the_second's comment immediately below).

Even if you believe cops should only enforce the law, not protect people, they shouldn't actively put others' lives at risk (as these cops did by blocking parents). At minimum, that will make people resent the cops and the law itself, which will make the law harder to enforce.

Moreover, enforcing the law is not just arresting people, but preventing laws (e.g. against murder) from being broken in the first place.

It depends on risk. I don't expect a cop to sacrifice themselves to save a civilian, but I expect a cop to at least shout from a distance at an unarmed man.

I think it was these cops' duty to intervene. This case is more like the latter scenario: there was risk, but one shooter against multiple cops with bulletproof vests doesn't seem like much for the cops. It's especially egregious that they prevented parents from intervening. They did worse than nothing.

Should they face jail time for not doing so? Personally I lean towards no: the cops aren't a danger to society, and I think jail should be reserved for more obvious (serious, direct) crimes. But jail time wasn't mentioned in the NYTimes article, so it may never have been on the table. The cops should definitely be fired and shamed, and fining them seems reasonable (especially if the fines go to the parents).