
Friday Fun Thread for February 20, 2026

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that); this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


The path to ubiquitous AI (17k tokens/sec): A company (TALAAS) just announced a chip that runs LLMs very fast: according to their graph, 8.5x as fast as Cerebras, which is in turn 5.6x as fast as Nvidia. Try it for yourself. It's running Llama 3.1 8B, so rather dumb, but the answers are nearly instant. Allegedly it's also much cheaper (about 10x) than GPUs. A downside is that the model is hard-wired into the chip, though they claim it takes only about two months to go from model to production silicon.
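Back-of-the-envelope on those multipliers, assuming the two ratios from their graph compose directly (vendor charts don't always guarantee that):

```python
# Rough arithmetic on the claimed multipliers.
taalas_vs_cerebras = 8.5   # claimed speedup over Cerebras
cerebras_vs_nvidia = 5.6   # claimed Cerebras speedup over Nvidia
taalas_vs_nvidia = taalas_vs_cerebras * cerebras_vs_nvidia  # ~47.6x, if the ratios compose

claimed_tps = 17_000  # claimed tokens/sec for Llama 3.1 8B
implied_nvidia_tps = claimed_tps / taalas_vs_nvidia  # ~357 tokens/sec implied baseline

print(f"Implied speedup over Nvidia: {taalas_vs_nvidia:.1f}x")
print(f"Implied Nvidia baseline: {implied_nvidia_tps:.0f} tokens/sec")
```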

Are there any use cases that aren't possible with today's (relatively) slower and more expensive models? Perhaps you put this on a router to get a very smart firewall. Or have it repeatedly generate code and fix bugs until a test suite passes, which Opus and Codex already do, but they can take a while. Then again, that loop still isn't instant, and frontier models already generate text very fast, much faster than a human can write or even read.
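A minimal sketch of that generate-and-retest loop, just to make the shape concrete. Everything here is a placeholder: `generate_patch` stands in for whatever inference endpoint you'd actually call, and the test command is whatever your project uses.

```python
import subprocess

def generate_patch(prompt: str) -> str:
    """Placeholder for a call to the (hypothetical) fast inference endpoint."""
    raise NotImplementedError("wire this to your inference API of choice")

def fix_until_green(test_cmd: list[str], source_file: str, max_iters: int = 20) -> bool:
    """Regenerate source_file until the test suite passes or we give up."""
    for _ in range(max_iters):
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass
        # Feed the failure output back to the model and overwrite the file.
        prompt = (
            f"Tests failed with:\n{result.stdout}\n{result.stderr}\n"
            f"Rewrite {source_file} so the tests pass."
        )
        with open(source_file, "w") as f:
            f.write(generate_patch(prompt))
    return False

# Example: fix_until_green(["pytest", "-q"], "solution.py")
```

At a few hundred tokens per second each iteration is dominated by generation; at 17k tokens/sec the bottleneck would shift to how fast the test suite itself runs.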

This is so cool, but my immediate gut reaction is that this

A downside is that the model is hard-wired into the chip

is a deal-breaker in a world that is massively constrained by fab capacity (right now literally anything wafer-based, RAM included, not just CPUs/GPUs).

In a world where fab capacity is plentiful and machine time is cheap (i.e. no bidding wars for output), I can see this being an amazing optimization, especially if we hit scaling walls and base models stop being updated so fast.

But right now, with constant model updates and zero spare fab space? This isn't going into mass production.

We should be using AIs to build more fabs, then. Have them design faster RAM and processors, better fabrication methods with higher yields.

If that were trivial, we'd already be living in the singularity and/or the matrix.