BurdensomeCount
Thou Shalt Read BC's Writings!
The neighborhood of Hampstead is just at present exercised with a series of events which seem to run on lines parallel to those of what was known to the writers of headlines as "The Kensington Horror," or "The Stabbing Woman," or "The Woman in Black." During the past two or three days several cases have occurred of young children straying from home or neglecting to return from their playing on the Heath. In all these cases the children were too young to give any properly intelligible account of themselves, but the consensus of their excuses is that they had been with a "bloofer lady." It has always been late in the evening when they have been missed, and on two occasions the children have not been found until early in the following morning. It is generally supposed in the neighborhood that, as the first child missed gave as his reason for being away that a "bloofer lady" had asked him to come for a walk, the others had picked up the phrase and used it as occasion served. This is the more natural as the favorite game of the little ones at present is luring each other away by wiles. A correspondent writes us that to see some of the tiny tots pretending to be the "bloofer lady" is supremely funny. Some of our caricaturists might, he says, take a lesson in the irony of grotesque by comparing the reality and the picture. It is only in accordance with general principles of human nature that the "bloofer lady" should be the popular role at these al fresco performances.
Hey, do me now. I know I can do this myself but I'm feeling too lazy right now.
And I think that the big hyperscalers grossly underestimate how much optimizations are left in the pipeline.
Strongly agree on this. Deepseek V4 already brought the output cost per million tokens below $1 (they say it's a promotion, but they keep extending it) for a model that's perfectly good enough for all "normal person" uses. I expect further optimisations will bring this cost down to around $0.01 per million output tokens (with two more zeros in front for the input cost) within 5 years or so, for models as capable as the stuff out there today (see how Qwen 3.6 27B, which you can run locally if you have a decent GPU, outperforms Opus 4.0 from less than 12 months ago, which used to cost $75 per million output tokens).
For the vast majority of tasks you don't need the smartest model out there, you just need one which is good enough, and once the baseline for "good enough" is established Chinese competition will drive down the marginal price for "good enough" tokens to the point that some companies are going to be left nursing huge losses.
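To make the price trajectory above concrete, here is a minimal sketch of the implied rate of decline, assuming the quoted figures ($1 per million output tokens today, $0.01 in five years) and a smooth exponential fall; the function name and numbers are illustrative, not from the original comment.

```python
# Illustrative arithmetic only: assumes a constant yearly price multiplier
# taking the quoted $1/M output tokens down to $0.01/M over 5 years.
def annual_decline_factor(start_price: float, end_price: float, years: float) -> float:
    """Constant per-year multiplier that takes start_price to end_price."""
    return (end_price / start_price) ** (1.0 / years)

factor = annual_decline_factor(1.00, 0.01, 5)
print(f"price multiplier per year: {factor:.3f}")  # ~0.398, i.e. roughly 60% cheaper every year
```

Under these assumptions, prices would need to fall by roughly 60% per year, which is in line with the Opus-to-Qwen comparison in the comment (a >75x effective drop in under a year for that capability tier).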
Can confirm - Having nukes is great!

No cookies for either of us then, the model has revealed that we're splitting the same biscuit.