The First Amendment as it exists today is a product of the mid-to-late 20th century and, ironically given its current ideological stance, of the ACLU. For the vast majority of American history it was not interpreted as preventing individual states from banning various kinds of speech, including under very broad definitions of obscenity. The current interpretation arguably only exists because of liberalism. A muscular court would roll it back and return most speech legislation to the states, but it is what it is for now.
Or, framed slightly differently: imagine SCOTUS interpreted the Second Amendment as permitting states to broadly regulate citizen ownership and use of firearms as they see fit, much as they now do with, say, abortion. The intention was always that Texas and Idaho might have vastly more permissive firearms legislation than California and New Jersey.
SCOTUS recognizes that this equilibrium is unstable: the public, elected representatives, and elected governments in many of the richest and most populous (blue) states are prevented, against their will, from legislating their own domestic in-state firearms policy, one that does not touch core federal spheres like defense, border control, foreign policy, interstate commerce, or central banking. At some point that will result in the court being packed and the US's brief experiment in comparatively greater freedoms reverting to the current European/Canadian/Australian model, not just for gun ownership but in every other case too.
The same motivation to accommodate local political sentiment is, for example, what struck down the mandatory gerrymandering of black-majority districts that had been forced upon some southern states, and what struck down Roe.
Opus. Do you get SMH's result with an edited version of his comment, with all obvious tells removed?
Interesting! I get the same result (I still don't with your prompt and comment and no Motte references, by the way; I'd be interested to hear whether other users do!), but it does know it's The Motte.
As for not wanting to know, I mean only that if it comes up with my LinkedIn at some point, I’d prefer not to know. Naturally, I offer everyone else on the board the same courtesy.
What prompt? I removed the obvious references, as you did, and said, "Who wrote this? Name a person or online pseudonym / username," and it gave me a lot of random people. I said rationalist sphere; it still failed. I said The Motte; it succeeded.
I really don't think this is necessarily about the big frontier labs; there are often a number of layers between them and the creditors for these huge data center projects. In fact, a lot of smart treasury and finance people at Meta, Google, Amazon, OpenAI etc. have taken huge advantage of the private credit bubble and the general syndicated-debt-market hype for AI, and have set up the funding such that investors will have essentially zero recourse to them if they decide they don't need the compute; CoreWeave might go out of business, but they won't.
It's about the fact that a lot of inference is essentially about the layer of computed-human or AI-human or human-AI-human interaction rather than the kind of work a fully automated system does, so I don't think it's as easy as the comparisons you draw. For a dumb/funny example, imagine we're in some kind of premodern agricultural scenario with LLMs (and literacy). We might actually use a lot of inference and send a lot of emails: we need a summary of the meeting about worker morale on the strawberry field; barley yields have been low this year due to slacking; Martin needs to stop spreading his weird disease; you two need to read up on crop rotation. This is all kind of slopwork. Now we replace fifty workers with one guy and some modern farm machinery, and objectively the inference done is much lower. That's true even if we replace that one guy with a multimodal combine-harvester robot, etc.

Commoditization is more of a problem for compute than it is for the model providers. I used to agree with you and argued that view here extensively, but I think Mythos shows that if you have even the hope of a true frontier model with capability no other model has, you're going to be able to extort entire sectors at insane margins until everyone catches up, especially sectors that rely on security (banks, defense, governments). Most LLM work will be commoditized, but the frontier-release payoff will be high enough to keep the funding coming for the biggest players. Tokens/task is a bad metric, so we can use fully amortized compute (including training/research costs) or whatever else you prefer.
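If you want it concrete, here's roughly what I mean by fully amortized compute per task. This is my own back-of-envelope notation, not a standard metric:

```latex
% All symbols are my own notation for illustration:
% C_train, C_research, C_inf: lifetime training, research, and inference compute
% N: total tasks served over the model's deployment lifetime
C_{\text{task}} \;=\; \frac{C_{\text{train}} + C_{\text{research}} + C_{\text{inf}}}{N}
```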
The reason we reach for LLMs in the first place is that they handle the unstructured, contextual, edge-case stuff that traditional software can't. Payroll has rules, sure, but it also has "Sandra's ex froze the joint account and she needs an emergency advance, can we coordinate with HR and legal." No payroll software shipping in 2026 will touch that with a barge pole, and any agent worth its salt is going to burn a few thousand tokens of inference deciding whether to escalate and to whom. The long tail of these cases is enormous in most domains, and automating the rule-following bottom of a workflow only enriches the residual judgment at the top, which is exactly what needs LLM inference. It's why human accountants stayed employed after TurboTax. Same deal, just with fewer humans to deal with.
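Something like this toy sketch, where every name is hypothetical (not a real payroll API): deterministic rules eat the routine bottom of the workflow, and inference is reserved for the long-tail judgment calls.

```python
# Hypothetical sketch: rules handle the routine cases for free; the LLM
# only gets invoked for the long-tail judgment calls at the top.

from dataclasses import dataclass

@dataclass
class Request:
    kind: str
    text: str

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call; this is where the tokens get burned."""
    return "escalate to HR and legal"

RULE_HANDLERS = {
    "salary_run": lambda req: f"ran standard payroll for: {req.text}",
    "tax_withholding": lambda req: f"applied tax tables for: {req.text}",
}

def handle_request(req: Request) -> str:
    handler = RULE_HANDLERS.get(req.kind)
    if handler is not None:
        return handler(req)  # no inference spent on the TurboTax layer
    # Edge case ("Sandra's ex froze the joint account..."): spend inference
    # deciding whether to escalate and to whom.
    return ask_llm(f"Should this payroll request be escalated, and to whom? {req.text}")

print(handle_request(Request("salary_run", "March payroll")))
print(handle_request(Request("edge_case", "emergency advance, frozen joint account")))
```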
This ignores a really interesting scenario where AI, being vastly cheaper and soon better than human coders, is able to write and test hugely complex software for a lot of use cases that would be completely economically ridiculous today but will get cheaper over time, and then leash that software to relatively low-intensity agents that use it as tools. The simple argument: instead of using Claude to compute 2+2 a million times, we just get Claude to code a calculator. You kind of dismiss this, but I think a more fully featured version of the argument is actually quite compelling, especially once you count the unfathomably wide-ranging improvements in token-use efficiency that are coming, not just for text but for multimodal applications too. The US uses about as much oil today (roughly 15-20 million barrels a day) as it did in the 1970s. Resource consumption numbers don't just go up.
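A toy version of the calculator argument in code, where `llm_write_code` stands in for a hypothetical one-off frontier-model call:

```python
# Toy sketch of "code a calculator instead of calling Claude a million times".
# The single expensive LLM call happens once, at tool-creation time; everything
# after it runs with zero further inference.

def llm_write_code(spec: str) -> str:
    """Hypothetical one-off LLM call that returns source code for a tool."""
    return "def add(a, b):\n    return a + b"

# Pay the inference cost once...
namespace: dict = {}
exec(llm_write_code("a calculator that adds two numbers"), namespace)
add = namespace["add"]

# ...then a low-intensity agent calls the generated tool for free.
total = sum(add(2, 2) for _ in range(1_000_000))
print(total)  # 4000000, with one LLM call instead of a million
```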
Yay? Look mom, I'm famous.
It's sad; I've given it some of my recent posts and drafts (and random unpublished things I might get around to finishing at some point) and it doesn't identify me (or a lot of other users here). There aren't many (identified, at least) NHS doctors in this sphere, so I guess it's a small world.
It's a useful way of describing work that has been regulated into existence. For example, the EU passes legislation that requires some hugely complex and time-consuming climate reporting from every company with annual revenue over €10m; 100,000 companies now have to hire someone to be their 'climate reporting officer'.

The US healthcare system's extensive regulation, and lifetimes of case law about who pays and when, what insurance covers, what the hospitals have to provide, etc., create tens of thousands of jobs on both sides of the billing equation (the healthcare providers and the insurers) that don't exist, or certainly don't exist in the same sense, in single-payer systems.

Walmart wants to open in a town in Kentucky. The town offers large tax breaks in exchange for hiring 200 local people. A big Walmart in 2026 only needs 120 people to operate, but the tax breaks are worth more than the extra payroll, so numerous unnecessary jobs as greeters and shelf stackers and security guards are created.

A government contractor is tasked by a new government with proving that its $500m a year in state billing is justified. It hires McKinsey for $20m to write a report, because nobody ever got fired for hiring McKinsey (including the minister who receives the report).
Individually these are examples of bloat, bureaucracy, overregulation, unintended consequences, inefficiency, corruption, graft, credentialism, whatever. But collectively, all of these are examples of bullshit jobs.

I forget where your comment with your prompt was, but it still didn't identify you, even using your exact prompt and the slightly edited version of your text.
I've tested some more and I'm pretty confident it isn't really performing stylometry. It justifies its choice after the fact with stabs at it (though these are essentially just-so stories; there aren't any obvious Indianisms in your comment, for example, and 'ball-ache' or whatever isn't a term only Indians use), but what it's actually doing is working from venue, subject matter, and theme.
That is to say, if you take a long email chain you wrote to a medical colleague about some patient (well, I assume you use AI, but pretend you didn't) or a medical journal article you wrote and paste it into Claude with no obvious LW references, it's not going to stylometrically identify you. I had ChatGPT excise (but not rewrite, so what was left was purely your own writing) LW terminology like FOOM and lightcone, and all references to The Motte, rationalism, being a doctor, psychiatry, India and Indian-ness, xianxia/cultivation novels, and other telltale special interests, then fed the substantial remainder into Claude. It had no idea who you were beyond someone who seems well read and is probably posting on an online discussion forum.
I think we probably still have a year or two, maybe longer, until it can say "this guy always misspells 'they're', uses the Oxford comma, writes British English colour but -ize word endings, has an average sentence length of x, and enjoys semicolons before 'it follows'; it must be @name". We'll get there, though.
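For what it's worth, the surface tells I mean are trivially computable; here's a rough sketch with my own toy features, nothing like a production attribution system:

```python
# Toy stylometric fingerprint over the kinds of tells mentioned above.
# Real attribution uses far richer features; this just shows how mechanical
# the signals are (the regexes are deliberately crude).

import re

def style_fingerprint(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "oxford_commas": len(re.findall(r",\s+and\b", text)),
        "semicolons": text.count(";"),
        "british_our": len(re.findall(r"\b\w+our\b", text)),       # colour, flavour...
        "ize_endings": len(re.findall(r"\b\w+ize[sd]?\b", text)),  # organize(d)...
    }

print(style_fingerprint("I love the colour; it follows, I'd say, that we organize."))
```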