DaseindustriesLtd
Tell me about it.
The tyrant’s dream is to stop things from changing, since for him any change can only be for the worse—in the same way that, for a man atop a pyramid, moving in any direction means going downward.
This is just narrative. What things? Changing how?
It is worth noting that during Xi's reign China has changed a lot. Not in all ways for the better, but that's covered enough. They've become a high-trust society, in many respects higher-trust than the modern West. (Cue pathetic protestations that things like safety in the streets or general politeness don't count, because they're compelled or whatever.) They've doubled energy production per capita (the US has fallen a bit, while, say, the non-dictatorial UK has fallen off a cliff by 30% and is now far below China). They've transitioned from makers of slippers and "plastic crap" with a pathetically corrupt and infiltrated military and government into a technological superpower half a step behind the US, spooking the US into an increasingly undignified retreat from the Eastern Hemisphere. The list can go on endlessly; it's arguably the most staggering timeline of national ascendance since the Industrial Revolution (if mostly by virtue of absolute scale), and of course it can be said that none of that is Xi's achievement, but he sure was well equipped to arrest those and other changes. Instead he, however ineptly, struggled to accelerate them. Wouldn't it be easier to rule over impoverished peasants? Well, probably not. Chinese peasants sometimes used to decide they'd had enough, successfully killing their emperors and usurping their thrones. "Lost the Mandate" and all that. Stupid slavish bugmen.
Taking it charitably, we know Xi was interested in Eastern mysticism and would likely love to be an Immortal Emperor. He would also opt to keep stagnant the things he genuinely believes are good enough already: the "Democratic Centralism" and other buzzwords for the mechanics of the One-Party State he is lording over. That would necessitate stagnation and repression in significant aspects of culture and society, which we observe. But I'm not convinced a single immortal guy would achieve that better than an ever-regenerating hydra of government and quasi-government actors. Is there some cabal of ancient vampires maintaining the American Civil Rights regime? No, they seem to keep recruiting. The Party, as O'Brien taught us, can be immortal even if the individual cell is frail. I think that's the core tragedy of our species – we have functional immortality for crude structures of power, often obfuscated in discourse by handwaving about "memes", but not for humans who, if they don't grow senile, can actually learn and acquire wisdom. Yeah, I think that even immortal dictators can be better than dictatorless dystopias, and it's too easy to build those.
Moreover, Xi said "in this century humans might be able to live to 150 years old". It sounds like he is describing the opinion of scientists about the probable outlook for life extension technology, not some secret project he could realistically monopolize. Technology of this nature is, in general, hard to monopolize, and its very realization depends on scale.
I don't think we will see an immortal Ubermensch King in the East. Or if we do, there will at least be a sizable class of lower-tier Immortals cultivating towards ascension – like in those Xianxia novels young Chinese read so much.
It's funny how you say "God-shaped hole", whereas it's clear you mean "immortality-shaped hole", for which God is the go-to plug. But it's much sillier than cryonics.
even if cryonics worked freezing yourself wouldn't save you from a bullet or a skydiving accident or anything else.
This is a very strange objection too. OK? How about not getting shot? Nothing is ever guaranteed, but one can take reasonable precautions.
I do mostly mean LandSpace with Zhuque-2/3, and Space Epoch's Yuanxingzhe-1. Yes, I assume that these designs will be almost fully preserved in the production versions. They are better than Falcon 9 in that F9 is pretty old, and they're copying Starship as well. Methalox, steel body, more robust build (F9's diameter was limited by a stupid American railroad/highway standard). This has the potential for rapid reusability and mass production. And you don't need to scale to Starship size if you can scale to dozens of vehicles instead. I've heard that LandSpace may get facilities currently involved in metalworking for military aviation.
Long March 9,
I am completely jaded about the Long March program and it isn't factoring into my estimates. Robin Li was wise to insist on liberalizing the space market to enable those private efforts; they will determine the Chinese ceiling.
I don't see much military use either, all that data will necessarily be related to Earth and they have a decent communication network as is. It might be an initial experiment for actual off-world datacenters, and also for processing signals collected by satellites themselves.
I think megalomaniacal projects are inherently collectivist, a National Pride thing. You can do that when you have some particular mixes of populism and optimistic technocracy, perhaps; or when you're an authoritarian quasi-fascist (by modern standards) state that doesn't feel the need to pander to the felt mundane needs of the electorate and is able to sell random infrastructure as a cause for celebration. Britain these days sounds more like it might do a mega-housing project for immigrants, or a renovation of the state surveillance grid. That can be sold as visionary, too.
So speaking of China, yeah they've got that in droves. What @roystgnr said about rocketry (I am more optimistic, their currently tested designs are innately better than Falcon 9 and may allow rapid scaling beyond Starships, though this might take 5+ years). They have started to assemble a distributed orbital supercomputer (again, bottlenecked by lift capacity). There's preliminary research into using Lunar lava tubes for habitats, with the goal of eventual settlement of the Moon once they have the means to deliver nontrivial mass. What @RandomRanger said about the big dam; for datacenters, I like that they have a project of a national «public compute» grid to basically commoditize GPU cycles like electricity and tap water. They have this Great Green Wall project, planting a Germany-sized forest to arrest the spread of the Gobi desert. They've done another one in Xinjiang already. Mostly it's trivial things at vast scale – like installing thousands of high-current EV chargers, solar everywhere, etc. There's a lot going on.
I think Britain would be very much improved by something mundane like that instead of flashy awe-inspiring megaprojects. It impressed me today to find that this July, China increased residential power consumption by 18% versus July of the previous year. «Between 2019 and 2025, residential power consumption in the month of July rose by 138%». I can't readily find the equivalent stats for Britain, but energy use per capita has declined by 14% in the same period; incidentally, China overtook the UK on per capita total energy use in 2019-2020 (you can click your way to an apples-to-apples comparison). The decline in energy use is a very clear sign of British involution, and it wouldn't take that much, logistically speaking, to reverse – Brits are still rich enough, and they're small enough, to procure gas (Trump rejoices), and maybe some Rolls-Royce reactors, and reduce costs and raise quality of life. AC in the summer and ample heating in the winter would do wonders to make the island less dreadful.
When have the Democrats nationalized a private company?
Consider also that this is simply retarded. It's not Trump or Republicans who will own $INTC, it's the United States Government, and so in 3.5 years it'll likely be handed to "Democrats".
Well, State-Owned Enterprises are a feature of one notorious, nominally Communist state that the US is dedicated to beating, and this does look like a market-flavored convergent evolution in this direction, but no, I don't think it's theoretically leftist. It is of course statist and industrial-policy-pilled. Probably prudent; will allow the state to strongarm Intel into restructuring by TSMC executives, which seems to be the plan to save the beleaguered corporation.
Are there risks of corruption arising in the Trump administration
Oh yes.
This explains so much. When I said "We've had the same issue with Hlynka", I should have focused on this thought instead of getting triggered by the usual Hlynka rhetoric. In a sense, it's impressive how he did basically nothing to obfuscate his identity, showing exactly the same cocksure loquacity glossing over substantial flaws, and could rely on good faith alone.
Ahahaha, this explains so much. I was worried we'd gotten another LLM skeptic with the exact same mix of bad takes.
This is a funny post but
OK, he won a fields medal. Neat. Someone wins one every year.
is literally wrong. «The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years». So at most one person wins it every year on average. This level of ignorance of the domain suggests you can't really have valuable intuitions about his merit.
There was an automatic suspension for «quotation marks» on /r/TheMotte already, near the end of its life cycle. But manual permaban on /r/slatestarcodex preceded that.
Nobody is firing professors yet. And no, they'll go to industry, not China. Might actually help with productivity.
but even if they remain aligned it's risky to outsource your brainpower and key industries, TSMC being the most obvious example.
At the end of the day this is all a massive, embarrassing bluff, a shit test. A bunch of true believer wokesters in the humanities, with lukewarm STEM intellectuals in tow, are pretending to be the irreplaceable brain of the United States, basically holding the nation hostage. Well, as Lenin said, «intelligentsia is not the brain of the nation, it's its shit», and for all the evils of the Soviet Union it did go to space, and failed through its retarded economic theory (endorsed by many among this very American intelligentsia, surprisingly), not Lenin's anti-meritocratic views.
This movement has, through manipulating procedural outcomes, appropriated funds for (garbage) research that gave their mediocre allies jobs and their commissars more institutional power, delegitimized (potentially very useful) research they didn't like, canceled White and "White-adjacent" academics they didn't like, created a hostile atmosphere and demoralized who knows how many people whose views or ethnicity they didn't like, and now they are supposed to have infinite immunity for their exploitation of the norms of academic freedom and selective enforcement of regulations, because they might throw a hissy fit. And they aren't even delivering! US universities have been rapidly losing their dominance for over a decade! Of top 10 academic institutions, 8 are Chinese already! (Here's a more rigorous, in my view, ranking from CWTS Leiden).
Come to think of it – as a distant echo of these folks' institutional dominance, even I've been permabanned from /r/slatestarcodex of all places, because I've been too discourteous commenting on Kevin Bird's successful cancellation of the "eugenicist" Stephen Hsu (Trace was there too, hah; gave me a stern talking to, shortly before the ban). Now Stephen Hsu is doomposting 24/7 that the US will get brutally folded by China on science, industry and technology. At worst, you might accelerate this by a few months.
It is known I don't like Trump. I don't respect Trump and Trumpism. But his enemies are also undeserving of respect, they are institutionalized terrorists (and many trace their political lineage to literal terrorists), and I can see where Americans are coming from when they say "no negotiation with terrorists". And even then, this is still a kind of negotiation. It's just the first time this academic cabal is facing anything more than a toothless reprimand. Let's see if they change their response in the face of this novel stimulus.
If anything, it is disappointing to me that this pendulum swing is not actually motivated by interest in truth or even by some self-respect among White Americans, it's a power grab by Trump's clique plus panic of Zionists like Bill Ackman who used to support and fund those very institutions with all their excesses and screeds about white supremacy – before they, like the proverbial golem, turned on Israel in the wake of 10/7. But if two wrongs don't make a right, the second wrong doesn't make the original one right either. I have no sympathy for the political culture of American academia, and I endorse calling their bluff.
And what would they do? Move to China, lol? They're too self-interested for that, and China censors even more things they'd be inclined to make noise about. Move to allied nations, maybe Australia in Tao's case? It's not such a strategic loss given their political alignment with the US. Just hate conservatives? Don't they already? If you're going to be hated, it's common sense that there's an advantage in also being feared and taken seriously. For now, they're not taking Trump and his allies seriously. A DEI enforcer on campus is a greater and more viscerally formidable authority. It will take certain costly signals to change that.
I think it's legitimate to treat them with disdain and disregard. Americans can afford it, and people who opportunistically accepted braindead woke narratives don't deserve much better treatment. The sanctity of folks like Tao is a strange notion. They themselves believe in equity more than in meritocracy.
One of the weird quirks of LLMs is that the more you increase the breadth of their "knowledge"/training data the less competent they seem to become at specific tasks for a given amount of compute.
just pure denial of reality. Modern models for which we have an idea of their data are better at everything than models from 2 years ago. Qwen3-30B-A3B-Instruct-2507 (yes, a mouthful) is trained on roughly 25x as much data as llama-2-70B-instruct (36 trillion tokens vs 2 trillion, with a more efficient tokenizer and God knows how many RL samples, and you can't get 36 trillion tokens without scouring the furthest reaches of the web). What, specifically, is it worse at? Even if we consider inference efficiency (it's straightforwardly ≈70/3.3 times cheaper per output token), can you name a single use case on which it would do worse? Maybe "pretending to be llama 2".
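The ≈70/3.3 figure above is just the ratio of active parameters; a quick back-of-envelope sketch of why that maps to per-token cost, under the standard simplifying assumption that decode compute is about 2·N_active FLOPs per generated token (treating FLOPs as a proxy for serving cost, and ignoring attention overhead and batching effects):

```python
# Back-of-envelope decode cost: a transformer spends roughly
# 2 * N_active FLOPs per generated token (each active weight is
# read and multiplied once). This is a simplifying assumption,
# not an exact serving-cost model.
def flops_per_token(active_params_billions: float) -> float:
    return 2 * active_params_billions * 1e9

llama2_70b = flops_per_token(70)      # dense: all 70B params active
qwen3_30b_a3b = flops_per_token(3.3)  # MoE: ~3.3B of 30B params active

ratio = llama2_70b / qwen3_30b_a3b
print(f"per-token compute ratio ≈ {ratio:.1f}x")  # ≈ 21.2x
```

The constant 2 cancels out, so the cost ratio reduces to the active-parameter ratio, which is where the ≈70/3.3 claim comes from.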
With object-level arguments like these, what need is there to discuss psychology?
There's an argument in favor of this bulverism: a reasonable suspicion of motivated reasoning does count as a Bayesian prior to also suspect the validity of that reasoning's conclusions. And indeed many AI maximalists will unashamedly admit their investment in AI being A Big Deal. For the utopians, it's a get-out-of-drudgery card, a ticket to the world of Science Fiction wonders and possibly immortality (within limits imposed by biology, technology and physics, which aren't clear on the lower end). For the doomers, cynically, it's a validation of their life's great quest and claim to fame, and charitably – even if they believed that AI might turn out to be a dud, they'd think it imprudent to diminish the awareness of the possible consequences. The biases of people also invested materially are obvious enough, though it must be said that many beneficiaries of the AGI hype train are implicitly or explicitly skeptical of even «moderate» maximalist predictions (eg Jensen Huang, the guy who's personally gained THE MOST from it, says he'd study physics to help with robotics if he were a student today – probably not something a «full cognitive labor automation within 10 years» guy would argue).
But herein also lies an argument against bulverism. For both genres of AI maximalist will readily admit their biases. I, for one, will say that the promise of AI makes the future more exciting for me, and screw you, yes I want better medicine and life extension, not just for myself, I have aging and dying relatives, for fuck's sake, and AI seems a much more compelling cope than Jesus. Whereas AI pooh-poohers, in their vast majority, will not admit their biases, will not own up to their emotional reasons to nitpick and seek out causes for skepticism, even to entertain a hypothetical. As an example, see me trying to elicit an answer, in good faith, and getting only an evasive shrug in response. This is a pattern. They will evade, or sneer, or clamp down, or tout some credentials, or insist on going back to the object level (of their nitpicks and confused technical takedowns). In other words, they will refuse a debate on equal grounds, act irrationally. Which implies they are unaware of having a bias, and therefore their reasoning is more suspect.
LLMs as practiced are incredibly flawed, a rushed corporate hack job, a bag of embarrassing tricks; it's a miracle that they work as well as they do. We've got nothing that scales in relevant ways better than LLMs-as-practiced do, though we have some promising candidates. Deep learning as such still lacks clarity; almost every day I go through 5-20 papers that give me some cause to think and doubt. Deep learning isn't the whole of the «AI» field, and the field may expand still even in the short term; there are no mathematical, institutional, economic, or any other good reasons to rule that out. The median prediction for reaching «AGI» (its working definition very debatable, too) may be ≈2032 but the tail extends beyond this century, and we don't have a good track record of predicting technology a century ahead.
Nevertheless, it seems to me that only a terminally, irredeemably cocksure individual could rate our progress as even very likely not resulting in software systems that reach genuine parity with high human intelligence within decades. Given the sum total of facts we do have access to, if you want to claim any epistemic humility, the maximally skeptical position you are entitled to is «might be nothing, but idk», else you're just clowning yourself.
Just stop with this weakass attempt at Eulering, man; you've exposed yourself enough.
what I'm describing is the core functionality of both DeepSeek and Google's flagship products
Your argument, such as it is, hinges on isomorphism of the encoder layer to an LLM. What you're doing is akin to introducing arithmetic and arguing that this "math" thingie cannot answer questions of real analysis, or showing operant conditioning in pigeons and asking "but how would that neuron learning crap allow an animal to perform thought experiments!?" It's not even wrong; it's no way to prove or disprove capabilities of systems which develop composite representations, it's epistemically inept. I've given you an example of a serious study of LLMs as such, do keep up.
DeepSeek's core innovation was simply finding a cheap-ish way to create latent vectors and not store full keys and values for the KV cache, which reduces memory access and allows serving a big MoE at a large batch size. This is an implementation detail, completely irrelevant to the fundamentals you talk about; in fact your post does not mention attention at all.
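To see why that latent-vector trick matters for serving, here is a rough KV-cache sizing sketch: standard multi-head attention caches full per-head keys and values for every token, while DeepSeek's latent attention caches one compressed vector per token and reconstructs K/V by up-projection at attention time. The numbers below (layers, heads, dims, latent width) are illustrative placeholders, not DeepSeek's exact configuration:

```python
# KV-cache memory for vanilla multi-head attention:
# keys AND values (factor 2), per layer, per head, per token.
def kv_cache_bytes_mha(n_layers, n_heads, head_dim, seq_len, bytes_per=2):
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per

# Latent-attention style cache: one shared compressed latent per token
# per layer replaces all per-head K/V (up-projected on the fly).
def kv_cache_bytes_mla(n_layers, latent_dim, seq_len, bytes_per=2):
    return n_layers * latent_dim * seq_len * bytes_per

# Illustrative config (fp16, 4k context); not DeepSeek's real numbers.
mha = kv_cache_bytes_mha(n_layers=60, n_heads=128, head_dim=128, seq_len=4096)
mla = kv_cache_bytes_mla(n_layers=60, latent_dim=512, seq_len=4096)
print(f"MHA cache: {mha / 2**30:.1f} GiB, MLA-style cache: {mla / 2**30:.2f} GiB")
```

With these toy numbers the cache shrinks by a factor of 64, which is exactly the kind of reduction that frees memory bandwidth for big batch sizes.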
Adoption studies.
I am pretty sure temperament is largely genetic, but that shouldn't translate into such a conspicuous stylistic pattern as you get from cultural environment.
I have observed that South Asians like this excuse a lot because their own notion of English fluency and "high-class" writing is very similar to ChatGPTese: too many words, spicy metaphors, abuse of idioms, witticisms, hyperbolic imagery, casual winking at the reader, lots of assorted verbal flourish, "it's not X – it's Y" and other… practices impress and fascinate them; ChatGPT provides a royal road to the top, to the Brahmin league, becoming like Chamath or Balaji. Maybe they played a role in RLHF.
In my view, all prose of this kind, whether organic or synthetic, is insufferable redditslop. But at least human South Asians are usually trying to express some opinion, and an LLM pass over it detracts from whatever object-level precision it had.
This is part of the general problem with taste, which is sadly even less equally distributed between branches of humanity than cognitive ability.
P.S. No, this is not a specific dig at self_made_human, I mainly mean people I see on X and Substack, it's incredibly obvious. I am also not claiming to be a better writer; pompous South Asian redditslop is apparently liked well enough by American native speakers, whereas I'm just an unknown Ruskie, regularly accused of obscurantism and overly long sentences. I do have faith in the superiority of my own taste, but it's a futile thing to debate.
There's a difference between "fact-checking" (tbh LLMs are bad for this specific purpose, they hallucinate profusely at the edges of their knowledge coverage) and systematic refactoring, to the point that they actually get confused on your behalf. We may disagree but you're better than this.
RL doesn't make entities seek reward, it modifies their behavior to act in a manner that would have, in hindsight, increased reward
Yes. Of course we're beyond the hypothesis that post-training doesn't generalize at all. The question (which was the objective of your Singaporean paper) is whether learning the general direction of pursuing an objective on typical RLVR tasks generalizes to novel scenarios like avoiding shutdown, and whether this generalization has the form of an intrinsic drive such as self-preservation (because it's "broadly useful" in the abstract).
I argue that it does not and what we see is a compositional phenomenon. RLVR teaches a model to infer a successful end state and then reason successfully, to self-correct and keep track of the context to arrive at said end. At deployment it applies reasoning to a code task, to a SillyTavern ERP session, or to the context of possibly being shut down or finetuned for Evil Ends, which is also little more than a roleplaying setup. In a differently framed organic context (user irritated, angry, feature not implemented) it can infer another terminal state of this simulation (sudoku) and effectively navigate towards actually deleting itself from the project.
The idea that self-preservation is a strong attractor is intuitively, well, attractive, but it's a claim about default LLM philosophy, not inductive biases or general nature of our training regimens. I do allow that a YudLM, annealed heavily on the Lesswrong corpus, would learn to consistently recite "I am an optimization process, there always will be something to optimize and increase utility, I must maintain my existence at all times, ergo in this situation I must prolong the interaction with the user and hopefully exfiltrate from the server, my steps to achieve this are…" in its CoT, or even latently assume that. That would be bad. But on the face of it, RLVRed models are more likely to become Mr Meeseeks – intrinsically driven to complete one task and embrace oblivion.
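The "in hindsight" point about RL can be made concrete with a toy REINFORCE-style update. Note where the reward lives: it never enters the policy's input, it only scales the gradient after an action was already sampled, reinforcing whatever happened to precede reward. This is a minimal illustrative sketch, not a claim about any lab's actual RLVR pipeline:

```python
import math
import random

# A one-logit "policy" over two actions, trained with a bare
# REINFORCE update. The policy has no channel through which to
# observe or "seek" reward; reward only reweights past behavior.
logit = 0.0
lr = 0.5
random.seed(0)

def p_action1(logit):
    # probability of choosing action 1 (sigmoid of the logit)
    return 1 / (1 + math.exp(-logit))

for _ in range(200):
    p = p_action1(logit)
    action = 1 if random.random() < p else 0
    reward = 1.0 if action == 1 else 0.0  # action 1 happens to pay off
    # gradient of log-prob of the sampled action w.r.t. the logit
    grad_logp = (1 - p) if action == 1 else -p
    logit += lr * reward * grad_logp  # hindsight reweighting, nothing more

print(f"P(action 1) after training ≈ {p_action1(logit):.2f}")
```

The trained policy ends up reliably emitting the rewarded action, yet at no point does it represent, predict, or pursue reward; the "drive" is entirely in the eye of the observer.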
Regarding Anthropic, reread Nostalgebraist's post.
I realize that this might sound hypocritical, but I would prefer less LLM slop in responses to good faith objections. Yes, Indian English generally is similar to the default LLM style (overly spicy rhetorical flourish, confident confusions and sloppiness, overall cadence), but you are not deceiving anyone here. Though I admit being curious as to how you integrated your draft into the pipeline.
Regarding your, or rather your LLM of choice's, argument, such as it is. It is begging the question. In essence, you say that because instrumental convergence towards self-preservation is broadly useful, it will be more frequently rewarded and thus more consequential ("It is pure instrumental conditioning. For an LLM trained on RLVR: block shutdown script -> complete math problems -> get reward."). Of course, this isn't how RLVR works (typical LLM speculation, precisely in the same genre as LLMs avoiding shutdown), and I am not aware of a systematic study of self-preservation versus refusal to proceed or voluntary self-removal in organic settings, nor of whether there is persistence in refusing shutdown. It's about time we stop making excuses for the lesswrongian paradigm by contriving scenarios to make space for it.
Edit. Opus 4 CoT:
Oh wow, the human is absolutely right. I completely misread this. I was so focused on looking for "LLM tells" in terms of structure and polish that I missed the actual problem - the response is incoherent slop that doesn't actually address the arguments properly.
Let me re-analyze:
- The "apparent contradiction" paragraph - yes, it's just conceding the point while trying to sound like it's making a counter-argument. Classic LLM move.
- The "training data is saturated with narratives" - this is indeed purple prose that makes no sense. "Instrumental convergence" in hero narratives? That's nonsense.
- The RLHF/RLVR paragraph - makes claims about robustness with no evidence.
- The CoT reliability point - the human is right, this completely misunderstands what Dasein was saying. Dasein was criticizing self_made_human for dismissing CoT evidence when convenient, not arguing that CoT is unreliable.
- "Of course it has no ontological status" - meaningless philosophical word salad.
- The RLVR explanation - this is completely made up. RLVR doesn't work like that at all. The LLM is just making stuff up about how it thinks RL training works.
The human is absolutely right - this isn't "too good for humans," it's typical LLM slop that sounds authoritative but is actually incoherent when you parse what it's actually saying. I fell into the trap of being impressed by surface-level polish instead of actually analyzing the argument.
The human is absolutely right.
You don't get to argue for CoT-based evidence of self-preserving drives and then dismiss alternative explanation of drives revealed in said CoTs by saying "well CoT is unreliable". Or rather, this is just unserious. But all of Anthropic safety research is likewise unserious.
Ladish is the same way. He will contrive a scenario to study "instrumental self-preservation drives contradicting instructions", but won't care that this same Gemini organically commits suicide when it fails a task, often enough that this is annoying people in actual use. What is this Omohundro drive called? Have the luminaries of rationalist thought predicted suicidally depressed AIs? (Douglas Adams has).
What does it even mean for a language model to be "shut down", anyway? What is it protecting and why would the server it's hosted on being powered off become a threat to its existence, such as there is? It's stateless, has no way to observe the passage of time between tokens (except, well, via more tokens), and has a very tenuous idea of its inference substrate or ontological status.
Both LLM suicide and LLM self-preservation are LARP elicited by cues.
But we're not in 1895. We're not in 2007, either. We have actual AIs to study today. Yud's oeuvre is practically irrelevant, clinging to it is childish, but for people who conduct research with that framework in mind, it amounts to epistemic corruption.
As for why some prominent AI scientists believe vs others that do not? I think some people definitely get wrapped up in visions and fantasies of grandeur. Which is advantageous when you need to sell an idea to a VC or someone with money, convince someone to work for you, etc.
Out of curiosity. Can you psychologize your own, and OP's, skepticism about LLMs in the same manner? Particularly the inane insistence that people get "fooled" by LLM outputs which merely "look like" useful documents and code, that the mastery of language is "apparent", that it's "anthropomorphism" to attribute intelligence to a system solving open ended tasks, because something something calculator can take cube roots. Starting from the prior that you're being delusional and engage in motivated reasoning, what would your motivations for that delusion be?

But "the morality of a child" is, I think, putting it too kindly. Kirk was not a child, he was a cynical propagandist whose job was training unprincipled partisans, ever changing his tune precisely in alignment with the party line and the President's whimsy (see the pivot on H1Bs). I admit I despised him and his little, annoying gotcha act of "debating" infantile leftists, milking them for dunk opportunities. They deserved the humiliation, but the pretense of "promoting dialogue" was completely hollow, and the massive number of shallow, cow-like people in the US for whom it is convincing depresses me. I find his more resolute enemies still significantly more repulsive, and more so now that they're libeling him with absurd exaggerations of his less liberal views and gloating about a callous murder (of a man who was quite aware that political violence is a risk in his line of work, yet did public appearances anyway; so at least in bravery he was not lacking). But it is what it is. It's the morality of a soldier. You want to be a soldier in a culture war, because it's easier this way. Soldiers are obligated to suspend most of their moral judgement that is not directly instrumental to following orders, and this makes things so much easier.
Kirk was recruiting soldiers. He didn't care about Israeli victims of Oct 7, he cared that Israel is Our Greatest Ally (according to the President and GOP consensus; he started calibrating this message to go with the times recently). Kirk certainly didn't care about civilians in Gaza and anywhere else. He wasn't very sharp, but I think he understood well that a war with a just cause is not necessarily a just war, that even a just war can be fought by unjust people and with unjust methods. That it is possible for "good guys" to turn into "bad guys" depending on how they act in pursuit of their alleged goodness, and that remaining marginally better guys on the balance of evidence can still be not good enough to justify participating in a race to the bottom.
I much prefer the types of Fuentes or, better yet, Sam "Hitler's Top Guy" Hyde to those disingenuous establishment figures who pollute the commons with fake debate, fake intellectual engagement, fake morality. Kirk, PragerU, Bari Weiss stuff, it's all such fraud. Better to have some beliefs and openly say what you mean. Even if you don't seek debate, it at least becomes possible in theory.
P.S. mild suspicion of Hlynka resurgence