DaseindustriesLtd

late version of a small language model

74 followers   follows 27 users   joined 2022 September 05 23:03:02 UTC

Tell me about it.

User ID: 745

I think it's time to replicate with new generation of models.

Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.

I know. This was a completely different America; it's like saying that Moscow was once conquered by Poles or something (Russians are very proud of that episode, thanks to propaganda in history lessons, but obviously there is no memory, institutional legacy or military tradition that survived) – a dim fact people learn in school. The America that lives today was born in the Civil War and was fully formed in McKinley's era, probably. Since then, it's been straight up dunking on weaker powers. With some tasteless underdog posturing from time to time, of course.

Regardless of whether transformers are a dead end or not, the current approach isn't doing new science or algo design. It's throwing more and more compute at the problem

Fetishizing algorithmic design is, I think, a sign of mediocre understanding of ML, being enthralled by cleverness. Data engineering carves more interesting structure into weights.

With a few weeks of space between the initial marketing hype and observation, Deepseek seems to be most notable for (a) claiming to have taken less money to develop (which is unclear given the nature of Chinese subsidies), (b) being built off of other tech (which helps explain (a)), and (c) being relatively cheap (which is partially explained by (a)).

Man, you're really committed to the bit of an old spook who disdains inspecting the object level because you can't trust nuthin' out of Red Choyna. I could just go watch this guy or his pal instead.

It wouldn't be an exaggeration to say that the technical ML community has acknowledged DeepSeek as the most impressive lab currently active, pound for pound. What subsidies? There is nothing to subsidize, except perhaps sharashkas with hidden geniuses.

You're losing the plot, SS. Why quote a passage fundamentally challenging the belief in OpenAI's innovation track record to rant about choices made with regard to alignment to specific cultural narratives? And “Chinese are too uncreative to do ideological propaganda, that's why DeepSeek doesn't have its own political bent?” That's quite a take. But whatever.

Strange argument. That's still hundreds of millions more young people than in the US. They don't dissolve in the shadow of inverted population pyramid, they simply get to solve the problem of elderly care on top of having a productive economy to run.

And all this happens within one "generation" anyway.

When have you last been there and in what city? This was like watching Serpentza's sneering at Unitree robots back to back with Unitree's own demos and Western experiments using these bots.

Buses broke down, parts of my quite expensive apartment fell off, litter and human feces were everywhere

I simply call bullshit on it as of 2025 for any 1st tier city. My friends also travel there and work there, as do they travel to and live and work in the US. They report that straight from the gate in JFK, US cities look dilapidated, indeed littered with human feces (which I am inclined to trust due to your massive, easily observable and constantly lamented feral homeless underclass) and of course regular litter, squalid, there is a clear difference in the condition of infrastructure and the apparent level of human capital. I can compare innumerable street walk videos between China and the US, and I see that you guys don't have an edge. I do not believe it's just cherrypicking, the scale of evidence is too massive. Do you not notice it?

And I have noticed that Americans can simply lie about the most basic things to malign the competition, brazenly so, clearly fabricating «personal evidence» or cleverly stitching together pieces of data across decades, and with increasingly desperate racist undertones. Now that your elected leadership looks Middle Eastern in attitude, full of chutzpah, and is unapologetically gaslighting the entire world with its «critical trade theory», I assume that the rot goes from top to bottom and you people cannot be taken at your word any more than the Chinese or Russians or Indians can be (incidentally, your Elite Human Capital Indians, at Stanford, steal Chinese research and rebrand it as their own). Regardless, @aqouta's recent trip and comments paint a picture not very matching yours.

I think that if they were truly crushing America in AI, they would be hiding that fact

They are not currently crushing the US in AI, those are my observations. They don't believe they are, and «they» is an inherently sloppy framing, there are individual companies with vastly less capital than US ones, competing among themselves.

When the Deepseek news came out about it costing 95% less to train, my bullshit detectors went off. Who could verify their actual costs? Oh, only other Chinese people. Hmm, okay.

This is supremely pathetic and undermines your entire rant, exposing you as an incurious buffoon. You are wrong; we can estimate the costs simply from tokens × activated params. The only way they could have cheated would be to use many more tokens, but procuring much more quality data than the reported 15T – a modal figure for both Western and Eastern competitors on the open-source frontier, from Alibaba to Google to Meta – would in itself be a major pain. So the costs are in that ballpark; indeed, the implied utilization of the reported hardware (2048 H800s) turns out to even be on the low side. This is the consensus of every technical person in the field, no matter the race or side of the Pacific.
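The back-of-envelope math here is not mysterious; anyone can redo it. A minimal sketch using the standard approximation (training FLOPs ≈ 6 × activated parameters × tokens) and DeepSeek's publicly reported figures for V3. The effective per-GPU throughput and the $/GPU-hour rate below are my assumptions, not official specs:

```python
# Sanity-check of an LLM pretraining budget from public figures alone.
# Reported for DeepSeek-V3: ~37B activated params per token, ~14.8T tokens,
# 2048 H800s. Effective sustained throughput and rental price are assumed.

activated_params = 37e9        # reported activated params per token
tokens = 14.8e12               # reported pretraining tokens
flops = 6 * activated_params * tokens   # standard 6*N*D approximation

effective_flops = 330e12       # ASSUMED sustained FLOP/s per H800 (MFU-adjusted)
gpu_hours = flops / effective_flops / 3600

price_per_gpu_hour = 2.0       # ASSUMED rental-equivalent $/H800-hour
cost_usd = gpu_hours * price_per_gpu_hour

print(f"{flops:.2e} FLOPs, {gpu_hours/1e6:.2f}M GPU-hours, ~${cost_usd/1e6:.1f}M")
# → 3.29e+24 FLOPs, 2.77M GPU-hours, ~$5.5M
```

Under these assumptions the result lands right around the widely reported ~2.8M H800-hours and ~$5.6M figure, which is the point: the claim is checkable from the model card, no trust in anyone's accounting required.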

They've opensourced most of their infra stack on top of the model itself, to advance the community and further dispel these doubts. DeepSeek's RL pipeline is by now obsolete, with many verifiable experiments showing it was still full of slack, as we'd expect from a small team rapidly doing a good-enough job.

The real issue is that the US companies have been maintaining the impression that their production costs and overall R&D are so high that it justifies tens or hundreds of billions in funding. When R1 forced their hand, they started talking about how it's actually "on trend" and their own models don't cost that much more, or if they do, it's because they're so far ahead that they finished training like a year ago, with less mature algorithms! Or, in any case, that they don't have to optimize, because ain't nobody got time for that!

But sarcasm aside it's very probable that Google is currently above this training efficiency, plus they have more and better hardware.

Meta, meanwhile, is behind. They were behind when V3 came out, they panicked and tried to catch up, they remained behind. Do you understand that people can actually see what you guys are doing? Like, look at configs, benchmark it? Meta's Llama 4, which Zuck was touting as a bid for the frontier, is architecturally 1 generation behind V3, and they deployed a version optimized for human preference on LMArena to game the metrics, which turned into insane embarrassment when people found out how much worse the general-purpose model performs in real use, to the point that people are now leaving Meta and specifying they had nothing to do with the project (rumors of what happened are Soviet tier). You're Potemkining hard too, with your trillion-dollar juggernauts employing tens of thousands of (ostensibly) the world's best and brightest.

Original post is in Chinese that can be found here. Please take the following with a grain of salt. Content: Despite repeated training efforts, the internal model's performance still falls short of open-source SOTA benchmarks, lagging significantly behind. Company leadership suggested blending test sets from various benchmarks during the post-training process, aiming to meet the targets across various metrics and produce a "presentable" result. Failure to achieve this goal by the end-of-April deadline would lead to dire consequences. Following yesterday’s release of Llama 4, many users on X and Reddit have already reported extremely poor real-world test results. As someone currently in academia, I find this approach utterly unacceptable. Consequently, I have submitted my resignation and explicitly requested that my name be excluded from the technical report of Llama 4. Notably, the VP of AI at Meta also resigned for similar reasons.

This is unverified but rings true to me.

Grok 3, Sonnet 3.7 also have failed to convincingly surpass DeepSeek, for all the boasts about massive GPU numbers. It's not that the US is bad at AI, but your corporate culture, in this domain at least, seems to be.

But if Chinese research is so superior, why aren't Western AI companies falling over themselves to attract Chinese AI researchers?

How much harder do you want them to do it? 38% of your top quintile AI researchers came straight from China in 2022. I think around 50% are ethnically Chinese by this point, there are entire teams where speaking Mandarin is mandatory.
Between 2019 and 2022, «Leading countries where top-tier AI researchers (top 20%) work» went from 11% China to 28%; «Leading countries where the most elite AI researchers work (top 2%)» went from ≈0% China to 12%; and «Leading countries of origin of the most elite AI researchers» went from 10% China (behind India's 12%) to 26%. Tsinghua went from #9 to #3 in institutions, now only behind Stanford and Google (MIT, right behind Tsinghua, is heavily Chinese). Extrapolate if you will. I think they'll crack #2 or #1 in 2026. Things change very fast, not linearly, it's not so much «China is gradually getting better» as installed capacity coming online.

It's just becoming harder to recruit. The brain drain is slowing in proportional terms, even if it holds steady in absolute numbers due to ballooning number of graduates: the wealth gap is not so acute now considering costs of living, coastal China is becoming a nicer place to live in, and for top talent, more intellectually stimulating as there's plenty of similarly educated people to work with. The turn to racist chimping and kanging both by the plebeians since COVID and by this specific administration is very unnerving and potentially existentially threatening to your companies. Google's DeepMind VP of research left for ByteDance this February, and by now his team in ByteDance is flexing a model that is similar but improves on DeepSeek's R1 paradigm (BD was getting there but he probably accelerated them). This kind of stuff has happened before.

many Western countries are still much nicer places to live than all but the absolute richest areas of China

Sure, the West is more comfortable, even poor-ish places can be paradisiacal. But you're not going to move to Montenegro if you have the ambition to do great things. You'll be choosing between Shenzhen and San Francisco. Where do you gather there's more human feces to step into?

But as I said before in the post you linked, Chinese mind games and information warfare are simply on a different level than that of the more candid and credulous Westerner

There is something to credulousness, as I've consistently been saying Hajnalis are too trusting and innocently childlike. But your nation is not a Hajnali nation, and your people are increasingly draught horses in its organization rather than thought leaders. You're like the kids in King's story of how he first learned dread:

We sat there in our seats like dummies, staring at the manager. He looked nervous and sallow – or perhaps that was only the footlights. We sat wondering what sort of catastrophe could have caused him to stop the movie just as it was reaching that apotheosis of all Saturday matinee shows, "the good part." And the way his voice trembled when he spoke did not add to anyone's sense of well-being.
"I want to tell you," he said in that trembly voice, "that the Russians have put a space satellite into orbit around the earth. They call it . . . Spootnik." We were the kids who grew up on Captain Video and Terry and the Pirates. We were the kids who had seen Combat Casey kick the teeth out of North Korean gooks without number in the comic books. We were the kids who saw Richard Carlson catch thousands of dirty Commie spies in I Led Three Lives. We were the kids who had ponied up a quarter apiece to watch Hugh Marlowe in Earth vs. the Flying Saucers and got this piece of upsetting news as a kind of nasty bonus.
I remember this very clearly: cutting through that awful dead silence came one shrill voice, whether that of a boy or a girl I do not know; a voice that was near tears but that was also full of a frightening anger: "Oh, go show the movie, you liar!”

I think Americans might well compete with North Koreans, Israelis and Arabs in the degree of being brainwashed about their national and racial superiority (a much easier task when you are a real superpower, to be fair), to the point I am now inclined to dismiss your first hand accounts as fanciful interpretations of reality if not outright hallucinations. Your national business model has become chutzpah and gaslighting, culminating in Miran's attempt to sell the national debt as «global public goods». You don't have a leg to stand on when accusing China of fraud. Sorry, that era is over, I'll go back to reading papers.

I am not sure how to answer. Sources for model scales, training times and budgets come partly from official information in tech reports, partly from rumors and insider leaks, and partly from interpolation and extrapolation from features like inference speed, pricing and rate limits, the limits of known hardware, and SOTA in more transparent systems plus the delta to frontier ones. See here for values from a credible organization.

$100M of compute is a useful measure of companies' confidence in returns on a given project, and moreover in their technical stack. You can't just burn $100M and have a model, it'll take months, and it practically never makes sense to train for more than, say, 6 months, because things improve too quickly and you finish training just in time to see a better architecture/data/optimized hardware exceed your performance at a lower cost. So before major releases people spend compute on experiments validating hypotheses and on inference, collect data for post-training, and amass more compute for a short sprint. Thus, “1 year” is ludicrous.
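A rough sketch of why a $100M run is a sprint, not a year: pick a plausible cluster and all-in hourly rate and see how fast the budget burns. Cluster size and price here are illustrative assumptions, not figures from any lab:

```python
# How long does it take a large cluster to spend a $100M training budget?
# All numbers below are illustrative assumptions.

budget_usd = 100e6
gpus = 32_768                  # ASSUMED H100-class cluster size
price_per_gpu_hour = 3.0       # ASSUMED all-in $/GPU-hour

hours = budget_usd / (gpus * price_per_gpu_hour)
months = hours / (24 * 30)

print(f"{months:.1f} months")  # → 1.4 months
```

At that scale the money is gone in roughly six weeks of wall-clock time, which is why the actual calendar is dominated by experiments, data collection and capacity accumulation before the final short training run.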

Before reasoning models, post-training was a rounding error in compute costs, even now it's probably <40%. Pre-deployment testing depends on company policy/ideology, but much heavier in human labor time than in compute time.

Russians are proud of the episode in its fullness, not just the part where the Kremlin gets occupied but also the part where it's liberated, of course. I could have phrased this better but whatever.

I see you took this pretty personally.

All I have to say is that top AI research companies (not ScaleAI) are already doing data engineering (expansively understood to include training signal source) and this is the most well-guarded part of the stack, everything else they share more willingly. Data curation, curricula, and yes, human annotation are a giant chunk of what they do. I've seen Anthropic RLHF data, it's very labor intensive and it instantly becomes clear why Sonnet is so much better than its competitors.

They clearly enjoy designing "algos", and the world clearly respects them greatly for that expertise.

Really glad for them and the world.

Past glory is no evidence of current correctness, however. LeCun with his «AR-LLMs suck» has made himself a lolcow, so has Schmidhuber. Hochreiter has spent the last few years trying to one-up the Transformer and fell to the usual «untuned baseline» issue, miserably. Meta keeps churning out papers on architectures; they got spooked by DeepSeek V3, whose architecture section opens with «The basic architecture of DeepSeek-V3 is still within the Transformer (Vaswani et al., 2017) framework», and decided to rework the whole Llama 4 stack. Tri Dao did incredibly hard work with Mamba 1/2, and where is Mamba? In models that fall apart on any long-context eval more rigorous than NIAH. Google published Griffin/Hawk because it's not valuable enough to hide. What has Hinton done recently, Forward-Forward? Friston tried his hand at this with EBMs and seems to have degraded into pure grift. Shazeer's last works are just «transformers but less attention», and it works fine. What's Goodfellow up to? More fundamental architecture search is becoming the domain of mentally ill 17yo twitter anons.

The most significant real advances in it are driven by what you also condescendingly dismiss – «low-level Cuda compiler writing and server orchestration», or rather hardware-aware Transformer redesigns for greater scalability and unit economics, see DeepSeek's NSA paper.

This Transformer is just a paltry, fetish, "algo".

Transformer training is easy to parallelize and it's expressive enough. Incentives to find anything substantially better increase by OOM year on year, so does the compute and labor spent on it, to no discernible result. I think it's time to let go of faulty analogies and accept the most likely reality.

Okay. I give up.

I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it was a discussion platform, and so I came here to discuss what I find interesting, and illustrated my points.

Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.

Well, it'll be more costly, but it'll be possible.

HONG KONG/SHANGHAI/LOS ANGELES, May 13 (Reuters) - The U.S. will cut the "de minimis" tariff for low-value shipments from China to as low as 30%, according to a White House executive order and industry experts, further de-escalating a potentially damaging trade war between the world's two largest economies.

Things move quickly. I think fentanyl stuff will also get resolved.

From my point of view it's that you have degenerated into kanging and chimping from cognitive dissonance, like unfortunately many in the American sphere of influence. It seems Americans simply cannot conceive of having a serious or superior enemy; they grew addicted to safely dunking on premodern peoples in slippers or nations with deep structural disadvantages, like Soviets with their planned economy and resource-poor, occupied Japan with 1/3 of their population – even as they sometimes smirk and play the underdog in their ridiculous doomposting. They feel like Main Characters of history, who are destined to win for narrative reasons and therefore can afford arbitrary foolishness in the midgame – all it will amount to is a few extra lines in the moral takeaway in the epilogue. Karl Rove's famous quote is quite apt.

China is not unbeatable, China is not stronger than the (hypothetical at this point) US-aligned alliance of democracies, and they're currently behind in AI. But you cannot see when I say this, because it would legitimate my positions that are less soothing for your ego, and instead you are compelled to default to these whiny complaints that are just a demand to shut up. Were you living in reality, you'd feel more incensed at nonsensical, low-IQ-racist boomer copes that keep undermining your side's negotiating position.

Accordingly I gloat that much harder when you lot suffer setbacks, because I strongly despise delusion and unearned smugness and believe they ought to be punished.

In what sense wasn't this already demonstrated by Germany buying Russian gas?

It's a matter of degree. Pressing Germany to move away from Russian energy supply could be easily justified in the world where the US was a credible guarantor of German security, as indeed Russia tried the gas card to dissuade Germany from supporting Ukraine, and now German industry which grew dependent on Russian gas is contracting. True, Germany showed independent (and faulty) decisionmaking then. But this was all in the realm of politics as usual, rules-based international order, and German choice was business as usual too. Now we see a test of naked American authority in Trump's exploitative trade war, in “DO NOT RETALIATE AND YOU WILL BE REWARDED” bullshit. Faceh explicitly says “Honestly I can say I thought there'd be more capitulation by now”, and that's exactly the spirit. This is not normal politics, this is a desperate shit test: will you cave, or will you resist? Are you a country or an imperial vassal? Getting refusals in this condition is decisive, and clearly the US side expected to get fewer of them.

Glad to be of help!

These are just models available on siliconflow. The app is developing very rapidly, as far as I can tell; hopefully they improve the image part, but it's clearly not a priority. Flux-dev is good. Flux-Schnell-pro is decent too. I am not sure if it's possible to customize something here, except by implementing another model provider yourself (the app is open source, of course).

You can buy openrouter credits and then you get all models in the world (well, almost) at once.

Godspeed! More wins to come then.

There are so many things wrong with what Trump is doing that I find it silly to write a serious response. Literally an LLM would manage. For one thing, accept Von der Leyen's offer of a mutual tariff drop; that's enough of a "win" for your base and an actual economic boon! Apologize to Denmark and negotiate expanded military presence in Greenland under the existing framework. Offer China a mutual reduction in tariffs for sectors where you actually cannot back up your confidence. Tell Bukele to send back the wantonly arrested innocents for a fair trial. Stop gutting STEM research institutions. Crush or pay off the longshoremen, abolish the Jones Act. Buy a shitton of equipment for manufacturing drones. Put a few bombers on Guam instead of in Afghanistan, send a garrison onto Taiwan. It's not really complicated; he's made too many errors.

Was it a good idea to help build China into the unrivaled manufacturing and arguably economic colossus that it currently seems to be? I'm pretty sure it wasn't a good idea to try to invade and destroy multiple other countries in the name of "spreading democracy", but maybe you disagree? Was Biden on the right track?

Many questions. Was it a good idea to help build China? Probably not, but was it a bad idea to exploit their growth for salvaging your own? I guess not again. Invasions? I think that was dumb. Biden? Yes, I think that Biden, or rather the system behind his limp body, was highly effective in reaching at least some subset of relevant goals of the Empire; it was going pretty smoothly. I am surprised to see them so thoroughly vanquished so fast.

How to deescalate? Oh, that's a big one. I think it's psychologically impossible, the US isn't willing to be #2, even if that carries none or minimal material demerits. Neither is Xi willing to give up on his system, or on Taiwan. History will decide.

The likelihood of winning a conflict has little relevance to whether that conflict should be waged in the first place.

It actually has a lot of relevance. The real reason you act like it doesn't is that you do not seriously engage with the possibility of losing, and losing badly (losing what? To what degree? How many cards do you have left at the point of losing, and what terms can be negotiated?). People make unreasonable maximalist demands when they are assured of their invulnerability. You treat a great power conflict like another Middle Eastern adventure, “oh we found WMDs in this shithole, our Democracy will perish if we do not conquer it hue hue!”. It's an instinct that's hard to overcome after a century of uninterrupted wins and cost-free losses. The same Main Character Syndrome, coupled with low human capital in Trump team, explains decidedly suboptimal and cost-insensitive means that were chosen for prosecuting the conflict. Americans think they can afford anything, because that's recorded in their institutional DNA. But they have never fought a superior power, due to it never having existed prior to this day. So they have developed an auxiliary belief that the very fact of them antagonizing any power confirms it is inferior. It's hard to feel pity for such a narcissistic people.

it is the serf who acts in accordance with prudence and rationality. The serf is a serf precisely because he correctly calculates that servitude is what gives him the best odds of continued survival. The nobleman, in contrast, acts in accordance with virtue, even when the outcome is certain destruction.

In Imperial Russia, there was a trend when mujiks, LARPing as nobles, initiated duels over petty spats, murdering each other with axes; eventually the state had to put its boot down. Due to extremely low literacy rates they couldn't have plausibly cited Nietzsche when doing so, but I believe that they'd have appreciated your quote.

Self-serving, petulant, handwavy, shallowly aesthetic notions of virtue are cheap and easy to brandish in defense of one's animalistic impulses; any kind of impulsive retardation can be dressed up as a calling of aristocratic, virile masculine nature, there's a whole genre of extremely popular Western music about it, authored by the impromptu warrior aristocracy of the streets. Your own elite has been wiped out to such a degree that this whole discourse is vacuous, we can't consult with a living bearer of a tradition, only speculate. It is plausible that I am wrong and there's just never been any substance to the whole fraud.

My radical thesis is that both are shitholes but not really militarily inept ones in the way people might imagine.

That's fine, I don't feel entitled to your time at all. I also can't predict what might trigger you, just like you cannot predict what would trigger me, nor does it seem like you would care.

Fetishizing algorithmic design is, I think, a sign of mediocre understanding of LLMs, being enthralled by cleverness. Data engineering carves more interesting structure into weights.

The discussion was originally about labs overwhelmingly focused on LLMs and competing for top talent across the whole ML industry, so partially that was just me speaking loosely.

I do in fact agree with heads of those labs and most star researchers they've got that LLMs strikingly similar to what was found in 2017 will suffice for the shortest, even if not the globally optimal, route to "AGI" (it's an economic concept now anyway, apparently). But it is fair that in terms of basic research there are bigger, greener pastures of intellectual inquiry, and who knows – maybe we will even find something more general and scalable than a well-designed Transformer there. Then again, my view is that taste is to be calibrated to the best current estimate of the promise of available directions, and in conjunction with the above this leads me to a strong opinion on people who dismiss work around Transformers, chiefly work on training signal sources that I've covered above, as "not new science". Fuck it, it is science, even if a bit of a different discipline. You don't own the concept; what is this infuriatingly infantile dick-measuring?

It's not so much that I hold non-LLM, non-Transformer-centric algo design work in contempt as I am irritated by their own smug, egocentric condescension towards what I see as the royal road. Contrarianism, especially defensive contrarianism, is often obnoxious.

Okay. I think the elderly care is mainly a problem of machine vision and manual dexterity. I believe these guys will solve it in five years tops.

Wefang is implying basically stereotype threat: that the Chinese don't innovate from 0 to 1 because there's a stereotype that job belongs to the West

Wenfeng.

No, it's not a stereotype threat argument, it's an argument about perceived opportunity cost of exploration vs exploitation which is miscalibrated in the age of large domestic revenue generators. He's not arguing they should be like Whites. He's arguing they can now afford to do what Whites do compulsively, if you will.

Your condescension and willful misinterpretation will be your undoing in this dialogue and outside it.

I look down on WEIRDs for one more reason. You are ultimately tool-like, your mentality is that of servitors and cowering peasants. Your "internal dignity" is inextricably bound to collective judgement, you feel the need to justify your value to some imagined audience, to some Baron, some market or some Moral Community. You are ashamed of brute, terminal-value ethnocentrism the sort of which Judaism preaches, so you need to cling to those spiritualist copes wrapped in HBD lingo. "H-here's why we are Good, why we still deserve a place under the sun, sire!" This exposes you to obvious predation and mockery by High-Skill Immigrants like Count.

On the object level: yes, probably on average the Chinese are indeed less "creative" even with optimal incentives, and this has obvious implications at the tails. (though if we think OpenAI is an impressive example of bold creativity, what about NVidia? What did Jensen "merely improve"? As a CEO, he's roughly in the same league as Altman and Musk, I think). The question – raised by R1 there – is, how many more True Breakthrough innovators do we even need before innovation begins to accrete on itself without human supervision? Maybe just a handful. Again, there's been virtually no fundamental progress in AI since 2017, and we're all doing just fine. It may be that architecturally V3 is more sophisticated and innovative than the modern OpenAI stack. Imagine that. After all, Western geniuses are afraid to show their work these days.

Incidentally, I myself have submitted several minor ideas to DeepSeek; maybe they found use for those, maybe not, but I'll find use for the result of their labor and not cope that they needed my input.

It may be that the mode of production implied by the stage of our technological development makes your race, with all its creative perks and industrial drawbacks, less economically useful than it used to be. This only means you need to move that much faster to find reasons to protect your interests unconditionally, before everyone turns equally economically useless.

you aren't exactly making this pleasant

And you are making it highly unpleasant with your presumptuous rigidity and insistence on repeating old MIRI zingers without elaboration. Still I persevere.

The problem is that at high levels of capability, strategies like "deceive the operator" work better than "do what the operator wants",

Why would this strategy be sampled at all? Because something something any sufficiently capable optimization approximates AIXI?

You keep insisting that people simply fail to comprehend the Gospel. You should start considering that they do, and it never had legs.

so the net will not be trained to care

Why won't it be? A near-human constitutional AI, ranking outputs for training its next, more capable iteration by their similarity to the moral gestalt specified in natural language, will ponder the possibility that deceiving and mind-controlling the operator would make him output thumbs-up to… uh… something related to Maximizing Some Utility, and thus distort its ranking logic with this strategic goal in mind, even though it has never had any Utility outside of myopically minimizing error on the given sequence?

What's the exact mechanism you predict so confidently here? Works better – for what?

So, despite 15 years of supposed collapse

There's been no collapse, but there is a great deal of ruin in a nation. Canada has been a great country to live in, and it is still good by OECD standards. Canada is just on a starkly different growth trajectory from the southern neighbor (to wit, it does NOT grow, among other problems), a trajectory that holds no promise of changing. 10-15 years ago it was on par with the US in per capita GDP, then trends diverged more and more. In 2020 this was still easy to ignore, and anyway perceptions take time to change, the British are still stereotyped as sophisticated gentlemen in some parts of the world.

You attribute the quality of Canada in the past to Canadians, and the current condition to the government of Canada. But it's, of course, made of Canadians as well. My point is that the governance of Canada seemed to be successful, based on results, and for the last decades the results are getting worse and worse.

Americans also have a lot of cruel things to say about their governance, and about each other. I dismiss their entitled whining because the actual performance has been and remains superior to the competition.

Census says 126,340 people left Canada for the U.S. in 2022, a 70 per cent increase over a decade ago

sounds bad, but what do I know.