Shirayuki2
0 followers   follows 0 users  
joined 2026 February 13 11:56:52 UTC

new account of Shirayuki, lost old password
User ID: 4180

Japan

Losing the war and being occupied by the Allies was probably the best thing that could have happened to Japan's economy, rather than something that dragged it down.

The American occupation broke the Japanese military-industrial complex and forced the country toward exports and free markets, funneled it large sums of capital via Korean War procurement, American market access and technology transfers, and extended the American security umbrella over Japan. The Japanese economic miracle wouldn't have been possible without losing the war and being dragged into a modern economy by force.

India

While they never went full communist like China, there's a good argument that India could be decades ahead of where it is now without Nehru and Gandhi fucking around with poorly implemented socialism. The over-emphasis on heavy industry, licensing and central planning, the failure to implement any real land reforms, and being essentially closed off to trade prior to 1991 were practically the complete opposite of the conditions that made the four Asian Tigers so successful.

While I agree that India has different issues from East Asia, a 50-year disadvantage on Japan and a 10-year disadvantage on China in liberalization did it no favors either.

Just wanted to say thanks for posting this.

I've been in a bit of a depressive spiral for the last few days and this post really helped.

My understanding of the Calvinist vs. middle-knowledge doctrines is that the main difference is that Calvinism treats free will as compatibilist while Molinism treats it as libertarian.

Under either doctrine God must still have foreknowledge of children being born with genetic defects that kill them a few days out of the womb and foreknowledge of those destined to die during natural disasters; otherwise how could He be omniscient?

Under theism, you can consistently make choices that go against your self-interest and sacrifice both self-interest and prudence for the sake of morality

Sure, but the point I'm trying to make here is that moral epistemology becomes incoherent if you abrogate omnibenevolence as understood by human moral intuition. If the meaningless suffering that pervades the world is all part of His plan, then there is no reason to morally privilege doing anything over, say, kicking as many puppies as possible. For all I know, that kid drowning over there is all just part of the plan, and He wants me to kick the shit out of more puppies instead of saving the kid.

Maybe skeptical theists can bite that bullet, but it's not one I feel capable of biting myself.

While I think Buddhism in general has useful lessons in reducing suffering even for an atheist, I don't understand at all how western, secular Buddhism can be logically sound without being a suicide cult.

If you accept dukkha, samudaya and nirodha, but you don't believe in the Right View, in the tenets of karma or rebirth, it seems immediately clear that, instead of the Eightfold Path, the much more efficient method of achieving nirodha is the Singlefold Path of a bullet to the skull.

I have to admit I find theodicies appealing to skeptical theism abhorrent, and personally I've never managed to get over the problem of evil.

Why should man do anything good, why should man do anything at all, if human moral intuition is meaningless and the most profane acts can be justified under an unfalsifiable appeal to the greater good?

Rats and ants may not understand the motions of men, but neither do rats and ants have any reason to worship men, except for the last argument of kings: the threat of pure brute force and violence.

If God's vision is that children should die screaming in unimaginable pain and that the Ichneumonidae should eat caterpillars from the inside out, then frankly in the footsteps of Ivan Karamazov, I don't want anything offered to me by such a god.

US-China great power competition looks a lot less like god-fearing capitalists vs godless communists and a lot more like godless capitalists with Anglo-Hispanic characteristics vs godless capitalists with Chinese characteristics.

America is not god-fearing in any meaningful sense compared to the America of fifty years ago, and China is not communist in any meaningful sense compared to the China of fifty years ago.

It isn't really arguable that modern society is failing to provide the former to a far larger portion of the population than it did in the past

I agree, but this doesn't really have anything to do with inequality. Most of South America and Africa have vastly higher Gini indexes and much more blatant, corrupt wealth inequality than any developed country, yet retain much higher TFR's, while the social democratic Nordic countries living under the Law of Jante have among the lowest TFR's worldwide. Being rich, free and educated, having the optionality to do anything in life at the expense of having children, social atomization, and access to smartphones seem like far more plausible causes of plummeting rates of family formation.

At the same time, a lot of the visible concentrations of wealth in modern society are nakedly and undeniably antisocial.

I agree that a lot of the aesthetics of the modern wealthy are off-putting, but as I mentioned earlier, "powerful people act in upsetting ways" is not a solvable problem as long as the fundamental ability to concentrate power through technology exists at all. Nobody remembers the man that Luigi killed and nothing changed whatsoever. If it's not the current crop of people seizing the reins of power to enrich themselves, it'll simply be someone else stepping up in their stead.

Furthermore, inequality is a proxy for an uneven distribution of power.

I suppose my view is that wealth isn't power, power is power. Any coalition capable of unseating the billionaire class would by definition hold more power than the current wealthy. I'm not sure it really makes much difference whether it's a Langley spook, Hague bureaucrat, tech billionaire, or CCP party member that holds the reins to ultimate power and status.

As long as technology exists you'll get centralization of power, but as long as centralized technological power doesn't exist you get Haiti or South Sudan, the Hobbesian life in a state of nature.

relative prosperity gates access to some goods that are essential for happiness, like housing and a mate

Extension du domaine de la lutte. The progressive ideal of redistributing wealth is at least logically possible, but it's fundamentally impossible to redistribute to everyone a big house in the best location and a high-status mate. If being better than others is essential to happiness, then perhaps that is humanity's punishment for eating the forbidden fruit.

Sorry to hear that.

Unfortunately there's a lot of this going on in tech and white-collar work as a whole: the LLM's can't really do the work, but some executive assumes they can and people get chopped in anticipation, or the company is struggling due to macroeconomics or just plain bad management and people get chopped with AI as the excuse.

Best of luck with your other work or with starting the new career.

I was planning to write up a larger top-level effort-post on this topic, but since you've already made the top-level I'll post the notes I was drafting.

For the last few days, I've been reading about the Sam Altman attack drama and the warehouse fire attack that happened recently, and I've been finding the reactions pretty scary. General sentiment on HN is something along the lines of "Altman deserved it", and even in my generally leftish acquaintance bubble the vibe is "they shouldn't have missed" or "we need more of this, fuck the rich", which doesn't bode well for the stability of society.

Whether or not you believe the more bombastic claims of AI CEO's, I do think it's clear that at minimum AI is going to exacerbate the trend of technology centralizing power, wealth and status, even as absolute material standards have continued to improve beyond the wildest dreams of 99.9% of humanity in the past. For better or for worse, human happiness seems to be tied only lightly to absolute material standards and heavily tied to relative status, position, and feelings of fairness, and the internet and social media are super-stimuli for the human sense of status calibrated towards the Dunbar number.

Ruling out FOOM levels of societal disruption, I can think of a few ways that this plays out.

Left-wing communist populist Marxist social democratic total victory: public outcry reaches all-time highs, perhaps with some peasant revolts sprinkled in, and the AOC/Mamdani coalition gets voted in to dismantle the AI labs, big tech and the icky billionaires. Leaving aside the fact that this would annihilate the economy and living standards by proxy, I'm not really convinced that with mass internet and social media there's any Gini index or amount of redistribution that would leave the status-anxious public satisfied. First they came for the billionaires and then they came for the homeowners.... Certainly comparable democratic countries with half of the Gini index of America are still constantly flooded with rhetoric about eating the rich.

Right-wing AI strongman technofeudal democratic backsliding: political violence becomes normalised as a part of day-to-day life and, in response, perhaps after a significant assassination or riot, a strongman or group of technocrats uses the violence as an excuse to seize absolute power, abetted by AI in part or in full. The lumpenproles are kept under control via mass surveillance, drones and guns, or killed off entirely. The worst ending, but one that seems depressingly realistic looking at the history of inequality and failed revolutions.

Nothing ever happens: whether mass unemployment happens or not, most people end up with sinecures or welfare to keep them relatively pacified. Social media and concentrating wealth inequality continues to make people miserable even as absolute material conditions begin to reach sci-fi levels, and competition for zero-sum goods like housing in desirable areas and prestigious educations and sinecures becomes even more red in tooth and claw in the vein of the East Asian countries. Political violence gets somewhat more normalised, perhaps to Latin American or 20th century standards, but it's limited to isolated incidents.

Generally I consider myself libertarian and think that billionaires are good, actually, but I do think that inequality, and society's response to it, is likely to be one of the defining questions of the 21st century. While Sam Altman is the most visible face of AI to normies, pure game theory dictates that technological progress will continue with or without the consent of any individual person, company or nation-state; if the capability exists, someone (or something...) is going to be the one that holds those reins to wealth, status and power, and as long as those reins are held, the holder will inevitably be the target of the green-eyed masses. I don't think we yet have the social technology to deal with this and it's not clear that we ever will; I've seriously been wondering lately whether this might be one way the Fermi Paradox manifests.

I'd say I'm both simultaneously. I think it's unlikely that scaling LLM's gets to AGI, so I'm a skeptic in that sense, but it is significantly more progress in AI than I ever expected to see in the 2020's.

With that in mind it does seem likely to me that AGI is achieved in my lifetime, and I think if it does happen then humanity is doomed for all the old Bostrom/Yudkowsky reasons... I don't see what I could do about it though, so realistically it doesn't really change my life very much.

And certain professions like SEO slop writer, translator, and others are definitely disrupted forever regardless.

At least in the case of translators, I think you'd be surprised. I happen to be acquainted with a good number of professional translators, and almost to a man they are still booked out with work and make solid middle-class incomes.

My understanding is that the "ChatGPT" moment for translation was around a decade ago when neural machine translation was first getting good. Already at this point, for translation tasks that didn't require professional-grade reliability or well-written prose, Google Translate or DeepL were basically already good enough; translation for things like manuals or brochures was commoditized well before transformers.

Of course LLM's write much better than DeepL, but in practice the set of translation tasks that can't be delegated to Google Translate or DeepL, but can be handled autonomously by an LLM, is actually quite small.

High-reliability translation tasks like legal, medical or diplomatic still require a human in the loop, and LLM's are still subpar at translation tasks that require a high level of interpretation, as in the case of literary translation. At a high level, a good literary translation can be thought of as a re-writing of the original work, and as of yet LLM's are still quite poor writers without significant human intervention.

Operating system and browser zero-days go for millions of dollars.

If Mythos can spit these out for a million dollars a run it's still extremely scary.

A few thoughts:

I'm sure the model will be better than Opus, but the benchmarks look quite clearly overfitted to me. SWE-bench Verified going up to 94% in particular is a clear indication that something suspicious is going on here; that benchmark has been known to be contaminated for some time.

Cybersecurity seems like the natural extension of the RL scaling paradigm. I would expect anything you can easily run gradient descent on with a well-known reward function to continue to see massive improvements over the next year, e.g. theorem proving, coding [in the 'pass the tests for a given spec' sense] and vulnerability exploits. It doesn't yet seem clear that this will scale to tasks that are less amenable to RL.
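As a rough illustration of what a "well-known reward function" means in the coding case (my own minimal sketch, not anything from any lab's actual pipeline; the function and test names are made up), the reward can literally be "run the candidate against the spec's unit tests and pay out 1 only if everything passes":

    # Hypothetical minimal sketch of a verifiable, binary reward for RL on coding tasks.
    # Assumption: the spec requires the candidate to define a function named `solve`.
    def verifiable_reward(candidate_code: str, test_cases: list) -> float:
        """Return 1.0 iff the candidate's solve() matches the expected output on every test."""
        namespace = {}
        try:
            exec(candidate_code, namespace)        # run the model's submission
            solve = namespace["solve"]
            passed = all(solve(*args) == expected for args, expected in test_cases)
            return 1.0 if passed else 0.0
        except Exception:
            return 0.0                             # crashes or a missing solve() score zero

    # Spec: "solve(a, b) returns a + b"; each test is ((args), expected output).
    tests = [((1, 2), 3), ((-5, 5), 0)]
    print(verifiable_reward("def solve(a, b): return a + b", tests))   # 1.0
    print(verifiable_reward("def solve(a, b): return a - b", tests))   # 0.0

The reward here is cheap to compute and unambiguous, which is exactly what makes coding, theorem proving and exploit development so amenable to RL scaling, in contrast to tasks where success is fuzzy or expensive to verify.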

I'm not sure why you think FIRE money, or really money less than "literal oligarch" tier means you're any more or less cooked if AGI really does come to pass. FIRE in the first place relies on the world looking much the same as the last 80 years of Pax Americana, which seems increasingly unlikely at this point. At the end of the day you own only what you can defend, and it seems unlikely that you would be able to defend anything against sufficiently capable AI.

It seems to me that when people say things along the lines of "LLM's do not have intelligence" their definition of intelligence is something like "everything a human can do", and thus failing at something that can be done by a human proves a lack of intelligence, but in fact human intelligence is very jagged as well!

Should a chimp consider a human unintelligent because of our woefully inferior working memory?

Should a fly consider a human unintelligent because of our woefully inferior visual processing speed?

Should a squirrel consider a human unintelligent because of our woefully inferior spatial memory?

Sure, LLM's fail very basic things that can be done by humans, but humans also fail very basic things that can be done by LLM's; no human alive can write about the same breadth of abstract, novel topics in the same number of languages as even a very weak LLM, or write code as quickly as a LLM.

I fail to see how an LLM isn't intelligent in a way orthogonal to humans, in the same way that animals are intelligent orthogonally to humans.

While I do align with you in that I consider the current models very powerful and use them plenty myself, and agree that using some Sonnet + Cline workflow while claiming that AI is incapable is misleading, I do find this sort of crypto-style, FOMO-inducing rhetoric counterproductive and annoying.

If you believe that the models will usher in the end of history, that they really do end up as AGI or ASI and bring about the singularity, then no amount of using 2026 agents at work will do anything to save you or change the outcome.

On the other hand, in worlds where the models do plateau at some point and end up being commoditized enterprise tooling, nobody is doomed because they didn't use agents correctly in 2026; even boosters have very little consensus on what actually works right now. There will be time to adopt the tooling as capabilities are better understood, the UX will get better, and people will develop best practices and discard what doesn't work or what is no longer necessary; who's still using LangChain or fine-tuning LoRA's on hands in 2026?

Find me a real life story where an attractive woman with the option to pick between a handsome, reliable, but only moderately wealthy Blue Collar worker, and a high status millionaire minor celeb, and intentionally settled for the former

Lana Del Rey.

My issue with a lot of this thinking is that it's wrapped up in so many layers of hindsight bias. It's very easy to be right directionally, but have no clue on the timing and magnitude of the shift in the market you expect.

Nvidia had runaway 10x success because the unprecedented virality of ChatGPT, which OpenAI expected to be a boring research preview, drove a crazy compute demand supercycle.

There are many timelines where AI still ended up being a Big Deal, but where Altman decided not to release ChatGPT for safety reasons and hence LLM's didn't see unprecedented human and financial capital investment, and where the hyperscalers had time to build out TPU's, Trainium, Ascends, whatever, and hence Nvidia never ended up becoming a multi-trillion dollar company.

Congratulations on being abnormally successful - but if you're smart enough to have returns that good, surely you must realise that everyone else is just paying into some prop trader's next bonus.

If the war ends I will immediately sell the oil stocks

You are assuming you will make it to the sell button in time after you read the headlines; meanwhile Jane Street has already sold them all down three days in advance, having predicted the end of the war.

At most I've lost out on a couple percent of gains

If oil stocks and index funds start moving in uncorrelated directions you stand to lose a lot more than a couple percent.

During the pandemic there was plenty of time to buy during the dip

Easy to say in hindsight, but if you missed the window by even one month you were pretty much just re-buying into Jan-Feb 2020 prices anyways, and if you missed the window by even two months you would have lost money.

At the end of the day it's your money, so do as you please with it - just realize that historically these sorts of bets have been horrific for the people trying to implement them.

It's not that hard to time the market

And on the pedestal, these words appear:

"My name is Ozymandias, King of Market Timers;

Look on my Works, ye Mighty, and despair!"

I do not share your blind faith.

The semi-strong EMH is perhaps not true if you're a quant at a prop trading shop, but it is certainly true for a random guy off the street. I assure you anything you think you know about the war has already been priced into the market by an army of quants.

IDK Covid was a pretty big market oversight

The S&P 500 was up something like 10% from Jan 2020 by the end of the year. It did in fact turn out that the Fed would print unlimited money to keep the market going and that the pandemic wouldn't hurt the markets long-term. Lots of rationalist types were selling everything they had in early 2020 as well, but they likely all lost money unless they bought back exactly during the March dip. Even if you think assets are overpriced, it's not actionable unless you can predict how and when the drop will happen.

I can always sell both of these instruments and get back into index funds

Your plan is, checks notes, to sell when prices are depressed and try to get back in if your new assets start dropping and index funds recover? Bold move.

Are the markets pricing the effects of this war correctly?

If you need to ask if markets are pricing something correctly, as a member of the public with no insider knowledge, the answer is always yes. As others have mentioned, there are plenty of potential mitigations or off-ramps that could make this a relative non-issue in a few months.

Not your financial advisor but liquidating your entire portfolio based on a current event that the public has almost no visibility on seems incredibly insane to me.

Yeah this literally exists now, it's called OpenClaw.

Realistically it would already have been possible with the very early LLM's and some creative scaffolding, but I think OpenClaw going mainstream with the ability to automate astroturfing without needing any technical knowledge was the final nail in the coffin for the internet.

Laws that cannot be enforced are laws not worth drafting

I don't necessarily think this is the case. There are plenty of laws that are impossible to enforce against a motivated actor, and almost all laws are not perfectly enforced, but they still have value in setting norms and shaping culture, for good and for ill.

It's pretty much impossible to catch people in the act of doing various anti-social things like littering or cheating on schoolwork (even pre-LLM), but having rules against littering and cheating is still important for setting norms. Similarly, the recent wave of underage social media bans and online censorship is impossible to enforce against anyone with a VPN, but these are still real laws that end up shaping people's behaviour.

I agree that it's really going to be a symbolic effort at best, but I think it does have value in shaping norms for what the moderators want their board to be, and perhaps in catching some of the most egregious cases.

Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn-door after the horse was taken out back, shot, and then rendered into glue

To steel-man their attempt, it's not really about actual prevention but rather about stopping the most egregious examples and raising the quality of the discourse. There are literal HN-poster plugins for OpenClaw, alongside an enormous number of day-old em-dash posts flooding HN that were technically not against the rules.

Yeah, if someone puts in any effort it'll be indistinguishable from human writing, but at least it serves to get rid of the most egregious spammers and bring up the floor.

Still, I agree that the quality of HN discourse has been falling for some time now, in a way not really related to LLM's at all. I used to really like HN, but these days I unfortunately only use it as a link aggregator.