
Shirayuki2

0 followers   follows 0 users
joined 2026 February 13 11:56:52 UTC

new account of Shirayuki, lost old password

User ID: 4180


It seems to me that when people say things along the lines of "LLMs do not have intelligence", their definition of intelligence is something like "everything a human can do", and thus failing at something that can be done by a human proves a lack of intelligence; but in fact human intelligence is very jagged as well!

Should a chimp consider a human unintelligent because of our woefully inferior working memory?

Should a fly consider a human unintelligent because of our woefully inferior visual processing speed?

Should a squirrel consider a human unintelligent because of our woefully inferior spatial memory?

Sure, LLMs fail at very basic things that can be done by humans, but humans also fail at very basic things that can be done by LLMs; no human alive can write about the same breadth of abstract, novel topics in the same number of languages as even a very weak LLM, or write code as quickly as an LLM.

I fail to see how an LLM isn't intelligent in a way orthogonal to humans, in the same way that animals are intelligent orthogonally to humans.

While I agree with you that the current models are very powerful (I use them plenty myself), and that running a Sonnet + Cline workflow while claiming AI is incapable is misleading, I still find this sort of crypto-style, FOMO-inducing rhetoric counter-productive and annoying.

If you believe that the models will usher in the end of history, that they really do end up as AGI, then ASI, then the singularity, then no amount of using 2026 agents at work will do anything to save you or change the outcome.

On the other hand, in worlds where the models do plateau at some point and end up being commoditized enterprise tooling, nobody is doomed because they didn't use agents correctly in 2026; even the boosters have very little consensus on what actually works right now. There will be time to adopt the tooling as capabilities are better understood; the UX will get better, and people will develop best practices and discard what doesn't work or what is no longer necessary. Who's still using LangChain or fine-tuning LoRAs on hands in 2026?

Find me a real-life story where an attractive woman, with the option to pick between a handsome, reliable, but only moderately wealthy blue-collar worker and a high-status millionaire minor celeb, intentionally settled for the former

Lana Del Rey.

My issue with a lot of this thinking is that it's wrapped up in so many layers of hindsight bias. It's very easy to be right directionally, but have no clue on the timing and magnitude of the shift in the market you expect.

Nvidia had runaway 10x success because the unprecedented virality of ChatGPT, which OpenAI expected to be a boring research preview, drove a crazy compute demand supercycle.

There are many timelines where AI still ended up being a Big Deal, but where Altman decided not to release ChatGPT for safety reasons and hence LLMs didn't see unprecedented human and financial capital investment, and where the hyperscalers had time to build out TPUs, Trainium, Ascends, whatever, and hence Nvidia never became a multi-trillion-dollar company.

Congratulations on being abnormally successful - but if you're smart enough to have returns that good, surely you must realise that everyone else is just paying into some prop trader's next bonus.

If the war ends I will immediately sell the oil stocks

You are assuming you will make it to the sell button in time after you read the headlines; meanwhile, Jane Street will have already sold them all three days in advance, having predicted the end of the war.

At most I've lost out on a couple percent of gains

If oil stocks and index funds start moving in uncorrelated directions you stand to lose a lot more than a couple percent.

During the pandemic there was plenty of time to buy during the dip

Easy to say in hindsight, but if you missed the window by even one month you were pretty much just re-buying at Jan-Feb 2020 prices anyway, and if you missed it by two months you would have lost money.

At the end of the day it's your money, so do as you please with it - just realize that historically these sorts of bets have been horrific for the people trying to implement them.

It's not that hard to time the market

And on the pedestal, these words appear:

My name is Ozymandias, King of Market Timers;

Look on my Works, ye Mighty, and despair!"

I do not share your blind faith.

The semi-strong EMH is perhaps not true if you're a quant at a prop trading shop, but it is certainly true for a random guy off the street. I assure you anything you think you know about the war has already been priced into the market by an army of quants.

IDK Covid was a pretty big market oversight

The S&P 500 ended 2020 up about 10% from its January level. It did in fact turn out that the Fed would print unlimited money to keep the market going and that the pandemic wouldn't hurt the markets long-term. Lots of rationalist types were selling everything they had in early 2020 as well, and they likely all lost money unless they bought back exactly during the March dip. Even if you think assets are overpriced, it's not actionable unless you can predict how and when the drop will happen.

I can always sell both of these instruments and get back into index funds

Your plan is, checks notes, to sell when prices are depressed and try to get back in if your new assets start dropping and index funds recover? Bold move.

Are the markets pricing the effects of this war correctly?

If you need to ask if markets are pricing something correctly, as a member of the public with no insider knowledge, the answer is always yes. As others have mentioned, there are plenty of potential mitigations or off-ramps that could make this a relative non-issue in a few months.

Not your financial advisor, but liquidating your entire portfolio based on a current event that the public has almost no visibility into seems insane to me.

Yeah this literally exists now, it's called OpenClaw.

Realistically it would already have been possible with the very early LLMs and some creative scaffolding, but I think OpenClaw going mainstream, with the ability to automate astroturfing without needing any technical knowledge, was the final nail in the coffin for the internet.

Laws that cannot be enforced are laws not worth drafting

I don't necessarily think this is the case. There are plenty of laws that are impossible to enforce against a motivated actor, and almost all laws are not perfectly enforced, but they still have value in setting norms and shaping culture, for good and for ill.

It's pretty much impossible to catch people in the act of doing various anti-social things like littering or cheating on schoolwork (even pre-LLM), but having rules against littering and cheating is still important for setting norms. Similarly, the recent wave of underage social media bans and online censorship is impossible to enforce against anyone with a VPN, but these are still real laws that end up shaping people's behaviour.

I agree that it's really going to be a symbolic effort at best, but I think it does have value in shaping norms for what the moderators want their board to be, and perhaps in catching some of the most egregious cases.

Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn-door after the horse was taken out back, shot, and then rendered into glue

To steel-man their attempt, it's not really about actual prevention but rather about stopping the most egregious examples and raising the quality of the discourse. There are literal HN-poster plugins for OpenClaw, alongside an enormous number of em-dash posts from one-day-old accounts flooding HN that were technically not against the rules.

Yeah, if someone puts in any effort it'll be indistinguishable from human writing, but at least it serves to get rid of the most egregious spammers and bring up the floor.

Still, I agree that the quality of HN discourse has been falling for some time now, in a way not really related to LLMs at all. I used to really like HN, but these days I unfortunately only use it as a link aggregator.

Realistically the public, anonymous internet is simply over at this point. The only ways forward are either the end of anonymity or accepting that you'll be writing to an LLM half of the time.

At this point I've pretty much cut my internet usage down to private Discords/IRCs and various Substacks/tweets/articles from accounts that are known to be human. I don't really have any issues with reading LLM content per se, and even like reading LLM takes on various topics, but if I wanted that I'd just prompt it myself.

I still check The Motte as I think the relative obscurity, active moderation and high concentration of regulars protects it from the worst of the dead internet, but unfortunately it seems unlikely that it'll be able to stem the tide forever.

Well, you do need a centralized authority but any jurisdiction interested in implementing such a policy has the nation state ready to step in.

The initiatives mentioned in pbmonster's post look pretty interesting qua implementation.

That's actually quite well implemented: I didn't know about those initiatives.

I do honestly agree that with the advent of LLMs, the time of the internet being an anonymous wild west has pretty much already ended. Once we've decided that we need online ID checks, it does seem best to implement them in as privacy-preserving a fashion as possible.

Zero-knowledge proof-of-age systems would be a very easy way to handle age verification without having to provide any identifying information to any third party, instead of faffing around with crap like AgeGo or face scans.
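To give a flavor of the privacy-preserving idea: one simple building block (not a full zero-knowledge range proof, but the same unlinkability property) is a blind-signature token. An issuer who has already verified your age out-of-band signs an "over-18" token without ever seeing it, so the site you later present it to learns only the claim, and the issuer can't link the token back to your signing request. A toy RSA blind-signature sketch, with hardcoded Mersenne primes purely for demonstration (not production crypto; real deployments would use anonymous credentials or actual ZK proofs):

```python
import hashlib
import math
import secrets

# Toy RSA key: two Mersenne primes, hardcoded for demonstration only.
P = 2**127 - 1
Q = 2**89 - 1
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))  # issuer's private exponent

def h(msg: bytes) -> int:
    """Hash a message into the RSA modulus range."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def blind(m: int):
    # User picks a random blinding factor r and hides m as m * r^e mod n,
    # so the issuer signs without ever seeing m itself.
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1:
            return (m * pow(r, E, N)) % N, r

def issuer_sign(blinded: int) -> int:
    # Issuer signs blindly, after verifying the user's age out-of-band.
    return pow(blinded, D, N)

def unblind(blind_sig: int, r: int) -> int:
    # (m * r^e)^d = m^d * r, so dividing by r recovers a signature on m.
    return (blind_sig * pow(r, -1, N)) % N

def verify(m: int, sig: int) -> bool:
    # Anyone can check the token without learning who requested it.
    return pow(sig, E, N) == m

# Token: a fresh random value bound to the claim "over18".
token = secrets.token_bytes(16)
m = h(b"over18:" + token)
blinded, r = blind(m)
sig = unblind(issuer_sign(blinded), r)
assert verify(m, sig)  # site accepts the anonymous over-18 token
```

Because the issuer only ever sees the blinded value, it cannot match the presented (token, signature) pair against its signing log; a real system would additionally need token expiry and double-spend protection.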

Of course, I wonder why this will never happen ...

Calvin Coolidge, one of the most powerful men on the planet at the time, watched his son die at 16 of an infection that you could cure for a few dollars now, and fell into a depression that lasted the rest of his life.

Stalin, one of the other most powerful men on the planet, watched his wife die at 22 from what was likely typhus, which would also be curable for a pittance nowadays, and by all accounts was never the same after her death.

A bottom 10-20th percentile American in 2026 gets to live a life of pigging out on delivery, video games and porn, with a 75+ year life expectancy.

Honestly not even close tbh.

Joe might be broke, but he still has antibiotics, vaccines, a smartphone, out of season fruits, climate control, and budget airline tickets; none of these could be had for any price 100 years ago.

Now, I don't actually think this level of AGI goes well at all, but at the limit either everyone (or 99.99% of people) dies, or there's some level of redistribution that allows people to survive. In such a redistributive world it's not really implausible that even a pittance of UBI buys enormous amounts of material goods, much like a relative pittance today buys goods that a billionaire couldn't have had 100 years ago.

How are we supposed to know when we've hit AGI?

My personal AGI benchmark is an unemployment rate of 20-25% within most or all developed countries. I do agree that most (all?) of the benchmark worship is largely pointless but you can't really hype your way into that kind of unprecedented structural unemployment.

It has to be a messianic vision, because it can't be anything else

Yeah, I mean, the AI companies would tell you this themselves. If we really get a level of AI that enables 50% knowledge-worker structural unemployment, that's an unprecedented level of disruption to the political system and to the economy, even before accounting for x-risk thinking. Honestly, the only glee I've seen is from blue-collar workers who don't seem to realize that we're all fucked together if anything approaching this level of disruption to white-collar work does materialize.

Just take the legal industry; Anthropic released a report earlier this year that claimed 88% of all legal tasks could be automated by AI, though only a small percentage of those tasks were actually being automated by Anthropic's customers

Assuming you're referring to this report, it's not saying what all the headlines claim it says. The exposed tasks in blue here refer to tasks that could theoretically be doubled in speed using either an LLM or LLM tooling, including tasks that LLMs categorically aren't doing right now, like authorizing drug referrals.

It's not claiming that LLMs can assist with all of these tasks at current capabilities, and it's not claiming that all of these tasks can be fully automated even with significantly more powerful LLMs. It's a bit of a spurious claim to begin with, and it's been taken hugely out of context; the report even admits that employment trends in jobs exposed to LLMs are currently indistinguishable from those in jobs that aren't.

This isn't really an argument against short-timeline AGI believers, though; even extremely maximalist predictions like AI2027 predict that pretty much nothing happens in the broader employment market until the models reach a tipping point, at which point large swathes of the population suddenly become unemployed and things start getting very weird very quickly. Even if you believed lawyers, researchers and SWEs will all be irrelevant by 2028, you'd still need to hire them to get the models over the finish line.

The disconnect seems to be that the bears point at the lackluster current capabilities relative to AGI expectations (which is largely true) and the bulls point out that the trend lines are still holding (which is also largely true); the trend lines have to bend eventually, but that could be well after the employment market is annihilated and we are all living in luxury gay space communism or have been paperclipped. Frankly, nobody has a good answer on where this all leads.

the moment AGI is cracked nothing else matters

Are they wrong? You could argue they're wrong about the timeline, which is largely trying to predict unknown unknowns, but it does seem to me that once/if AGI is cracked it is pretty much true that nothing else matters.

Now, I'm kind of a doomer about AGI in the LW/rationalist sense, but even if you're not, every extant political system rests on the assumption that humans are a necessary input for production, and every economy relies on the spending of consumers to stay afloat.

What does debt load or shareholders matter in a world where humans no longer have any say via their ability to produce or their demand for consumption?

Seems unlikely. There are no indications Microsoft wants to buy them out, and it would be largely unviable without a huge drop in OpenAI's valuation regardless.

Inference is unquestionably profitable in and of itself at API pricing, given that there are plenty of third-party inference providers selling tokens for dirt cheap and the price per unit of capability has fallen by orders of magnitude.

Whether inference is still profitable after factoring in R&D and all the costs that go into training each model is an open question; Epoch AI have a good post trying to estimate this.

Really, it's academic though, because even if it was profitable the frontier labs can't actually cut the R&D and capital expenditures; if they tried, they'd get dragged down within 12 months by distilled models and commodity hardware, so in the end it's reach heaven [AGI] or die.

This is the best analysis that I've seen with regards to OpenAI's business model. OpenAI in particular seems pretty hosed unless they can crack AGI or at least some sort of currently non-existent network, data or technological moat, or else their only option seems to be to angle their way into a bail-out.

Anthropic at least is a true believer in AGI and is well aware of the risks of over-capitalizing even if AI does end up making huge breakthroughs. They're better positioned, having made fewer spending commitments and having pivoted into enterprise, but they still ultimately need AGI or some sort of moat to make it in the mid-to-long term.

But inference is profitable!

I mean, it is, but selling tokens by itself is inevitably going to be a commoditized business. The price of inference is going to be a race to the bottom driven by compute buildouts and efficiency improvements, and as long as Chinese models can get 90% as good within 6-9 months for a fraction of the price, selling tokens is not going to make a trillion-dollar business.

Still, at the end of the day the finances don't really matter in my view; if they do crack AGI, then the finances rapidly start fixing themselves and/or stop being relevant, and even if they don't and go bust, all the researchers will still exist, and there will still be cheap distilled open-weights Chinese models served at commodity prices. The genie isn't going back into the bottle.

I worked in a nursing home during my teens, and met many old men and women who married, had families and still ended up living out the end of their lives decrepit and lonely. Their wife or husband passed away early and they struggled to cope, or they had a bad divorce and never recovered, or their relationship deteriorated and was held together largely by inertia, or their kids ended up apathetic, or moved elsewhere, or were busy with their own lives.

The ones who fared the best had little to do with their romantic relationships; generally it was the vivacious types who still had wide friend networks in old age, and were constantly active and out at social events (bad news for the type of people to post on the Motte...)

I don't mean to say that it's pointless to marry, because a good marriage is likely still a net benefit over being lonely. Yet at the end of the day a life where you grow old is very long, and there are no guarantees given at birth, except that one day you will suffer and die alone.

This isn't bad advice on an individual level, but I would argue that on a societal level, the highest-EV advice is to teach men [and women, although they do seem to be better at it already] how to be content without a romantic partner.

Modernity has made it so that there are very few reasons to settle for anything less than perfection: the internet has annihilated boredom, porn in all its various forms has annihilated the purpose of the sex drive, mass production has annihilated the economic reasons to marry and be fruitful and multiply, and individualism has annihilated the social pressures that might encourage someone to settle down.

What both men and women now want in a relationship is self-esteem and self-actualization, to be desired by a high-status partner that they themselves desire, and fundamentally most people won't win that red queen's race no matter how hard they self-improve. If you would like a transactional relationship, hiring sugar babies and escorts for no-strings sex, or taking a mail-order bride from some impoverished country to bear your children, are still on the table; but relying on these options as a bachelor is tacitly an admission of low status, beside the point anyway if you're looking for a self-actualizing relationship, and thus in practice done by very few.

None of this is going back in the box, no matter how much pundits gnash and wail online about it. Status will always be the final scarce good even if humanity eventually solves scarcity in all other things, and hence the only real way forward I can see is for people at scale to take up some bastardized version of Buddhism, realize that the romantic relationship of their dreams probably isn't going to happen, and still find happiness.

Given that overhang, it is not obvious to me that the median office worker will still be able to earn a living using their brain in the equilibrium state

My impression is that if we don't get further step-changes in model capability, the long-term disruption to employment will be ~0%; the internet ended up disrupting every job and life significantly, yet at the end of the day we still have a job market that looks remarkably similar.

As long as you need a human in the loop, and as long as having a more skilled human in the loop leads to better results (which is certainly true with current agentic tooling), white-collar work will be fine, even if it ends up looking very different from the previous job description.

It seems to me that at least in the short to medium term the white collar job model probably looks more like a pilot's job, where the computer is doing a majority of the work, but you still need humans in the loop to set the direction, intervene in and handle anything particularly out of distribution, and bear responsibility if anything goes wrong.

By contrast, knowledge workers are easily replaced; once your LLM can do the job, you can spin up a zillion instances

Also, hitting the wall will mean that we will have tons of GPUs which can be bought for pennies on the dollar from the companies which were betting on FOOM.

This seems like a contradiction: if we do get knowledge-worker drop-in replacements, the bottleneck is going to be compute; the demand curve for AGI is going to be pretty much vertical.

Still, I don't disagree that in the short-term robotics will take longer to roll out than knowledge work replacements, I just really don't think white collar jobs being gone matters if we truly reach AGI; either one of the thousand doomer scenarios happens and everyone dies, or the unemployment rate ticks high enough and public outcry forces sufficient redistribution.

If AGI really is in the cards, there's realistically nothing actionable to be done for 99.99% of the population apart from enjoying what time is left and praying that it goes well.

I don't really disagree that this is how the arc of progress is turning, but it does seem a bit ridiculous to worry about what your job is going to be if AI attains intellectual supremacy over humans.

It seems to me that there are really only two possible paths forward: either AI remains jagged in capability like current LLMs and the standard economic arguments about technology hold, or we develop an AGI that represents a perfect labor substitute (it seems hard to believe that an intelligence-complete AGI could not develop sufficiently advanced robotics), and every economic and political assumption grounding society, made under the assumption that humans are required for production, starts collapsing.