mrvanillasky
Indo Aryan Thot Leader
Future apocalypse survivor
User ID: 3273

Reasonable. Formal law, police, courts: all are modern, left-wing centralising forces that are ideologically seen as impediments by everyone who's not a normie idealist. The anti-law stance is, however, kinda hard, since you risk jail time saying this out loud in some places. I do wish for him and others to come back; this place will never be an echo chamber, and it's still helpful to hear different perspectives.
What actually happened? Anything you can share without violating anyone's privacy?
What happened? @ymeskhout has been offline, or at least offline here, for a year now. What exactly went wrong?
He had a biking accident that caused issues with his arm; it took a while for his hand to get back to typing properly. The libertarian ideals you can skirt by with in college, when you use them to avoid showing off your power levels, don't work after a while.
It's been alleged here by many people, multiple times; unless it's an inside joke I'm unaware of, many strongly suspect that he was trannyporn0.
Yeah, no, I stand corrected. I'm certain it's way more than 20, but 1000 was unwarranted speculation on my part. My root comment still stands; I'll post an edit making the necessary correction.
The /g/ copypastas are funny, but I'd recommend him as an example of why you should consider faiths from antiquity.
I have a slight bias towards the Orthodox due to them having more in common with Hinduism. Beyond that, Luke recommends people leave this hyper-modern system in all ways, start a family, and stay close to their kin; this is from his latest video.
He's not a coder, but he seems sincere enough to me. Any man who discovers the values of his forefathers and encourages others to live by them, as he does, is a good person.
Though it's a funny copypasta regardless. I really like /g/.
Kulak is a dude who shilled the rich man's Aella, Addison Moore, a super-high-end escort who is active on Twitter.
Hearing about him makes me a little sad. He's slightly older than me and would post long detailed threads on the subreddit when I was just starting university. He had an accident that @FiveHourMarathon linked to once and became deranged.
His final post was a call to arms against Mirpuris in the UK, something I'd encourage in an ideal world. Unfortunately, fedposting gets you and your movement killed, since people actively look for ways to stereotype you (hello Charlottesville). Calling for violence online is stupid; doing it for a right-wing cause is even worse.
He and Trace changed after leaving; Kulak changed a lot more, but Trace did too. There are plenty of alums who are popular on Twitter: covfefe anon, cremieuxrecueil (allegedly trannyporn0).
People I know who are unaware of TheMotte genuinely think he's a Canadian girl of Rhodesian ancestry. He plays along despite doing podcasts where his voice is octaves deeper than the average guy's.
I wish we'd get newer episodes. I'll go and listen to the older ones, thanks for the link.
I stand corrected then; probably not a thousand dollars. I do think it has to be more than 20, given that it costs more tokens and there is no API available for it, which makes calculations quite hard. The true costs are significantly higher than what they charge, even for their APIs.
Good find. This tweet plots it on the leaderboard, and the other sources are rough calculations from others based on their own costs. This is after taking into account the real price for the model, as OpenAI charges people a few times less than it should, simply so that others can afford its models.
I can link other tweets. I saw this TechCrunch article linked; I think others used it to extrapolate costs, as it does not clarify the explicit cost. Apologies, I will edit my comment.
It is true. What's worse is that the actual API costs are probably 10 to 100 times more. They're all willing to spend more to make more.
Edit: this tweet is a better source.
They make you feel weird when you're not on them. Even Hacker News is terrible. Here, you have one thread, and beyond that there's not a lot. Less is more.
I use Brave. Posting it with typos is bad enough already; I'll have to wait till tomorrow (9 hours) to put in the dozen links I got from Zitron and Blow (mostly Zitron).
I will believe that they can get there once it happens, which seems highly unlikely as of now. If they could do it, they wouldn't be advertising front-end jobs at Anthropic. Front end is the easiest domain to automate.
This all hinges on transformer-based LLMs scaling without hallucinating, while not costing a ton, and all of that happening before other firms rightfully pull funding out of a loss-making mission.
Oh man, I should copy and paste better. I don't have any extensions for spell checking since I moved to Linux, so I hastily put the entire comment into an LLM to iron out my typos.
I didn't get an LLM to write this; just sloppy pasting. I'd actually be for the half trillion dollars being poured in if it meant consistent schizoposting that sounds like me.
The jobs were not created by the bubble; people were already laying off all engineers beyond AI ones. Plenty of good hackers got laid off in 2000 and 2008, even ones who would have been employed in normal circumstances.
The first step towards walking the spiritual path is reducing stimulus substantially. Luke Smith released a good video on this perspective yesterday. A massive reduction, replacing your internet culture-war usage beyond this place with long-form texts, would make life much better.
My mannerisms resemble shock jocks'; when I sparred, my movements looked like a worse version of the fighters I liked. People need to unplug; many people crystallise permanently, and that's not where I want to be. The single worst conversation of my life was with an SSC-reading rationalist who made SSC his entire personality. Now, SSC is about the highest place going by median IQ, like TheMotte or maybe Hacker News, and I still hated every second of it and wished I had never met the guy. If I meet anyone from this place offline, I'll not bring up culture-war things or anything internet-related.
Great post op. Approved!
They are useful; they're just not worth the near half trillion in speculative dollars and the thousands of jobs that are about to be lost soon. They aren't a replacement for a junior dev, as the dev will get better, whilst LLMs at best will be iteratively better if they keep getting billions to burn.
The journos and public intellectuals of our times could have simply questioned the financial basis. Also, fun fact: apparently Google now runs an LLM inside actual Search. I hope this is not a rumor; I'm not talking about the Gemini answers.
I'm not good enough to write complex code, but there was a recent paper suggesting that GitHub Copilot-generated code was worse than human code. Again, LLMs are great in particular scenarios, so more noobs using them badly is a big part of it.
I'm still Jonathan Blow-like in my opinion of these things.
And it's a good thing for limited usage. The issue I have is with the intelligentsia willingly acting stupid, hackers deluding themselves by being overly optimistic, and VCs being a public hazard.
Open-weight self-hosted models are the way to go, as these fucking text generators now store everything you send to them. Worse, training will keep getting more expensive as Nvidia keeps pricing its graphics cards like magic sand from Dune.
This is a bubble that is going to burst, and the people we see hype-posting are aware of it. AGI and ASI are pipe dreams that we don't know will materialize, but their existence is treated as something inevitable, when in fact we may never get there. So what can't these systems do today?
Honestly, nothing well. If you have any white-collar task that is complex, an LLM cannot do it. Various other sub-branches of AI can, in fact, do a lot of things, but even that is minuscule compared to the vast amount of things we do as people. Programming is one thing LLMs just cannot do very well. If your idea of writing code is stuff that would rival what I, a total noob who is looking for a job and learning, can produce, then sure, it may be decent, but LLMs are simply bad off-ramps.
Let's dive deeper into the second point, since it's the easiest to dismiss. Our understanding of the human brain and intelligence is not complete. Our understanding of how to replicate even what we know well is not complete. LLMs try to do one small subset, and despite having had more money thrown at them than any piece of tech I can think of, all I get is broken code correction and slightly better words. The amount of optimism we see for them is simply not warranted, and this is where number 1 comes into play.
Cruise, a self-driving car firm, got axed recently. Waymo serves fewer people than a strip club in a Tier 3 town in India, and Tesla's cars are still not capable of fully replacing a human being. We are, by many estimates, 90 percent of the way there, but the remaining 10 percent means that we still drive daily. We always have this assumption that things will always get better, but everything has a ceiling. Moore's Law stopped being a thing. The human records for the 100m sprint, Olympic weightlifting, and thousands of other activities were set a while ago and have not even been matched since, let alone broken. The idea that things will just get better linearly or exponentially is not true for most phenomena we observe. Yet language models are supposedly an exception, so we should spend half a trillion more so that we can get buggier Python code, in short.
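The "everything has a ceiling" point can be sketched with a toy S-curve. All the numbers below are illustrative assumptions, not measurements of anything: the idea is just that a process can look exponential early on while actually saturating toward a hard limit.

```python
import math

def logistic(x, ceiling=100.0, midpoint=5.0, rate=1.0):
    """Toy S-curve: rapid early growth that flattens near a hard ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (x - midpoint)))

# Early on, each step of effort buys a big jump...
early_gain = logistic(3) - logistic(2)
# ...but later the same step buys almost nothing, because the ceiling is near.
late_gain = logistic(10) - logistic(9)

print(f"early gain: {early_gain:.2f}, late gain: {late_gain:.2f}")
```

If capability-per-dollar follows anything like this shape, extrapolating the early segment forward, as the scaling optimists do, badly overestimates where the curve ends up.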
Coming back to the first point, LLMs are a scam. They are not a scam because the tech is bad; quite the contrary, it's amazing tech. It's just that the entire economic and religious structure behind them is unjustifiable, to the point where people 10 years from now will look at this like a combination of Y2K, the dot-com bubble, and the 2008 crisis.
OpenAI is the big dog in generative AI and made slightly less than a billion in 2024 via APIs. This means that the market for LLM wrappers itself is tiny. It's so tiny that OpenAI, despite giving away its models for nearly free, cannot get firms to use them. "Free?" Mr. Vanillasky, you have to be joking; it charged me 20 dollars. Well, you see, they lose money with every single query. That's right, every single time any of us goes to a Western LLM provider's chatbot and says hi, they bleed money. If you pay them 20 dollars, they bleed even more money, since you are a power user and get access to their shiny objects. The newest is deep research, which, according to some estimates, costs a thousand USD per query. Yes, a thousand. So if you pay 200 dollars to get 100 of them, you can use more graphics card power than what is needed to run a video game arcade in a nation and get analysis that is spiked with SEO shit.
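The subscription math above boils down to simple unit economics. A hedged back-of-envelope sketch, where every figure is my own assumption rather than a disclosed number: if the cost of serving a query exceeds the revenue attributable to it, heavier usage deepens the loss.

```python
# Illustrative back-of-envelope; all numbers are assumptions, not disclosed figures.
subscription_usd = 20.0      # assumed monthly plan price
queries_per_month = 600      # assumed power-user query volume
cost_per_query_usd = 0.05    # assumed average serving cost (compute + overhead)

revenue_per_query = subscription_usd / queries_per_month
monthly_loss = queries_per_month * cost_per_query_usd - subscription_usd

print(f"revenue per query: ${revenue_per_query:.3f}")
print(f"monthly loss per power user: ${monthly_loss:.2f}")
```

Under these assumptions the flat subscription brings in about three cents per query while each query costs five, so the heaviest users, the ones most attached to the product, are the most expensive to keep.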
The rationalists and techies are in for a rude awakening, and I won't post about it more, but I cannot comprehend how no one questions how bad this entire thing is. Uber lost money, Facebook lost money, Amazon lost money, except none of them were hogging close to half a trillion dollars the way these AI firms are. OpenAI raised 40 billion dollars, most of which would come via SoftBank, which is taking loans to get the requisite funds. Apple pushed back against the AI madness by publishing a critical paper, and Microsoft halted plans for data centers that would need more power than Tokyo to run.
I can go on and on about how absolutely insanely stupid the economics of the entire industry are. When non-technical people like Sam Altman and Mira Murati are your number 1 and 2, you have to have messed up. Mira Murati, a career manager who could not answer basic questions like those about Sora's training data, raised 2 billion dollars. I get Ilya and Dario, but the above two are terrible picks for leading what was an AI lab, given that they themselves are not researchers. Dario, on the other hand, alongside Dwarkesh, needs to be considered an information-safety hazard. "AI will take away most jobs on a vague timeline, but the media will give you the worst figures" is a sickening line.
My cynicism is not unfounded. How do OpenAI, Anthropic, Gemini, etc., plan to make a profit? Do you, dear reader, or do their investors believe that just throwing more graphics cards at the problem will solve it? It's been two years; R1 did better because of tighter code, but even that has limits. Training runs are going to touch a billion; people want data that is four times the size of the entire internet, and the inference costs are not coming down with newer models.
A business, in normal circumstances, should make some profit. This may seem heretical, but burning money at worsening margins for slightly better products, shilled by the entirety of the world's media and the smart people of the rationalist circles in Silicon Valley, seems about as good as it gets publicity-wise. OpenAI is one bad round of funding away from having to pack up shop, while being valued at 300-something billion dollars.
This makes me angry because I unwittingly worked on a doomed LLM-based idea and know that if I can see the holes, other people can too. It would take one large hedge fund to start shorting American tech for things to get bad. People will lose jobs and funding, and decent startups that are run by and employ people here would see bad times. I will be personally impacted, even if I go the indie hacking route. We can debate the magic of LLMs all day, but how long before this crashes? I feel this is closer to Theranos than to the Amazon of the 2000 crash. We have been promised the world to justify the investor hype, and despite tech firms flinching, there is still an air of optimism.
Even if you are an optimist, how much better will these models get? When will they justify these numbers? AI hype helped the S&P 500 peak, with Nvidia, a firm only gamers knew of for decades, becoming worth more money than God. What happens when people realize that the tech promises sold were never there? "You will get a better chatbot and better image generation" is not as sexy as "your worst nightmares are coming true for 20 dollars a month." As a child growing up, I saw tech get better. Each year, things improved; the internet got worse, but I saw new devices come up. This is a dystopian image of that world, where firms will knowingly crash a market and will probably get bailed out after doing so, because I do not see a path forward.
If you are a tech guy, please start telling others about it; the quicker this gets over, the better, because if the investment amounts cross a trillion or something ridiculous, the fallout will be even worse. I used to look up to a lot of the VC types and read the essays Paul Graham wrote. Somehow his protégé is a total nutbag psychopath, and we should all still look up to him as he tries to crash markets. I lost respect for Scott. Hey, what I wrote is a sci-fi story; do not take it as something rigorous when you criticize it, but if you like it, then do take it more seriously. Here is my 8-hour podcast where I butcher the basics of manufacturing to the point where even my own subreddit calls me names.
I am about to turn 25; I was a child during 2008, I was born around the dot-com crash, and I saw the glowing fellatios people gave to Elizabeth Holmes and Sam Bankman-Fried, yet neither used up as much money. Even if you are an AI maximalist, you cannot seriously think that modern tech, despite 10 times more money, can actually replace actual programmers or investment bankers or even doctors.
The rationalists have had leftists post bad things about them, David Gerard being one of the worst. Guess what, skeptical broken clocks like him are right this time, as his site Pivot to AI gets more right than SSC or ACX does. The guy who had a hand in doxxing someone we all liked a lot at one point, whose jannie duties were something out of a 4chan greentext, gets this right and will be seen as a sane member of society, which makes me squeamish. I will add the links for all I posted here, mostly taken from Ed Zitron, J. Blow, Hacker News, 4chan /g/, etc., but we should all post the financial reality of all these firms anytime we talk. My inability to not sound like a lunatic comes from this fact alone. I am not bitter at the people making money, btw; I am worried about the future of the people I know, IRL and online. Anytime I hear about someone being laid off here or in my circle, I feel terrible. All of this was preventable.
Edit - typos, will link stuff in a bit.
The crash would be a big L for rationalists who, despite evidence to the contrary, believe in future outcomes that require multiple miracles. Yud and Scott would be questioned for spreading AI doomerism over tech that's not even good at what it's supposed to do, let alone being AGI.
David Gerard is one guy whom I abhor, and him providing decent criticism of AI with his website Pivot to AI would be a small win for him and tens of his readers.
OpenAI may have to charge anywhere from 10 to 100 times more per query in order to be profitable. Their API calls netting sub 1 billion means that all LLM wrappers are bad, period, and are getting funding for valuations they don't deserve. Sam, Dario, Amjad, PG, every single VC knowingly lied and should be seen as responsible parties.
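The "10 to 100 times more per query" claim is just a break-even multiplier: the factor by which the current price would have to rise to cover the serving cost. A minimal sketch with assumed numbers (neither figure is a real OpenAI price or cost):

```python
def breakeven_multiplier(cost_per_query: float, price_per_query: float) -> float:
    """Factor by which the current price must be multiplied to break even."""
    return cost_per_query / price_per_query

# Illustrative assumption: a query costs $0.30 to serve but is billed at $0.01.
multiplier = breakeven_multiplier(cost_per_query=0.30, price_per_query=0.01)
print(f"prices would need to rise ~{multiplier:.0f}x to break even")
```

Under those assumed figures the multiplier lands at roughly 30x, squarely inside the 10-100x range the comment describes; the real value depends entirely on the true serving cost, which is not public.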
All of them are aware of the financial impossibilities I've listed. My contributions to this place haven't been the best; putting out a realistic future scenario is the least I can do. I don't see a way out where this doesn't lead to thousands of jobs going away, and I'll write one last post on this in a bit. Many here, including me, are in tech at various stages, categories, and seniority levels, and some are employed in startups that use non-LLM-based AI, the good kind I mean. This affects all of us.
What are the odds of a global recession given AI spending? Where's Your Ed At by Ed Zitron is a decently rigorous, left-leaning newsletter that covers AI, and his posts made me worry about a real collapse. Large firms like Microsoft are already pulling out by not extending OpenAI billions in credit, and Apple came out with a paper arguing that reasoning models are not actually thinking.
The vast majority of comments on Twitter about the paper are negative, and of course not one engages with the actual paper; the ones that do mostly agree. Given that the S&P 500 is so tech-heavy, tech is so AI-happy, and AI is not providing results, is a recession likely? Ed's reporting states that OpenAI is dependent on smaller firms that are taking loans for their data centres. He details this better than I can, so please do check it.
The extreme amounts of money aren't justified given the current rate of progress. LLMs and their usage have gotten better, but if they were worth a trillion dollars, the consensus would have been far more unanimous. Plus, every single Anthropic employee appearing publicly, including their CEO, isn't doing this any favors: you hear them say that this will take away all jobs, that AGI is here in 2026, which is a year later than OpenAI's estimates.
I like LLMs, and people get a lot of help from them, but Ed's reporting, plus Microsoft and Apple conceding ground, looks bad.
He seems unhappy; I wish I could have spoken with him back when he would comment on my posts. He is quite smart. Canada is a good example of why you should not have mass migration; the migrants there are rarely decent ones, many blue-collar Punjabis and Gujaratis who explicitly hold very narrow views with regard to ethnic loyalty.
I get why people miss the 90s; back then, the quality of people moving was not as bad. His accident, the increasing amounts of fedposting, and the Canadian situation explain his current views. I hope he comes out of it OK. There are very real legal ramifications for what he posts, at least in Canada. People here have been quite helpful to me, and seeing someone whose name I recognize, and have spoken with before, spiraling hurts a little.