This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
-
Shaming.
-
Attempting to 'build consensus' or enforce ideological conformity.
-
Making sweeping generalizations to vilify a group you dislike.
-
Recruiting for a cause.
-
Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
-
Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
-
Be as precise and charitable as you can. Don't paraphrase unflatteringly.
-
Don't imply that someone said something they did not say, even if you think it follows from what they said.
-
Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Jump in the discussion.
No email address required.
Notes -
AGI Was Never Going To Kill Us Because Suicide Happens At The End of Doomscrolling
I'll go ahead and call this the peak of AI version one-dot-oh
The headline reads "OpenAI Is Preparing to Launch a Social App for AI-Generated Videos." People will, I guess, be able to share AI generated videos with their friends (and who doesn't have THE ALGO as a friend). Awesome. This is also on the heels of the introduction of live ads within OpenAI's ChatGPT.
Some of us were waiting for The Matrix. I know I've always wanted to learn Kung Fu. Others were sharpening our pointing sticks so that when the paperclip machine came, we'd be ready. Most of us just want to look forward to spending a quiet evening with AI Waifu before we initiate her kink.exe module.
But we'll never get there. Because Silicon Valley just can't help itself. Hockey sticks and rocketships. Series E-F-G. If I can just get 5 million more Americans addicted to my app, I can buy a new yacht made completely out of bitcoin.
I am a daily "AI" user and I still have very high hopes. My current operating theory is that a combination of whatever the MCP protocol eventually settles into plus agents trading some sort of crypto or stable coin will create a kind of autonomous, goal-seek driven economy. It will be sandboxed but with (semi) real money. I don't think we, humans, will use it to actually drive the global economy, but as a kind of just-over-the-horizon global prediction market. Think of it as a way for us to have seen 2008 coming in 2006. I also was looking forward to a team of maybe 10 people making a legit billion dollar company and this paving the way for groups of 3 - 5 friends running thousands of $10 + $50 million dollar companies. No more corporate grind if you're willing to take a little risk and team up with some people you work well with. No bullshit VC games - just ship the damn thing.
And I think these things are still possible, but I also, now, think the pure consumer backlash to this silicon valley lobotomy of AI could be very much Dot-Com-2-point-O. The normies at my watering hole are making jokes about AI slop. Instead of "lol I doomscrolled into 3 am again" people are swapping stories about popping in old DVDs so that they can escape the ads and the subscription fatigue.
Culturally, this could be great. Maybe the damn kids will go outside and touch some grass. In terms of advancing the frontier of human-digital knowledge, it seems like we're going to trade it in early not even for unlimited weird porn, but for pink haired anime cat videos that my aunt likes.
"people are swapping stories about popping in old DVDs so that they can escape the ads and the subscription fatigue."
Funny you say that. I bought a DVD of an HBO series from over a decade ago called Family Tree. Anytime I play it I have to sit through a nearly 5 minute long ad promotional for HBO, featuring series from the time (Girls, Game of Thrones, some cancelled one about New Orleans). I can't skip. Nice time capsule at first but awful over and over
More options
Context Copy link
The thing no one seems to be talking about with respect to AI is how the underlying economics of it all are so mind-numbingly bad that a crash is inevitable. I have no idea when this crash is going to happen, but if I had to fathom a guess it will be some time within the next five years. We're talking about a technology that has already burned at least half a trillion dollars and has plans to burn another half trillion with no model for profitability in sight. There's only so long that the flow of venture capital will keep coming before the investors start expecting some kind of return. Add in the fact that Nvidia currently represents about 8% of the total value of the S&P 500 based on sales of graphics cards to a single, unprofitable company, and the economic picture looks even more dire.
I think that the underlying problem is that they're trying to run an enshittification model on an industry where the path has typically been the exact opposite. Look at computers themselves. When computers were first invented, they were limited to institutional uses by governments and large universities, and were subsidized through R&D budgets that weren't relying on profitability, i.e. as a present expense rather than a credit against future earnings. Then large corporations started using them. When personal computers were developed in the late 1970s, they were mostly used by businesses, and in the consumer market they were expensive machines for the tech-horny. As costs came down, more and more households began using them, and by the time they became ubiquitous at the end of the 20th century it had been 50 years since their invention, and they still weren't exactly cheap.
Now imagine an alternate timeline where IBM decides in the 1950s to build several large computers in cities all across the country, enough that they can let every Tom, Dick, and Harry run whatever programs they want for free, all the way down to middle schoolers doing their math homework, with minimal wait time. And of course they're offering on-site programmers so that you don't actually need to know anything about computers to be able to take advantage of them, and they're convinced that after doing this for years people will be so enamored that they'll eventually start paying for the privilege. You'd have been laughed out of the board room for making such a suggestion, yet this is roughly the state of the AI business model.
AI cheerleaders will point to other tech companies that lost tons of money in their early years, only to later become behemoths. Uber is often cited as an example, as they spent more than a decade losing money before becoming profitable. But there are two big differences with Uber. The first is that they were actually responding to a market need. Outside of a select few cities like New York and Las Vegas, taxi service in America was at best, inconvenient, and at worst, nonexistent. They successfully discovered an unmet demand and developed a service to fill that demand. No one was ever speculating on what Uber would be used for they way they are with AI, and from their launch they provided the exact service people expected that they would provide. The second, more important reason, is that Uber never gave away their service for free. Okay, maybe there were some promotions here and there, but by and large, if you wanted to get an Uber, you expected to pay for it. There was never any ecosystem where Uber was providing free transportation for everyone who wanted to get from Point A to Point B with the expectation that people would ditch their cars and get charged through the nose later.
Even companies like Spotify that started with free models and were unprofitable for a long time didn't have quite the same issues as OpenAI has. In 2016, the earliest year for which we have financials, Spotify's loss was about 20% of revenue. By 2018, the first year it was public, that had dropped to 1%, and stayed in that neighborhood until the company became profitable. OpenAI's loss last year was in excess of 100% of revenue, and is on pace to be nearly 70% this year, and that's after record revenue growth. And next year they're going to be on the hook for the first round of the 5-year, 300 billion deal with Oracle. Spotify has also had about a 25% conversion rate from free to paying customers throughout most of its history, though that's recently jumped to over 40%. ChatGTP currently has a conversion rate of around 3%. And Spotify at least ran ads on its free platform whereas free ChatGTP is pretty much all loss for OpenAI, and even the paid version lose money on every query.
So what we ultimately have then, is a company that loses a lot of money, is available for free, has a poor conversion rate for paid versions, and is selling itself as a product you didn't know you needed rather than filling an obvious demand. the leading company has already committed to spending several times more than the company has raised in its entire existence within the next five years, and they need their revenue to dectuple in the next four years to break even. They're also involved in a weird money-go-round situation with Nvidia and Oracle that's 100% reliant on them finding investors willing to lend them the GDP of Finland. And now they want to add video, a notoriously difficult thing to process even when you don't have to make the entire composition from scratch. Color me skeptical that this will be around in five years in anything approaching what it looks like today.
To push back on a few things:
I'm pretty sure that I remember reading that the unit economics per token are actually quite good on Zvi's substack. So the AI companies could actually make money selling their current AIs. The money burn issue is because they're trying to hyperscale and dumping epic amounts of cash into R&D.
Again, a memory from reading Zvi, but I'm pretty sure the vast majority of the AI giants revenue is from API tokens anyway, not subscriptions
I'm not really sure how to differentiate these. Prior to Uber's existence I was annoyed by Taxicabs and still took them when I needed to go somewhere. Once Uber started existing I learned that a functional taxi app was infinitely better and started using that instead. If I had known a slick taxi app was something I needed pre-Uber, I could have invented Uber.
ChatGPT/generative AI is now incredibly useful to my life. Google got slopified before ChatGPT 3.5 was released, I was already appending "Reddit" onto every search. I had a demand for not-shit search and now ChatGPT provides that to me (among so many other things).
I use it at work all the time too for a variety of things (consulting). It's really, really useful and if every GenAI model was Thanos snapped out of existence tomorrow, my life would get more annoying and my work output per hour would decrease.
More options
Context Copy link
I still don’t understand the enshittification model.
There are plenty of reasons to degrade your user experience. Increasing revenue through ads or merch or predatory monetization. Decreasing costs by cutting complicated features and tech support. But the central examples of enshittification aren’t doing those things. They’re paying more to add features that people don’t want. To adopt patterns that don’t seem like they should make more money.
I mean, maybe I’m just wrong. Maybe spamming AI news articles on the lock screen really does get more people to buy Windows. But…why? How?
Feature bloat is for internal management and employees to make themselves look good to higher ups, and get promoted.
More options
Context Copy link
More options
Context Copy link
I think the models is less that those chatbots will be the face of the profit making for AI companies. Not true, I think the people using the bots now are unpaid trainers, not the future end users. Every issue that comes up now can be fixed once the bot gets a correction from the freeware users. But that’s not a very useful user base anyway. The best use case for such bots is actually business to business. Maybe Walmart wants to use it with its app to help customers find a product that fixes a problem they have, or can tell you where something is. They’d probably want to buy a license for incorporation of the bot into their app. Maybe Apple wants to replace their social media team with Open AI based solutions. Or the CEO of Tesla wants to use AI to suggest improvements to their car line. In those cases, getting a good useful bot would get them an effective and efficient solution probably worth a good deal of money to them (if for no other reason than it reduces headcount), and they will pay for it.
I just encountered a business whose product is "AI renter harassment". Imagine a chatbot that pretends to be a person, and annoys your renters with frequent reminders that the rent is due, and then keeps hassling them for up to three months after move out!
Can't wait for the counter-offer, "AI creditor deflection".
Let the arms race begin…
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
AI platforms as they currently exist are also incredibly fungible. To the average end user there is minimal difference between chatgpt, claude, grok etc. To an enthusiast, the main difference is how much they censor, evade, or try to avoid controversial prompts (of which grok is by far the least annoying here). Burning billions on the cutting edge doesn't give you any lasting advantage against 11th hour entries who spend 1/10th the amount to produce something 90% as good at half the price to their customers. And the amount getting invested already implies capturing a substantial amount of consumer spending at some point in the future just to break even. (Already you want $30 per person on Earth per year).
Nvidia itself will probably be fine, though it depends on if the crash just hits the cutting edge or if its so severe it becomes hard to even cover the cost of serving prompts.
This factor has surprised me completely. The assumption was that any company that got an edge in AI would probably be able to use that edge to speed up its own improvements, and competitors would have to burn a TON of money to try to catch up. So the first mover advantage would be potentially insurmountable.
And its worked about that way in a lot of other industries. With Uber itself, sure there's Lyft and Waymo and a few other small competitors, but the network effects it achieved have kept it out in front, handily.
In the AI space, I guess the fact that its working entirely with bits rather than atoms means the cost of 'catching up' is much lower, especially once a particular trail has been blazed.
What this does seem to reveal is that the player placing bets are REALLY assuming that whomever wins is going to win REALLY BIG, big enough to justify all the previous burn across all the losing companies.
It is hard to imagine (for me) a world where more than, say, 3 AI companies are standing once all is said and done.
More options
Context Copy link
More options
Context Copy link
Almost every claim in this paragraph is incorrect. Every model Openai has trained was profitable, gpt3 and gpt4 both almost certainly made back their training cost. They are pouring an incredible amount more into R&D of course, but that's kind of the point, the market for this stuff is actually red hot. And I genuinely have no clue why you think NVidia only sells to one company, estimates put the amount of chips sold to microsoft(including openai) at
23% of nvidia sales, google at12%, amazon at13%, meta at11%, and Xai at around 10%.I don’t understand it, but there are a crazy number of tech companies purchasing calls to (outdated versions of?) GPT. The corporate market is definitely hot.
More options
Context Copy link
Can either of you actually point at some numbers with respect to profitability?
We don't have access to specific numbers. We know that GPT-3 cost somewhere around $5 million in compute to train and that openai's revenue in 2020 and 2021 while it and it's derivatives before 3.5 were their primary product that their revenue was $3.5 million and $28 million. As you get to the era of many competing models and then need to factor in what their actual margin on revenue is it becomes more muddled after that but their projected revenue for 2025 is $12 Billion and the trend looks exponential. Maybe adoption and usage falls off but the doom and gloom that they aren't finding buyers is just kind of nonsense.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Personally I think we're going to see (are seeing) a bifurcation in those who are hopelessly taken in/addicted to the AI's sway, and those who do see it more as a mere instrumental affordance for achieving certain goals, that they can still turn away from to enjoy real world activities and interactions.
I don't know which of these groups will end up being larger, but I'd bet that the AI-vulnerable are around 60% of the U.S. population.
This is a very nice dream, and maybe for a brief period of time it will be possible, but alas we are all but training data for the model so by running such a business via their systems, we'll be teaching the system how to run the businesses without us. I don't see how it ends up any other way.
Looking at social media, I predict the "AI-vulnerable" will be the massively larger group. Maybe the inability to even pretend there's a human connection will put people off, but I'm not counting on it.
I agree.
But I also am aware that we SEE the group that is addicted to social media, they're the ones you encounter on the social media sites. People who avoid it are, almost by definition, less legible to us here online. I could be underestimating how many people are able to switch off.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
To what extent is it or will it become possible or practical to run a homebrew jailbroken LLM on local hardware? That's the big question in my mind.
I'm late to the party, and I'm aware of it, in that I'm only just now using LLMs beyond a toy for research and education purposes. But essentially every day I'm aware there's an expiration date, that the product is just a few bad days for the SP500 from being enshittified. Whether that comes in the form of censorship and legal caution that makes it useless for my purposes, or in the form of pricing that makes it prohibitive, or commercialization and monetization in ways that make it unreliable (pay extra for your product to be recommended!), or optimization for it as people start to operate their products specifically to be seen and understood by LLMs. There's going to come a time when I can't just log into ChatGPT and get a good result, I'm sure the old timers are already complaining; and there's going to come a time when there isn't enough VC money sloshing around to fund a competitor like Grok that throws off shackles.
So at that point, can I or will I be able to operate a homebrew LLM for my personal and business purposes? I'm not handy enough to know how possible that currently is, or how user friendly, I'm at the level of "I can run a Linux machine but I'll need to look stuff up once a week or so."
I strongly endorse what @erwgv3g34 says below. You can, in theory, run a model far superior to GPT-4, and not that far from the SOTA, all on consumer hardware.
Of course, as he correctly points out, it's going to be expensive to host it on your personal hardware. Somewhere between used car and new car expensive, but people can and do buy cars. But not that difficult, if you're capable of following instructions. He's right that it makes more sense to just rent an H200 as and when needed, a while back I saw them going for below $2/h.
If you have an archived copy of a decent model, especially one fine tuned to remove censorship, there's little that can be done outside of totalitarian legal action to stop you from using it. That's far less likely than potential enshittification or censorship online.
Elon is a stubborn mf, and supremely wealthy. xAI is probably one of the companies most resilient to VC panic. Or look at DeepSeek, which is owned by a net profitable quant firm. They'd be relatively safe options to out last a market downturn.
The idea of Google being enshittified was incomprehensible when I was young. Clearly it isn't so anymore.
More options
Context Copy link
More options
Context Copy link
The strongest open source models aren't that far behind the strongest proprietary models; a year or two for LLMs, six months for text to image. You can see several open source models in the LMArena top 20, such as Qwen3, DeepSeek R1, Kimi K2, and GLM 4.5.
Problem is, those models are huge. Qwen3 is 235B, R1 is 685B, K2 is 1T, and GLM 4.5 is 358B. It would cost a fortune to get enough GPUs to have enough VRAM to run such locally.
Your best bet is to rent GPU time from an online company and run the models there.
More options
Context Copy link
I think that one aspect is the question which performance you actually require from the model.
A fundamental difference between free / open source software and open weight models is that for software, the bottleneck is mostly developer hours, while for models, it is computing power on highly specialized machines.
For software, there have been large fields of application where the best available options are open source, and that has been the case for decades -- for example, try even finding a browser whose engine is proprietary, these days. (Of course, there are also large fields where the best options are all proprietary, because no company considered it strategically important to have open source software, nor was it a fun project for nerds to play with, e.g. ERP software or video game engines.)
For LLMs, tens of billions of dollars worth of computing power have to be sacrificed to summon more powerful shoggoths forth from the void. For the most part, the business model of the AI companies which produce the most advanced models seems to be to sell access to it. If Llama or DeepSeek had happened to be more advanced than OpenAI's models, their owners would not have published their weights but charged for access. (The one company I can imagine funding large open-weight models would be Nvidia, as part of a commodize your complement strategy. But as long as no AI company manages to dominate the market, it is likely more lucrative to sell hardware to the various competitors than to try to run it yourself in the hope of enticing people to spend more on hardware than on model access instead.)
That being said, for a lot of applications there is little gain from running a cutting edge model. I may be nerdier than most, but even I would not care too much what fraction of IMO problems an AI girlfriend could solve.
Unfortunately, Nvidia sucks ass at making LLMs. Nemotron was a joke.
Mistral Nemo was great though.
Nemo? I know No One by that name.
Haven't tried it, unfortunately. I think it's still a poor showing that they needed another company to finish their work for them.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
The trick's that the same chips used to produce a model are also usable to run the model for someone else, and a lot of the technologies used to improve training has downstream benefits on inference or implementation improvements. Every AI vendor has its own complement to turn into a commodity.
More options
Context Copy link
More options
Context Copy link
Don't try to do your own local setup first.
There are host-your-own solutions; featherless, together (my choice), runpod, openrouter.
These all have privacy policies that are far better than the Big providers, but you are, fundamentally, sending your prompt (and data) to another computer to have it processed. There can be leaks, there can be man in the middles etc. Still, you aren't literally being used as a guinea pig like you are with OpenAI, Claude, Gemini etc.
For really serious / personal stuff, the answer is "wait." This is such an obvious market need that I am 100% confident we're going to see LLM-specific personal hardware at the $1500 or below price point in the next few years.
I'll look into that, thanks!
But philosophically, it's less personal interest that I'm talking about than it is a sort of market competition threat against enshittification. As long as its possible to homebrew at some level of effectiveness, OpenAI won't be able to completely ruin their product and still market it. Social networks enshittify because I can't make my own at home. In my mind I'd like to set up a local LLM more to learn the process for the inevitable future where I can't trust the commercial variants that have been enshittified. Inasmuch as I find LLMs useful, I want to have full control over it to protect myself from losing a tool that I find useful because it is under someone else's control. Hosting my own might be the intermediate step...
Yeah, if that's your goal, 100% go for a paid-to-host solution. I like TogetherAI because you can easily pull from a lot of stuff on HuggingFace and it's all pay-as-you-go. $25 will last quite a long time if you're in pure chat mode. If you're using an API to sling code at it, $25 will evaporate quickly.
There will be no fundamental enshitiffication. The crucial IP feature of LLMs is that their architecture is pretty easy to grok for anyone with a basic ML background. The cost comes in 1) Collecting and preparing training data and 2) training the models - especially big ones - at scale. There's not really a lot of secret sauce in the model itself.
The secret sauce, to the extent it is real, is what happens during inference time. This can be system prompts or other intermediate prompts that are both visable and not to the user. We also know that all of the Big AI firms are now using multiple models at once to "route" different parts of the user query. I also heavily suspect that there's a middle layer that does some sort of context management to create a proto "memory." What do you need to build a system like this? The same thing you need for any software system; a bunch of talented engineers with a defined vision for the product and some coordination overhead. That is difficult to replicate.
The question is how much does it matter? We're going to be able to run open source versions of very good LLMs on our phones one day (and, maybe, one day, have actually private phones!). Will those private LLMs be so much worse in terms of performance than the Big AI system-of-systems in place? Hard to say. They're making $100 billion bets on it right now.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Main thing about running LLMs locally is that GPU VRAM is probably the limiting factor in most cases. 3090/4090 with 24GB of VRAM or 5090 with 32GB are OK, but for hobbyists really into it, they've made custom GPUs like soldering 24GB more VRAM to a 3090, or using dual-3090 for 48GB of VRAM (spreading it out over multiple GPUs works for LLMs). The speed differences matter, of course, but 3090 with 48GB of VRAM will likely enable more than 5090 with 32GB due to being able to fit bigger models.
More options
Context Copy link
It's surprisingly easy if you already have a good GPU: https://ollama.com/
The real question is whether frontier models remain proprietary for the foreseeable future or if there's really just no moat and everyone will be able to run open models on consumer hardware instead of having to pay for data center compute.
My gut says decentralization is likely because I'm pattern matching this on the rise and not-quite fall of IBM, but I'm biased.
More options
Context Copy link
You can run a homebrew LLM (7 billion parameters / 12bn / even 24bn) for nothing on any decent PC with a GPU. It will be lucid but really pretty dim.
You can rent a RunPod server pay-as-you-go and run a 70bn / 105bn / 200bn model for a few dollars an hour. It will be smarter but not quite GPT / Claude level. You can also pay 25 USD a month for Featherless, which is the same thing but less under your control.
Or you pay for the APIs.
I've run a few 4-byte quantized 70B models on a small home gaming machine pretty easily (Intel i3-13100, nVidia 3060, 48GB RAM). It's a little slow -- non-MoE models can go into a couple tokens-per-second, and MoE seldom go higher than 10 tps -- but there are some set-and-forget use cases where the difference isn't a big deal, and you're just a couple GPU generations away from it going faster.
Both ollama and lmstudio work pretty easy 'out-of-the-box'. You can dive down the deep end if you want, and start moving to vllm or others, but it's far from necessary for most use cases.
Scaling up without waiting can get expensive, though. Used server GPUs aren't ludicrously expensive and buy you more RAM (and thus more context/bigger models), but they're slower than current-gen (or even two-gens-old) gaming cards. Trying to break past 24GB VRAM gets into the kilobucks range, and while nVidia says that they're dropping a card that will change that in a few months, it'll probably be seconds before it get scalped. For LLMs, processing power is lower priority than total memory bandwidth, so you can get away with some goofy options like the Ryzen Max series and run 128 GB ""VRAM"" with a CPU, but setup is more annoying and throughput suffers a lot, and it's still not cheap.
I have a Ryzen Max 395+ with 128GB RAM and it runs pretty well; granted I don't use it for LLMs but the humongous amount of RAM is useful more often than one might think.
From what I've read on HN and Twitter, the Ryzen Maxs can run larger LLMs, but not very fast. The throughput for tokens/s is single digits at times.
More options
Context Copy link
God damn man! What do you use it for if not llms?
Prototyping things which I really should be pushing off to a cluster for computation but I can't be bothered with doing the SSHing. When it comes to prototyping not needing to do the extra steps of moving my latest version of the code over to the cluster saves me around 10-15 seconds for each iteration, which is enough of an annoyance I'm happy to pay to avoid it. My machine is the HP laptop that comes with AI MAX 395 so it's my main personal device. Not needing to worry at all about RAM management for my own code has been surprisingly freeing.
That and of course playing DOTA with all settings set to maximum, it's amazing how I can get Desktop quality DOTA performance on my laptop today.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
What consumer backlash? For every reddit post about how AI is terrible, there are probably 100 people who are enjoying using ChatGPT, find it convenient, 10 people gooning to physically impossible pornography or degen ERP, 30 people enjoying the funny AI cat video that chops up and cooks other animals...
Many consumers say they hate Facebook ad-slop, Microsoft's persistent disregard for consent with Windows updates, Google spying on you and the crap Google algorithm, Tiktok brainrot short form video.
But these companies are making huge amounts of money. Trump and Larry Ellison aren't trying to secure Tiktok because short form video is unpopular, quite the opposite. Tiktok is making billions. It's high-status to say Tiktok is slop, I think portrait video was a mistake and repress youtube shorts furiously whenever I see it... but it's clearly very popular.
If we just read what consumers say and what the media highlights, we'd assume that Facebook was near bankruptcy. They're constantly getting fined, called into congress, delete facebook and hit the gym is an ancient meme at this point, billions shoveled into VR with no returns, their Llama AI models have been shit, everyone thinks of it as a website for boomers, people blame them for everything from loneliness to anorexia to genocide in Myanmar... But no, Facebook is making gigantic profits and their profits are rising fast. Money >>> talk. AI is paying off massively for Facebook in the unsexy ad algorithms that nobody talks about. They can easily pay for these huge capital investments, profits are up even as they spend more and more!
OpenAI is making 42% margins on inference, they want to grow the inference market and this is a natural route to take. 42% margins when they have such a big free-tier is insane. Research is the expensive part, not inference. AI research is clearly important, Facebook and Tiktok prove there's fortunes to be made. LLMs and generative AI are also lucrative, only they're resource-intensive for R&D compared to deep learning. But the promise of mechanizing intellectual labour is incredibly seductive, the big players are not going to slow down here. The market for LLMs is awkward because they're so immensely powerful and valuable that there's furious competition driving prices down, while the market is also still immature and yet to be developed so revenue is starting off small (but growing very quickly).
Unlimited weird porn and anime cat videos are going to accelerate techno-capital, not slow things down.
You and I had a version of this conversation last month, I think. And I know I still think you're wrong.
First, what's your source for the 42% margin on inference? Second, forget about the source. I'll make the same point I did before; separating out margins across different segments of your business is a great way to obfuscate a failing business. This is the same as saying "Why am I broke? I never, ever break my budget for food! And don't worry about this $10k a month I'm spending at the casino." If research and model development costs more than your inference margins (and your inference margins aren't somehow greater than 100%) you're losing money.
No, because at any point you can just... Stop doing R&D and happily sell tokens at a 42% margin.
More options
Context Copy link
https://x.com/kimmonismus/status/1973120175470944615
Says it right here, source is paywalled article.
Revenue: $4.3B in H1 2025 Cost of Revenue: $2.5B in H1 2025. Do the maths, margins are 42%.
I never see a source for these claims that inference costs are higher than what is charged to customers but people keep saying it, in spite of the fact that it violates basic rules of economics.
Companies are allowed to make losses investing in R&D for new products. This profit-brained beancounter mindset is why the West has been declining, in a nutshell. If you don't invest aggressively, how are you going to innovate? R&D and capital deepening is the source of prosperity.
It's very reasonable to expect there will be all kinds of lucrative offshoots from LLM research, just like how deep learning is staggeringly, ludicrously profitable, that's why these big companies are investing so much. The technology is fundamentally very promising and is worth investing in.
From your own link:
And
I think the 42% margin you're talking about is the Server rental cost which that twitter post calls "cost of revenue" but ... that's not how operating margin works.
I'm not talking about operating margin, I'm talking about inference margin, where the server rental is the cost of production.
The operating loss is due to research. Research is the basis of all modern technology and companies should be doing more of it. It's inappropriate to compare it to casino spending like in your above comment.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Agreed. My feeling is that OpenAI is burning through venture capital faster than any company in history. If they are selling inference for more than what it costs them in chip deprecation and electricity, that is only because they have a moat in the form of good models. If they ever decide to stop burning through money to make more powerful models, they will quickly find that without that moat they will only be able to charge the same as any rent-a-chip company.
For the most part, the investors do not care about OpenAI being able to sell anything at a profit in 2025. They are simply purchasing stakes in the ASI race. If OpenAI wins that race and alignment just happens, they will be the nobility under god-emperor Altman. If LLM progress plateaus and the singularity fizzles out, their stock will likely crash like the internet companies in the dot-com bubble.
User interface, agentic behaviors, and (down the road) deep research tooling matters, a lot. It's possible to set this stuff up even as a single dedicated user, but there's reason that approximately zero people have home deer-flow setups, and there's vast economies of scale once you do configure them
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
The natural selection of memes on this platform will happen at an incredible pace. TikTok meme spread is bounded by the need to actually film your mutation of the meme in real life: you have to actually learn the dance, you have to actually bake that feta, you have to actually prepare and record your funny take on the ice water face bath guy.
With Sora 2 you no longer have to do this. Of course, faking a real-world activity will be seen as lame (until it's ironic and cool again), but memetic storms (if OpenAI doesn't fuck with the recommendations algorithm) will be sudden and violent.
Hahahaha thanks for the memory, this was pretty good pasta I will say
More options
Context Copy link
More options
Context Copy link
This is the worst that AI video gen is ever going to be.
Which is good, because that means that there's every chance that quality will improve until it isn't slop anymore. I look forward to actually decent and watchable AI movies, TV shows and animation. We'll be able to prompt our way to making whatever our hearts desire. Even if the cost doesn't become entirely trivial for a full-length project, as long as it's brought down to mere thousands or tens of thousands of dollars, a ton of talented auteurs will be able to bring their visions to life. Will Sturgeon's law still hold? Probably, but we'll go from 99.9% unwatchable slop to a happy 90% soon enough.
And it's bad, because this is the least viral and compulsively watchable AI generated media will ever be, including shortform "reels". I'm not overly worried, I have some semblance of taste, but eventually the normies will get hooked. And I mean the average person, not people with dementia. If it gets me, it'll have to work for it, and if I'm being presented with content more interesting and high quality than typical mainstream media, I don't really think that's a bad thing. I already have little interest in most human visual output.
I think you missed the second half of my original post. I'm angry because we're using what absolutely is epoch changing technology to re-run the product format of yesterday. I am livid that Sam Altman's lizard brain went "Guys! YouTube plus TikTok Two-Point-Oh!" and all of the sycophants in the room started barking and clapping like seals in approval.
Because even one of the evil empires, Google, is trying to use AI to make textbooks more accessible. And this is to say nothing of AI's ability to fundamentally extinguish do nothing fake e-mail jobs and turn motivated individuals into motivated individuals who can work for themselves on things they want to instead of trading 40 hours at PeneTrode inc to pay their bills.
"I'm not worried about bad CONSOOOM, because the future has so much better CONSOOOOM!" This is no future at all. @2rafa nailed it with the Wall-E references and @IGI-111 brought up a worthy pondering point from the Matrix. Purpose in life is not the pursuit - or even attainment! - of pure pleasure. Especially when that pure pleasure is actually a pretty paltry fake world.
That's not the viewpoint I hold, though I don't blame you for assuming I do, given that I restricted myself to pointing out that addiction/consumption of mindless slop ceases to be remotely as bad when it's arguably no longer slop or mindless.
I'm not a pure hedonist. Far from it. If all I wanted was unlimited pleasure, I'd await the invention of super-heroin or some kind of brain implant that could ensure a perfect nirvana of lotus-consumption. I don't want that. Keep that shit away from me with a gun.
My values are more aligned with a preference utilitarianism that includes complex goals like discovery, creation, and understanding. The problem is that I believe the opportunity for baseline humans to pursue these goals through what we traditionally call "work" is ending.
In other words, as far as I can tell, there will be no meaningful work for baseline humans to do in not very much time. Make-work? Sure, that's always a possibility. I would deeply resent such an imposition, I reserve the right to spend my time in Utopia doing what I care about.
For the people who crave work? I'm not opposed to them seeking out work.
Notice that I specifically mentioned baseline humans. That is an important distinction. The average nag might be rather useless today, but people will pay money for a thoroughbred race horse that can win races. People who wish to remain economically relevant might well need to upgrade their cognition to stand a chance of competing with AGI or ASI, and to an enormous degree to hope to stand a chance. The question of whether there would be anything recognizably human about them at the end of process is an open one.
I have previously noted that I'm an actual transhumanist, and far from content with the limitations of my current form. I have few qualms about uploading my mind into a Matrioshka Brain or an AWS Dyson Swarm Mk. 0.5, and extending my cognitive capabilities. That might let me do something actually useful with my time, if I care to.
What I resent are people who don't care to make such changes, even in theory, and then declaim the fact that they're useless - and worse - want to force the rest of us to dig ditches to accompany them. To think of an analogy today, imagine someone who has ADHD and can't do anything useful, bemoaning the fact that there's nothing they can do. Such a person can pop a stimulant and find that for a relatively low cost, they can be functional and useful. I have ADHD, and I pop the pills. Not because I literally can't achieve anything (thank God it's not that bad), but because I need it to be competitive and actually make something of myself, which I think I have. If you have myopia, then you get glasses, not welfare. If you're congenitally or accidentally blind and we can't fix that, I suppose I can't blame you for asking. If there is no real work, and we can afford to let you do whatever you want, be my guest.
This is why when @2rafa says:
I begin to pull my hair out. That is such an incredibly myopic stance, a nanny-state run-wild, forcing us to waste our time because of a fear that we'll waste it in some other, less aesthetically pleasing way. I call it myopic because I've pointed out that there are clear solutions.
I won't call you sentimental, but you're clearly being tyrannical. That's none of our business. The current analogues to wire-heading are awful because they cause clear harm to both the person indulging in them and the rest of us. A heroin addict doesn't have the decency to keep it to themselves. They resort to criminality, and make their problem one for the rest of us. Current civilization requires that humans work, and work hard. That will not be true in the future.
My central thesis is that we are approaching an event horizon beyond which meaningful, economically relevant work for baseline humans will be an oxymoron. This is a technical prediction and not a value judgment. Once AGI can perform any given cognitive task more effectively and at a lower cost than a person, the economic rationale for most/all human labor will evaporate. The distinction is crucial. People who want to remain relevant will not be competing against a slightly better human, but against a system capable of outperforming the entire human race on any given metric. The logical path forward in such a scenario is not to try harder without upgrading ourselves, but to try harder after a full overhaul (or be happy with UBI and hobbies). This leads directly to cognitive enhancement. I agree that "we" (by which I mean the subset of people who want to do anything meaningful) need a solution, but I disagree on what that solution is.
Your reliance on State Paternalism would have us dragging train-autists out of their homes to dig holes and fill them in, and forcing Tumblr fanfic readers to sign up for MFAs and get to work in the mines writing the next Great American Novel. You might be okay with that, I'm not. I trust myself to find something I would be happy to do when I don't have to do anything.
At the end of the day, I find myself in profound disagreement, even as I understand the benevolent impulse to save people from anomie. The proposal appears to be a form of state paternalism that diagnoses a problem of the spirit and tries to solve it with a blunt political instrument. It pathologizes leisure and seeks to enforce a specific, perhaps aesthetically pleasing, vision of a "fulfilling life" onto the entire population. This seems like a cure far worse than the disease.
I ask for tolerance. Not all hobbies have to make sense. Not all hobbies or forms of recreation do, even at present. That's fine. I have no interest in pure wireheading or climbing into a thinly veiled Skinner Box, but I have little interest in stopping others from doing so. There is a drastic difference between disapproving of such behavior, and then enforcing your disapproval through force or the government. The latter is what you're advocating for.
Your observation that many of today's idle rich are unhappy is an interesting data point, but I believe it is a misapplied analogy. It points to a bug in the current version of human psychological firmware, not a fundamental truth about idleness. This wasn't a bug for all of recorded history, and isn't today, but our reward systems are calibrated for a world of scarcity and struggle. The solution is not to recreate artificial scarcity in perpetuity, which is a monstrously inefficient and coercive workaround. The solution is to fix the bug.
That is an engineering problem. That is an artifact of current baseline human cognition and psychology. What sacrifices do you actually have the right to demand people to make, when there is no actual need for such sacrifices?
Finally, a related question for @IGI-111, since you approach this from a religious framework. Many conceptions of Heaven describe it as the ultimate post-scarcity utopia where there is no need for toil. How does you solve the problem of meaning in that context? Is purpose in the afterlife derived from labor, or from a state of contemplation and communion with the divine? If it is the latter, this would suggest that a state of being without work, as we understand it, can in fact be the ultimate good. It's a shame that I have little confidence in the light of God to solve our problems, and I have to settle for arguing for merely technological and engineering solutions to the problems we face down in the dirt.
Say your son becomes a heroin addict. All he does all day is get high and waste his life. He has UBI and lives in social housing, so his financial situation isn't precarious. He has a sophisticated future chip implant for dosage that always just stops him from ingesting a fatal dose. He never goes outside, and he has a dark web supplier who delivers by mail (easily affordable on his UBI check), so he's no aesthetic or criminal or otherwise problem for the public.
Would you be happy for him? Would you be proud of him? Would you care about him doing that with his life?
Call it empathy, but I do care, I do consider it my business, and I have interest in stopping others from doing so, whether they are my family or my community or my countrymen and women or just the wider human race. There are a lot of decent people out there who deserve better than a life as an addict, having never created anything. It doesn't have to be "of value", this isn't an economic question. In a post-scarcity world I think it better for people to play sports, socialize in person, work with their hands, craft, cook, construct, have children, raise them, fall in love (with each other). Is that an aesthetic preference? Sure.
But it's also an empathetic one. Most people don't have the agency and the time preference setup to be able to autonomously turn off the dopamine pump. We don't know if we would, which is why they tell you never to try heroin. Even plenty of people who want to quit tobacco never make it, even if they really want to. It seems to me supremely arrogant to assume that so many people, not least yourself, have that degree of control over their own happiness, their own destiny. This is likely a philosophical difference between us.
Sometimes people need to be saved from themselves. You acknowledge this in the way in which we often discuss it, homeless drug addicts threatening people on subways, feral children who never finish school, but it's not just about the negative externalities, not just about the fact that it makes things harder for me, or for you. It's about them too, and about us, because while we maintain a work ethic and some discipline today, who knows how that will hold up in the face of the entertainment that is coming?
Sure, maybe we can rewire ourselves to inject fake memories of an entire life well-lived, winters by the warm hearth, homes built by hand, children's clothes sewn, religious services attended, adventures had, and then cheat ourselves to fulfilment that way. But even that is a little sad, when so much of the promise of automated abundance is that finally we can take a step back (with our hopefully longer lifespans) and do all of these things. And yes, I think forcing people to do them is better, and will make them happier, than allowing them to press the pleasure button all day, which the vast majority of people, quite possibly pretty much everyone, will do if you let them and if the button is good enough - which you and I both agree it probably will be.
My preference, by the way, would be for a status hierarchy with a baseline that allows for real wireheaders to do what they do, but which provides superior status and more resources to those who embrace a more fulfilling, communal and meaningful existence, as defined in various but strongly overlapping ways by philosophers going back to Socrates.
Well, shit. Why didn't you say so at the start? Here I was going through your arguments paragraph by paragraph to be hit by this. If you want "softer" forms such as societal disapproval, or exclusion from status hierarchies, I have no beef with that.
I have no problem with a world where people who pursue pro-social, creative, or communal lives are rewarded with higher status. I can even envision a system, like a universal Patreon, where people voluntarily grant portions of their resources to others whose 'work' they admire. What I cannot accept is the state, the provider of the UBI baseline, making those judgments. The baseline must remain unconditional. Your neighbors are free to shun you for wireheading; the government is not.
The rest of my reply was written earlier, I'll leave it in since it expands on my understanding of your thought experiments and objections therein.
You ask if I would be happy, proud, or even indifferent if my son spent his days in a haze of safe, state-sponsored heroin, no threat to anyone, just slowly dissolving into the furniture.
The honest answer is: I would not be happy. I would not be proud. Of course I would care. I would be upset, and I think almost anyone would. But the important part is why. As I grieve, I'd try many different things in an attempt to get him to change his ways. Those would include options like offering support, debate, cutting personal contacts, disinheriting him until he reconsiders and so on.
These are, as far as I'm concerned, legitimate forms of coercion. What I wouldn't do is ask for sweeping legislation against his behavior.
Many children do things that violate the norms and desires of their parents. Pre-marital sex, turning away from church, the occasional bender. Quitting the family job, dropping out of grad school to be a Bohemian layabout. Yet society does not, as a general rule, allow parents to bring about the use of force (at least after adulthood). Not even when such a breach represents immense pain and suffering from the parents.
I will grant that parents have the right to try and use legislation, everyone can vote after all, we hope that consensus establishes sanity. You can vote for your view of the future, and I'll vote for mine. If it gets too abhorrent, I'll vote with my feet.
So I would care, but I would not do everything within my power. Such options would include breaking into his house and taking him away at gunpoint, or paying someone to hack his system, and yes, a general prohibition against drug use or wireheading.
I would be open to at least making falling into such a state by accident immensely difficult or impossible, such an irreversible act might well require informed consent and the signing of strict waivers.
Were I do such a thing, I struggle to find a clean break between pure chemical or electrical stimulation and what most would consider productive or ennobling ways to engage with the world. It's a quantitative, not a qualitative difference in the world to come. Your work - at present - gives meaning and value because it makes a tangible difference. We agree that is unlikely to hold indefinitely.
There is also a quantitative blur here. In a world where baseline labor is mostly obsolete, the difference between heroin, hyper-tailored VR, and a perfectly gamified social platform is often intensity and bandwidth. If you legislate against one flavor of narrow reward, you will spend the next decade chasing reskins. Better to aim at the decision process.
The instinct to care, to want the best for everyone, is laudable. The issue is when your definition of caring and "best" faces earnest disagreement. Would you enjoy it if, out of genuine concern for the fate of your immortal soul, contraception was banned? If a Jihad was called to force you to repent at gunpoint and find solace in the arms of Allah?
Claiming the right to choose for yourself, as far as I'm concerned, necessitates the symmetrical extension of the same right to others. In this scenario, my son is an adult in his right mind. My hands are tied, and I am the one tying them.
You are correct that it possibly represents a philosophical or fundamental values difference. I would hope it doesn't, which is why I'm attempting to show you that there is genuine risk of freedoms you hold dear being taken away from you if you endorse a nanny-state that stops people from spending too much time playing video games.
You are conflating your personal, aesthetic, and empathetic preference for a certain kind of life with a universal moral imperative. My objection is not to your preference. Your vision of a fulfilling post-scarcity life, filled with sports, community, craft, and family, sounds lovely. I might even choose it for myself. My objection is to the word "forced" and to the belief that your empathy grants you the right to make that choice for others.
The majority of people don't do heroin, don't gamble away their money or play so much RDR2 that they lose their jobs. Most adults can be trusted to that extent. I think the odds of my hypothetical son doing something like that are low, and they wouldn't remotely justify the breach of liberty it would take to categorically rule it out.
In general, I wish to treat myself like a trustworthy adult who can make my own choices and live with them. I think that's true for most people. Where it's not, say by dint of mental illness or cognitive dysfunction, I'm happy to ask that they be treated, including by force. But I don't want to shut down churches because they're a front for wireheading in heaven (one that's making false promises to boot).
You suggest that most people lack the agency to resist a sufficiently advanced dopamine pump. You cite tobacco addiction. This is a fair point, and the behavioral psychologists would call it a problem of akrasia, or executive dysfunction. We see it everywhere. People eat junk food when they want to be healthy, they procrastinate when they want to be productive. I am okay with making it harder to do dumb and personally counterproductive things (along the lines of wait times, waivers and all of that). I am against a "utopia" that only lets me choose among good options, especially when the value of good is not set by me but by committee.
Your proposed solution is a kind of societal pre-commitment strategy, a paternalistic structure that saves us from our own predictable irrationality. You want to remove the button.
I'd rather not build a cage around the button, but upgrade the person pressing it. Then they can choose for themselves.
MBIC, in your own story you had to posit SAMSARA resulting in successful total legal abolition of too-advanced AI, aliens waging an extra-dimensional Butlerian Jihad, and widespread, spontaneous superpowers, including non-causal precognition, just to carve out a place for a human protagonist to matter in a setting where AI was a fully valid tech tree.
Do you really not see the concern? Do you want to be a permanent heroin wirehead?
I'll address this first, since it's a general interest question rather than a screed about my pet worldbuilding project.
I'm not sure why you're asking me this question! I've made it clear that I have negative interest in being a heroin wirehead. I will actively fight anyone trying to put me into that state without my consent.
My actual plan (modulo not dying, and having resources at my disposal) is closer to continuously upgrading my physical and cognitive capabilities so I can be independent. I don't want to have to rely on AGI to make my decisions, or on charity/UBI.
This may or may not be possible, or it may turn out to require compromises. Maybe human minds cannot remain recognizably human, or retain human values, when scaled far enough to compete. At that point, I will have to reconcile myself to the fact that all I can do is make effective use of my leisure time.
I could write better novels, make music, play video games, argue on the Motte 2.0, or dive into deep-immersion VR. Or enjoy whatever entertainment a singularity society provides. What I would like to make clear is that none of this is productive, in the strict sense. I would be doing these things because I want to, not because I have to, in the manner in which I currently exchange my cognitive and physical labor for goods and services.
In other words, everything will be a hobby, not a job. At least for baseliners.
At that point, I have no interest in dictating the way other people spend their time. I want to do cool things, some might want super-heroin. Good for them. I refuse to join them.
At the same time, I see attempts to create "meaningful" work for baseline humans as entirely pointless, and hopeless, and definitely counterproductive. Imposing that, instead of making it an option? Even worse.
You've got it the other way around: the main driving impetus behind all of those ~~ass-pulls~~ scenario-defining characteristics/plot contrivances is less that I wanted humans to matter, and more that I know it is very difficult to write protagonists or characters who are more intelligent than me, the writer. Also, I really wanted to write a hard sci-fi superhero novel.
Yudkowsky discusses the difficulties at length; he's got a bunch of essays on the topic:
https://yudkowsky.tumblr.com/writing/level3intelligent
I don't believe I can write detailed superintelligent characters. [1] What a mere human can do is posit certain constraints and then discuss outcomes. Also, fiction, or at least fiction that most people want to read (or that I want to write), usually relies on relatively relatable and understandable protagonists and stakes. My writing operates under those constraints; if I were writing with perfect rigor, the result would be closer to AI 2027.
(Also, in the story, it's a known fact that the universe is a simulation, of many, running inside a hypercomputer, by which I mean a system capable of hypercomputation and not just a really big supercomputer.)
It's not really meant to be a rigorous commentary on plausible futures, though it grapples with the meaning of agency and human relevance in a world of superintelligence. I do not expect that the reasons for humans/a human to be relevant within the fictional narrative hold in reality. Maybe we'll actually nuke hyperscale datacenters if we suspect hard takeoff; that's about as far as it goes. Reality does not involve superpowers.
It is not impossible to write a much more rigorous analysis. Look at Orion's Arm, or Accelerando. However, in the latter, the humans end up as entirely passive protagonists. I didn't want to write a novel about some human refugee watching the construction of a Dyson swarm by hyperintelligent mind-uploaded shrimp. Hence my novel and its choices.
[1] For example, we have strong reasons to believe that a Von Neumann self-replicator is physically possible, and we have an existence proof in the form of human civilization on Earth. I think it's fair to postulate that an AGI/ASI could make swarms of them and begin converting the galaxy, without me being able to actually write an engineering white paper on the details of the machine. That's for Greg Egan, and I'm not as competent.
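To make the scale concrete, here's a back-of-the-envelope sketch. Every number in it (replication time, probe speed, star count) is an illustrative assumption on my part, not an engineering claim:

```python
# Back-of-the-envelope growth of a self-replicating probe swarm.
# All parameters are illustrative assumptions, not engineering claims.
import math

REPLICATION_TIME_YEARS = 50   # assumed time for one probe to build a copy
TARGET_COUNT = 4e11           # rough star count of the Milky Way

# Unchecked exponential doubling: n generations yield 2**n probes.
generations = math.ceil(math.log2(TARGET_COUNT))
print(f"Generations needed: {generations}")                                 # ~39
print(f"Replication phase: ~{generations * REPLICATION_TIME_YEARS} years")  # ~1,950

# Travel time dominates: at 0.1c, crossing a ~100,000-light-year galaxy
# takes ~1,000,000 years, dwarfing the replication phase entirely.
print(f"Transit at 0.1c: ~{100_000 / 0.1:,.0f} years")
```

The point of the toy numbers is just that the manufacturing phase is negligible next to transit time; the galaxy gets converted on a timescale set by travel, not by replication.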
I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow takeoff. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by a super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us thoroughly obsolete long before we could adjust to the first generation of upgrades.
And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, and it would be neat if the souls could be transplanted to awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain. We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.
But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources? Your brain in a jar, at the mercy of a shoggoth that is infinitely smarter and more powerful than you, is the most total form of slavery that has ever been posited - and we would all of us be economically non-viable slaves.
You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.
In most of the scenarios, there's literally nothing I can do! Which is why I don't worry about them more than I can help. However, and this might shock people given how much I talk about AI x-risk, I think the odds of it directly killing us are "only" ~20%, which leaves a lot of probability mass for Good Endings.
AI can be genuinely transformative. It might unlock technological marvels, and in its absence, it might take us ages to climb up the tech tree, or figure out other ways to augment our cognition. It's not that we can't do that at all by ourselves, I think a purely baseline civilization can, over time, get working BCIs, build Dyson Swarms and conquer the lightcone. It'll just take waaaay longer, and in the meantime those of us currently around might die.
However:
I think there's plenty of room for slow cognitive self-improvement (or externally aided improvement). I think it's entirely plausible that there are mechanisms I might understand that would give me a few IQ points without altering my consciousness too much, while equipping me to understand what's on the next rung of the ladder. So on till I'm a godlike consciousness.
Then there's all the fuckery you can do with uploads. I might have a backup/fork that's the alpha tester for new enhancements (I guess we draw straws), with the option to rollback. Or I might ask the smartest humans around, the ones that seem sane. Or the sanest transhumans. Or another AGI, assuming a non-singleton scenario.
I'm the evolving pattern within the meat, which is a very different thing from just the constituent atoms or a "soul". I identify with the hypothetical version of me inside a computer the way you might identify a digital scan of a cherished VHS tape with the tape itself: the physical tape doesn't matter, the video does. I see no reason we can't also simulate the chemical influences on cognition to arbitrary accuracy; that just increases the overhead, and we can probably cut corners at the level of specific dopamine receptors without screwing things up too much.
If you want an exhaustive take on my understanding of identity, I have a full writeup:
https://www.themotte.org/post/3094/culture-war-roundup-for-the-week/362713?context=8#context
Some might argue that this has already happened to us, given the birth-rate crisis. But I really don't see a more advanced civilization struggling to reproduce itself. A biological one would invent artificial wombs; a digital one would fork or create new minds de novo. We exist in an awkward interlude where we need to fuck our way out of the problem but can't find the fucking solution, pun intended.
Isn't that the whole point of Alignment? We want an "AI overlord" that is genuinely benevolent, and which wants to take care of us. That's the difference between a loving pet owner and someone who can't shoot their yappy dog because of PETA. Now, ideally, I'd want AI to be less an overlord and more of a superintelligent assistant, but the former isn't really that bad if they're looking out for us.
My idealized solution is to try and keep up. I fully recognize that might not be a possibility. What else can we really do, other than go on a Butlerian Jihad? I don't think things are quite that bad, yet, and I'm balancing the risk against the reward that aligned ASI might bring.
Quite possibly! Which is why writing one would be redundant. Most of us can do little more than cross our fingers and hope that things work out in the end. If not, hey, death will probably be quick.
I don't see any reason for optimism here. Digital intelligence built as such from the ground up will have an insurmountable advantage over scanned biological intelligence. It's like trying to build a horse-piloted mecha that can keep up with a car. If you actually want to optimize results, step 1 is ditching the horse.
In which case, yes. I'd rather Butlerian Jihad.
I feel I need to point out here that despite being a Traditionalist I am not, in fact, a Christian. So my conception of purpose is certainly not going to neatly map to any conception of Heaven, especially as I regard the nature of Heaven, much like that of God, to not be entirely fathomable.
I don't see the need to call on God to attack the perfectibility of man's material conditions, however, and need only point at the very material consequences of large-scale Hegelian attempts.
Why is it evil to immanentize the eschaton?
In its process it is evil, because it liberates one from morality and requires imperfect man to shoulder the moral burdens of God, having killed him. One need only look at the 20th century to see how bad man is at being his own final moral authority. Mass deaths and the most gruesome and abject of conditions await.
In its end it is evil, because paradoxically it seeks to reduce man to a passive nihilism that only seeks comfort and security, unable to self-actualize any sort of ethos, and by way of consequence can only bring about quiet suicidal resignation. Who would have children, or any sort of investment in the future, in a society that perfectly caters to all their material needs? Without struggle, what need have we of motivation itself?
Mass anomie is not desirable, nor is it, I hope, achievable.
I appreciate the clarification on your religious framework, though I confess I'm not entirely sure how Traditionalism divorced from Christianity solves the problems you're pointing at. That's a digression though. I'm aware we share very different belief systems.
Your invocation of 20th century horrors as evidence against improving material conditions strikes me as erroneous. Are you seriously positing that things were better before? That sectarian strife, genocide, jihad and crusade didn't exist? Modern ideologies caused more damage because they had better tooling. The Crusaders would have cluster-bombed Mecca if they could. The wars of religion between the Protestants and Catholics might have gone nuclear, had they possessed nukes.
The atrocities of communism and fascism weren't failures of abundance but of totalitarian systems imposing their vision through violence and coercion. They murdered millions who didn't fit their mold. That's precisely the opposite of what I'm advocating for. I want tolerance and freedom in a world of genuine abundance, not mandatory participation in someone else's utopian scheme enforced at gunpoint.
For a fleshed out look at such a world, you can look at Scott's writing on the Archipelago. I just want to consider what that would look like after a Singularity.
The claim that perfecting material conditions "liberates one from morality" is asserted but not demonstrated. Why would meeting our basic needs destroy ethical behavior? Scarcity is what historically drives immorality. People steal because they're hungry, fight wars over resources, exploit others for economic advantage. The novel experiment would be seeing what humans do when survival isn't so obviously a zero-sum game. Maybe we'd finally have the luxury of being genuinely moral rather than merely prudent.
And in most ways that matter, we are more moral in this age of relative abundance. We're certainly less violent or cruel, compared to almost any point in history.
Yes, many people with material security feel empty. On the other hand, the people without material security are fighting tooth and nail to acquire it, while suffering from hunger, homelessness and disease. I think the revealed preference is rather clear.
You ask who would have children in a society that perfectly caters to material needs. People already have children for non-economic reasons. They want families. They find meaning in nurturing life. They enjoy watching someone grow. If anything, removing economic pressure might mean people only have children when they genuinely want them, rather than as failed accident prevention or retirement insurance. That seems preferable to me. And after all, billionaires have children despite having no material wants. Past a certain inflection point, the wealthy have more children than the mere middle class.
As for the claim that struggle is necessary for motivation and self-actualization, I'm baffled. Do you believe artists only create because they're hungry? Scientists pursue knowledge just for grant money? Friendship matters solely because we need social networks for survival? The most meaningful human activities are precisely those we pursue for their own sake, divorced from material necessity. People are curious, creative, social, status-seeking even absent material need. Give someone unlimited time and resources and many will waste it, sure. Many others will learn languages, make art, explore ideas, build communities, compete in sports, pursue a thousand forms of engagement with life.
The real question isn't whether we need struggle for meaning. It's whether baseline humans will remain economically relevant against AGI, or whether cognitive enhancement becomes necessary to compete. That's a choice each individual should make, and a question we will have to confront as a society.
I have argued as much many times here and elsewhere. It seems obvious to me that human society was much healthier before the modern era. But of course my understanding of quality of life goes beyond mere material conditions, hence I suppose your disbelief.
I'm well acquainted with the worst impulses of humanity, but this is precisely why I refuse to give it ultimate moral authority over itself. You can point to all the sectarian conflict in the world, it simply doesn't compare to the levels of brutality brought about by modern ideologies since the French Revolution and the advent of social contract theory. We're talking orders of magnitude here.
Total means are just the natural outcome of total ends. Read Hannah Arendt.
What you want is entirely irrelevant in the face of what your longing actually produces.
It's such a frequent critique of the combination of utopia and consequentialism that I feel trite saying it once again, but it is true nonetheless: heaven on earth has infinite utility, therefore all is permitted in the name of its creation. Near infinite levels of suffering are the price of a perfect society free of suffering forever.
As a Traditionalist, I hold that history has no moral arc or direction, so it is easy for me to reject this argument. I see no reason to reject it for someone who believes in progress except for a disbelief in the efficacy of the method.
I prefer Dancers at the End of Time and That Hideous Strength, personally.
We know what mice do at least. Groom themselves and die. I see no reason to ascribe to ourselves a higher nature in this particular matter. I think you only do so in hubris.
There is no question that people want security and comfort. People have a revealed preference for heroin as well; I see no reason to assume that what people want is virtuous.
One of the main lessons that turn a child into an adult is precisely that goodness requires we not always follow our wants.
I think what the evolution of TFR says is that people's revealed preferences are the opposite of this, actually.
Yes. Appeals to our good nature are cope. People do things because they have to, and rationalize them as whims after the fact.
Good art in particular is famously and obviously a byproduct of struggle and constraint.
The virtue required to make good use of wealth can only be learned through scarcity.
Something given has no value.
Stopped reading after getting through this insanely elitist and eugenic passage. I'll restrict my comments before I run afoul of the mods.
People are far more than their cognitive and physical abilities.
Are any of the other components things that can be distinguished from non-reality in any way demonstrable to another person?
What. Or at least what a way to miss the point. The core conceit is that humans face a severe risk of being made economically obsolete, and how to grapple with that fact in a productive manner. Is it "elitist" or "eugenics" to posit that an ASI might put us out of a job? Sticking your head in the sand doesn't make the problem go away, it only gives you aspiration pneumonia.
Sure. I'm on record advocating for UBI; I think humans should have their needs catered to even in a posthuman future. Unfortunately, stating "people are far more than their cognitive and physical abilities" is neither a useful take nor a solution.
This is a pretty common perspective, and one that I just can't fully grok. Pleasure is pretty great, but pleasure is evidently not the only thing people go for. People will pursue all sorts of things when it comes to the capabilities of AI-generated media, and that will include pleasure, but that will also include things like meaning, depth, insight, or whatever other fancy-sounding term people like to use when they describe the value they get out of things they consider high art (I chose those terms because they apply to me with respect to works of fiction I consider great in some "high art" sense, such as The Shawshank Redemption or Crime and Punishment).
And the great potential I see for gen-AI is its ability to create these things without needing someone intelligent and eloquent and talented enough to think it through and put it together. A film of the quality of The Shawshank Redemption was only possible due to the hard work of many extremely talented individuals working together to express something meaningful.
And yet, the film is just a sequence of grids of pixels flashing 24 times a second in sync with audio, and there's no rule of the universe saying that AI couldn't have generated those pixels and those sound waves (more generically, the precise sequence of 1s and 0s contained in an 8K transfer or whatever onto digital media), and the film would be no less inspirational, no less insightful, no less meaningful if it had been created that way. Likewise if it had turned out that Dostoevsky was an avid juggler who wrote Crime and Punishment by labeling balls with letters, then adding a letter every time he dropped a ball during his practice sessions, this wouldn't change the meaning contained within the novel at all.
And I see no reason to believe that gen-AI won't be able to order pixels or letters in a way to create new works of fiction that also provide insights and meaning of similar depth, around other topics, merely by training on what sequences of letters or pixels cause people to respond with, "Wow, that's really meaningful and deep!" versus "Wow, that's such vapid slop!" and everything in between and around. Because I don't think there's anything magical happening in the mind when someone thinks of something, notices that their mind judges that thing as "meaningful, deep, inspiring, etc." and then writes it down with intent to convey that sense to others.
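For what it's worth, the setup described here is essentially preference learning: fit a model to human judgments, then use its scores to rank or steer generations. A toy sketch, with a made-up four-line corpus and a bag-of-words scorer standing in for a real encoder (none of this is anyone's actual pipeline):

```python
# Toy "meaningfulness" reward model: learn to score text from human
# judgments (1.0 = "really meaningful and deep", 0.0 = "vapid slop"),
# then rank candidate generations by predicted score.
import torch
import torch.nn as nn

corpus = [
    ("hope is a good thing maybe the best of things", 1.0),
    ("buy now limited offer click here", 0.0),
    ("suffering reveals what comfort conceals", 1.0),
    ("top ten tricks doctors hate", 0.0),
]

vocab = {w: i for i, w in enumerate(sorted({w for t, _ in corpus for w in t.split()}))}

def encode(text: str) -> torch.Tensor:
    # Bag-of-words vector; a real system would use a transformer encoder.
    v = torch.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

model = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(200):  # tiny dataset, so just loop over it repeatedly
    for text, label in corpus:
        opt.zero_grad()
        loss = loss_fn(model(encode(text)).squeeze(), torch.tensor(label))
        loss.backward()
        opt.step()

# Rank candidate generations by predicted "meaningfulness".
for c in ("suffering is the best of things", "click here doctors hate"):
    print(c, "->", round(float(model(encode(c))), 3))
```

Scale the corpus to millions of human reactions and swap the bag-of-words for a proper language model, and you have the standard reward-modeling recipe behind preference-tuned generation.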
And so instead of meaningful, deep, [insert other positive word here] works of art being limited by how few talented/skilled artists there are and how little time they have to produce art due to needing all that sleep and food, it'd be limited by how fast and common AI software and hardware are. These limits seem to be far looser than human ones, and so I see great hope for a future world where novel works of art that provide real, true, deep meaning will be as commonly encountered as a toilet or a microwave oven is today. There's potential downsides from being overexposed to too many works of art with too much meaning and depth and insight into the human condition, like how the downsides of social media and negative effects of overexposure to other people's approval and disapproval were both underestimated. But that doesn't seem like an awful problem to have.
This is a great comment and I thank you for it.
Let's be specific about three things, however: 1. LLMs/AI as a broad field. 2. Specific models. 3. The commercial marketing of those models.
LLMs/AI -- Go for it. As something close to a free speech absolutist, I want progress in all directions on this front at this level.
Specific models -- Go for it, again. I don't believe there is such a thing as an inherently "evil" model besides some embarrassingly obvious ones (i.e. one trained on pictures of cheese pizza - that's an internet euphemism for the most very bad thing, btw). I have no inherent issue with even "produce marketing slop only!" models. This is the level at which I think your comment operates -- yes, generative AI that could make a Shawshank-level film would be excellent!
The commercial marketing. This is the level at which I am raging. Not because I don't want to see more AI slop. I can already manage that: I just turn off my computer monitor and phone. I rage because you have OpenAI, which has tens of billions of dollars to burn, sprinting towards the lowest-common-denominator use for gen-AI, made even worse by the fact that it's attempting to replicate the attention-capture model of social media. They could be putting infinite Dostoevsky in your pocket, but they are actively choosing not to. That's the contemptible feature for me. Like my previous comment stated, even Google is going "hey, maybe let's try to make dense textbooks more accessible?" You can draw a straight-line path from that to "I want to read Dostoevsky, but I find it hard; hey RussianNovelistGPT, can you explain Raskolnikov to me?"
But, again, the median appetite seems to be a re-hash of attention-economy capture processes. I am more optimistic about Anthropic, because they seem to be doubling down on using Claude to build agents and to make coding open to people who don't code. But I also worry that will turn into a bunch of MBA types re-building their own shitty versions of Salesforce and pitching it to their boss as a "one-man AI project to synergize all of the KPIs!"
This is some perfect-world thinking, but I want to see the $100 bn of AI spend go to a company that's trying to develop new materials to help humanity economically escape the gravity well (and, no, this is not Elon and xAI). Or some AI company that actually has a non-vaporware approach to analyzing the big diseases that are responsible for the most suffering and death on earth. I'll stop here before I actually veer into "why can't all the good things be!" territory. My point remains: we're selling out early on AI because the charlatans by the bay captured a bunch of money and are re-plowing it into their business models from the 2000s and 2010s. We could be sprinting towards so much more.
When I was a young child, I cried every single morning for years because I didn’t want to go to school. Often my parents had to physically carry me out of the house before I begrudgingly accepted I was going, and I would cry the entire way.
But I loved school. Every day I had a great time and I’d be sad to come home and I’d tell my parents about who I spoke to and played with and how much fun I had. Much more than if I’d have stayed at home.
Adulthood is often similar. I was depressed for a year and stopped working because I was so sad and my life felt empty and meaningless. I got very lucky that an old coworker offered me a new job and everyone in my life essentially forced me to accept, and when I started I suddenly found things cleared up. I liked talking to people every day, I enjoyed working toward a goal, the sense of achievement after a long week, meeting new people, small talk about nothing in particular.
But if I hadn’t gotten lucky or had my arm twisted into accepting that lucky break, I fully know I could have spent another five years doing nothing on my couch, watching YouTube video essays and every Real Housewives franchise and reading and playing video games.
Not everyone knows what will make them happy. Even fewer can force themselves to do what will. Traditional institutions like early marriage and the expectation that couples produce children exist in part because sometimes it’s only with the passage of time that we realize the happiness and fulfillment these things bring us.
Let 10 year olds eat as much candy as they want, stay up all night to play video games and skip school and they will, no matter how much their future selves might regret it. Adults aren’t so different. If you give people basic income and infinite free amazing quality entertainment then certain consequences are inevitable, and if you care about the wellbeing of your fellow man (and I do) then that is suboptimal even if the machines can look after us.
Don’t rich people already have essentially infinite income? They do spend a lot of time frolicking on yachts and treating themselves to various extravagant delights, but for all that, their lives seem fuller than those forced to accept drudgery.
You discuss school and jobs, but I don't think any of that applies to entertainment media. Yes, it's usually good that we force children to go to school. It might even be good if we were to force adults to go to work, even ones who are independently wealthy or happy enough to subsist on welfare. But entertainment media? We currently have no way of forcing adults to watch particular pieces of media that we think would be good for them. Adults have pretty free choice - today more than ever - to seek out entertainment media as they wish, and though "high art" is very, very niche, it's still a significant niche.
This indicates that people actually seek this stuff out voluntarily. Where I see gen-AI being a boon for this is that we can have far higher throughput of art that is considered "good" by whatever "high art" standards are held by people with taste and discernment and [whatever characteristic that true connoisseurs have], and also for far more custom artworks that provide exactly the right amount of challenge to enrich someone's life without being so challenging as to make them shut down and reject it.
And building on that, there's also the fact that it's quite possible to train AI on media that makes people go, "I expected that to be really bad, but it barely piqued my interest enough to check it out, and I'm glad I did," versus ones that make people go, "I expected that to be really bad, and there was nothing about it that piqued my interest, so I decided not to check it out," versus ones that make people go, "I expected that to be really bad, but it barely piqued my interest enough to check it out, and I regret doing so," as well as many other combinations of similar concepts. And I don't see why some near-future gen-AI couldn't generate media that creates reactions similar to the first one while avoiding the latter ones fairly consistently.
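To be concrete, here's that reaction taxonomy written out as training labels plus a release filter. The category names and thresholds are my own shorthand, purely for illustration:

```python
# Sketch of reaction-trajectory labels for generated media, plus a rule
# that keeps only works predicted to pleasantly surprise their audience.
# Names and thresholds are illustrative, not any established scheme.
from enum import Enum, auto

class Reaction(Enum):
    EXPECTED_BAD_CHECKED_GLAD = auto()    # "...piqued my interest, glad I did"
    EXPECTED_BAD_NOT_CHECKED = auto()     # "...decided not to check it out"
    EXPECTED_BAD_CHECKED_REGRET = auto()  # "...I regret doing so"

def worth_releasing(predicted: dict[Reaction, float]) -> bool:
    """Keep a generated work only if the model predicts it mostly earns
    the first reaction and rarely the third (thresholds are arbitrary)."""
    return (predicted.get(Reaction.EXPECTED_BAD_CHECKED_GLAD, 0.0) > 0.6
            and predicted.get(Reaction.EXPECTED_BAD_CHECKED_REGRET, 0.0) < 0.1)

# Example with hand-made (fake) prediction probabilities:
print(worth_releasing({
    Reaction.EXPECTED_BAD_CHECKED_GLAD: 0.7,
    Reaction.EXPECTED_BAD_CHECKED_REGRET: 0.05,
}))  # True
```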
Some of the really bad consequences of media addiction are currently limited by the low quality of most ‘bulk’ visual content (reels, daytime TV, YouTube, most video games).
When you get to a stage where you can cheaply generate infinite seasons of Mad Men or Sopranos or Red Dead Redemption quality entertainment, such that you can play a 5000 hour Rockstar campaign or watch 10,000 episodes of your favorite comfy comedy show with no discernible dip in quality, it’s over.
Wall-E remains, the failure to predict Ozempic excluded, the most deeply prescient piece of 21st-century mainstream science fiction media.
While I get your point that once you allow everyone to basically wirehead, most people will happily wirehead and only stop playing RDR Infinite when their heart finally fails, I am not sure things are so bleak.
Over the past 50 years, the supply of cheap entertainment readily available has increased by orders of magnitude. Back then, you only got whatever was on any of a few channels on TV, everything else required some effort, like going into a video store. Where previous generations might have bought a porn video tape, today the main obstacle is to narrow down what genres and kinks you are looking for out of the millions of available videos. Video games offer all sorts of experiences from art projects to Skinner boxes. If you want resources on any topic under the sun, the internet has you covered. Entire websites are created around the concept of not having to pay attention to one video for more than 15 seconds.
Humanity has not handled this overly gracefully, but it has handled it somewhat. Personally, I am somewhat vulnerable to this sort of thing, but while I sometimes get sucked into a TV series, video game, or book series and spend most of my waking hours for a week or two in there, I eventually finish (or lose interest) and come out on the other side. I am sure there is some level of AGI which could create a world from which I would never want to emerge again, but it will require better story-telling than ChatGPT. Of course, I am typical-minding here a bit, but my impression is that I am somewhere in the bulk of the bell curve of vulnerability. Sure, some people get sucked into a single video game and play it for years, but also some people do waste a lot less time than I do.
I disagree; I think you've got the relationship backwards. People who are already addicted to media demand the meaningless bulk content. They recoil from more substantial works, which would require them to invest themselves in a more enriching way in what they consume. Offering them infinite high-quality works wouldn't get them to actually partake in said works, and this is shown by the fact that most people currently give little attention to those already on offer. For them to partake in quality works would presuppose their not being addicted to consumer slop.
I mean, in such a scenario, we'll get Wall-E if we're lucky.
If not, we'll end up with Blindsight. Which, funnily enough, predates Wall-E by two years.
I'd rather not have either of them, personally.
What's over? Do you genuinely believe that there will be meaningful cognitive or physical work for baseline humans to do beyond the near future? If we're economically unproductive, we have to pass the time somehow. It's like complaining that the music of the angelic choir in heaven is too good and you don't feel like playing tennis.
Those who want ever more sophisticated forms of entertainment will probably get them. I'm not enough of a snob to think that someone watching the equivalent of the best prestige TV (or better) as their primary hobby is doing something wrong.
I don’t know, I think company and companionship with other biological humans are important. Call me sentimental but if everyone’s going to be living out hyperrealistic fantasies in VR for dopamine for 80 years then I struggle to see why you mightn’t just save the resources and administer them a euphoric fatal heroin dose and be done with it.
I am increasingly absolutely convinced that a fulfilling post-scarcity world will involve mandatory make-work, not 40 hours a week of fake emailing (ideally), but forced interaction with other human beings, teamwork, shared projects, civic engagement, some kind of social credit to encourage basic politeness and decency even if you don’t need them to survive and so on.
I grew up with many people who already live ‘post scarcity’ lives on account of great inherited wealth and the ones who consume all day are universally less happy than the ones who work, even in high pressure jobs, even though they will inherit more than they could make in a thousand years.
Wall-E is about the choice that post-scarcity offers. At the end, when the humans are replanting trees and clearing garbage it’s clear that AI and robotics are good enough in this universe to do this work, but it’s the humans who win when they do it themselves.
It'll be over because pure consumers of value, who produce nothing and aren't even related to the producers of value, will cease to be around. Sooner or later some paperclipper will optimize us away, and there'll be no good reason not to do it.
I don't believe people want to game or watch TV endlessly on repeat, though; I certainly don't. Regardless of how good something is, I want to cycle between different sorts of stimuli and types of activities, some providing "fun" and others "meaning".
This seems a lot like gambling and many other addictions to me. The vast majority have no issue engaging in moderate use, while a small minority can't control themselves and self-destruct, and young people are more vulnerable.
The fanfic industry gives the lie to this. Many of us do indeed want endless streams of the same media with a few changes and wrinkles thrown in. Most of these fics are very derivative even for an inherently derivative art form, even where it doesn't make sense - see Stations of Canon - and many are just bad, yet we slog through them hoping to find the few that let us recapture the feeling we got from the original work.
The popularity of thousand-chapter webfics indicates otherwise. If the AI learns to generate cultivation LitRPG isekai with ten thousand chapters (and tailor it to the reader's taste), a lot of people will never touch anything else.
I don't know about thousand chapter webfics, but I've been reading Hajime no Ippo for nigh on 20 years, and it's over 1000 chapters. It's a particular relationship between reader and author, a long running manga like this. You get someone's idiosyncratic direct creative output, without the design by committee aspects of a lot of other media. You watch them grow and develop, not just in their craftsmanship, but in their perspective, which often comes through in how the story evolves. As you age with them, they continue telling a story that hits right at your mutually changing maturity level.
Hajime no Ippo and Berserk are more or less the only manga I still read anymore. Everything else either finished, or I lost interest. Now the question is if I live long enough to see anything resembling an ending to either.
That's not quite what I'm getting at. I don't really care whether someone wants to read an endless webserial; I don't see how that matters. What I tried to respond to was the media-addiction part, with the implication that a sufficient amount of quality media of one's preferred sort - endless episodes of The Office for those who are into that, or an endless webserial - would lead people to engage only in that, essentially amounting to low-tech wireheading.
My point is that even if we got endless episodes/chapters/whatever, most people would still want to do a variety of things outside of media consumption.
Do people pick these up and read forever, though? A lot of these webfics are serialized; people catch up and then just read chapter by chapter as it releases.
One of the big draws of webfiction for me is that I am a very fast reader, and if I was buying everything it would bankrupt me. A million-word story can give me a nice week of reading without my spending more than a few hours of leisure a day, and most of the time keeping up with ongoing web releases is ~1 hour a week.
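The arithmetic behind "a million words is a week," with reading speeds as assumptions (300 wpm is a commonly cited average; fast readers go well beyond it):

```python
# How long a million-word story lasts at various reading speeds.
WORDS = 1_000_000
for wpm in (300, 600, 900):
    hours = WORDS / (wpm * 60)
    print(f"{wpm} wpm: {hours:.0f} hours, ~{hours / 4:.0f} days at 4 h/day")
# 600 wpm works out to ~28 hours: about a week at a few hours a day.
```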
Better yet, imagine a story where you are the main character, playing in a rich world with real agency, learning things, judging, fighting, ruling, plot threads springing up around you. We could have that too, a whole new fusion between games and literature. We have that right now, albeit in a limited, experimental form.
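At its simplest, that experimental fusion is just a loop around a language model. A minimal sketch, where `llm_complete` is a hypothetical stand-in for whatever text-generation API you have on hand:

```python
# Minimal LLM-driven interactive fiction loop: the model narrates,
# the player acts, and the transcript is fed back in as context.

def llm_complete(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model here.
    return "The corridor ahead splits in two; torchlight flickers to the left."

SYSTEM = ("You are the narrator of an interactive story. Continue the scene "
          "in second person, then stop and wait for the player's action.")

history = ["You wake in a ruined keep, sword at your side."]
print(history[0])
while True:
    action = input("> ")
    if action in ("quit", "exit"):
        break
    prompt = SYSTEM + "\n" + "\n".join(history) + f"\nPlayer: {action}\nNarrator:"
    scene = llm_complete(prompt)
    history += [f"Player: {action}", scene]
    print(scene)
```

Today's versions mostly bolt this onto a chat model; the "real agency" part - persistent world state, plot threads that spring up and resolve - is exactly what remains limited and experimental.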
That's just called "life".
Kind of. Compare "life" to a game though...
I don't know, someone needs to revamp this "life"
How about "Life 2.0"? With adventure, romance, high-speed chases, daredevil stunts, whatever you want.
You can have all that already!
Banned for targeted personal attacks (on me).
I wish there were more actually good Xianxia; 3k chapters just isn't enough.
I'm writing one, don't worry. Hopefully it's good. ;D
Send me a link!
Sure, here you go. https://www.royalroad.com/fiction/133044/seed-of-the-radiant-grove