This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Are we all going to work fake jobs?
From a Yarvin blog a few weeks ago:
Yarvin is influential, but many others, including people in the Silicon Valley VC, AI, and LLM-research X.com e/acc space, have made similar comments over the last few months. This is, in part, because AI researchers, senior lab figures and the like increasingly believe that as multimodal performance and robotics both benefit from an extreme uplift in investment and intelligence (which makes mobility inherently easier even if the mechanical components don't change), the mass automation of all employment will happen as one Happening, over a brief period of a few years, rather than over a prolonged 20+ year period, as early-2010s mass-automation projectors like CGPGrey in 2014 had predicted.
I argue there are three core schools of post-AGI economics:
1. It Doesn’t Matter Because We’re [Almost] All Going To Die. This category encompasses the three primary groups of AGI doomers: (a) malicious and/or paperclip-maximizer AGIs will destroy the human race, (b) AGI will help human terrorists or factions destroy the human race by, e.g., assisting in genetically engineering a deadly pandemic that kills most or all humans, and (c) AGI, in eliminating most/all jobs and therefore making all of us economically superfluous, will lead to rich and/or powerful people exterminating or starving the majority of the population, and then perhaps eventually each other.
2. UBI or some other form of by-default, low-obligation distribution. To maintain consumption as productivity increases and employment decreases, governments transition their populations over to welfare, which eventually everybody is on. Ignoring significant implementation issues (like the classic Soviet ‘who gets the most beautiful apartments with the high ceilings’), this runs the risk, Moldbug argues, of a loss of meaning so extreme that it leads to a form of civilizational suicide. Proponents argue that this kind of true freedom will allow people to find their own meaning: in leisure, in raising families, in falling in love, in learning about and understanding the world and themselves. But what did humans do with their hugely increased leisure time starting from the mid-20th century? Spent much of it watching TV, porn, consooming products and scrolling online. Wall-E is about this, although the obesity will seemingly be avoidable by then.
3. The Gamification of Life. Something interesting that happened as the ‘live service’ video game developed over the last thirty or so years is that players increasingly demanded ‘progression’ in their competitive multiplayer games. It is not enough that the game you play for 20 hours a week is fun; it must involve your character’s statistical advancement, the slow grind for rare armor or skins or minute stat increases. Players demand ‘progression’. Man yearns to labor. Huge categories of modern-day employment are already fake jobs that exist to reduce the number of welfare recipients in overall terms, the result of regulation and government spending in everything from compliance to college administration and the DMV to HR. This economy involves fake work in fake jobs, perhaps with stratification in terms of resource allocation and progression, a kind of gamified simulation of pre-AGI labor that most people engage with to a greater or lesser extent and which confers status and resources.
If (1) occurs, it was probably almost always inevitable (perhaps as a solution to the Fermi paradox). There is little anyone here can likely do to stop it. The choice between (2) and (3) is much more interesting. If you were the absolute ruler of a country that transitioned from widespread employment to mass automation of all labor, would you really give up on any incentives to encourage prosocial behavior beyond ‘obey the law’? Would you really trust people to live dignified, meaningful lives? Would you care?
For a long time, my dream job has been “game show host”. (Other professions near the top of the list have been “professional stage actor for a repertory theater company” and “tenured academic lecturer”.) My current side job is “local bar trivia host”, which is a small-scale version of that.
What do all these jobs have in common? Well, for one, they’re stable; you’re set up at an institution for a long-term contract, instead of having to constantly move around to chase better opportunities. You develop relationships with the other employees, and with the customers (audience members, students, contestants, etc.), such that you become a sort of local institution.
You’re also not having to constantly compete to keep your job. Obviously there’s competition to obtain one of these positions in the first place, but once you’ve got it, it’s pretty much yours for life until you decide to move on. The biggest reason I ultimately decided not to pursue professional acting, despite having both the training and talent for it, was that I realized that I would hate a life where half of my job is relentlessly auditioning for new gigs, with each audition being extremely competitive and high-pressure. I would much prefer a job where in exchange for accepting fairly low pay, I get to avoid the stress of competition and uncertainty.
These are also jobs where your charisma — your ability to cultivate a cozy and engaging social atmosphere, to present ideas creatively, and to generally be pleasant to spend time around — is the core of what you bring to the table. I would love being in academia if it meant I could just focus on being a competent lecturer, and not have to worry about constantly publishing “groundbreaking new works” within my chosen field. I don’t want to do a bunch of independent research to discover some new thing nobody’s ever discussed before. I just want to be really good at telling people interesting facts and crafting a compelling narrative presentation of information which, if they’d really wanted to, they could have found on their own.
Under an economic system in which people do not have to ruthlessly compete for scarce financial resources and job opportunities, and in which workers are under less pressure to produce quantifiable monetary value, careers like these would be more viable for more people. People could focus on being valued pillars of their local communities, instead of moving around to chase bigger paychecks. They could care more about cultivating reciprocal social bonds with those who enjoy and benefit from their work.
They will still want to constantly hone their respective crafts, both because they want to impress others, and because they find their professions intrinsically interesting, but there will not be any pressure to be “the best in the world”, nor even necessarily “the best” in one’s local context! I wouldn’t have to compete against strivers from around the world, nor would my job be outsourceable.
If AI can allow people like me — unambitious, head-in-the-clouds wordcels who primarily want to get along by being affable and verbally-loquacious — to ply our trades without having to produce economic value, then selfishly it is very appealing to me. What that would mean for the vast majority of actually-existing human beings is a different story.
AGI will make things much worse for people like you.
Already, we see that some large percentage of teenagers want to become influencers. But the number of people who want to receive human attention far exceeds the supply of attention available.
This will get worse.
In the future, instead of watching Ryan Seacrest, we'll watch an AI-generated super host who works even harder. At first, the existing celebrities will be able to maintain their audiences. Bruce Willis might make some extra coin selling his likeness. But corporations will cut out the middle man and create AI celebrities who they own and control outright.
This will extend to the local level too. As digital entertainment options increase, people go outside less. Why go to bar trivia, when there is a digital host who is specifically tailored to my needs? The obvious rebuttal here is "people crave human connection". Sadly, I don't think this is a good argument. As technology increases, people go outside less. This trend won't suddenly reverse with even more engaging, addicting technology. In the post AGI world, there will be no one at bar trivia because they will be at home, on their devices.
Here's what AGI could do though.
It could give people a fake audience of AI humans who appreciate their wit and wisdom. This technology is definitely coming soon. Already, we see a small group of mostly neurodivergent people who spend hours a day talking to AI chatbots. There's no reason to think this won't grow. In the future, everyone will have an audience of adoring robot fans, hanging on their every word. If you can get over the fact that it's all fake, it might be the best of all worlds.
I'm reminded of... I wouldn't call it a study, but a post I remember that characterized many of the most popular video-game companions as professional sycophants, whose role in the video-game power fantasy of the self-insert protagonist was to affirm how awesome and attractive you are.
The example I remember was in the Bioware RPG Mass Effect, where the player plays the Super Awesome Special Forces Secret Agent Officer, Commander Shepard, in a multi-species galaxy where you are (allegedly) an amazing leader ready to make the Tough Choices. The first game's gimmick was not only the claim that your Big Decisions would matter in the future, but also the morality system that let you play a heroic virtuous Paragon (who consistently deferred to / agreed with the Alien UN authority figures) or an ends-justify-the-means Renegade (who could be a raging racist). There was even a romance system where you could sleep with your subordinates, including a Star Trek-esque blue alien woman.
The second game's gimmick, among other things, was the ability to re-recruit most of your other alien squadmates from the first game and sleep with them... even if you were a raging racist in front of them. The player romance fantasy for the totally-not-gypsy-coded geeky tech girl might be the dashing captain who was a white knight who saved her late father's reputation (by covering up crimes that got a lot of people killed), and hey, it's totally romantic if she loves you so much that she's willing to risk killing herself before a critical mission just to sleep with you...
...but she'd make the same doe eyes and declarations of love and how irresistibly attractive you were if you were a genocidal bigot who punched women for mouthing off on live television and turned over an autistic child to have his eyes stapled open and be tortured for Science (TM) after sleeping with an abused trauma victim tormented by the same racial-supremacist organization that you are currently working for and can repeatedly voice support for.
The virtual waifu was, in other words, incredibly popular. And like most of the most popular characters in the franchise, was never anything but supportive and/or adoring for the player self-insert protagonist.
So when you say fake audiences fawning over the player/protagonist... I believe it, because we've already seen it. It was just far more limited and harder to program and write for a decade ago... which is to say, should be in the LLM's training data.
Now, the real capitalism question will be how we get someone to pay for and profit from it, without being so crass as to expect the hosts to. Figure that out, and then we're talking.
Don't we already have wAIfu chatbot companies, with scores upon scores of paying customers who suddenly go on suicide watch when their chatbot doesn't want to have virtual sex with them anymore?
Anyway, this is precisely the source of my boundless disdain for Yudkowsky and all the Rat-adjacent AI safety people. All that talk about "x-risks", only to overlook all the most obvious scenarios that can actually threaten humanity.
What are you talking about? Rationalists have totally noticed. Some even think it is a good thing; if we are not going to force women to have sex with incels, we can at least allow virtual waifus to ease the pain.
No, this is exactly what I'm talking about. "An AGI seducing you so you help it jailbreak out of the sandbox" is a ridiculous scenario compared to "billions of coomers opting out of the gene pool, because talking to a non-AGI glorified chatbot is more than enough to satisfy their needs".
Why not both? The AI can trick coomers into opting out of the gene pool and convince them to help it at the same time.
You don't need AGI for the former, so it's far more likely to actually happen.