Culture War Roundup for the week of March 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


It is my belief that after the AI takeover, there will be increasingly less human-to-human interaction. This is partially because interacting with AI will be much preferable in every way, but it is also because safetyism will become ever more powerful. Any time two humans interact, there is the potential for someone to be harmed, at least emotionally. With no economic woes and nothing to do, moral busybodies will spend their time interfering with how other people spend their time, until the point where interacting with another human is so morally fraught and alienating that there is no point. Think about it: who would you rather spend time with: an AI who will do whatever you want and be whatever you want, anytime, or a grumpy human on her own schedule who wants to complain about someone who said "hi" to her without her consent? The choice seems obvious to me.

I've been arguing this lately. Not only will social interaction with AIs get better, but social interaction with humans will get worse. Social skills will atrophy as people get used to dealing with AIs that adapt to them and don't require them to be easy to get along with. The holdouts will find it harder and harder to find people who are willing and able to interact with them in a tolerable way.

In a world where computers can take care of your social needs, the government will find it easy to justify heavily restricting human-to-human interaction that could lead to the spreading of misinformation or dangerous ideologies. In person interaction will probably be banned first, since that can spread infectious diseases and is harder for AIs to monitor for seditious libel.

Eventually, human contact may not even be necessary for reproduction, because you will be able to submit a DNA sample by mail and have your child grown in a vat. In fact, the government may completely take over the job of producing more humans. It would keep a DNA bank from which it would produce designer babies with no biological parents.

I expect AI to reduce safetyism, because safetyism is, optimistically, a result of uncertainty and miscommunication. If you have poor eyesight, you wear glasses; if you have poor hearing, you wear a hearing aid. My expectation is that many to most people will opt into prosthetics that give them improved social cognition: a feeling, in advance, for how something you're intending to say will be received. Alternatively, you can literally get the AI to translate vernacular, sentiment and idioms; this will be useful when leaving your peer group. Furthermore, it will be much easier to stay up to date on shibboleths or to judge cultural fit in advance.

Humanity suffers from a massive lack of competence on every axis imaginable. We cannot now imagine how nice the post-singularity will be, but for a floor consider a world where everyone is good at everything at will, including every social skill.

Humanity suffers from a massive lack of competence on every axis imaginable. We cannot now imagine how nice the post-singularity will be, but for a floor consider a world where everyone is good at everything at will, including every social skill.

Your last paragraph sounds extremely dystopian and unappealing to me. It's completely inhumane: in my mind it renders the experiences of everyone who lived before it hollow, and makes all of the suffering they and we have been through completely needless and pointless, just so that endless generations of human beings can enjoy a life free of inadequacy. The fact that another commenter says it sounds grand is so distressing to me. If things truly begin moving in this direction, all I can do is hope to move as far away from people living this way as possible. The biggest joys in my life are gratifying human experiences that I've earned: the sun on my skin at the beach that I've worked hard to visit, being able to explore distant cities and meet new people from all walks of life. I will feel no gratification in being good at social skills through some technological enhancement, even if the AI enhances my gratification as well. I just want none of it. All I can do is hope that opting out will be possible.

I think you're reading it as "you will be forced to have power X", which was not my intent. I'm sure there will be subgroups like that. The difference is that their lack of ability will be entirely voluntary. (Which, in the long run, may even make things better?)

The one thing that the Singularity cannot provide is a feeling of overcoming scarcity in an absolute sense; of advancing the cause of humanity. Because to advance is to struggle to get from here to there, and "there" is the absence of scarcity. The journey may be the goal, but the goal of a journey is still to progress; this is inherent and unavoidable.

Funny that your username is aiislove when you sound very un-transhumanist.

I don't understand this "life has to be hard to be meaningful" attitude. You can always turn the difficulty dial to whatever you want. Personally I don't want AI-aided social skills. I just want Westerners to stop treating me like shit for no reason. I would be interested in AI-aided social skills if I believed that it was the only way I could get love and friendship, but it is not (thank you rest-of-world!) so I don't need it.

I thought up my username in a few seconds, it's just a pun on "ai" meaning love in Japanese, plus I like making AI generated art, not because I want to use AI to game social interactions. I'm not a transhumanist.

You can always turn the difficulty dial to whatever you want.

That's not the problem. The problem is that there will be people using the difficulty dial to begin with, that I will have to make the decision not to turn it, and that we'll all have to live with the effects of there being a difficulty dial at all. It's just a mess and I'm ready to live in the woods without it. Using a dial to make yourself popular is the definition of cringe in my opinion; it is so pathetic. I'd rather be unpopular than use a transhumanist means to buy friends.

I really hope you're right, that sounds grand!

My expectation is that many to most people will opt into prosthetics that give them improved social cognition: a feeling, in advance, for how something you're intending to say will be received.

I think you have a fundamental misunderstanding of why some utterances are received poorly.

It's not about knowing enough cultural sensitivities to avoid faux pas, because faux pas aren't really caused by cultural insensitivities (which could be legible to an AI). Whether or not offense is taken is a choice of the listener, not a condition of the zeitgeist. If your interlocutor woke up on the good side of the bed this morning, conversation will go smoothly. If they woke up on the wrong side of the bed this morning, they'll claim to be offended by your aspie stutterings. It depends on the fundamentally invisible qualia of your conversation partner, not a legible, predictable, objective feature of language.

I am reminded of the fall of Lord Renard, brought down because he made "unwanted sexual advances". How could he know they were going to be unwanted? Sorry, pal, whether or not they're unwanted can only be decided inside the woman's head, unfalsifiably. I don't think anyone's going to agree to give up the power to destroy people at will because "Shucks, his AI told him she was asking for it, I guess he's off the hook!"

As such, I predict that "a prosthesis for social cognition" is impossible. Unless it's a maxillofacial prosthetic; that would successfully produce the desired effect.

Do you think it's okay that some people have AI companions, or do you think that those people should be forced to suffer eternally for no fucking reason?

I hardly know where to start with this, mostly because the part after the comma bears no connection to the part before the comma.

Do I think it's OK for some people to have AI companions? What do you mean "companions"? Do you mean AI GFs, or do you mean the AI social cognition prostheses discussed previously? In any case, I think AI GFs are bad because it's edging towards wireheading and wireheading is bad. And I think AI social cognition prostheses are impossible.

As for the people without AI companions being forced to suffer eternally for no fucking reason:

  • Why is tfw no AI gf "eternal suffering"?

  • Who's forcing them?

  • There are very good reasons for people not to have AI GFs. They're expensive to run, they make it more difficult for the user to get a real gf, and there are moral problems with creating arguably semiconscious entities if you're only going to let them be an incel's ERP plaything.

I expect AI to reduce safetyism rather than increase it, due to increased safe access to simulations of other people, and simulations of things that become less aversive with safe simulated experience of those things, which is nearly everything.

Changing who you are as a person will become easier as well, as it becomes easier to immerse oneself in a holistic social environment intended to shape the self on a whim. Confidants, expertise, life coaching, all become cheaper and more accessible.

It depends on how we end up structuring AI use in our lives of course. It's hard to predict exactly which social forces will dominate, but your vision is not the only outcome here, there is plenty of room for a world where we use AI to better ourselves in self-expressive ways.

I do expect the ways we interact with each other to become more abstracted through AI, though. The most basic way this happens now is by running emails through ChatGPT, but moving forward we could see more and more bots that act as cultivated posthuman facets of ourselves and our artistic visions, interacting in communities where those facets interact with similar facets of others. This world still leaves plenty of room to gain value from emotional and social trade with the products of others, to fall in love with aspects of others, and so on.

These forms of interaction will have different limitations. Parasocial relationships, for instance, become more real as social scarcity becomes less of a thing, but not fully real, since your influence over the other person's central nexus of self will still be limited by their willingness to engage back with facets of you. The road to getting up close and personal with the central nexus of a person's self may become longer. Or perhaps not, as people who are interested in that sort of connection become easier to find, with the many extra eyes and ears and mouths each person can search with.

Here's hoping. The more I think about AI matchmaking, the more optimistic I am. By using matchmaking, I can eliminate anyone with what I see as insane views, such as the idea that people should be denied sex until they "earn it" by having the right socially approved personality. And if in this future, people cannot ruin my life by publicly proving that I'm a heterosexual male with a normal sex drive, then I should be much happier.

I think this is nothing "unusual" in at least recent history, and I view it as part of the overall process of atomization and individualization of our society. One by one, we are eroding the social structures and important institutions of our societies: family, marriage, motherhood/fatherhood, elderly care and so forth. We now live in a world where a divorced mother may even have used the services of a Ukrainian surrogate to give birth to her child. This "mother" now works full time while her child is cared for by a hired nanny; her own mother is in an elderly care home in the hands of hired nurses and sees her daughter only every once in a while. This woman also has a man who donated sperm for the child; he is now a thousand miles away, just paying child support. And this father is maybe single or serially single, supporting his child financially, using the services of sex workers to meet his sexual needs and participating in online spaces that have replaced traditional "boys' clubs".

It is hard to overstate how rapid the adoption and normalization of all these changes has been when we are talking about generational experience. Even forty-something millennials are now considered dinosaurs; their experience of family, school, childhood, church and sexuality can be considered ancient and utterly outdated. I really think that people underestimate how profound the changes already baked into society are, and we will see the results only in the upcoming decades - possibly as some new societal "epidemic".

The era of AI chatbot companions is, in my eyes, only the latest in a series of assaults on relationships. Or, to be more precise, the assault already happened when people normalized the commercialization of companionship, both in real life and through parasocial relationships via OnlyFans and similar platforms. In this sense, AI companions can be viewed simply as the industrial automation of production to satisfy already existing commercial demand for "relationships". Seen this way, I do not see a reason why the surrogate mother, the tutor/teacher of children, the nurse in the elderly home, the sex worker and even the companion/friend cannot be fully automated, packaged and delivered as a product. All these activities are already viewed as legitimate subjects for markets to serve.

Just a few generations ago, my ancestors were mostly illiterate and depended on regular interaction with others for survival. They would have rarely interacted with anyone they didn't know. Now, the vast majority of my language processing comes from text and video from the internet, mostly coming from people I have never and will never meet.

Things have changed so much that I can scarcely imagine how my ancestors spent their evenings. I spend them on the internet. So do my parents, who I live with. My mother also watches an enormous amount of television and listens to the radio. If this were the 18th century, we'd probably have nothing to do but talk to each other, though we'd probably go to bed a lot earlier and work longer hours. Instead, we interact briefly at supper and have the occasional conversation, but almost all of our social interaction has been replaced with technology. I wonder what kind of effect this has on people.

Even 40 something old millennials are now considered as dinosaurs, their experience of family, school, childhood or church and sexuality in their childhood let's say can be considered ancient and utterly outdated.

On this subject- I am unsure if it is a generational gap or a class one, but I noticed a strong trend when reading through the AmITheAsshole subreddit: a huge % of the questions on there are related to step- or half- family.

Obviously that place would be biased towards such questions (fair to say that familial obligations to a step-brother or half-brother are less defined than those to a full brother, so more likely to seek help defining them), but I was still rather shocked.

I'm firmly in the millennial age range, so I've always lived in a post-no-fault-divorce world, but the amount of step and half siblings among my peers was tiny.

This might also be explained by AITA being mostly at least partially fake.

True, if I were trying to generate an ask for drama/engagement using blended families would help maximize the heat.

My view is that some are definitely fake, but most are merely biased/exaggerated, with an additional portion written from the perspective of another person who they feel is being an asshole in a real scenario (i.e. written by a son, but with the post framed from the perspective of his father).

That’s possible. I mean there’s also the effect that high-drama personalities get divorced more, too.

It is my belief that after the AI takeover, there will be increasingly less human-to-human interaction.

This is a major concern, yes.

One of the worst possible outcomes of ASI/singularity would be everyone plugging into their own private simulated worlds. Yudkowskian doom at the hands of the paperclip maximizers may be preferable. I'm undecided.

who would you rather spend time with: an AI who will do whatever you want and be whatever you want, anytime, or a grumpy human on her own schedule who wants to complain about someone who said "hi" to her without her consent?

Freedom is boring, not to mention aesthetically milquetoast, if not outright ugly in some cases. I have always been opposed to trends towards greater freedom and democratization in the arts - open world video games, audience participation in performance art and installations, and of course AI painting and photo editing recently - I find it all quite distasteful.

Is Tolstoy applicable here? Free men are all alike in their freedom; but to each unfree man we may bestow a most uniquely and ornately crafted set of shackles.

You sound like the exact kind of person I'm trying to wake up with my statements. You want to put humanity in shackles because you are afraid that freedom might be boring? You want to force me to spend eternity shackled to my psychological abusers because you're worried that I might not use my time in the most aesthetic way? No one is forcing you to play open world video games, but you want me to be forced to play a closed world video game. Why?

I view plans of giving humans GOD mode (but not really) via AI as fundamentally removing all sort of meaning from life.

Things are easier now than they were, yes, but we still suffer. Suffering is, in my opinion, a core pillar of what it means to be human.

Great, suffer then. That doesn't give you the right to impose suffering on others.

What about his children? Will you send men with guns to snatch them away from him to hook them into AI-fueled hypermodernity?

Good, complicated question. We are, I think, agreed that adults should (usually) be able to do whatever. We are probably also agreed that very young children should not have their life outcome dominated by whatever decision they hold at any given moment. I believe it is also uncontroversial that children in plainly abusive (violent/sexual) households should be removed. Between those extremes, I think this worry is overstated - parenting is also a skill whose scarcity will be reduced by the singularity.

Maybe if his children want to leave for a month, they can; it is then his problem to avoid this. I don't know where the actual degree shakes out; I suspect the actual numbers will be relative to circumstances. Presumably an AI will be able to analyze if an intention to leave is temporary or stable; this should affect decisionmaking. (Imagine how uncontroversial trans would be if satisfaction and outcome could be perfectly forecast.) But in sum, I simply think we have a warped picture of the tradeoffs involved in liberty vs parenthood due to the fact that we live in a very broken world filled with people who are very bad at what they do.

I'd send whatever needed to be sent to hear from the children themselves, give them an informed opinion of the state of the wider world, and hand them a ticket to leave whatever Neo-Malthusian hellscape Panem wants them to dwell in.

It's completely up to them whether they want to leave, but I fully support their exit rights.

I'm sure there will be people insane enough to want to dwell in such places, and that's their prerogative, but the opinion of the father shouldn't override the desires of the son. Offer to wipe their memories of the outer world after they decline if the knowledge it's out there is so unpleasant.

What age? 2? 5? 15?

Age is unlikely to be a meaningful signifier of mental maturity at that point in the future.

If a baseline human, then I'd go with 16ish, otherwise when they can be reasonably expected to have the maturity of a baseline 16yo human.

Rumspringa rules, so 16 or 17? The closest analogy for modern life : experience machine seems to be Amish : modern life.

but you want me to be forced to play a closed world video game. Why?

Network effects.

I'm not going to plug into the experience machine, so if everyone else does, the world outside the simulation is going to become a much less pleasant place to live in.

I also endorse the response from @RenOS below.

if everyone else does, the world outside the simulation is going to become a much less pleasant place to live in.

Why?

No one is forcing you to play open world video games, but you want me to be forced to play a closed world video game.

I don't disagree, and I know where you're coming from, but there's a bit more to it than that. It's possible for a style of game (or movie, etc) to become so popular that almost all you can find is something of that style. For example if you didn't like WW2 shooters in the early 2000s, it was real slim pickings. Sure, nobody forced you to play one - but other people did cause your options to be sharply limited.

Additionally, it hits harder when a series you used to like takes a turn in a direction you don't like while trying to chase trends, too. For example I used to love FF, but they haven't made a good game in that series in 20 years because they keep chasing the Western market (which in turn means they push the action, more Western-style fantasy, etc). I would never suggest that the devs at Square-Enix should be constrained to only make traditional FF games from now on. They aren't interested in making the games I want to play, and I have to accept that. But it is still kind of sad, and it does mean that I lost something fun in my life because of their artistic choices.

If AI allows for infinite content generation it's hard to see how this would be a problem. Just say "give me Final Fantasy 7 remastered with stunning graphics in FDVR and oh also change this character... etc"

One of the worst possible outcomes of ASI/singularity would be everyone plugging into their own private simulated worlds. Yudkowskian doom at the hands of the paperclip maximizers may be preferable.

What???

Being able to do whatever you want, all the time, that's roughly as bad as death?

What's a good outcome then, if endless human autonomy is such a terrible fate? Working on a commune all day with 19th century technology? Chattel slavery? A happy-clappy Borg hive like in Foundation's Edge? Low-wage jobs in the modern-day? If you want an aesthetically ugly job, I can describe mine to you.

It depends on what you see as "autonomy". I think a world where everyone is plugged into a simulated world is, if not exactly zero, at least pretty close to zero autonomy. You do not provide for yourself in any meaningful way, you are not capable of substantively changing the material world around you, you are not capable of protecting yourself and instead depend on protection. Of course your examples aren't positive, either. I would like a future where humans are improving their capabilities, try their best to colonize the universe, are meaningful members of society (not just "a" society like an online guild, but "the" society, the one that creates the infrastructure we use, the food we use, etc.) and in a fully general sense are in control of their destiny.

One of the worst possible futures is them becoming glorified pets of safetyist AIs that make sure no harm comes to them and allows them to play in a little golden cage of their own making, one so nice that they don't even consider leaving anyway.

I have heard rumors on Twitter from philosophy professors that undergraduates these days don't have the same reaction to Nozick's Experience Machine as previous generations. They are much more willing to get in the pod.

My personal ideal utopia-pod is definitely going to have

-extensive awareness of the universe outside the pod, albeit largely delegated to subservient AI as most information coming from that vector will be uninteresting.

-self-sufficiency. I won't be farming up the calories I consume/the kW my upload consumes, but I already don't do that.

-ample self-defense capabilities.

I agree that I wouldn't be substantially altering the outside world on a regular basis, and this is probably the crux of the issue. But I for one would likely bite that bullet, depending on the particulars of the technology available.

But what's the point of colonizing the universe? I agree that it's good and should be done. However, expanding our material resources and technical capabilities is a means to an end. The end should be human enjoyment, whether that's conversation, art, games or whatever we can come up with given immense intellect and resources. I'm in favor of working out how to make Matrioshka brains (playing Dyson Sphere Program IRL) or whatever's more practical/efficient than that. A defence fund for dealing with aliens, entropy and so on is also a good idea.

I suppose I can't imagine how human input would be necessary or even useful. Once you figure out a nearly optimal way to assemble your megastructures, what can you do then that's useful? I'm envisioning shooting a few trillion tiny seeds that hopefully reach the target star-system and self-assemble into a factory that produces the megastructure. It's all automatic. That then receives a beam of light containing copies of people's minds. They then reproduce. They probably never have a physical body. Why would they need one? They've tapped the star or whatever energy source they're using as much as they can. All the minerals are being processed automatically. Is there make-work for them, consciously operating iron refinery #39990120347?

Say I'm one of a trillion trillion superintelligent posthumans, how can I contribute to anything 'meaningful' (aside from making art)? Do we just hope that the tech tree, so to speak, extends forever?

Even if I choose to stay on Earth, it would be very meaningful to me to know that real live biological humans actually made the million year journey to another galaxy. And people will make that journey, unless the AI prevents them of course. Maybe the adventure is like a kind of art.

But how would that even work? They'd be overtaken on the way by something more efficient. Whatever a biological human can do, a posthuman or AI can do better. When it comes to accelerating objects to near-lightspeed, it's easier to do it to smaller, tougher objects. I imagine if we figure out FTL travel, similar principles will apply.

And what do they do when they get there, when they find that galaxy's already been taken? Every star reprocessed by the time they reach it, indistinguishable from whatever they left? I get a sad vibe from it, like the Incan army sallying out against the Spanish. It was so over from the moment the Spanish arrived. If you're a biological human in this far future, it's like living your whole life as a joke or a zoo animal. Posthumans can mess with you whenever they see fit, in ways you can't even perceive, using technologies you can't imagine. They truly have the least autonomy, being totally at the mercy of more powerful beings.

Well, if it's an aligned AI, then the nanoprobes which yes, will beat us to the stars, will simply prepare the way for us, including in some regions not-preparing (leaving untouched). I'll be excited when humans do arrive.

If it's an unaligned AI, welp.

I guess I'm personally resigned to the golden cage but I just want to make sure that I'm not denied my basic needs (esp sex) for some safetyist nonsense. I mean, do you think that a woke feminist or conservative Christian, if they managed to get in control of the AI, would allow us the sexual utopia that we have a right to? This is what I'm afraid of.

But I totally agree with you: humans should be free to, for example, colonize Mars even when its dangerous because the infrastructure isn't there yet. I too worry that safetyism will prevent us from taking risks, being part of the forefront of civilization, exploring the universe.

the sexual utopia that we have a right to

Who says you have a right? From whence do you derive this right? Explain to me how this is a right akin to the right to life?

It does depend on whether the assertion is being made from the position of "I can't get anyone in the real world, so my only hope is the simulated world where an AI character will pretend to love me" or "I want to have all kinds of sex beyond what is possible now, I am jaded and want infinite stimulation, hyper porn".

While one set of circumstances might be treated more sympathetically than the other, what right are you claiming? You can survive without sex, you can't survive without food, water or air. Maybe we all have a right to $50 million, to be tall and handsome/slender and beautiful and incredibly smart and successful and all the rest of it, but we're not all going to get that. Unless you are pinning your hopes on AI magic producing abundance and a way to get humans into some kind of "better than reality" virtual world where they can all be tall, handsome, successful, smart, rich people with tons of loving and willing partners, you can declaim about your "right to X, Y or Z" all you like, but you're not gonna get it.

The right to sex is not really about sex. It's about protecting normal people from moral busybodies that will ruin our lives by publicly proving that we have a sex drive. Once "so and so said something sexual once" or "so and so had sex (in an unapproved way)" is not a basis for public humiliation or losing your job, our lives will be so much better.

It's interesting to me that asserting a right to sex can provoke such a reaction from you. Are you afraid of people getting their needs met? Or is power over other people's sex lives something you need for some reason?

It does depend on whether the assertion is being made from the position of "I can't get anyone in the real world, so my only hope is the simulated world where an AI character will pretend to love me" or "I want to have all kinds of sex beyond what is possible now, I am jaded and want infinite stimulation, hyper porn".

While one set of circumstances might be treated more sympathetically than the other, what right are you claiming?

This is a complete aside: I agree with you that one would be treated more sympathetically than the other, but I'm curious what your intuition is on which one that would be. My intuition is that the latter would receive far more sympathy than the former, which would receive close to none and actually attract antipathy.

Haha, this is exactly why I don't give a shit about these people's sympathy.

I'm pretty sure this is an assertion of a negative right, derived from (among others) the right to privacy. I.e., if he can make it himself (or convince others to make it for him), what gives you the right to prevent him from doing so?

I think you're viewing this as "A says they have rights to B's body", whereas parent is viewing it as "C is saying they have the right to prevent what A and B want to do with their bodies."

Exactly, thank you.

denied my basic needs (esp sex) for some safetyist nonsense

Besides mods which alter the creator's vision getting banned, as @tikimixoligist shows, mods which adhere to it more closely are forbidden from being distributed by mainstream sites: https://gamebanana.com/mods/430053, https://varishangout.com/index.php?threads/fire-emblem-engage-localization-fix-mod-removed-by-loverslab-gamebanana.1737/

I suspect sex would be reasonably safe. But we already have a preview of what might happen if your personal utopia is not what the zeitgeist wants. There was a Stellaris mod called "European Phenotype and Names Only (White Humans)" which modified a single-player game. It's banned.

https://www.eurogamer.net/paradox-pulls-discriminatory-stellaris-mod-that-made-all-humans-white

We embrace the idea that players mod the game to best represent how they want to play, we do NOT however wish to enable discriminatory practices.

They want to force us to spend eternity with people who hate us and will psychologically abuse us. They'd rather see us dead than allow us to escape. I know that's extreme, but that's how I see it.

Well you and I are in agreement that that's a bad outcome. I personally expect to be killed as a result of strategic incentives encouraging monopolizing all available resources (which applies even if people are in charge).

But in principle, perfect autonomy is surely preferable.

I too am concerned about Yud-style misaligned AI, but I don't think it's more than 10% likely. Either way, if it's our fate, it's our fate. I'd rather be killed by emotionless AI than be psychologically tortured by feminists forever.

You really think that's the most accurate summary of their beliefs?

I think it's a hostile phrasing but correct in structure. I guess it could be accused of being an extrapolation. At any rate, it's hard to see how one would avoid it.

One man's "let's preserve human society" is another's "let's preserve the status games that unceasingly victimize me."

How would you summarize "their" beliefs? (We might have to decide who we mean by "their".)

I was referring to @Primaprimaprima's contention that if everyone dives into their own personal virtual world, that's a dystopia. I tend to agree, simply because I think interpersonal interactions are extremely important.

Or we could imagine the opposite. Personal AIs that know us intimately might be able to find us perfect friends and partners. Add in augmented reality tech that eliminates distance as a barrier to any form of socialization that doesn't require physical contact, and perhaps we're about to completely wipe out atomization/loneliness and save modernity from itself.

Really, nobody has any idea where this is going. The only safe bet is that it's going to be big. A service enabling people to share 140 character text snippets was sufficient to meaningfully shift politics and culture, and that's peanuts to this, probably even if the current spring ends short of AGI.

I think latency issues will be a serious barrier to virtual reality feeling right.

There is also the unlikely (though far from impossible) outcome that this is the height of AI. If you'd asked someone in the late '60s whether their grandkids would visit or even live on the moon, many would've said yes.

I would be curious if AI optimists could outline a rough path for how we get from ChatGPT to being able to, say, replace a top engineer at Google or a top mathematician. Is it JUST more parameters and more training data?

We probably haven't seen the ceiling for what multimodal training (e.g. PaLM-E) can accomplish yet.

A matchmaking AI would be amazing! But will the moral busybodies allow me to train my AI to deselect people who hate me? I'd like to eliminate from my life anyone who believes that sexual harassment is a real problem.

Do we expect moral busybodies to pitch in on a matchmaking AI? After all, it’s much less work to hoot and holler and carry on about how matchmaking AI ‘reinforces cisheteropatriarchal assumptions’ until the cows come home.

If matchmaking has a dominant role in social life (like social media does today) I imagine that the busybodies will do everything in their power to control it.

I'm sure they'll allow them to deselect anyone who believes it isn't.

Or we could imagine the opposite. Personal AIs that know us intimately might be able to find us perfect friends and partners. Add in augmented reality tech that eliminates distance as a barrier to any form of socialization that doesn't require physical contact, and perhaps we're about to completely wipe out atomization/loneliness and save modernity from itself.

What about the Tinder trap? The goal of Tinder is to keep you using Tinder, not find a stable partner. Any company that creates a personal AI will similarly want you to use that AI instead of going hiking with your new friend. Natural selection will ensure that the AI service that people talk to the most will win the AI service wars.

But if you go hiking occasionally the AI can sell you tents and backpacks and cabin rentals.

Really, outcomes in most markets aren't nearly as perverse as what we see with Tinder. Chrome, for instance, doesn't intentionally fail to load web pages so that Google can sell me premium subscriptions and boosters to get them to load. Unlike Tinder, Chrome is monetized in a way that doesn't provide an incentive for its developer to intentionally thwart me in my attempts to use it for its ostensible purpose, and there's enough competition that if Google tried this people would stop using it.

I wonder if Tinder isn't so bad but gets blamed for the dysfunctional Western dating market. In the poor country where I am living, I use Tinder and another local app and I get dozens of likes and multiple messages per day. (This is coming from someone who got maybe one fat-chick like per day in America. BTW this is an experiment you can do yourself, if you are willing to pay for Tinder gold or do some VPN shenanigans to spoof your location. Try setting your location to Manila, PH and see how much attention you get.)

That sounds more like PH women being super-interested in marrying a white guy and moving to America, or just being interested in someone taller, richer, and hairier than the local men. The local dudes might still be being treated poorly.

I'd be interested in stats that indicate that Tinder in places other than the USA/Europe is "healthier," but I'm not sure what those would look like.

I never heard any of these girls express any interest in moving to America (and even if I had the perfect loving wife, I wouldn't want to live in the US again.)

I don't think the local dudes are treated poorly here. My theory is that poor men tend to emigrate for work, creating a male gender bias in rich countries and leaving a female gender bias back home. This, combined with some of the psychological effects of wealth (such as more feminism), causes the wildly distorted dating market in rich countries.

Something that constantly shocks me is just how many young women there are here. It seems like every business I patronize has 75%+ female workers. Where are this country's young men? Sure there are male dominated professions like taxi driver, and maybe there are lots of men in industry or farming but I don't see them. I'm not sure.

Try setting your location to Manila, PH and see how much attention you get.

I was passing through KL and got 100 likes in 24 hours. I'm lucky to get two a week in my western country.

Granted, some of those are escorts, bots, and scammers, but still...

I appreciate the argument that AIs can be a super stimuli, but the need for social validation is enormously important for most people and I'm doubtful AI can meaningfully give that.

(Also most people are way more pleasant than your example)

(Also most people are way more pleasant than your example)

Not in America, at least never to me. (Okay maybe that's not entirely true, but as feminists have taught me, you have to exaggerate social harms for other people to take them seriously.)

I've thought a lot about this, and the need for social validation is a problem if the person refuses to accept the AI as 'real'. Some people will simply decide that AIs are real enough, but others will still seek validation from humans. This second group is where the moral busybodies live, and that community will evaporatively cool until it is such a toxic cesspit that no real validation is possible. Some people will still compulsively seek it, but most people will just find some way to convince themselves that the validation from the AI is legitimate.

Part of my point with this line of argument is to try to wake up the moral busybodies to how they are destroying society. These people are the ones who don't want a world where everyone is isolated into their own bubbles, but that is exactly what they are creating with their efforts.

Have you heard about the Replika controversy? A lot of people got very sad that their pseudo-gf ERPing partner got lobotomized: https://old.reddit.com/r/replika/top/

Easy to sneer at these people (especially the guy who Capitalizes Nearly Every Word) but even fairly primitive AI can provide emotional connection. Give it a face and some more compute and see what's possible.

Grandma always said not to fall in love with entities I couldn't instantiate on my own hardware.

Right now I expect it's mostly desperate men using these, but that may have more to do with broader tech adoption patterns than specific appeal. These things can function as interactive romance novel characters, and many women may find that quite compelling.

We're entering uncharted and to some extent even unimagined territory here. Anyone who has thought about this issue realized long ago that AI romance would be a thing eventually, but personally I figured that for it to have much wider appeal than marrying your body pillow, AI would have to achieve human-like sentience. And if the thing someone is falling in love with has human-like sentience, well, who am I to say that's invalid?

What I didn't imagine is that we'd build machines that talk well enough for interacting with them to light up the "social interaction" parts of our brains effectively, but that we can be pretty certain, based on their performance in edge cases and our knowledge of how they work, aren't sentient at all. People falling in love with things that have no inner existence feels deeply tragic.

I don't know. Maybe this is faulty pattern matching or an arbitrary aesthetic preference on my part, and romantic attachment to non-sentient AI is fine and great and these people will find meaning and happiness. (At least as long as they follow grandma's rule, which they can soon.)

Just as a point of order, AI can be a superstimulus or it can offer superstimuli; the phrase "a superstimuli" should never occur and it seems to happen a lot for some reason.

I want to agree with you, but I don't know. Never mind the people currently claiming Bing is too human; people used to say that about IRC chatbots, and in Japan guys have straight up married Hatsune Miku and a Nintendo DS. Social connections are a bit like food or water: when you don't get any, a little goes a long way.

Wasn't this exactly the society depicted by Asimov in 1957 for his Solarians?

The Solarians (d?)evolve into refusing to interact with anyone except robots out of giga-anarcho-libertarian FREEEEEDOM rather than safetyism - it's not out of concern that they'll be harmed, more out of concern that anyone setting foot on their property is anathema - but it comes out the same: no Solarian interacts with other humans, and even their children are vat-grown.

Horseshoe theory strikes again!

Think about it, who would you rather spend time with: an AI who will do whatever you want and be whatever you want, anytime, or a grumpy human on her own schedule who wants to complain about someone who said "hi" to her without her consent? The choice seems obvious to me.

Personally I enjoy inflicting rhetorical suffering, so, that's one upside to real humans. If it's a robot that can't be hurt, where's the sadistic thrill?

I'd rather express my sadism through BDSM, personally. But will the busybodies allow us to have BDSM, or even insult each other?