I agree that Yud leans heavily on some unrealistic premises, but overall I think he gets big points for being one of the few people really excited / worried about the eventual power of AI at the time, and laying out explicit cases or scenarios rather than just handwaving.
I agree that Bay Area rationalists can be a little messianic and culty, though I think it's about par for the course for young people away from home. At least you can talk about it with them.
I also think that most x-risks come simply from being outcompeted. A big thing that Yud got right is that it doesn't matter if the AI is universalist or selfish or whatever, it will still eventually try to gather power, since power-gathering agents are one of the only stable equilibria. You might be right that we won't have to worry about deontological AI, but people will be incentivized to build AIs that can effectively power-seek (ostensibly) on their behalf.
I agree that even adaptation can be successfully adapted to by an adversary. My claim is merely that adaptive agents (e.g. consequentialists) will eventually outcompete agents that operate according to fixed rules (deontologists). In your example, the adversaries are adaptive. If they followed fixed rules, they would be poor adversaries.
we are in it for the well-being of everyone, too
If you justify your deontology in terms of its consequences, doesn't that make you a consequentialist who thinks that certain rules happen to be the optimal policy?
Okay, well I include some degree of adaptation in my definition of "very intelligent". In fact, adaptation is the main advantage that consequentialists have over deontologists.
Hmmm. I think you're on to something. I think we need to distinguish between utilitarianism done well and utilitarianism done poorly. I agree it's easy to do poorly - I think that's part of why we love rules so much - they're easier to follow than trying to come up with a good strategy from scratch for every situation. I guess my claim is that, in the presence of enough adversarial intelligence or optimization, following even pretty good rules won't protect you, because the adversary will find the edge cases they can exploit. At that point you have to adjust your rules, and I claim the only effective way to do that while avoiding exploitation is very intelligent consequentialism.
Okay. I agree it seems hard, but I think there's something like a 15% chance that we can coordinate to save some value.
I personally find it hard to care viscerally, at least compared to caring about whether I could be blamed for something. The only way I can reliably make myself care emotionally is to worry about something happening to my kids or grandkids, which fortunately is more than enough caring to spur me to action.
I don't think you'd normally go from "We might not be able to coordinate to stop disaster" to "Therefore we should give up and party". Maybe there's something else going on? I personally think this means we should try to coordinate to stop disaster.
Can you give some examples of these crazy views and goals?
I think we can make a more concrete claim, which is that deontologists are doomed in the long run due to competition and natural selection. Their rules will consistently be used against them. Today it's asylum seekers, tomorrow it will be ultra-charming machines that will claim moral primacy over whoever has resources.
I agree with all this. I guess I don't expect it to be in anyone else's interest to run even an uploaded and upgraded version of myself. Perhaps there will be some sort of affirmative action or mind-scraping such that some small vestiges of me end up having some influence. So I would consider your upgrading plan to be an instance of "crushed by competition", though it may be slightly better than nothing.
I have a pretty much identical outlook to yours, including the kids thing. The biggest question on my mind is which kinds of futures we could plausibly aim for in the long term in which humans are not crushed either by competition or by the state.
This is why I'm afraid of AI. Once most humans are economically and militarily obsolete, and can't go on strike, we will leak power one way or another, and it will eventually end up in the hands of whoever controls the value-producing robots and chip factories.
Like Mewis said above though, you might encourage extortion if you pay off anyone who takes up arms against you.
I think that's why they required the defectors to bring a very expensive MiG with them.
To be fair, when I started learning math + stats, I found the use of Greek letters intimidating and confusing, especially rarely-used ones like ξ. Of course there's nothing wrong with them besides their unfamiliarity, but I try to start with English letters in my own math writing and only reach for Greek letters when I'm running out of those.
Hmmm, are you just saying that you need to "choose your pain", and also accept that trying to do things will sometimes turn out badly? Or something more specific to romance?
I read the article, and was surprised to find I agreed with most of what she said. Every one of her opinions is about as manosphere/redpilled/motte-ish as you could imagine being printed in the NYT in 2023.
The new book being discussed is about how modern feminism has not just failed men, but effectively forbidden productive discussion of their problems. Bravo!
I broadly agree, but I think "if she's still a feminist, you're not the guy" is too strong. My wife had a long enough history of outspoken feminism when I met her that it would have been ridiculous for her to pretend not to be one. But I also never pretended to agree with something I didn't. It worked out fine and she eventually became a stay-at-home-mom.
Most political convos start with her thoughtlessly repeating some slogan, and if I can't reframe the convo immediately, then instead of saying she's wrong or being dismissive, I usually ask what it means concretely, as Gaashk suggested above. This usually makes her mildly upset and embarrassed when she notices she hasn't thought about it much at all, and so she's learned not to do that as much. But in terms of day-to-day life or child-rearing decisions, even though politically we're worlds apart, we almost entirely agree on the concrete steps to be taken.
On the few occasions when something completely beyond the pale (from her point of view) has come up ("you really believe ___!?"), I tell her she needs to talk to me respectfully even if she disagrees, just as I talk respectfully to her, even though from my point of view she's equally misled.
I used to have a trad girlfriend who said something like "if a man is taking a love interest's political opinions seriously in the first place, he's doing it wrong."
Why is she friends with them?
I like your advice overall, but I don't know how I could ask this particular question without making the other person feel like it's an attack.
Okay, but I guess the question is how crappy it can get before there is a viable alternative.
I think ChatGPT is rapidly becoming that alternative. Its politicization is probably the most important front in the culture war right now.
So they'll eat their own, but then continue to operate mostly the same?
Interesting thesis. Perhaps this is part of why some people find things like kids, homeownership, getting degrees, and getting promoted at work meaningful, since those all fit in between "easy" and "almost impossible".
I agree, and this is something I've been thinking about for a while in terms of our larger culture. In the presence of any sort of incentives for growth or change, it's not clear how anyone or anything can meaningfully survive without first paying a huge one-time cost to conquer everything around it, and then killing itself to some degree by removing its own ability to grow and change.

I'm really not trying to play gotcha games. We are playing definition games, I suppose, but I'd say you have to choose which you prioritize: the well-being of everyone, or following rules. If you follow rules only for the sake of the well-being of everyone, then I'd call you a consequentialist. I'm not trying to be clever or counter-intuitive.