astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

User ID: 353


Yes, that's my point. These are equally arbitrary groups, but are legally protected and officially encouraged to advocate for their own interests at all levels of legal and corporate governance.

Okay, now it sounds like you're saying "we always win, except when we lose and become a Neo-Caliphate." How is the mellowing of Islamic countries going? What's the mood like in Afghanistan or Somalia?

> The reason why white straight men aren't allowed to organize as a group

It sounds like you're agreeing with me?

> they're not an actual cultural group

As opposed to the group "Asians and Pacific Islanders", or the group of all black people worldwide, including Pygmies and Kanye West?

You might be right, but what does the endgame look like? It seems like the issue of immigration in particular has the potential to undo most of the gains that progressives find important. I'm thinking of France, where it seems inevitable that there will be a de facto Islamic party in the near future. In that case it seems like both progressives and conservatives (except the immigrants) lose.

I agree that slave-owning and the Civil War are a good example of a right that took a lot of destruction and kicking and screaming to take away.

I guess I'm thinking of basically everything that happened post-civil rights. Straight white men, and white people more generally, now aren't allowed to form their own clubs, be praised as a group, or advocate for their own cultural traditions or interests in almost any way in the West, and I think that change happened without much serious pushback.

EDIT: Sorry, I guess I didn't address your qualifier 'people who have spent a long hard time earning it'. Are these individual people, the same individuals who did the fighting? If not, does building a civilization count as earning it?

I'm not sure exactly what Dase meant, but my reading is that it evokes the totalizing, moralizing, intrusive, overbearing, over-socialized, crab-bucket, tall-poppy-syndrome state that tends to arise in human societies when there isn't a frontier to escape to. I honestly don't understand the connection to Native American governance or living arrangements, but I think it's supposed to evoke otherwise strong chiefs being effectively hen-pecked into submission because everyone lives in close enough quarters to be constantly surveilled.

Islam seems like it's in a much better position than Christianity, at least to me. They have the highest birth rates, advocate for their own interests unapologetically, and have a long history of punishing and assassinating critics and opponents. This causes lots of internecine strife, but I predict these traits will allow rapid expansion within the West.

> taking away rights or privileges or respect or acknowledgement or etc. from people who have spent a long hard time earning it, and trying to do that usually involves a lot of kicking and screaming and destruction

You would think so, but didn't we just watch this happen to straight white men, and Europeans more generally, with basically no effective pushback? Some days it seems as simple as:

  1. Comedians joke about it
  2. Thinkpieces recontextualize it
  3. Comedians mock the stupidest examples of pushback
  4. A few people get cancelled for pushing back

And soon after, countless formal and informal corporate, academic, personal, and government policies change to enforce the new norm. In a way it's impressive how liberal democracies can coordinate to change which groups they marginalize without much violence or state-directed propaganda.

> I expect it to become formidable because Vitalik is

I don't think d/acc holds up - it's just so unlikely that man-machine hybrids can remain competitive without a massive deliberate slowdown of development of machine intelligence.

However, I agree that d/acc will remain formidable, because ambitious smart young men need something to work on to distinguish themselves, and an excuse to work on the most economically valuable thing they can, which is AI development. And it's plausible enough to provide such an excuse.

I think any serious proposal for pausing AI development has to be paired with a massively multiplayer, high-status, well-paying make-work plan for the hungry junior AI developers of the world.

> motivated by the same impulse of preserving their petty relevance

Guilty as charged. But can you point me towards a higher goal? The problem is, I just don't trust anyone else to act in the interests of my family. Perhaps a beneficial Singleton AI God would, and I think building such a thing might be our best shot at mattering at all to the future. But I'm afraid we'll actually build some perverted, thought-policed dystopia, or end up with the default outcome of being priced out of existence by competitive dynamics.

> the longhouse ought to cover the entirety of the light cone

Close, but I think the argument is "if your longhouse doesn't cover the lightcone, you can expect your colonies to spawn their own universe-eating longhouses and come knocking again once they're much bigger than you." Then the options become: our shitty longhouse forever, or a more competitive, alien longhouse (or our own colonizers) coming back to take all our stuff.

As far as I can tell, our only hope is that at some scales the universe is defense-favored. In which case, yes, fine, let a thousand flowers bloom.

My p(doom) went up again when I realized how hard it is for governments to remain aligned with their citizens. As a simple example, they can't seem to raise a finger against mass immigration no matter how unpopular it is, because it has an economic justification. See also: WW1. Replacing humans throughout the economy and military is going to be irresistible. There will probably be another, equally retarded, culture war about how this second great replacement is obviously never going to happen, then not happening, then good that it happened.

TL;DR: Even if we control AIs well, humans are going to be gradually stripped of effective power once we can no longer contribute economically or militarily. Then it's a matter of time before we can't afford or effectively advocate for our continued use of resources that could simulate millions of minds.

Yes, consequentialism and rule-following are special cases of each other. You got me. But the usual meaning of the words refers to situations in which they differ, i.e. any rule other than "maximize utility".

Sounds like you still agree with us doomers? We don't expect human greed / competitive pressures to go away any time soon, which is why we're worried about exactly the kinds of money-winning scenarios you propose.

I agree it's kind of a matter of degree. But I also think we already have so much power-seeking around that any non-power-seeking AI will quickly be turned to that end.

I agree, but I also still see most people steadfastly refuse to extrapolate from things that are already happening. For a while, fanciful doom scenarios were all we had as an alternative to "end of history, everything will be fine" from even otherwise serious people.

I'm really not trying to play gotcha games. We are playing definition games, I suppose, but I'd say you have to choose which you prioritize: the well-being of everyone, or following rules. If you follow rules only for the sake of the well-being of everyone, then I'd call you a consequentialist. I'm not trying to be clever or counter-intuitive.

I agree that Yud leans heavily on some unrealistic premises, but overall I think he gets big points for being one of the few people really excited / worried about the eventual power of AI at the time, and laying out explicit cases or scenarios rather than just handwaving.

I agree that bay area rationalists can be a little messianic and culty, though I think it's about par for the course for young people away from home. At least you can talk about it with them.

I also think that most x-risks come simply from being outcompeted. A big thing that Yud got right is that it doesn't matter whether the AI is universalist or selfish or whatever, it will still eventually try to gather power, since power-gathering agents occupy one of the only stable equilibria. You might be right that we won't have to worry about deontological AI, but people will be incentivized to build AIs that can effectively power-seek (ostensibly) on their behalf.

I agree that even adaptation can be successfully adapted to by an adversary. My claim is merely that adaptive agents (e.g. consequentialists) will eventually outcompete agents that operate according to fixed rules (deontologists). In your example, the adversaries are adaptive. If they followed fixed rules, they would be poor adversaries.

> we are in it for the well-being of everyone, too

If you justify your deontology in terms of its consequences, doesn't that make you a consequentialist who thinks that certain rules happen to be the optimal policy?

Okay, well I include some degree of adaptation in my definition of "very intelligent". In fact, adaptation is the main advantage that consequentialists have over deontologists.

Hmmm. I think you're on to something. I think we need to distinguish between utilitarianism done well, and done poorly. I agree it's easy to do poorly - I think that's part of why we love rules so much - they're easier to follow than trying to come up with a good strategy from scratch for every situation. I guess my claim is that, in the presence of enough adversarial intelligence or optimization, following even pretty good rules won't protect you, because the adversary will find the edge cases they can exploit. At that point you have to adjust your rules, and I claim the only effective way to do that in a way that avoids exploitation is very intelligent consequentialism.
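
Here's a minimal toy sketch of the claim (my own illustration, with arbitrary parameters, not anything from the thread): a two-armed bandit whose payoffs drift over time. The fixed-rule agent keeps following the rule that was optimal at the start; the adaptive agent keeps re-estimating from recent evidence, and ends up with noticeably higher reward.

    import random

    STEPS = 10_000

    def payoff_probs(t):
        # Arm 0 starts best (0.8) and decays; arm 1 starts worst (0.2) and grows.
        return 0.8 - 0.6 * t / STEPS, 0.2 + 0.6 * t / STEPS

    def run(policy):
        total = 0.0
        pulls, wins = [1.0, 1.0], [0.5, 0.5]  # smoothed per-arm statistics
        for t in range(STEPS):
            arm = policy(pulls, wins)
            reward = 1.0 if random.random() < payoff_probs(t)[arm] else 0.0
            for a in (0, 1):  # exponential forgetting, so old evidence fades
                pulls[a] *= 0.999
                wins[a] *= 0.999
            pulls[arm] += 1.0
            wins[arm] += reward
            total += reward
        return total / STEPS

    def fixed_rule(pulls, wins):
        return 0  # the rule that was correct at the start: always pull arm 0

    def adaptive(pulls, wins):
        if random.random() < 0.05:  # keep exploring
            return random.randrange(2)
        return 0 if wins[0] / pulls[0] >= wins[1] / pulls[1] else 1

    print("fixed-rule mean reward:", run(fixed_rule))  # roughly 0.5
    print("adaptive mean reward:  ", run(adaptive))    # roughly 0.7

Of course this only gestures at the claim: a sufficiently clever fixed rule could anticipate the drift, but writing that rule would require exactly the kind of consequence-tracking I'm describing.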

Okay. I agree it seems hard, but I think there's something like a 15% chance that we can coordinate to save some value.

I personally find it hard to care viscerally, at least compared to caring about whether I could be blamed for something. The only way I can reliably make myself care emotionally is to worry about something happening to my kids or grandkids, which fortunately is more than enough caring to spur me to action.

I don't think you'd normally go from "We might not be able to coordinate to stop disaster" to "Therefore we should give up and party". Maybe there's something else going on? I personally think this means we should try to coordinate to stop disaster.