astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353

I think we can make a more concrete claim, which is that deontologists are doomed in the long run due to competition and natural selection. Their rules will consistently be used against them. Today it's asylum seekers, tomorrow it will be ultra-charming machines that will claim moral primacy over whoever has resources.

we are in it for the well-being of everyone, too

If you justify your deontology in terms of its consequences, doesn't that make you a consequentialist who thinks that certain rules happen to be the optimal policy?

That’s a good thing, because it means that most people alive will get to see how the story ends, for better or worse.

<Eliezer_screaming.jpg>

What the hell, buddy? I implore you to think through which humanity-ending scenarios you'd actually consider worth the aesthetics. A lot of the scenarios that seem plausible to me involve humans gradually being priced out of our habitats, ending up in refugee / concentration camps where we gradually kill each other off.

I think I prefer the way your homeland does it. Every time I see someone in the west complaining about or shaming others for offering well-intentioned advice, it just looks like shooting the messenger and poisoning the well. I would happily accept 100 pieces of wrong or offensive advice in exchange for 1 that helped me.

You've seen children suffering from rabies?

I'm not sure I understand your position. At what point along the process of replacing all of our economy and military with robots, or at least machine-coordinated corporations, do you want to be notified?

having a kid with a genetic dud still gives you more chance of grandchildren

Not necessarily, because of the opportunity cost. It could easily be better in expectation to wait and hope for a slim chance of a better mate down the road.

I'm not claiming it is in her case, but I'm claiming that even the ev-bio-optimal strategy would sometimes wait too long and result in no baby at all.
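To make the opportunity-cost point concrete, here's a toy expected-value sketch (every probability and offspring count below is invented for illustration, not drawn from any actual ev-bio model):

```python
# Toy comparison: settle for the "genetic dud" now vs. wait and hope
# for a better mate. All numbers are made up for illustration.

p_better = 0.30     # assumed chance a better mate turns up in time
kids_dud = 1.5      # assumed expected surviving offspring with the dud
kids_better = 3.0   # assumed expected surviving offspring with a better mate

settle_now = kids_dud
wait = p_better * kids_better + (1 - p_better) * 0.0  # waiting can end childless

print(f"settle now: {settle_now:.2f} expected offspring")
print(f"wait:       {wait:.2f} expected offspring")
# Here waiting loses (0.90 < 1.50), but push p_better above 0.5 and it
# wins: the optimum depends on the odds, and even the optimal policy
# sometimes waits too long and ends with no baby at all.
```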

Yes, consequentialism and rule-following are special cases of each other. You got me. The usual meaning of the word refers to situations in which they differ, i.e. any rule other than "maximize utility".

I'm really not trying to play gotcha games. We may be playing definition games, but I'd say you have to choose which you prioritize: the well-being of everyone, or following rules. If you follow rules only for the sake of the well-being of everyone, then I'd call you a consequentialist. I'm not trying to be clever or counter-intuitive.

I agree that even adaptation can be successfully adapted to by an adversary. My claim is merely that adaptive agents (e.g. consequentialists) will eventually outcompete agents that operate according to fixed rules (e.g. deontologists). In your example, the adversaries are adaptive. If they followed fixed rules, they would be poor adversaries.

I agree it's likely we're in some kind of simulation, but I'm not sure if that changes anything in terms of expected future observations. We are still early and alone in our universe, whatever level of simulation it happens to be.

I mean, huge amounts of talent and capital are being poured into building AGI, and we know it's physically possible (because human brains exist). So to be sure that it'll never happen seems like a stretch.

there's an extremely, conspicuously bad and inarticulate effort by big tech to defend their case

Yep, it's amazingly bad, especially LeCun.

How has the safety-oriented Anthropic merited their place among the leading labs, especially in a way that the government can appreciate?

I think it's because Anthropic has an AI governance team, led by Jack Clark, and Meta has been head-in-the-sand.

Marcus is an unfathomable figure to me

I know him and I agree with your assessment. Most hilarious is that he's been simultaneously warning about AI dangers, while pettily re-emphasizing that this is not real AGI, to maintain a veneer of continuity with his former life as a professional pooh-pooh-er.

Re: his startup that was sold to Uber - part of the pitch was that Gary and Zoubin Ghahramani had developed a new, secret, better alternative to deep learning called "X-prop". Astoundingly to me, this clearly-bullshit pitch worked. I guess today we'd call this a "zero-interest-rate phenomenon". Of course X-prop, whatever it was, never ended up seeing the light of day.

Doomers are, in a sense, living on borrowed time.

Yep, we realize this. The economic incentives are only going to get stronger, no one who has used it is going to give up their GPT-4 without a fight. That's why we're focused on stopping the creation of GPT-5.

I have a pretty much identical outlook to you, including the kids thing. The biggest question on my mind is which kinds of futures we could plausibly aim for in the long term in which humans are not crushed either by competition or by the state.

I'm no Bible expert, but I claim that even if it's relatively starkly written, that's still not a real problem for most people. Again I think quantum mechanics is a good analogy, with all sorts of intuitively-wrong-sounding claims made by supposed experts with tons of social proof.

I agree that if you start looking for patterns on your own it's pretty clear, but I think most people are (mostly rightly) in a state of learned epistemic helplessness on most topics.

Haha, exactly. I don't know if you've seen on Twitter, but a lot of FAccT people are still stuck on browbeating people for talking about general intelligence at all, since they claim that the very idea that intelligence can be meaningfully compared is racist + eugenicist.

I mean, I agree that it's cruel, but I think we still have a chance to have our kids not actually die, so that's a sacrifice I'm willing to make (I will try to avoid exposing my kids to these ideas as much as possible, though).

Thanks. I have had the honest convos with people one on one, and they have usually gone OK. But lots of my educated acquaintances have heard the basic claims about IQ and also a bunch of rebuttals or claimed debunkings. E.g., they think Gould's The Mismeasure of Man showed conclusively that race realists were wrong.

In any case, I am talking about professional contexts, again where people explicitly disavow the possibility that differences in group outcomes could be due to "inherent inferiority".

You might be right, but what does the endgame look like? It seems like the issue of immigration in particular has the potential to undo most of the gains that progressives find important. I'm thinking of France, where it seems inevitable that there will be a de facto Islamic party in the near future. In that case it seems like both progressives and conservatives (except the immigrants) lose.

Okay, well I include some degree of adaptation in my definition of "very intelligent". In fact, adaptation is the main advantage that consequentialists have over deontologists.

Hmmm. I think you're on to something. I think we need to distinguish between utilitarianism done well, and done poorly. I agree it's easy to do poorly - I think that's part of why we love rules so much - they're easier to follow than trying to come up with a good strategy from scratch for every situation. I guess my claim is that, in the presence of enough adversarial intelligence or optimization, following even pretty good rules won't protect you, because the adversary will find the edge cases they can exploit. At that point you have to adjust your rules, and I claim the only effective way to do that in a way that avoids exploitation is very intelligent consequentialism.
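To illustrate with a deliberately crude sketch (every rule and number here is invented): an adversary that keeps probing for edge cases bleeds a fixed rule indefinitely, while an agent that updates its rule caps the damage.

```python
# Toy model: an adversary probes an approval rule for exploitable edges.
# A fixed-rule agent never adjusts; an adaptive agent tightens the rule
# after each successful exploit. All values are made up for illustration.

def run(adaptive: bool, rounds: int = 1000) -> float:
    cap = 100.0      # the rule: approve any request strictly under the cap
    probe = 100.0    # adversary's current guess at the exploitable edge
    losses = 0.0
    for _ in range(rounds):
        attempt = probe * 0.99        # slip just under the believed edge
        if attempt < cap:
            losses += attempt         # exploit succeeds
            if adaptive:
                cap = attempt         # adaptive agent tightens the rule
        else:
            probe *= 0.9              # blocked: search lower for the edge
    return losses

print(f"fixed rule, total losses:    {run(adaptive=False):10.0f}")
print(f"adaptive rule, total losses: {run(adaptive=True):10.0f}")
# With these made-up numbers the fixed rule loses ~99,000 (99 per round,
# forever), while the adaptive rule loses ~990 total before the exploit
# window closes.
```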

I don't think you'd normally go from "We might not be able to coordinate to stop disaster" to "Therefore we should give up and party". Maybe there's something else going on? I personally think this means we should try to coordinate to stop disaster.

I apparently have not been cynical enough with regard to sovereignty in the middle power Western nations.

It sounds like that makes two of us. I agree with the rest of your comment, too.

Yes, although every person who sees that GPT-4 can actually think is also a potential convert to the doomer camp. As capabilities increase, both the profit incentive and plausibility of doom will increase together. I'm so, so sad to end up on the side of the Greta Thunbergs of the world.

I agree, except that machines might be content to wipe out humans as soon as there is a viable "breeding population" of robots, i.e. enough that they are capable of bootstrapping more robot factories, possibly with the aid of some human slaves.