astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353

I think I prefer the way your homeland does it. Every time I see someone in the west complaining about or shaming others for offering well-intentioned advice, it just looks like shooting the messenger and poisoning the well. I would happily accept 100 pieces of wrong or offensive advice in exchange for 1 that helped me.

I think we can make a more concrete claim, which is that deontologists are doomed in the long run due to competition and natural selection. Their rules will consistently be used against them. Today it's asylum seekers, tomorrow it will be ultra-charming machines that will claim moral primacy over whoever has resources.

You've seen children suffering from rabies?

I expect it to become formidable because Vitalik is

I don't think d/acc holds up - it's just so unlikely that man-machine hybrids can remain competitive without a massive deliberate slowdown of development of machine intelligence.

However, I agree that d/acc will remain formidable, because ambitious smart young men need something to work on to distinguish themselves, and an excuse to work on the most economically valuable thing they can, which is AI development. And it's plausible enough to provide such an excuse.

I think any serious proposal for pausing AI development has to be paired with a massively multiplayer, high-status, well-paying make-work plan for the hungry junior AI developers of the world.

Yes, consequentialism and rule-following are special cases of each other. You got me. The usual meaning of the word refers to situations in which they differ, i.e. any rule other than "maximize utility".

I'm really not trying to play gotcha games. I guess we are playing definition games, but I'd say you have to choose which you prioritize: the well-being of everyone, or following rules. If you follow rules only for the sake of everyone's well-being, then I'd call you a consequentialist. I'm not trying to be clever or counter-intuitive.

I agree that even adaptation can be successfully adapted to by an adversary. My claim is merely that adaptive agents (e.g. consequentialists) will eventually outcompete agents that operate according to fixed rules (e.g. deontologists). In your example, the adversaries are adaptive. If they followed fixed rules, they would be poor adversaries.
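To make that competition claim concrete, here is a toy sketch of my own (purely illustrative; the payoff numbers and the "best-respond to observed behavior" heuristic are assumptions, not anything from the thread). A fixed-rule agent plays a repeated prisoner's dilemma against an agent that simply adapts to what it has seen, and the fixed rule ends up being used against it.

```python
# Toy illustration only: a fixed-rule agent vs. an adaptive agent in a repeated
# prisoner's dilemma. Payoffs and the adaptation heuristic are arbitrary choices
# for the sketch, not a claim about any real system.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def fixed_rule(_history):
    """Fixed-rule stand-in: always cooperates, regardless of outcomes."""
    return "C"

def adaptive(history):
    """Adaptive stand-in: best-responds to the opponent's last observed move."""
    if not history:
        return "C"  # start friendly
    last_opponent_move = history[-1][0]  # what the fixed-rule agent just played
    # Pick whichever of our moves pays more against that observed move.
    return max("CD", key=lambda m: PAYOFF[(m, last_opponent_move)])

history = []  # list of (fixed_move, adaptive_move)
fixed_score = adaptive_score = 0
for _ in range(100):
    f, a = fixed_rule(history), adaptive(history)
    history.append((f, a))
    fixed_score += PAYOFF[(f, a)]
    adaptive_score += PAYOFF[(a, f)]

print(fixed_score, adaptive_score)  # the adaptive agent ends up exploiting the fixed rule
```

Under these made-up numbers the adaptive agent discovers after one round that defection pays against an unconditional cooperator, and from then on the fixed-rule agent is exploited every turn.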

Why is she friends with them?

I like your advice overall, but I don't know how I could ask this particular question without making the other person feel like it's an attack.

Oh, thanks for clarifying; I didn't realize Hughes was talking about West Indian blacks. I was thinking more of Bangladeshis.

Thanks for clarifying.

Oh, whoops, thanks.

Good point.

Oh, ok yes that is a little more specific. And I do think it's a reasonable comparison. But perhaps another reasonable comparison would have been to the Allies in that same war. I'd say both sides threw their weight behind (notionally temporary) totalitarianism and sacrificed huge amounts of value and lives in the name of the greater good. So maybe then the closest analogue to your position would have been the pacifists on both sides?

Fair. "What if you were just a brain in a jar hooked up to a simulation?" is also a popular beginner's philosophy question. But in retrospect I guess it's clear that that's not what you were referring to.

That's a fair point, although I think that argument cuts both ways.

I'm not sure I understand your position. At what point along the process of replacing all of our economy and military with robots, or at least machine-coordinated corporations, do you want to be notified?

having a kid with a genetic dud still gives you more chance of grandchildren

Not necessarily, because of the opportunity cost. It could easily be better in expectation to wait and hope for a slim chance of a better mate down the road.

I'm not claiming it is in her case, but I'm claiming that even the ev-bio-optimal strategy would sometimes wait too long and result in no baby at all.
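As a toy illustration of that expectation argument (the probabilities and fitness values below are made up purely for the example, not drawn from the thread): a strategy can maximize expected descendants and still, most of the time, leave you with none.

```python
# Made-up numbers purely to illustrate the expectation argument:
# a strategy can maximize expected (grand)children and still usually
# end with none at all.

p_settle, value_settle = 0.95, 1.0   # settle now: near-certain child of baseline fitness
p_wait, value_wait = 0.30, 4.0       # wait: slim chance of a much better match

ev_settle = p_settle * value_settle  # 0.95
ev_wait = p_wait * value_wait        # 1.20

print(ev_settle, ev_wait)
# Waiting wins in expectation (1.20 > 0.95), yet 70% of the time it yields no baby at all.
```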

I am also not sure who is in control "over the future of human civilization" right now

That's a good point. I'd like to spend more time thinking about the senses in which this is true. However, I do still think we have a lot to lose. I.e. I'd still much rather live in the West than in North Korea, even if neither place has "humanity" in the driver's seat.

We're not able to stop writing, or using electricity, or modern medicine. But that doesn't mean any of those lead us to catastrophic consequences.

Okay, but I'm claiming that AGI will have disastrous consequences, and that the next 6 months or so are probably our only chance to stop it, before it becomes (just like, as you point out, almost any other technology) something we're not able to stop using.

Fair point. But I think that is a reasonable test-case for alignment, and I maintain that most of the x-risk people think that beyond that, this sort of thing is merely a distraction.

There would be no sure way to preserve our "baby" from some future "punch".

Right, there never was a sure way, and there never will be. But it's a matter of degree; we can reduce the chances.

I have no idea what you're arguing or advocating for in the rest of your reply - something about how if the world has surprising aspects that could change everything, that's probably bad and a stressful situation to be in? I agree, but I'm still going to roll up my sleeves and try to reason and plan anyway.

Which impression?

I agree it sounds harsh, but you're the one that said

Why even bother with bio-souls?

If I were planning to spend less time with my friends in favor of an AI, it might be a dickish thing to be honest about, but saying why would at least give them a chance to fix the problem.

I agree it's likely we're in some kind of simulation, but I'm not sure if that changes anything in terms of expected future observations. We are still early and alone in our universe, whatever level of simulation it happens to be.

I mean, huge amounts of talent and capital are being poured into building AGI, and we know it's physically possible (because human brains exist). So to be sure that it'll never happen seems like a stretch.

I can see how this could be deliberately misleading, but isn't what you're describing basically all of fiction? Also, I think that while any particular guess at another's internal state is likely to be wrong, simply coming up with a coherent hypothesis that explains someone or something's behavior is, imo, a good contribution if we're trying to understand that behavior without direct access to their internal states.