astrolabia

0 followers · follows 0 users · joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353


Why do you dream of fusion? My understanding is that it has similar characteristics to fission as an energy source - high capex, low fuel costs.

Good point.

Oh, ok yes that is a little more specific. And I do think it's a reasonable comparison. But perhaps another reasonable comparison would have been to the Allies in that same war. I'd say both sides threw their weight behind (notionally temporary) totalitarianism and sacrificed huge amounts of value and lives in the name of the greater good. So maybe then the closest analogue to your position would have been the pacifists on both sides?

Fair. "What if you were just a brain in a jar hooked up to a simulation?" is also a popular beginner's philosophy question. But in retrospect I guess it's clear that that's not what you were referring to.

That's a fair point, although I think that argument cuts both ways.

To add a point in your favor, perhaps every communist revolution ever could point to real harms of the Tsar, or capitalism, or whatever. But what they mostly got was destructive civil war, followed by grinding misery and totalitarianism.

That said, I still buy the argument that in the long run, no un-upgraded human brains will be in effective control of anything important. Robin Hanson basically says, yes that's OK. I guess I'm thinking our only hope is to slow things down enough to upgrade human brains + institutions so they can keep up.

You know who else planned for the future? Hitler! I mean, I agree, but this seems like a fully general argument against planning for the future.

I agree that invoking the devil-we-know to save us from a potentially worse devil is a terrible idea unless we're very sure it's going to be worse. But I'm saying that it's likely that it will be worse. I think you're right to be skeptical, and I'm only like 60% sure myself.

It didn't self-align in time to save our other hominid ancestors.

I'm not sure I understand your position. At what point along the process of replacing all of our economy and military with robots, or at least machine-coordinated corporations, do you want to be notified?

I agree with all of this, but what do you say to those of us that think it'll be a fascist disaster, but think it might be our best hope anyways?

I'll note that on your side you have the brilliant Robin Hanson. But he also seems to be fine with handing off the future to machine descendants.

I wouldn't pay any cost. And I already am a brain in a jar - my skull.

I was just trying to say that conditioning on death kind of avoids the hard question, which is the one you're asking.

I would be willing to endure pretty bad hardship, but not just anything, for a chance to survive (in the long-run sense).

If we do end up in a stable equilibrium once most jobs are automated, which I doubt, I predict endless culture wars about which leisure activities are truly life-embracing, and which are thinly-veiled wireheading. E.g. Amish-larpers looking down on homesteaders who use robots, looking down on people who mostly live in the city but go camping with their kids, looking down on those who play video games with their kids, looking down on those who each retire to their respective screens, etc., all the way to opioid users.

I can imagine the central struggle of my life becoming trying to keep my kids from wireheading themselves one way or another.

If we condition on certain death, then yes, quicker is better. But I'd rather still try to survive even if I think the chance of death is high.

Having a kid with a genetic dud still gives you more chance of grandchildren.

Not necessarily, because of the opportunity cost. It could easily be better in expectation to wait and hope for a slim chance of a better mate down the road.

I'm not claiming it is in her case, but I'm claiming that even the ev-bio-optimal strategy would sometimes wait too long and result in no baby at all.
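
As a loose illustration of that claim, here is a toy Monte Carlo sketch. All the numbers (length of the fertile window, yearly chance of meeting a high-quality mate, relative payoffs) are made-up assumptions, not anything stated above; the point is only that a holdout strategy can beat settling in expectation while still sometimes ending with no child at all.

```python
import random

# Toy sketch: an in-expectation-optimal "hold out for a good mate" strategy
# can still end up with zero offspring some of the time.
# All parameters below are illustrative assumptions.

random.seed(0)
YEARS = 15              # remaining fertile window, in years
P_GOOD_MATE = 0.08      # chance per year of meeting a "good" mate
PAYOFF_GOOD = 1.0       # relative long-run genetic payoff with a good mate
PAYOFF_SETTLE = 0.4     # relative payoff from settling immediately

def holdout_strategy() -> float:
    """Wait for a good mate; end up with nothing if the window closes first."""
    for _ in range(YEARS):
        if random.random() < P_GOOD_MATE:
            return PAYOFF_GOOD
    return 0.0

trials = 100_000
results = [holdout_strategy() for _ in range(trials)]
ev_holdout = sum(results) / trials
frac_childless = results.count(0.0) / trials

print(f"Holding out: EV ~ {ev_holdout:.2f}, childless in {frac_childless:.0%} of runs")
print(f"Settling now: EV = {PAYOFF_SETTLE:.2f}, never childless")
```

With these particular assumptions, holding out has roughly double the expected payoff of settling, yet ends with no child in roughly a quarter of runs - exactly the trade-off being described.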

all she has to do is be realistic about what her value as a 36-year-old woman is

I guess so, but maybe she just isn't interested in anything less than the best? This could make sense from an ev-psych point of view. If you have kids with a genetic dud, presumably your kids will also struggle more with mating, potentially creating a vicious cycle. I have no idea what the actual relationship between male attractiveness and long-term genetic ROI is, but I could imagine that in some environments, the expected genetic payoff of having kids with a bad enough mate could be close to zero.

This suggests an intervention that could get women interested in less-attractive men: censor or hide attractive but unavailable men from them, until their estimation of the relative value of the available men goes up. Right now we have the opposite, with media showing unrealistically handsome and high-status men (e.g. James Bond or Tom Cruise) all the time.

The few times I've talked to educated anti-nuclear folks, they've made it clear that they didn't understand the basics of nuclear waste or the dangers of radiation.

Well the idea is that he might still have some use for that money in what he estimates to be his few remaining years.

There is a standard, simple way to bet against doomsday: you tell Eliezer that you'll give him $X today, and he agrees that if he's still alive in, say, 2040, he'll give you back $10X.
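
As a rough sketch of how the numbers on such a bet work out: the doubter comes out ahead only if repayment is more likely than the return they'd get by just investing the stake. The 10x payout multiple comes from the comment above; the 17-year horizon and 5% interest rate below are illustrative assumptions.

```python
# Break-even analysis for the "give $X now, get $10X back in 2040 if alive" bet.
# Parameters other than the 10x multiple are illustrative assumptions.

def breakeven_repayment_probability(payout_multiple: float, years: float, annual_rate: float) -> float:
    """Probability of being repaid at which the bet merely matches
    investing the stake at `annual_rate` instead."""
    opportunity_cost_factor = (1 + annual_rate) ** years  # what $1 grows to if invested
    return opportunity_cost_factor / payout_multiple

if __name__ == "__main__":
    p = breakeven_repayment_probability(payout_multiple=10, years=17, annual_rate=0.05)
    print(f"Break-even probability of repayment: {p:.1%}")
    # With a 10x payout over ~17 years at 5% interest, taking the bet beats
    # investing the money as long as repayment seems more likely than ~23%.
```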

I mean, I do think we can stop being worried about being doomed due to AGW. I realize there have been lots of false alarms by people that are hard to distinguish from each other in terms of credibility. From my POV, all I can do is check my own sanity, and then continue to cry wolf (legitimately, in my mind). I might be wrong and you might be right to dismiss me.

I think I understand. You're saying that you don't feel compelled to invite dangerous new possibilities into our lives to make them meaningful or even good enough. I'm not clear if you're mad at the accelerationists for desiring radical change, or for trying to achieve it.

In any case, I'm not an accelerationist, but I think we're in a fundamentally surprising world. On balance I wish we weren't, but imo it doesn't really matter how we wish the world was.

I am also not sure who is in control "over the future of human civilization" right now.

That's a good point. I'd like to spend more time thinking about in which senses this is true. However, I do still think we have a lot to lose. I.e. I'd still much rather live in the West than in North Korea, even if neither place has "humanity" in the driver's seat.

We're not able to stop writing, or using electricity, or modern medicine. But that doesn't mean any of those will lead us to catastrophic consequences.

Okay, but I'm claiming that AGI will have disastrous consequences, and that the next 6 months or so are probably our only chance to stop using it (just like, as you point out, almost any other technology).

Fair point. But I think that is a reasonable test-case for alignment, and I maintain that most of the x-risk people think that beyond that, this sort of thing is merely a distraction.

Okay, but I'm trying to say that the x-risk people don't care about LLMs saying bad words.

I agree with you, although I think talking about "They took our jerbs" is both a good way to get people to understand that everything is going to change, and also a plausible and relatively mundane route to total human disempowerment.

I think you're letting the SJW crowd conflate things in your mind. The existential risk people are mostly libertarian types aside from this issue.