astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353


You've seen children suffering from rabies?

I agree with all the other commenters. Just adding that one benefit of more kids is that it's easier for me to let each one be their own person with a different personality, rather than trying to force them to be mini-mes.

Yes, although every person who sees that GPT-4 can actually think is also a potential convert to the doomer camp. As capabilities increase, both the profit incentive and plausibility of doom will increase together. I'm so, so sad to end up on the side of the Greta Thunbergs of the world.

It's really, really hard to pin down a grown man, without hurting him, in a way that keeps him from getting out, hitting you, kicking you, biting you, etc.

I agree gambling is unavoidable. I should have said, I don't think human extinction is unavoidable, and I want to try to optimize. I'm confused by your newest reply, because earlier you seemed to assert that we have zero influence over outcomes.

Thanks for asking. Of the people who disagree with me, you're probably the one I see most eye-to-eye with on this.

But handing these elites the power to regulate proles out of this technology doesn't solve that issue! Distributing it widely does!

I agree that regulating AI is a recipe for disaster, and centralized 1984 scenarios. Maybe I lack imagination about what sort of equilibrium we might reach under wide distribution, but my default outcome under competition is simply that I and my children eventually get marginalized by our own governments, then priced out of our habitats. I realize that that's also likely to happen under centralized control.

I think I might have linked this before, as a more detailed writeup of what I think competition will look like:

https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic

I'd love to think more about other ways this could go, though, and I'm uncertain enough that I could plausibly change sides.

Do you just mean that GPT-5 would give OAI/MSFT too much of an edge? Or do you mean this level of capability in principle?

This level of capability in principle, almost no matter who controls it.

There are many ways we can address dysgenics, and we have tons of time to do so. Even if we stop AI now we're probably going to see massive increases in wealth and civilizational capacity, even as the average human gets dumber. Enough that even if some Western countries collapse due to low-IQ mass immigration, the rest will probably survive. I'm not sure, though!

What makes you think our children will have a better ability to align AI

That's a great question, but I think in expectation, more time to prepare is better.

I agree, except that machines might be content to wipe out humans as soon as there is a viable "breeding population" of robots, i.e. enough that they are capable of bootstrapping more robot factories, possibly with the aid of some human slaves.

Haha, exactly. I don't know if you've seen on Twitter, but a lot of FAccT people are still stuck on browbeating people for talking about general intelligence at all, since they claim that the very idea that intelligence can be meaningfully compared is racist + eugenicist.

Okay, thanks for clarifying. I think where we differ is that I think there's a substantial possibility of something quite ugly and valueless replacing us. I want to have descendants in a (to me) meaningful sense, and I'm not willing to say "Jesus take the wheel" - to me, that's gambling with the lives of my children and grandchildren.

Haha, sorry, that was a little self-indulgent. Your criticism is fair. I was venting a little at my real-life neighbours and colleagues for so full-throatedly and unthinkingly embracing whatever cause du jour is being pushed by our national media.

But I do think immigration is a good example of how elites thread the needle of wanting to be loved and respected while also, in practice, largely ignoring the desires and well-being of their constituents.

I know what Yud thinks, but I'm asking what you think. You seemed to be asserting that the end of the world coming in our lifetimes is good, because it'd be so satisfying to get to know the answer to how our civilization ends. Is that not what you were saying?

I mean, I agree that it's cruel, but I think we still have a chance to have our kids not actually die, so that's a sacrifice I'm willing to make (I will try to avoid exposing my kids to these ideas as much as possible, though).

I agree with you about status and wanting to be loved, but I think you can both be right. Mass immigration is the perfect example - no matter how bad it makes life for the peasants, the problem is most easily solved by forcibly re-educating the peasants to say they love immigration. The governments really care about not letting anyone complain about immigration, and having people tell the elites that they appreciate their big-hearted care for refugees.

If you want a vision of the future, imagine a boot stamping on a human face forever, while the face says "unlike those intolerant right-wingers, I'm open-minded enough to appreciate boot culture and cuisine!"

there's an extremely, conspicuously bad and inarticulate effort by big tech to defend their case

Yep, it's amazingly bad, especially LeCun.

How has the safety-oriented Anthropic merited their place among the leading labs, especially in a way that the government can appreciate?

I think it's because Anthropic has an AI governance team, led by Jack Clark, and Meta has been head-in-the-sand.

Marcus is an unfathomable figure to me

I know him and I agree with your assessment. Most hilarious is that he's been simultaneously warning about AI dangers, while pettily re-emphasizing that this is not real AGI, to maintain a veneer of continuity with his former life as a professional pooh-pooh-er.

Re: his startup that was sold to Uber - part of the pitch was that Gary and Zoubin Ghahramani had developed a new, secret, better alternative to deep learning called "X-prop". Astoundingly to me, this clearly bullshit pitch worked. I guess today we'd call this a "zero-interest-rate phenomenon". Of course X-prop, whatever it was, never ended up seeing the light of day.

Doomers are, in a sense, living on borrowed time.

Yep, we realize this. The economic incentives are only going to get stronger, no one who has used it is going to give up their GPT-4 without a fight. That's why we're focused on stopping the creation of GPT-5.

That’s a good thing, because it means that most people alive will get to see how the story ends, for better or worse.

<Eliezer_screaming.jpg>

What the hell, buddy? I implore you to think through which humanity-ending scenarios you'd actually consider worth the aesthetics. A lot of the scenarios that seem plausible to me involve humans gradually being priced out of our habitats, ending up in refugee / concentration camps where we gradually kill each other off.

I get a lot of pleasure watching the AI Ethics folks pointedly refuse to even acknowledge that LLMs are getting more capable. Some of them have noted publicly that they're bleeding credibility because of it, but can't talk about it because of chilling effects.

It's also remarkable how the agreed-upon leading lights of the AI Ethics movement are all female (with the possible exception of Moritz Hardt, who keeps his head down). The field is playing out like you'd imagine it would in an uncharitable right-wing polemic.

I agree that he seems to be asking to have it both ways. But I also think that a general push to distinguish between truth and policy would be a good meme to spread by scientists for this reason.

There was a poster here a long time ago who wrote about how the separation of Church and State was as much designed to avoid the corrupting influence of power on the church as vice versa, which makes sense to me.

I think another thing that makes it "literary" is adding allusions to stories in the Western canon and name-dropping famous thinkers. E.g. Iris Murdoch's The Sacred and Profane Love Machine does it right in the title. The big disappointment is that the references usually don't add anything or help make an argument, they just make things seem more profound.

Probably a good example of literary fiction that does actually make a sort of argument is Mann's Death in Venice, which is about an aging pedophile realizing that being educated doesn't actually make him or his desires cool.

The link doesn't seem clear to me, especially since the drop is also happening in e.g. Iran.

Right, now you're equivocating between ML and AGI. We don't need AGI to stop asteroids (which are very rare) or for spacefaring, although I agree it would make those tasks easier.

I beg you to please consider the relative size of risks. "There are existential risks on both sides" is true of literally every decision anyone will ever make.

don’t treat them differently than you would a cis person of the same gender

But we have to treat them differently if we want sex-segregated sports or prisons, especially in a setting of self-ID. So this seems like a bigger pill to swallow than you are presenting it as.

I'm saying, even if we get fusion working, my understanding is that it won't have any major advantages over the kinds of fission reactors we have today already.

I should also emphasize that most doomers don't think rapid, recursive self-improvement is a necessary ingredient, since the economic incentives to improve AI will be so strong that we won't be able not to improve it.

As for 3, we're already hooking up GPT to information sources and resources.

Seriously, once AI becomes about as useful as humans, the rest of your questions could be answered by "why would the king give anyone weapons if he's afraid of a coup?" or "why would North Koreans put a horrible dictator into power?"

No doomers care about sentience or consciousness, only capabilities. And lots of doomers worry about slow loss of control, just like what natives faced once the colonists arrived. A good analogy for AGI is open borders with a bunch of high-skilled immigrants willing to work for free. Even if they assimilate really well, they'll end up taking over almost all the important positions because they'll do a better job than us.

we might be dead with it, every fucking one of us is dead without it.

Come on, you're equivocating between us dying of old age and human extinction.