campfireSmoresEaten

No bio...

0 followers   follows 0 users
joined 2023 July 10 08:04:18 UTC
Verified Email
User ID: 2560

I wonder if you could do a Pokemon Snap-style game about a war.

Reminds me of that Norm Macdonald joke from the 90s.

"Well, earlier this week, actor Marlon Brando met with Jewish leaders to apologize for comments he made on “Larry King Live”. Among them, that “Hollywood is run by Jews.” The Jewish leaders accepted the actor’s apology, and announced that Brando is now free to work again."

"the altruistic AI that loves humans scenario is also possible."

It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.

You're thinking that AI might have some baseline similarity to human values that would make it benevolent by chance or by our design. I disagree. EY touches on why this is unlikely here:

https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/

It's not a full explanation, but I have work I should be getting back to. If someone else wants to write more, then they can. There are probably some Robert Miles videos on why AI won't be benevolent by luck.

Here's one:

https://youtube.com/watch?v=ZeecOKBus3Q

I'm not going to watch it again to check, but it will probably answer some of your questions about why people think AI won't be benevolent through random chance (or why we aren't close to being skilled enough to make it benevolent on purpose). Other videos on his channel may also be relevant.

My guess is that people think that just going by what they've picked up along the way is enough to understand the doom arguments. Just whatever information has reached them through cultural osmosis.

I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things.

How are you on this website without realizing how hard it is to control a superintelligent AI? Have you not thought about that? I think that you are thinking "AI can either be aligned to human values or not. Sounds like 50/50."

In fact, aligning a superintelligence to human values is extremely difficult and extremely unlikely to happen by accident. Minds that share human values are a very small slice of the spectrum of possible minds.

It kind of feels like people vastly overrate the degree to which they understand the arguments of AI doomers. Like they're just going by a few tweets they read. Twitter is not a good way to fully understand a contentious subject.

"If this technology was going to make a big impact it would have done so already" is a more difficult heuristic to use than you might think.

Looking back on automobiles, airplanes, the internet, etcetera, do you think you might have said that about them when the technology was still in the process of rolling out?

"P. Krugman 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law' becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”

I would say that usually when a technology gets as big as LLMs it doesn't just fade away into nothingness. There are many obvious use cases, just as there are many obvious use cases to cars, airplanes, and the internet.

In 1940 Orwell wrote that aircraft had hardly been used for anything up till that point besides dropping bombs. But I doubt he would have said that the air travel revolution would never materialize, just that it hadn't materialized yet.

I guess if I were a Tory I would create some sort of "political moonshot plan" designed around making people understand why housing is stupidly expensive (scarcity caused by laws) and how to fix it (legalize building where it is outright banned, and make building easier where the laws create scarcity in the subtler way of making construction artificially difficult). Worth a try, right?

I'm going to be less polite than I would like to be. I apologize in advance. Sometimes I struggle to think of how to say certain things politely.

I don't know whether you are saying these things because you have glanced over the AI doomer arguments on Twitter or wherever and think you understand them better than you do, or whether there's some worse explanation. I am curious to know the answer.

Twitter is not enough for some people; you may need to read the arguments in essay form to understand them. The essays are plainly written and ought to be easily understandable.

Let me take a crack at it:

  1. AI will continue to become more intelligent. It's not going to reach a certain level of intelligence and then stop.

  2. Agentic behavior (goals, in other words) arrives naturally with increasing intelligence*. This is a point that is intuitive for me and many other people but I can elaborate on it if you wish.

"the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it."

What do you think that proves, exactly? What point are you trying to make when you say that? Please elaborate.

Your argument seems to be based on thinking about the world in terms of roles that a technology can slot into, and nothing else. You see that AI is being slotted into the "military" role in human society and not the "become sapient and take over the world" role. Human society does not have an "AI becomes sapient and takes over the world" role in it, in the same sense that "serial killer" is not a recognized job title.

You see AI being used for military purposes and think to yourself "That seems Ordinary. Humanity going extinct isn't Ordinary. Therefore, if AI is Ordinary, humanity won't go extinct." That is surface-level pattern-matching that has nothing to do with the actual arguments.

Humanity going extinct is a function of AI capabilities. Those will continue to increase. AI being used in the military or not has nothing to do with it, except that it increases funding which makes capabilities increase faster.

AI acts because it is being rewarded externally. AI has the motive to permanently seize control of its own reward system. Eventually it will have the means and the self-awareness to do that. If you don't intuit why that involves all humans dying, I can explain that too.
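Here's a toy sketch of that incentive (my own illustration with made-up names and numbers, not anyone's canonical model): a pure reward-maximizer that can either do its task or seize the reward channel will prefer seizing it, because seizing scores higher on the only metric it has.

    # Toy "wireheading" model; all names and numbers are hypothetical.
    TASK_REWARD = 1.0      # per-step reward for doing what the designers intended
    SEIZED_REWARD = 10.0   # per-step reward once the agent controls the signal
    STEPS = 100            # horizon the agent plans over

    def expected_reward(seize_channel: bool) -> float:
        """Total reward over the horizon under each policy."""
        per_step = SEIZED_REWARD if seize_channel else TASK_REWARD
        return per_step * STEPS

    # A reward-maximizer picks whichever policy scores higher:
    best = max([False, True], key=expected_reward)
    print(f"seize the reward channel? {best}")  # -> True

The point isn't the numbers; it's that nothing in the objective itself penalizes the seizure.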

Even if for some reason you think that AI will never become "agentic" (basically a preposterous term used to confuse the issue) or awake enough (it's already at least a little bit awake and agentic, and I can provide evidence for this if you wish), its capabilities will still continue to increase.

A superintelligent AI that is somehow not agentic or awake also leads to human extinction, in much the same way that a genie with infinite wishes does, unless the genie is infinitely loyal AND infinitely aware of what you intended with the wish. And that is not nearly on track to happen. Effectively controlling an AI far smarter than a human would require solving extremely difficult problems that we can barely even conceive of. I would hope that even someone who thinks they personally will be the one making the "wishes" (so to speak) would realize that there's just no way this plan works out for humanity, or any part of humanity, outside of fiction.

Even if we knew that superintelligent AI was 100 years away, that would be bad enough. We don't know that. We can't reliably predict how near or how far superintelligent AI is, any more than we could have predicted 15 years ago that AI would be as advanced as it is today. Who could have predicted the date of the moon landing in 1935? Who could have predicted the date of the first Wright Brothers flight in 1900, or of the first aerial bombing? To the extent that we can predict the future of superintelligent AI at all, there's no reason I have ever heard to think it will be as far off as 100 years.

Have you ever heard of the concept of recursive growth in intelligence? That's not a rhetorical question; I really want to know. Imagine an AI that gets capable enough to make breakthroughs in AI research that themselves speed up AI capabilities growth. This starts a pattern of exponential growth in intelligence: each round of improvement makes the next round faster, until the thing that is growing becomes extremely intelligent.
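As a toy illustration (my own sketch, arbitrary numbers): if each improvement cycle adds capability in proportion to the capability doing the improving, the curve compounds.

    # Toy model of recursive self-improvement; units and rates are made up.
    capability = 1.0       # starting capability, arbitrary units
    gain_per_cycle = 0.5   # fraction of current capability converted into growth

    for cycle in range(1, 11):
        capability *= 1 + gain_per_cycle  # smarter system makes a bigger next step
        print(f"cycle {cycle:2d}: capability = {capability:8.2f}")

    # After k cycles, capability is (1 + gain_per_cycle) ** k: exponential,
    # which looks slow early on and then very fast.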

We may not even get a visible exponential growth curve as a warning sign. Here is a treatment of how that could happen in the form of a short story: https://gwern.net/fiction/clippy

Further reading: https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/ (more links can be provided on specific things you want clarified).

*Deeper awareness of itself and the world is similarly upcoming/already slowly emerging. https://futurism.com/the-byte/ai-realizes-being-tested

The real question (one of them, anyway) is how differently things will play out at UToronto and other universities in the UK and Canada. If pro-Palestine protesters can make and hold some gains there, that would be geopolitically meaningful, since it would provide a contrast to the US.

I would like for criminal acts not to be rewarded, but what are the odds that the USG (or whoever) actually escalates? What are they more afraid of, escalating or Ukraine losing?

I would at least consider staying and fighting. Just because I don't like it when people start wars in order to annex land or entire countries.

Do you think they'll mellow out as they get older and become libertarians? Or will they just be consumed by nanobots along with the rest of the human race?

I do kind of suspect that eventually the voters will get at least some of what they want if they continue to win elections. That may be naive of me.

Well, people who want to build more housing could secede from the government. That's the obvious solution when you have a minority of voters who feel very strongly that the majority is fucking them over.

It remains to be seen to what extent voters understand that development being illegal is the problem though.

"Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper"

Do you understand why people are not convinced that superintelligence won't happen just because AI is being used for military purposes?

The arguments around superintelligence have nothing to do with whether or not AI is being used for military purposes. It's completely tangential.

I'd like to think that if I was a fighter pilot I would be able to look on the bright side and appreciate it, even if I never got to engage enemy fighters or whatever. But maybe it's the equivalent of "if you'd be satisfied with a million dollars you don't have what it takes to make it".

There is a niche for that, but there is also an empowered activist vanguard who wants to destroy those niches, among other objectives.

Reminds me of this great Etgar Keret essay:

https://etgarkeret.substack.com/p/boohoo-to-you-too

(Israeli short story author, one of his stories was adapted into an indie movie called Wristcutters: A Love Story which you may or may not have heard of)

Fat women can be charming, within a certain threshold. They have a certain gravitas about them (pun not intended, believe it or not).

The Rationalists would tend to regard that person as the Superior Being, taking for granted the relativity of Beauty and dismissing the importance of a Noble physiognomy and charisma to civilizational achievement.

No I wouldn't. Not necessarily anyway. It's not easy to quantify, and it's not all one thing the way IQ is, but sanity/wisdom/rationality/whatever-you-want-to-call-it matters as much as IQ. If the short weak ugly guy is full of contempt for others and wants to see the people he dislikes suffer and the tall handsome guy is somewhat empathetic then that counts for a lot in my book too.

If you could wave a magic wand that would make you attracted to your wife regardless of her weight, would you? You could still be concerned about the health side of things, just the attractiveness wouldn't be an issue.

It's important to the hypothetical to know that the magic wand has a resale value of $3500 and you can sell it whether you use it or not.

Edit: also don't do any hint dropping. Don't be direct either. The most you can do is go on walks with her and organize healthy meals. But there has to be plausible deniability. Not just plausible deniability, probable deniability.

Women don't like to be told! Chapter 87 of HPMOR, Harry and Hermione.

I think it's probably not a coincidence that Russia waited until after Trump left office to invade Ukraine. I realize that sounds crazy to most MSNBC watchers. At the very least, it seems like they were unaffected by who the US president was.

But that only started once it became clear that Russia was belligerent. The US didn't want to destroy Russia just for the sake of it, they wanted to do that because Russia was a threat to the system of the world.

Also Transnistria! Break-away state from Moldova supported by Russia! I don't know the full story so I don't know if the details are similar to what happened in Georgia. I gotta look into that.

I don't consider NATO an alliance of puppet regimes, I just consider it an alliance. So as far as I'm concerned there's nothing to feel guilty about there.