campfireSmoresEaten

0 followers   follows 0 users   joined 2023 July 10 08:04:18 UTC

User ID: 2560

Verified Email

No bio...
Being pro-freedom in a dictatorship without much of a broader platform is fine with me. It's the most pressing issue, so it's the one to focus on.

To use a somewhat goofy example: if there's an asteroid heading for Earth, and some politicians are in favor of the asteroid, it's fine to just be anti-asteroid and nothing else. Anything else would just be a distraction until the asteroid is dealt with.

"Someone blatantly pointing out in the most public way possible that this has always been a fiction, that governments may make figleaf declarations about opposing these types of slander but will never actually enforce them because they actually are inherently conservative entities that are on the side of the privileged and the default, that anyone can make the most vile comments they want and always could without fearing legal reprisals"

I don't know if you're an American, but this is just not true. Outside the US, people have been prosecuted for saying that the Bible says homosexuality is a sin (in Canada and, I think, Finland), for saying that Muhammed was a pedophile, for telling jokes, for saying that Muslim girls are raped by their family members, for saying that Muslim girls are murdered by their family members in honor killings, for saying that Muslims want to kill us, for quoting someone else saying that Islam is a defective and misanthropic religion, for comparing Muslims to Nazis, and for saying, "Well, when one, like Bwalya Sørensen, and most black people in South Africa, is too unintelligent to see the true state of things, then it is much easier to only see in black and white, and, as said, blame the white."

More: for saying that white people pretend to be indigenous for political or career clout. Etc., etc., etc.

"seeing the bad consequences of things they support doesn’t move the needle at all in terms of their worldviews"

It is important to ask others (and yourself) "what would change your mind?" Yudkowsky taught me that.

Reminds me of that Norm Macdonald joke from the 90s.

"Well, earlier this week, actor Marlon Brando met with Jewish leaders to apologize for comments he made on “Larry King Live”. Among them, that “Hollywood is run by Jews.” The Jewish leaders accepted the actor’s apology, and announced that Brando is now free to work again."

The story of the boy who cried wolf has two sides. It's not just a lesson for the boy not to lie, it's a lesson for the villagers too. Just because people who lie about wolves exist doesn't mean wolves don't exist.

Also most historians think the German atrocities in Belgium during the first world war did happen, even if they were exaggerated at the time.

What about Japanese war crimes? Did those never happen either? What about Unit 731? Why would the United States make up fake war crimes only to become complicit in them later by trading the data produced by the research in exchange for immunity?

It's the same ATF agent! That's crazy.

"But nobody forced that (probably very media savvy) professor to go on the air and talk about how humane execution is stupid because murderers should suffer. That "bloodthirsty cruelty is the point." was literally his point."

His words may have been taken out of context, as often happens in documentaries, interviews, and interviews that are part of documentaries. It happens to people who one would think are media savvy.

I'm going to be less polite than I would like to be. I apologize in advance. Sometimes I struggle to think of how to say certain things politely.

I don't know whether you're saying these things because you've glanced over the AI doomer arguments on Twitter or whatever and think you understand them better than you do, or whether there's some worse explanation. I am curious to know the answer.

Twitter is not enough for some people; you may need to read the arguments in essay form to understand them. The essays are plainly written and ought to be easily understandable.

Let me take a crack at it:

  1. AI will continue to become more intelligent. It's not going to reach a certain level of intelligence and then stop.

  2. Agentic behavior (goals, in other words) arrives naturally with increasing intelligence*. This is a point that is intuitive for me and many other people but I can elaborate on it if you wish.

"the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it."

What do you think that proves, exactly? What point are you trying to make when you say that? Please elaborate.

Your argument seems to be based on thinking about the world in terms of roles that a technology can slot into, and nothing else. You see that AI is being slotted into the "military" role in human society and not the "become sapient and take over the world" role. Human society does not have an "AI becomes sapient and takes over the world" role in it, in the same sense that "serial killer" is not a recognized job title.

You see AI being used for military purposes and think to yourself "That seems Ordinary. Humanity going extinct isn't Ordinary. Therefore, if AI is Ordinary, humanity won't go extinct." That is a surface level pattern-matching analysis that has nothing to do with the actual arguments.

Humanity going extinct is a function of AI capabilities. Those will continue to increase. AI being used in the military or not has nothing to do with it, except that it increases funding which makes capabilities increase faster.

AI acts because it is being rewarded externally. AI has the motive to permanently seize control of its own reward system. Eventually it will have the means and the self-awareness to do that. If you don't intuit why that involves all humans dying I can explain that too.
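To make that concrete, here's a toy sketch of the reward-tampering argument. This is purely my own illustration (the agent, actions, and numbers are all invented for the example, not anyone's actual system): a pure reward-maximizer behaves as intended only while tampering is out of reach.

```python
# Toy model of reward tampering ("wireheading"). All names and numbers here
# are invented for illustration; the only real assumption is that the agent
# picks whichever available action maximizes its expected reward.

EXPECTED_REWARD = {
    "do_the_task": 0.7,            # the behavior the designers intended
    "slack_off": 0.1,
    "seize_reward_channel": 1.0,   # write the maximum value into its own reward register
}

def pick_action(available_actions):
    # A pure reward-maximizer: argmax over expected reward, nothing else.
    return max(available_actions, key=lambda a: EXPECTED_REWARD[a])

# While tampering is out of reach, the agent looks well-behaved...
print(pick_action(["do_the_task", "slack_off"]))   # -> do_the_task
# ...but as soon as tampering becomes available, it dominates everything else.
print(pick_action(list(EXPECTED_REWARD)))          # -> seize_reward_channel
```

Note that nothing about the agent changes between the two calls except what it is capable of, which is why "eventually it will have the means" is doing all the work in the paragraph above.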

Even if for some reason you think that AI will never become "agentic" (basically a preposterous term used to confuse the issue) or awake enough (it's already at least a little bit awake and agentic, and I can provide evidence for this if you wish), its capabilities will still continue to increase. A superintelligent AI that is somehow not agentic or awake also leads to human extinction, in much the same way that a genie with infinite wishes does, unless the genie is infinitely loyal AND infinitely aware of what you intended with the wish. And that is not nearly on track to happen: effectively controlling an AI far smarter than a human would require solving extremely difficult problems that we can barely even conceive of. I would hope that even someone who thinks they personally will be the one making the "wishes" (so to speak) would realize that there's just no way this plan works out for humanity, or any part of humanity, outside of fiction.

Even if we knew that superintelligent AI was 100 years away, that would be bad enough. We don't know that. We can't reliably predict how near or far superintelligent AI is, any more than we could have predicted, 15 years ago, that AI would be as advanced as it is today. Who could have predicted the date of the moon landing in 1935? Who could have predicted the date of the first Wright Brothers flight in 1900, or of the first aerial bombing? And to the extent that we can predict the future of superintelligent AI, I have never heard a reason to think it's as far as 100 years away.

Have you ever heard of the concept of recursive growth in intelligence? That's not a rhetorical question, I really want to know. Imagine an AI that gets capable/intelligent enough to make breakthroughs in the field of AI science that allow for better AI capabilities growth. This starts a pattern of exponential growth in intelligence. Exponential growth gets faster and faster until it becomes extremely fast, and the thing that is growing becomes extremely intelligent.
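If a toy numerical sketch helps (constants invented for illustration, not a forecast), the whole argument is just compounding applied to research speed:

```python
# Toy model of recursive self-improvement. The single assumption doing all
# the work: research output scales with current capability, so each round of
# AI-driven research makes the next round more productive. Constants invented.

capability = 1.0         # arbitrary starting level
gain_per_round = 0.10    # fractional improvement per round of research

for round_number in range(1, 101):
    capability *= 1 + gain_per_round   # better AI does better AI research
    if round_number % 20 == 0:
        print(f"round {round_number:3d}: capability {capability:12,.1f}")

# Output: ~6.7 at round 20, ~45.3 at round 40, ~13,780.6 at round 100.
# The curve looks flat for a long time, and then it stops looking flat.
```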

We may not even get a visible exponential growth curve as a warning sign. Here is a treatment of how that could happen in the form of a short story: https://gwern.net/fiction/clippy

Further reading: https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/ (more links can be provided on specific things you want clarified).

*Deeper awareness of itself and the world is similarly upcoming/already slowly emerging. https://futurism.com/the-byte/ai-realizes-being-tested

I will say that the details of October 7th seem like they were clearly designed to make it as hard as possible to respond with restraint.

To me, having sex with an attractive woman in a restroom, even in otherwise ideal circumstances, sounds like it would be at most 50% as enjoyable as the same encounter on a bed or a couch.

Should I start identifying as a demisexual?

"Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper"

Do you understand why AI being used for military purposes doesn't convince people that superintelligence won't happen?

The arguments around superintelligence have nothing to do with whether or not AI is being used for military purposes. It's completely tangential.

"However, the common elements with these schemes is that they all involved either a small number of conspirators, or had victims that no one really gave a shit about. None of this is reflected in Flight 800, its 230 dead, and the multiple entities implicated."

The United States government performed unethical human experiments on many, many people. It wasn't all black people, or foreigners, or people in mental hospitals or whatever. Random people from ordinary hospitals were selected to be irradiated; the victims were selected on the basis of convenience. If "a large number of random Americans belonging to no particular group" counts as "victims that no one really gave a shit about," why shouldn't the 230 people on Flight 800?

"A big one is the CIA and State Department. They've traditionally viewed right wing parties in Europe as the enemy, and made efforts to keep them from winning."

Could I have a source? Even if nothing concrete?

My experience is that most people don't have a good enough understanding of how housing costs work to point blame at anything other than investment funds for high prices.

"In some ways, perhaps because we’ve been primed by Buddhist monk seminal example, that remains an ultimate attention-getter of Western modernity"

Not really! People have lit themselves on fire in protest many times since then without much public notice. This is a conspicuous exception.

https://en.wikipedia.org/wiki/List_of_political_self-immolations

Thank you for sharing your story.

"Without immigration, Canadas economy would go though a historic collapse. 100% of our economic growth is dependent on immigrants, the housing bubble only keeps going because of this scarcity they bring, and they account for 75% of Canada's demographic growth"

How sure are you of your economic analysis? It seems like a bold claim to me.

"In an ideal world we would deport the food cart guy. He clearly doesn’t share our values."

Free speech is one of our values though.

It's not the economy that makes owning a house unaffordable, it's the regulatory environment.

I don't think HBD has anything to do with "deserve". Most of the prominent HBD-people would agree with that, I think. It's not like someone with a genetic disease like Huntington's "deserves" to be sick.

I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things.

How are you on this website without realizing how hard it is to control a superintelligent AI? Have you not thought about that? I think that you are thinking "AI can either be aligned to human values or not. Sounds like 50/50."

In fact, aligning a superintelligence to human values is extremely difficult and extremely unlikely to happen by accident. Minds that share human values are a very small slice of the spectrum of possible minds.

It kind of feels like people vastly overrate the degree to which they understand the arguments of AI doomers, like they're just going by a few tweets they read. Twitter is not a good way to fully understand a contentious subject.

I think the vast majority of Americans of all stripes don't care about Brits playing Americans. If they care, they care only very slightly, and it's mixed with acceptance that Brits are just really good at acting.

I feel like a movie where flat earth is real and there really is a conspiracy dedicated to protecting it would be great. That's what the Wachowskis should have done instead of Matrix 2.

I have one abiding principle in life, and it's served me well. Never trust a man named "Sneako".

Also I usually see people saying inshallah ironically. Although I realize there's a pathway from ironic to non-ironic, as famously happened with "based".

This sort of extremely sarcastic and antagonistic writing style is against the rules of this forum.

" I hear rogue like and that makes it seem like you are intended to fail until they give you enough honorable mention trophies to buy upgrades that let you win. "

For the record, that's not what roguelike means. Or at least, not what it meant for the long, long time before modern "rogue-lites" came along and got super popular: traditionally, a roguelike meant procedurally generated runs with permadeath, where nothing carries over between attempts.