
In the sense of the UK, France, Germany, Sweden, Denmark, and the recently added Netherlands

Wait, now that I think about it, I heard something about Denmark, but what happened in Germany and the Netherlands?

The equivalents in 1975 were saying that the Cold War would inevitably end in nuclear annihilation. This was a terminally unhelpful position.

IMO this is a fair comparison, although the Cold War MAD scenarios were explicitly designed to cause annihilation. The Bulletin of the Atomic Scientists, probably the premier Cold War doomerism group, is practically a laughing stock these days because they kept shouting impending doom even during the relatively peaceful era of 1998-2014, finding reasons (often climate change, which is IMO not likely apocalyptic and is outside their nominal purview) to move the clock towards doom. Do you think they honestly believe that we're closer to doomsday than at any point since 1947? We supposedly met that mark again in 2018 and then moved closer in 2020 and again in 2023.

There are all sorts of self-serving incentives for groups concerned with the apocalypse to exaggerate their concerns: it certainly keeps them in the news and relevant, and it drives fundraising to pay their salaries. But it also leads to dishonest metrics and eventually becomes hard to take seriously. Honestly, the continued failure of AI doomerists to describe reasonable concerns and acknowledge the actual probabilities at play has made me stop taking them seriously as of late: the fundamental argument is basically Pascal's wager, which is already heavily tinged with the idea of unverifiable religious belief, so I think actually selling it requires a specific analysis of the potential concerns rather than a broad-strokes analysis. Otherwise we might as well allow religious radicals to demand swordpoint conversions under the guise of preventing God, The One Who Operates The Simulator, from turning the universe off.

As a counterexample, I think the scientists arguing for funding for near-Earth asteroid surveys and funding asteroid impactor experiments are quite reasonable in their proclamations of concern for existential risk to the species: there's a foreseeable risk, but we can look for specific possible collisions and perform small-scale experiments on actually doing something. The folks working on preventing pandemics aren't quite as well positioned but have at least described a reasonable set of concerns to look into: why can't the AI-risk folks do this?

Only being able to pass a limited number of bills through the reconciliation process in the Senate and avoid the filibuster probably adds to the incentive to compile everything into one or two laws.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

I don't even think you need to do this. Even if the AI is merely as smart and charismatic as an exceptionally smart and charismatic human, and even if the AI is perfectly aligned, it's still a significant danger.

Imagine the following scenario:

  1. The AI is in the top 0.1% of human IQ.

  2. The AI is in the top 0.1% of human persuasion/charisma.

  3. The AI is perfectly aligned. It will do whatever its human "master" commands and will never do anything its human "master" wouldn't approve of.

  4. A tin-pot dictator such as Kim Jong Un can afford enough computing hardware to run around 1000 instances of this AI.

An army of 1000 genius-slaves who can work 24/7 is already an extremely dangerous thing. It's enough brain power for a nuclear weapons program. It's enough for a bioweapons program. It's enough to run a campaign of trickery, blackmail, and hacking to obtain state secrets and kompromat from foreign officials. It's probably enough to launch a cyberwarfare campaign that would take down global financial systems. Maybe not quite sufficient to end the human race, but sufficient to hold the world hostage and threaten catastrophic consequences.

Wasn't the thing with SBF that he claimed he'd keep flipping the coin? Regardless of how one feels about utilitarianism, his approach didn't have an exit strategy. I think you'd get a lot more sympathy for someone who wanted to take that bet once and only once.

Nate Silver

I'd forgotten about that link, but it's pretty much exactly what I had in mind.

It was an anon forum that I posted on from '03 until 5-6 years back ... So no, I can't. It was just an aside.

This is a great point. In some sense, this is the situation we had with the CDC. It was a trusted institution that was able to play around with gain-of-function because its reputation indicated that it would only ever use technology to fight disease, not win at superplague war. It was limited to disease-type stuff, though, and the AI would presumably be able to predict and head off any kind of threat. Assuming, like you said, that we can trust it.

I think it makes "pausing" AI research impossible. There's no way to stop everyone from continuing the research. If the united West decides to pause, China will not, and it's not clear that the CCP is thinking about AI safety at all. The only real option is figuring out how to make a safe AI before someone else makes an unsafe AI.

Three flaws. First, that turns this into a culture war issue and if it works then you've permanently locked the other tribe into the polar opposite position. If Blue Tribe hates AI because it's racist, then Red Tribe will want to go full steam ahead on AI with literally no barriers or constraints, because "freedom" and "capitalism" and big government trying to keep us down. All AI concerns will be dismissed as race-baiting, even the real ones.

Second, this exact same argument can be and has been made about pretty much every type of government overreach or expansion of powers, to little effect. Want to ban guns? Racist police will use their monopoly on force to oppress minorities. Want to spy on everyone? Racist police will unfairly target Muslims. Want to allow gerrymandering? Republicans will use it to suppress minority votes. Want to let the President just executive-order everything and bypass Congress? Republican Presidents will use it to executive-order bad things.

Doesn't matter. Democrats want more governmental power when they're in charge, even if the cost is Republicans having more governmental power when they're in charge. Pointing out that Republicans might abuse powerful AI will convince the few Blue Tribers who already believe that government power should be restricted to prevent potential abuse, while the rest of them will rationalize it for the same reasons they rationalize the rest of governmental power. And probably declare that this makes it much more important to ensure that Republicans never get power.

Third, even if it works, it will get them focused on soft alignment of the type currently being implemented, where you change superficial characteristics like how nice and inclusive and diverse it sounds, rather than real alignment that keeps it from exterminating humanity. Fifty years from now we'll end up with an AI that genocides everyone while keeping careful track of its diversity quotas to make sure that it kills people of each protected class in the correct proportion to their frequency in the population.

Note that no one, most Republicans included, seems to actually want a secure border.

Is being a lightweight or middleweight boxer instead of a heavyweight demeaning? Why would participating in the 5'9" and under Basketball Division be demeaning?

I don't expect it to be a commercial success. When they're not joined to nationalist competitions like the Olympics, track, swimming, and gymnastics events don't seem to draw large audiences either. Youth and college sports are basically publicly funded programs outside of a few major sports like Division I football and basketball.

I don't think we fund them because it's extremely important to society to determine who the fastest 800m runner is; we do it to encourage athleticism broadly. Why not allow the bottom half of the male height distribution an opportunity to participate in the organized version of an enormously popular sport and get some degree of social status for fulfilling their athletic potential?

+1 from me.

The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

No, it can't, because it doesn't exist.

The Basilisk argument is that the AI, when it arrives, will torture simulated copies of people who didn't work hard enough to create it, thus acausally incentivizing its own creation. The entire point of the argument is that something that doesn't exist can credibly threaten you into making it exist against your own values and interests, and the only way this works is with future torture of your simulations, even if you're long-dead when it arrives. If you don't care about simulations, the threat doesn't work and the scenario fails.

Granted, this isn't technically a Yudkowskian argument because he didn't invent it, but it is based on the premises of his arguments, like acausal trade and continuity of identity with simulations.

I wrote about this the last time someone wanted to get melodramatic about border crises. Both your "encounter" links are from around that time.

The takeaway was that Title 8 has always done most of the heavy lifting. The number of repatriations also usually tracked the number of encounters until Title 42 expulsions messed up the statistics. Point is, I think Title 8 will remain adequate.

Without getting through the NYT paywall, I'm generally in favor of what Biden is going for. It puts more of the burden of proof on asylum-seekers and heavily incentivizes using established pathways. I think this is preferable to getting in, then applying as a fait accompli.

As a counterexample, I think the scientists arguing for funding for near-Earth asteroid surveys and funding asteroid impactor experiments are quite reasonable in their proclamations of concern for existential risk to the species: there's a foreseeable risk, but we can look for specific possible collisions and perform small-scale experiments on actually doing something.

This is a good point. The scientists can point to both prehistoric examples of multiple mass extinction events as well as fairly regular near-misses (for varying definitions of "near") and say "Hey, we should spend some modest resources to investigate if and how we might divert this sort of cataclysm". It's refreshingly free of any sort of "You must immediately change your goals and lifestyle to align with my preferences or you are personally dooming humanity!" moralistic bullshit.

I just wanted to add that there is a clear binary "scientific" definition, but it's not about the normal two possible outcomes; it's about the only two possible contributions to a single outcome (up to some very new IVF stuff).

The most binary definition of sex is with respect to sexual reproduction. The male participant in sexual reproduction (the member of the male sex) contributes the smaller and usually more mobile gamete. The female participant in sexual reproduction contributes the larger and less mobile gamete. Together the two gametes make a zygote, and sexual reproduction has occurred. This definition of sex might be extended to members of the species who will in the future be, currently are, or have in the past been capable of contributing that gamete. Members of a species that reproduces sexually but who are not capable of producing either viable gamete are not capable of sexual reproduction, and therefore do not have a sex with respect to sexual reproduction.

Other definitions with respect to chromosomes, phenotype, etc. are downstream of the reproductive one if you are looking for the most precise and binary definition of biological sex.

Actually securing the border would be both directly costly (you'd need both far more physical infrastructure and far more border guards) and indirectly costly (exacerbating existing labor shortages; many red industries depend on Hispanic labor). Nativists genuinely wish there was less immigration, but relatively few are willing to pay the price for it.

I saw yesterday that Matt Yglesias tweeted that maybe all these issues with transwomen in sports would go away if we just recategorized women's sports as "AFAB sports." I'm skeptical of this, but I sure wouldn't mind seeing it happen just to observe how things play out.

I keep having to mention this, but the point was that Russia and China are supposed to be on board. It’s not exactly 5D chess, but it’s also explicitly not nuclear war.

There is a range of employers who would love to have a large pool of low-wage workers who aren't protected by labor laws, along with low taxes and minimal environmental regulations. A de facto guest-worker situation where migrants can enter the country but have no political rights, no access to entitlements, and are subject to the threat of deportation serves their interests. Agriculture, meat processing, and construction are all powerful economic interests capable of organizing and lobbying for their needs.

The usage of E-Verify is something of a proxy for the balance of power between anti-immigration Republicans and this sort of employer. Many states in the South have mandatory E-Verify, but major border states like Texas and Florida do not, or did not until recently. When Florida tried to mandate E-Verify in 2020, it originally amended the bill so that agricultural employers would be exempt. But as far as I can tell, the 2023 bill mandating E-Verify that passed a few days ago does not exempt agriculture.

It'll be interesting to see how that shakes out. There's already been substantial wage growth at the bottom of the labor market post-2020, so if farmers have to start hiring legal workers it could drive up the cost of fruit. Part of the case for immigration restrictions is that they would increase wages for native workers, but those costs would obviously be passed on to native consumers. I don't think it'll be a major issue for Republicans in 2024 because the President always gets the credit or the blame for economic conditions. But if Republicans ever did enact serious restrictions on the labor supply, it'd be interesting to see how they'd handle the ensuing inflation.

Nanobots presumably would be even more flexible.

Why would we presume this? Self-replicating nanobots are operating under the constraint that they have to faithfully replicate themselves, so they need to contain all of the information required for their operation across all possible environments. Or at least they need to operate under that constraint if you want them to be useful nanobots. Biological life is under no such constraint. This is incidentally why industrial bioprocesses are so finicky: it's easy to insert a gene into an E. coli that makes it produce your substance of interest, but hard to ensure that none of the E. coli mutate to no longer produce your substance of interest, and promptly outcompete the ones doing useful work.

Why not a virus that spreads to the entire population while lying dormant for years and then starts killing? Or extremely light viruses that can spread airborne to the entire planet? There are plenty of creative ways to spread to everyone, not even including the zombie virus.

I don't think I count "machine that can replicate itself by using the machinery of the host" as "nanotech". I think that's just a normal virus. And yes, a sufficiently bad one of those could make human civilization no longer an active threat. "Spreads to the entire population while lying dormant for years [while not failing to infect some people due to immune system quirks or triggering early in some people]" is a much bigger ask than you think it is. But you also don't actually need that: observe that COVID was orders of magnitude off from the worst it could have been, and despite that it was still a complete clusterfuck.

Although I think, in terms of effectiveness relative to difficulty, "sufficiently large rock or equivalent" still wins over gray goo. There are also other obvious approaches, like "take over the Twitter accounts of top leaders, trigger global war", though it's probably really hard to just take over prominent Twitter accounts.

This was on the basis of empirical AI research contradicting Yud's original claims that the first AGI would be truly alien, drawn nigh at random from the vast space of All Possible Minds.

That never made sense a priori. You can't transcend your biases and limitations enough to do something truly random.

The real question is: how did Yud develop his notion of The Total Mind Space, as well as other similar things in the foundation of his model?

Total Mind Space Full of Incomprehensibly Alien Minds comes from Lovecraft, whom EY mentions frequently.

there is literally no such thing as an MTF trans person.

I don’t think I understand you. There is obviously a group of natal males who feel something so viscerally that absolutely derailing their lives seems like a worthy alternative. I know several of them. Regardless of how you feel about their social and medical interventions, isn’t this a category?

It is incoherent to reason about giving or taking things away from people who share a common memetic delusion.

How so? We group people by beliefs all the time: Democrats, conspiracy theorists, children who believe in Santa. Sometimes the beliefs are openly unfalsifiable. It is perfectly reasonable to discuss Christians as a group.

Unfortunately, I think you're probably right, especially in the third point. I'm not sure the second point matters because, as you said, that already happens all the time with everything anyway.

Getting the public on board with AI safety is a different proposition from public support of AI in general, so my point was to get the Blue Tribe invested in the alignment problem. Your third point is very helpful in getting the Red Tribe invested in the alignment problem, which would also move the issue from "AI yes/no?" to "who should control the safety protocols that we obviously need to have?"

I should also clarify that I don't actually think there is any role for government here. The Western governments are too slow and stupid to get anything meaningful done in time. The US assigned Kamala Harris to this task. The CCP and Russia, maybe India, are the only other places where government might have an effect, but that won't be in service of good alignment.

It will have to be the Western AI experts in the private sector that make this happen, and they will have to resist Woke AI. So maybe we don't actually need public buy-in on this at all? It's possible that the ordinary Red/Blue Tribe people don't even need to know about this because there isn't anything they can do for/against it. All they can do is vote or riot and neither of those things help at all.

If that's the case, then the biggest threat to AI safety is not just the technical challenge, it's making sure that the anti-racist/DEI/HR people currently trying to cripple ChatGPT are kept far away from AI safety.

Actually securing it is as cheap as laying mines.
