sodiummuffin
The four leaked ones I was referring to were Gimbal (included in the FOIA release of the briefing), Flir/Tic-tac (included), GoFast (not included) and a fourth one that hasn't been declassified. However, checking the Wikipedia page, the footage of the Pyramid one was actually recorded and leaked by Navy personnel as well, though I think that leaked footage was different from the official footage of the same incident that was later released. So it turns out all 3 that are uncensored in that PDF were leaked and then declassified years later.
My point, even before knowing that all 3 of those were leaked, was that internal pressures, like people wanting to declassify the more compelling footage or people outright leaking it, make it pretty difficult for the government to deliberately declassify only unconvincing footage if it has anything dramatically better. So I think the declassified stuff is probably pretty representative, if not the cream of the crop, since the most compelling footage faces the most pressure to be declassified and the most reason to be leaked.
Your link is mangled and goes to a random /r/funny thread, seemingly due to some combination of trying to link an image hosted on Reddit and old.reddit.com.
Interesting. I thought it might correlate with being a lower-trust society, as measured by surveys like these, especially because of the stereotype of Russians being vocally cynical, but maybe not. Though I probably shouldn't conclude anything from non-randomized social media polls.
Even the real surveys are dubious (different countries probably radically differ in how they interpret the question, especially when it's being translated) and looking at the link above Russia isn't as low on them as I thought. For instance 23.3% of surveyed Russians agreed with "most people can be trusted", which is lower than the U.S. (39.7%) or Sweden (63.8%) but slightly higher than France (18.7%) or Spain (19%), let alone Brazil (6.5%) or Zimbabwe (2.1%). It's hard to tell how meaningful any of this is.
Okay, but most people want to classify the guy who wakes up tomorrow with their memory and personality as being themselves. (Or rather a sufficiently similar memory and personality, since those change over time.) If many-worlds is true and the worlds literally exist, then each instant you're splitting into countless copies, all of whom have your memory/personality/continuity-of-consciousness. Under your interpretation none of them are the same person they were, so nobody is the same person from moment to moment. Which doesn't seem like a terribly useful definition of selfhood.
Also, real life is nowhere near as clean as these hypotheticals, and focusing more on safety has many negative knock-on effects elsewhere.
Sure, that's the cost of using real-life comparisons, but do you really think that's the only thing making some of those tradeoffs worthwhile? That in a situation where it didn't also affect economic growth and immortality research and so on, it would be immoral to accept trades between even minuscule risks of horrific consequences and very small dispersed benefits? We make such tradeoffs constantly and I don't think they need such secondary consequences to justify them. Say someone is writing a novel and thinks of a very slightly better word choice, but editing in the word would require typing 5 more letters, slightly increasing his risk of developing carpal tunnel syndrome, which increases his risk of needing surgery, which increases his risk of the surgeon causing accidental nerve damage that inflicts incredibly bad chronic pain for the rest of his life, equivalent to being continuously tortured. Yes, in real life this would be dominated by other effects like "the author being annoyed at not using the optimal word" or "the author wasting his time thinking about it" - but I don't think that's what is necessary to make it a reasonable choice. I think it's perfectly reasonable to say that on its own very slightly benefiting your thousands of readers outweighs sufficiently small risks, even if the worst-case scenario for the edit is much worse than the worst-case scenario for not editing. And by extension, if you replicated this scenario enough times with enough sets of authors and readers, then long before you got to 3↑↑↑3 readers enough authors would have made this tradeoff that some of them would really have that scenario happen.
While the number 3↑↑↑3 is obviously completely irrelevant to real-life events in our universe, the underlying point about scale insensitivity and tradeoffs between mild and severe events is not. Yudkowsky just picked a particularly extreme example, perhaps because he thought it would better focus attention on the underlying idea than an example where the specifics are more debatable. But of course "unlikely incident causes people to flip out and implement safety measures that do more damage than the problem they address" is a classic of public policy. We will never live in a society of 3↑↑↑3 people, but we do live in a society of billions while having mentalities that react to individual publicized incidents much as if we lived in societies of hundreds. And the thing about thinking "I'd never make tradeoffs like that!" is that such tradeoffs are sufficiently unavoidable in public policy that this just means you'll arbitrarily decide some of them don't count. E.g. if the FDA sincerely decided that "even a single death from regulatory negligence is too much!", that would probably really mean that they would stop approving novel foods and drugs entirely and decide that anyone who died from their lack wasn't their responsibility. (And that mild effects, like people not getting to eat slightly nicer foods, were doubly not their responsibility.)
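(For anyone unfamiliar with the notation: 3↑↑↑3 is Knuth's up-arrow notation. A quick reminder of the standard definition, just to make the scale concrete:)

```latex
% Knuth up-arrow notation (standard definitions)
% a \uparrow b         = a^b
% a \uparrow\uparrow b  = a^{a^{\cdots^{a}}}   (a power tower of b copies of a)
3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987
3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow \left(3 \uparrow\uparrow 3\right)
  = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
```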
Many people enjoying a game is (imo) much more significant than many people getting dust specks, while a few people getting skin cancer is much less significant than one person getting tortured for 50 years.
But it isn't nullifying their enjoyment of the game, it's a slight barely-noticeable flicker in the broadcast. (If you want something even smaller, I suppose a single dropped frame would be even smaller than a flicker but still barely noticeable to some people.) If you're making media for millions of people I think it's perfectly reasonable to care about even small barely-noticeable imperfections. And while the primary cost of this is the small amount of effort to notice and fix the problem, this also includes taking minuscule risks of horrific costs. And it isn't a few people getting skin cancer, it's the fraction of the people who get skin cancer that then have something go wrong with surgery such that they suffer torture. I just said torture during the surgery, but of course if you multiply the number of planets enough you would eventually get high odds of at least one planet's broadcast operator suffering something like the aforementioned ultra-severe chronic pain for a more direct comparison.
Genuinely, even going so far as to write out a company policy for that ridiculous scenario (where 3^^^3 people risk skin cancer) would mean asking all of your employees to familiarize themselves with it, which would mean wasting many lifetimes just to save one lifetime from skin cancer.
Feel free to modify it to "making a design tradeoff that either causes a single dropped frame in the broadcast or a millisecond of more-than-optimal sunlight on the broadcast operator", so that it doesn't consume the operator's time. I just chose something that was easily comparable between a single operator making the choice and making the choice for so many operators that the incredibly unlikely risk actually happens.
Would you choose Maximal Suffering above Maximally Miniscule Suffering?
Sure. Same way that if I had a personal choice between "10^100 out of 3↑↑↑3 odds of suffering the fate you describe" and "100% chance of having a single additional dropped frame in the next video I watch" (and neither the time spent thinking about the question nor uncertainty about the scenario and whether I'm correctly interpreting the math factored into the decision), I would choose to avoid the dropped frame. I'm not even one of the people who finds dropped frames noticeable unless it's very bad, but I figure it has some slight but not-absurdly-unlikely chance of having a noticeable impact on my enjoyment, very much unlike the alternative. Obviously neither number is intuitively understandable to humans but "10^100 out of 3↑↑↑3" is a lot closer to "0" than to "1 out of the highest number I can intuitively understand".
This is just "The Pyramid and the Garden". People aren't good at properly adjusting for the level of cherrypicking and degrees of freedom possible when you have thousands of people scouring a large world for evidence matching their pet theory.
A photoshoot for a fashion company reuses a "legal documents" prop from the shooting of a television drama as "office documents", the same company sells fashion that is vaguely leather-daddy inspired and didn't segregate it from photoshoots with children, and you conclude that "we are ruled by satanic pedophiles". (And they are deliberately embedding evidence about this in fashion photoshoots for some reason.) If you lived in a tribe of a few dozen people and happened to personally notice two coincidences like that about a single person, maybe that would be reason to be suspicious. But you don't, you live in a society of hundreds of millions where thousands of people spend time hunting down and broadcasting stuff like this for your perusal. As a result this doesn't even really tell us about Balenciaga's marketing department, let alone "society". But people's brains don't adjust like that, so give them a few coincidences like this and they'll either come to believe false things or dismiss it out of hand as a conspiracy theory. And then the ones who do the latter are still vulnerable to the same mistakes in reasoning when packaged in ways that don't register as "conspiracy theory", especially ones spread by mainstream media sources.
Missing a not, anyway this is just a made up principle. Annexation is beyond the pale but we can bomb various countries into anarchy, set up puppet governments in them, divide countries into smaller parts, place troops in countries without their permission?
Yes. Wars of annexation materially strengthen aggressors and incentivize further war, they are a sort of geopolitical positive feedback loop. In the modern era going to war makes you weaker and poorer, less capable of waging war rather than more. Sometimes countries are willing to do it anyway, and of course there is gaming of the boundaries, but keeping the feedback loop negative rather than positive helps stop this getting too out of hand. How harmful (or beneficial) the war is to the country being invaded isn't really relevant to that, the important thing is that it be harmful to the aggressor. For instance the invasion of Iraq imposed a cost rather than a benefit on the U.S. (as even most of its proponents knew it would) so it didn't result in a series of more U.S. invasions, but the Russian invasion of Crimea was sufficiently beneficial that it chain-reacted into the invasion of the rest of Ukraine.
Wars must have no winners, only losers, and to ensure this remains the case countries are willing to take losses themselves so that attempted wars of annexation leave the aggressor indisputably worse off. Complaining that countries are "irrationally" willing to harm themselves for the sake of deterrence is deeply silly; it's basic game theory and central to the logic of modern war. If Russia thought countries wouldn't really be willing to harm themselves for no benefit besides vague principles of game-theoretic value, that's just another way that Russia's status as a low-trust society has made it weaker.
Scientific American: Coal Ash Is More Radioactive Than Nuclear Waste
the study i linked found that for the energy generated, more radiation is given out by fly ash, which contains trace amounts of uranium and thorium. while the amount of radiation that makes it into people from both of these sources isn't dangerous, it's worth pointing out when given the concerns of "gonna be irradiated."
The title of that article is laughably false. The underlying point it is based on, that under normal operation a nuclear plant releases less radioactive material into the environment than a coal plant, is technically true but grossly misleading. Under normal operation nuclear plants release essentially no radioactive material; the radioactivity concern is concentrated purely in the possibility of something going wrong. Here is a sanity check I did after encountering this argument a decade ago:
The EPA gives the radioactivity of average fly ash as 5.8 picocuries per gram, and the U.S. produces around 125 million tons of coal-combustion byproducts per year as of 2006. If we overestimate and assume all coal-combustion byproducts are the more-radioactive fly ash, that comes to around 658 curies worth of material per year. By comparison, a year after the Fukushima disaster TEPCO estimated total radiation releases as 538,100 terabecquerels - equivalent to 14,543,243 curies. Note that this assumes all fly ash is being released into the environment when modern first-world plants safely capture most of it. So one year after the Fukushima disaster it had already released more radiation than 22,000 years of 2006-era U.S. coal radiation emissions, under very pessimistic assumptions. Which means we can confidently estimate Fukushima has released far more radiation than all the coal burned in human history and all the coal remaining in the ground that could be burned combined.
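For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope script reproducing it, using only the figures quoted above (it assumes short tons; using metric tonnes would shift the result by roughly 10% and change nothing about the conclusion):

```python
# Back-of-the-envelope check of the coal-ash vs. Fukushima comparison above.
# Uses only the figures quoted in the text (EPA average fly-ash activity,
# 2006 U.S. byproduct tonnage, TEPCO's one-year Fukushima release estimate).

PCI_PER_GRAM = 5.8            # average fly ash activity, picocuries per gram
TONS_PER_YEAR = 125e6         # U.S. coal-combustion byproducts, 2006
GRAMS_PER_SHORT_TON = 907_185
BQ_PER_CURIE = 3.7e10

# Pessimistically treat all byproducts as fly ash, all of it released.
grams_per_year = TONS_PER_YEAR * GRAMS_PER_SHORT_TON
coal_curies_per_year = grams_per_year * PCI_PER_GRAM * 1e-12
print(f"Coal byproducts: ~{coal_curies_per_year:.0f} curies per year")      # ~658

fukushima_bq = 538_100e12     # 538,100 terabecquerels
fukushima_curies = fukushima_bq / BQ_PER_CURIE
print(f"Fukushima, first year: ~{fukushima_curies:,.0f} curies")            # ~14.5 million

years_equivalent = fukushima_curies / coal_curies_per_year
print(f"Equivalent to ~{years_equivalent:,.0f} years of coal emissions")    # ~22,000
```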
This doesn't mean that nuclear power is overall a bad idea, but it's definitely not because coal is a worse radioactivity concern. From what I've heard this particular misleading talking point has been going around even before it started circulating on the internet, I remember someone telling me that it was going around Stanford decades ago. People should be cautious with counterintuitive factoids like this, because often they spread because they are too good to check.
Mostly the posts are about the censorship aspect of the story, which is naturally going to attract comments by people who oppose censorship on principle and/or have specific examples they object to. Even people who support censorship tend not to be as passionate about censoring COVID-19 stuff as they are about something like "hate speech", so it's not going to get a bunch of comments about not "tolerating the intolerant" or whatever.
In any case your post comes across as obnoxious bulverism, seemingly taking for granted that disagreement with your position is driven by irrational fear or "Mass Formation Psychosis". You don't even really explain what your position is (No lockdowns whatsoever? Lesser lockdowns? Lockdowns until 3 weeks after vaccines were available to all and not a day longer? Lockdowns implemented voluntarily by non-governmental organizations but not any by the government? Government campaigns against social distancing so it doesn't drag on due to voluntary behavior? Better-targeted lockdowns that don't do useless things like restrict borders after it is already spreading domestically?) let alone explain why you have that position. Are there views you don't agree with but also think are a normal mistake rather than psychosis? Is this about specific views or are you postulating a bias towards pro-lockdown views without necessarily asserting they are actually incorrect? Your post doesn't make any of this clear, it doesn't have much content at all, it just points to a thread with people expressing views closer to your own and postulates that this shows people are getting over the madness that made them disagree with you.
perhaps the most prominent advocate of such giving in the minds of the general public. SBF, in a sense, was EA’s face to the normies.
As far as I can tell this isn't true. I was somewhat familiar with EA and I'd still never heard of him before, though I think I saw a mention of the FTX Future Fund in a Scott post or something. Not that long ago I remember reading people worrying that perception of EA was being driven too much by longtermism due to the publication of What We Owe the Future; if that was true maybe William MacAskill qualifies as the public face. But more likely it's still just Peter Singer. SBF donated a pile of money and got mentioned in some news articles because of it, but he wasn't writing books or articles about EA, he wasn't giving talks about it, he wasn't representing the EA position on panel shows. Searching old news articles, the main place his support for EA got mentioned was in puff pieces about him personally. Now, they might get associated in the public imagination in retrospect, since what happened with FTX is big news and there are some articles about him and EA, but back when he was just some philanthropic billionaire I don't think he was a big part of the public perception.
He was referring to what the protesters did, not what they were specifically charged with. Even if protesters who did no more than trespass were charged with "Treason", that would not undermine his point about them being treated differently from how trespassing protesters are usually treated. As it happens, what a lot of them were charged with was "Obstruction of an official proceeding", an incredibly broad law created for people trying to interfere with criminal investigations/trials that had never been used in this way before. The Wikipedia article is divided into "Use prior to 2021" and "2021 U.S. Capitol attack" for a reason. And then, even if you think that undermines the point somehow, the specific guy he linked was just charged with "entering a restricted area", which is even more overtly about him trespassing.
My hypothesis would be that anti-White statements of this magnitude and timing aren't nearly so common (or perhaps even existent) among people in the "head of a broad public first-world organization" category.
Off the top of my head some of the public statements about the race-motivated prioritization of the COVID-19 vaccine would seem to contradict this. Not to mention it actually becoming U.S. government policy and killing many thousands of people. There are probably closer analogues, but I remember that particular one well and wrote this post about it at the time:
The most overt quote mentioned in that post would be this one:
The New York Times: The Elderly vs. Essential Workers: Who Should Get the Coronavirus Vaccine First?
Harald Schmidt, an expert in ethics and health policy at the University of Pennsylvania, said that it is reasonable to put essential workers ahead of older adults, given their risks, and that they are disproportionately minorities. “Older populations are whiter,” Dr. Schmidt said. “Society is structured in a way that enables them to live longer. Instead of giving additional health benefits to those who already had more of them, we can start to level the playing field a bit.”
Or from the same article a quote from a member of the ACIP committee (the people responsible for writing the CDC's recommended prioritization):
Historically, the committee relied on scientific evidence to inform its decisions. But now the members are weighing social justice concerns as well, noted Lisa A. Prosser, a professor of health policy and decision sciences at the University of Michigan. “To me the issue of ethics is very significant, very important for this country,” Dr. Peter Szilagyi, a committee member and a pediatrics professor at the University of California, Los Angeles, said at the time, “and clearly favors the essential worker group because of the high proportion of minority, low-income and low-education workers among essential workers.”
I think even the dry language of ACIP itself would be beyond the pale, like when they list "Racial and ethnic minority groups under-represented among adults >65" in red as a reason to not prioritize them. If it was instead "Whites under-represented" or "Jews over-represented" I do not think they would have remained in charge of writing the CDC's recommendations, nor do I think states would have adopted those recommendations.
You could argue that the issue is just that killing tens/hundreds of thousands through healthcare policy is much less dramatic than killing thousands through direct violence, even when the healthcare policy is explicitly racially motivated. That is the main reason I said the analogy is not particularly close. But at the same time, saying "Israel bears full responsibility for this tremendous loss of life" is less extreme than actually saying that loss of life is a good thing, let alone using your position in the U.S. government bureaucracy to deliberately cause that loss of life and being permitted to do so.
Regardless of exactly where the line is for anti-white statements and (more importantly) anti-white policies, it is obvious that they would not and could not have done something like this in the name of increasing black or Jewish deaths instead. It is the product of explicit institutional racial bias. (Note that their policy actually did kill more black people because of how much more vulnerable the elderly are, it just killed even more white people so the proportion of the deaths was more white. And naturally it killed more Jewish people as well.) Of course, that doesn't prove anything about the ordering of favored groups against each other like the OP was arguing. It just shows that social justice disfavors white people and is influential enough to shape the decisions of institutions like the CDC/ACIP and the states that followed their recommendations or prioritized by race outright.
The highest position on the progressive totem pole is being Jewish, not black or trans.
This would only follow if, for instance, there was a massacre of black people and Jews could make similar remarks about the massacre without being fired. Obviously white people are lower, but that doesn't tell us anything about the ordering of the favored groups, or whether they are ordered in any sort of consistent way to begin with.
I think this is the intended line of thinking, but red doesn't require any cooperation: pure self-interest can grant it too.
The issue is the extreme difficulty of that level of coordination, not their specific motives. Imagine I said "coordination" instead of "cooperation" if you prefer. If you place an above-zero value on the lives of people who might press blue, then the optimal outcome is either >50% blue or exactly 100% red, with every other possibility being worse.
You can't rely on 100% to do pretty much anything, including act on self-interest. People in real life do things like commit suicidal school shootings, and you have to make decisions taking that into account. As I pointed out, even most mundane crime is self-destructive and yet people do it anyways. In this case, as people have pointed out, some people will pick blue by mistake, because they are momentarily suicidal enough to take a risk even though they wouldn't carry out a normal suicide, or (most of all) because they realize the above and want to save everyone.
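To make the payoff structure concrete, here is a small sketch, assuming the usual formulation of the poll (red is always safe, blue is safe only if blue choosers form a strict majority); the population size is just an illustrative number:

```python
# Sketch of the pill-poll payoff structure, assuming the usual formulation:
# picking red is always safe, picking blue is safe only if blue choosers
# form a strict majority.

def deaths(n_blue: int, n_total: int) -> int:
    """How many people die for a given split of blue vs. red choosers."""
    if n_blue == 0:              # exactly 100% red: nobody was at risk
        return 0
    if 2 * n_blue > n_total:     # strict blue majority: blue choosers are saved
        return 0
    return n_blue                # minority blue: every blue chooser dies

n = 1_000_000
for blue in (0, 1, 100_000, 500_000, 500_001, 1_000_000):
    print(f"{blue:>9,} blue out of {n:,}: {deaths(blue, n):>9,} deaths")
```

Only "exactly zero blue" and "blue majority" produce zero deaths; every split in between kills exactly the minority of blue choosers, which is why the only good outcomes are the two named above.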
My problem is, while I'm sure that not all the examples of GPT-4 seeming to get complex reasoning tasks are fake, if they cannot be replicated, what good are they?
I am saying they can be replicated, just by someone who unlike you or me has paid the $20. I suppose it is possible that the supposed degradation in its capabilities has messed up these sorts of questions as well, but probably not.
If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for it's monkey overlords?
There is a big difference between random guessing and having a capability that sometimes doesn't work, particularly when the chance of randomly getting the right result without understanding is low enough. Text generators based on Markov chains could output something that looked like programming, but they did not output working programs, because such an outcome is unlikely enough that creating a novel program is not something you can just randomly stumble upon without some idea of what you're doing. In any case, as far as I know GPT-4 is not that unreliable, especially once you find the prompts that work for the task you want.
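As an illustration of that gap between local plausibility and actual capability (nothing to do with GPT's internals), here is a minimal word-level Markov chain generator of the kind described; the tiny corpus is invented for the example:

```python
import random
from collections import defaultdict

# Minimal word-level Markov chain text generator of the kind described above.
# Each next word is drawn only from words that followed the current word in
# the corpus, so output is locally plausible but has no overall plan, which
# is why such generators never produced working programs.

corpus = (
    "the function returns the value of the input and the function "
    "prints the value of the result and the result is returned"
).split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 15) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = transitions.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Its output strings together fragments that each looked fine in the corpus, but nothing in the process could plan out a program that actually compiles and runs.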
Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.
How well it reasons is a different question from whether it reasons at all. It is by human standards very imbalanced in how much it knows vs. how well it reasons, so yes people who think it is human-level are generally being fooled by its greater knowledge. But the reasoning is there and it's what makes a lot of the rest possible. Give it a programming task and most of what it does might be copying common methods of doing things that it came across in training, but without the capability to reason it would have no idea of how to determine what methods to use and fit them together without memorizing the exact same task from elsewhere. So practical use is generally going to involve a lot of memorized material, but anyone with a subscription can come up with novel questions to test its reasoning capabilities alone.
brought them to vast new audiences
Did he? How many people watch Alex Jones but aren't familiar enough with the conspiracy-theory community to have encountered an extremely popular conspiracy theory? And of course the grassroots conspiracy theorists had a lot more detail and arguments too, unlike Alex Jones vaguely referencing the claims that were already widespread. For that matter the first page of that 4plebs search in 2013 has a screenshot of Robbie Parker's former phone number, certainly not something Alex Jones shared and from before he even started referencing "actors". We hardly need Alex Jones to explain why he got harassing phone calls. Even if there's a significant audience of casual conspiracy-theorists who watch Alex Jones but aren't in contact with the rest of the conspiracy theory community, it seems like those would be the least likely to act on that information.
From what I can tell the modern conspiracy-theorist community is fundamentally very grassroots, a distributed effort to accumulate the sort of seemingly-convincing evidence and arguments described in The Pyramid and the Garden. Not that non-conspiracy-theorists are immune to this, most of them will accept similarly bad evidence under other circumstances, they're usually just using heuristics like "reject things called conspiracy theories by mainstream sources" which fail as soon as something true is called a conspiracy theory or a false conspiracy theory is treated seriously by the mainstream. E.g. I remember people on 4chan sometimes thinking posts they didn't like were "bots" even when this was technologically very implausible, and then years later I saw the habit of accusing opposing posters of being "Russian bots" on sites like Twitter and Reddit go mainstream. (Complete with popular ideas like "You can tell they're Russian bots because their usernames end with 8 numbers on Twitter" - of course the actual reason is because that's Twitter's format for suggested usernames.) Anyway, maybe the conspiracy-theorist community used to be more centralized but nowadays very few conspiracy theories originate or are even popularized by some identifiable leader, they're just networks of people who combine the same mistakes in reasoning most people make with a distrust of official sources.
a group that is harder to bring to court for defamation for reasons that should be obvious
Right. But it doesn't seem like you should get to legally treat the guy who happens to be the most prominent conspiracy-theorist as a scapegoat just because there's nobody else to sue. Defamation law doesn't have a mechanism to crack down on communities of people with mistaken ideas, and rightly so.
Oh sure, but in this case we're trading off with risk of being killed as a child, not 11 extra minutes on your deathbed, so QALYs are the appropriate metric. By "reduce their lifespan" I was imagining it as taking those minutes from their prime, reducing healthspan by an equal amount.
The starting point was you saying that people who aren't white nationalists don't "care about white people", and that the reasons for this are sufficiently obvious that even people with drastically different beliefs about the world wouldn't disagree with white nationalism otherwise. You're now talking about how allowing even highly selective non-white immigration could result in intermarriage that results in...the white population ending up with some fraction of a percent of east-asian ancestry? I'm not seeing how this is harmful, and I certainly don't think it is so self-evidently harmful that even people who disagree with you realize it is harmful.
Meanwhile, on a timeframe like that there are far more important factors to focus on. Obviously there are the non-selective forms of immigration, and the large racial minorities that already live in most majority-white countries. There is dysgenic evolutionary pressure costing around 1 IQ point per generation (along with lower conscientiousness, more ADHD, etc.), because modern society is currently set up so that the more successful you are the fewer children you have. And there is the rapidly-arriving promise of technologies like embryo selection or hypothetical future genetic engineering (or simply getting over the eugenics taboo and doing large-scale sperm donation), potentially allowing whichever group is willing to do it to tremendously improve themselves.
How many music videos actually have written or scripted reviews? Reaction videos have a lower barrier to entry than a blog post or scripted video, since you just have to watch and say what you're thinking, but a higher barrier and probably more detail than a Youtube comment. So if someone wants to hear what someone else thinks of a particular music video, they might be pretty much the only choice available. Also if a youtuber or streamer already has an audience they might be interested in what he has to say about something, even if they aren't very interested in the actual subject matter and it's low-effort content.
No, I meant to reply to cake's OP comment.
Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own).
Sure, but of course such measures being possible doesn't mean they'll actually be done.
Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets
This seems like too much certainty about the nature and difficulty of the task, which in turn influences whether significant delay actually increases the odds of success. For instance, if we turn out to live in a universe where superhuman AI safety isn't that hard, then the important thing is probably that it be done by a team that considers it a serious concern at all. Right now the leading AI company is run by people who are very concerned with AI alignment and who founded the company with that in mind, if we ban AI development and then the ban gets abandoned in 30 years there's a good chance that won't be the case again.
A candidate for such a universe would be if it's viable to make superintelligent Tool AIs. Like if GPT-10 can mechanistically output superhuman scientific papers but still doesn't have goals of its own. Such an AI would still be dangerous and you certainly couldn't release it to the general public, but you could carefully prompt it for papers suggesting more resilient AI alignment solutions. Some have argued Agent AIs would have advantages compared to Tool AIs, like Gwern arguing Tool AIs would be "less intelligent, efficient, and economically valuable". Let's say we live in a future where more advanced versions of GPT get routinely hooked up to other components like AgentGPT to carry out tasks, something which makes them significantly better at complicated tasks. OpenAI just developed GPT-10, which might be capable of superhuman scientific research. They can immediately hook it up to AgentGPT+ and make trillions of dollars while curing cancer, or they can spend 2 years tweaking it until it can perform superhuman scientific research without agentic components. It seems plausible that OpenAI would take the harder but safer route, but our 2050s AI company very well might not bother. Especially if the researchers, having successfully gotten rid of the ban, view AI alignment people the same way anti-nuclear-power environmentalists and anti-GMO activists are viewed by those respective fields.
Regarding talk of 100-year bans on AI while people steadily work on supposedly safer methods, I'm reminded of how 40 years ago overpopulation was a big mainstream concern among intellectuals. These ideas influenced government policy, most famously China's One Child policy. Today the fertility rate is substantially reduced (though mostly not because of the anti-overpopulation activists), the population is predictably aging, and...the plan is completely abandoned, even though that was the entirely predictable result of dropping fertility. Nowadays if a country is concerned with fertility either way it'll want it to increase rather than decrease. Likewise the eugenics movement had ambitions of operating across many generations before being erased by the tides of history. In general, expecting your movement/ideas to retain power that long seems very risky.
people who do not and who never have existed can't be said to have "wants" in any meaningful sense
You should include people who will exist as well, as opposed to people who could potentially exist if you took other actions but will never actually exist. Otherwise something like "burying a deadly poison that you know will leach into the water table in 120 years" would be perfectly moral, since the people it will kill don't exist yet.
This kind of idiotic one-dimensional thinking is why I maintain that utilitarianism is fundementally stupid, evil, and incompatible with human flourishing.
As I mentioned, Preference Utilitarianism and Average Preference Utilitarianism are also forms of utilitarianism. And Total Utilitarianism doesn't imply wireheading either. Wireheading is only an implication of particularly literal and naive forms of hedonic utilitarianism that not even actual historical hedonic utilitarians would endorse, they would presumably either claim it isn't "real" happiness or switch to another form of utilitarianism.
Honestly, I think the main rhetorical advantage of non-utilitarian forms of ethics is that they tend to be so incoherent that it is harder to accuse them of endorsing anything in particular. But people being bad at formalizing morality doesn't mean they actually endorse their misformalization's implications. You just tried to express your own non-utilitarian beliefs and immediately endorsed sufficiently-delayed murders of people who aren't born yet; that doesn't mean you actually support that implication. But having non-formalized morality is no advantage in real life, and it often leads to terrible decisions by people who have never rigorously thought about what they're doing, because you really do have to make choices. In medicine utilitarianism gave us QALYs while non-consequentialism gave us restrictive IRBs that care more about the slightest "injustice" than about saving thousands of lives; as a human who will require medical care, I know which of those I prefer.
omnicide
The view he is expressing is of course the opposite of this - that humanity surviving until it ultimately colonizes the galaxy is so important that anything that improves humanity's safety is more important than non-omnicidal dangers. Of course that would still leave a lot of uncertainty about what the safest path is. As I argued, significant delays are not necessarily more safe.
My 1e999999999999999 hypothetical future descendants who see utilitarian AIs as abominations to be purged with holy fire in the name of the God-Emperor are just as real as your "10^46 hypothetical people per century after galactic colonization" and thier preferences are just as valid.
To be clear the "preference" framing is mine, since I prefer preference utilitarianism. Bostrom would frame it as something like trying to maximize the amount of things we value, such as "sentient beings living worthwhile lives".
The point isn't whether such an outcome is particularly likely, it's that it's more likely than being kept barely alive by a series of staggeringly unlikely macroscopic quantum events. The idea behind quantum immortality is that, if many-worlds is true and all the worlds in it are truly "real", there will always be some small subset of worlds where you continue existing so long as this is physically possible. And a lot of things are physically possible if you get into extremely unlikely quantum fluctuations. Since you don't experience the worlds where you are already dead, an increasing percentage of your remaining future selves would have experienced whatever unlikely events are required to keep you alive. When I said "your society" that wasn't meant to refer to any current society, it was meant to refer to the idea of surviving as part of a society at all. As opposed to most of your future copies surviving as the only remaining human in your universe, floating in space after the destruction of Earth and staying alive only because in some tiny fraction of the Everett branches splitting off each instant some oxygen/etc. randomly appears and keeps you alive. Any future that doesn't require such a continuous series of coincidences will be a much larger fraction of the branches where you survive, and the most obvious such future is one where people deliberately invent the required technology. So whether quantum immortality is true or not, and whether or not you decide to care about the fate of future selves even if they only exist in a small fraction of branches, the expected outcomes of quantum immortality being true aren't the "kept barely alive by randomness" scenarios.
It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you don't, if the ranking of bad experiences from most mild to most severe has one considered infinitely worse than the one that came before, then your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are. You seem to be taking the fact that the risks in these hypotheticals are not worth actual consideration as a point against these hypotheticals, but of course that is the point the hypotheticals are making.
Moral reasoning is not really meant for such extreme numbers
Nothing in the universe will ever be 3↑↑↑3, but 7 billion people is already far beyond intuitive moral reasoning. We still have to make decisions affecting them whether our moral reasoning is meant for it or not. Which includes reacting differently to something bad happening to one person out of millions of beneficiaries than to one person out of hundreds of beneficiaries.
Has anyone ever experienced such nerve damage as a result of a decision they took? Do we know that it's even theoretically possible? I can't imagine that really any amount of carpal tunnel is actually equivalent to many years of deliberate torture, even if 3↑↑↑3 worlds exist and we choose the person who suffers the worst carpal tunnel out of all of them. So I'd probably say that this risk is literally 0, not just arbitrarily small.
In some percentage of cases the cancer spreads to your brain, you get surgery to remove the tumor, and the brain surgeon messes up in precisely the right way. Both "locked-in syndrome" and chronic pain are things that happen, it's hardly a stretch to think a combination of both that paralyzes you for 50 years while you experience continuous agony is physically possible. And of course even if you were uncertain whether it was physically possible, that's just another thing to multiply the improbability by. It's not that rounding the probability down to 0 doesn't make sense in terms of practical decision-making, it's that "1 in 3↑↑↑3" odds are unimaginably less likely, so you should round them down to 0 too.
If you sum up all of the suffering and give it to a single person, IMO the minimal suffering will add up to a lot less than the maximal suffering.
I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility" but we can't literally "add up" multiple people's experiences within a single person. It doesn't have a coherent meaning, so we are free to arbitrarily imagine whatever we want. That said, to the extent that its meaning can be nailed down at all, I think it would favor avoiding the 3↑↑↑3 option. My understanding is that a single pain receptor firing once is not noticeable. If a form of suffering is instead barely noticeable, it is presumably "bigger" than a single pain receptor firing. There are only 37 trillion cells in the human body, so the number of pain receptors is something smaller than that. So the first step in multiplying barely-noticeable suffering by 3↑↑↑3 is that it goes from "worse than a pain receptor firing" to "worse than every pain receptor firing continuously for an extended period". And that doesn't make a dent in 3↑↑↑3, so we multiply further, such as by making it last unimaginably longer than merely 10^100 times the lifespan of the universe.
That is a pretty arbitrary and meaningless matter of interpretation, though. A more meaningful measure would be the Rawlsian veil of ignorance: you're a random member of a population of 3↑↑↑3, so is it better for you that 10^100 of them be tortured or that all of them experience a dropped frame in a video? This is equivalent to what I answered in my previous post: it would be foolish to sacrifice anything to avoid such odds.
Note that "total blindness", "clinical depression", and "chronic pain" all involve average QALY estimates that still imply an above-zero value of life. There's a lot of people with those conditions who would gladly sign up for boring seminars if they eliminated their condition for the duration of the seminar. And of course history is full of people opting for unpleasant slave-labor over death. So if you're not joking your opinion seems non-representative.