This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Oh geez. Who radicalized you?
You might be interested in reading up on FHI's Windfall Clause, «a policy proposal for an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits garnered from the development of transformative AI».
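Just to illustrate the shape such a commitment could take, here's a toy sketch of a bracketed donation schedule; the thresholds and marginal rates are invented for this example and are not the actual FHI proposal's numbers:

```python
# Hypothetical windfall-clause schedule: a firm pledges a marginal
# donation rate on annual profits above successive thresholds, in the
# spirit of the FHI proposal. The brackets below are illustrative only.

BRACKETS = [               # (upper bound of bracket, marginal rate)
    (1e11, 0.00),          # below $100B: no obligation
    (1e12, 0.20),          # $100B-$1T: donate 20% of the excess
    (float("inf"), 0.50),  # above $1T: donate 50% of the excess
]

def windfall_donation(profit: float) -> float:
    """Total pledged donation for a given annual profit."""
    donation, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if profit > lower:
            donation += (min(profit, upper) - lower) * rate
        lower = upper
    return donation

print(windfall_donation(5e11))  # $500B in profit -> $80B pledged
```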
As you know, I expect some sort of 3 (which makes me unremarkable here). For the purpose of novelty, let's discuss a variant closer to 1, where the size of the plebeian population, or more likely its resource expenditure, is capped at some absolute value or a modest share of total economic output, while their political power to negotiate a bigger cut is tacitly or explicitly eliminated. This will necessarily mean hard caps on access to transformative tech, from radical life extension to transhuman self-modification, to AI or mind upload creation/access, to raw energy/matter utilization, to the types of structures one is allowed to control and the very software primitives one is allowed to conceive of. In my book this is not much better than Eating The Poor (or, bluntly, the Not-Powerful – all those «financially independent» smartasses go to the chopping block just the same), though of course I'd rather be rate-limited to the normal 2020s life expectancy and subjective abundance than culled, more or less obviously, in like 3-15 years.
In fact I do not expect life to get any better than it is now and would be pleasantly surprised if it never got way worse.
The argument for it is the long-term trajectory of more egalitarian scenarios. The class of people who'll benefit from ownership of transformative AI will be fairly small, unusually intelligent, conscientious, good at optimizing business processes, and long-term oriented. In two words, high-agency. We already see the outline of this elite group. If a half-ape like me can think about eventual cool and useful things to do on an astronomical scale, to scale my agency up, they ought to be able to feel it already. It might be akin to Anders Sandberg's plan. As you can see, in 10k years Sandberg plans to not only contain multitudes but let them out, which is to say, there'll be some big N of Sandberg copies doing various fancy things that'll require substantial compute and matter (seeing as all low-hanging fruit will be picked before then). Altman observes, correctly, that there's lotta energy in the universe – but not so "lotta" as to make the question of apportionment moot. N Sandbergs are N-1 plebs who don't get agency equal to a Sandberg copyclan's.
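To make that zero-sum apportionment arithmetic explicit, here's a minimal sketch; the endowment normalization and the population figure are purely illustrative assumptions:

```python
# Toy model of fixed-endowment apportionment: with the total endowment
# normalized to 1 and a fixed pool of equal slots, an N-copy clan that
# claims N slots leaves N-1 would-be claimants with nothing. All numbers
# here are invented for illustration.

TOTAL_ENDOWMENT = 1.0   # the whole reachable endowment, normalized
CLAIMANTS = 10_000      # hypothetical number of equal claimants
EQUAL_SLOT = TOTAL_ENDOWMENT / CLAIMANTS

def copyclan_footprint(n_copies: int) -> tuple[float, int]:
    """Share held by an n-copy clan, and the equal slots it displaces."""
    return n_copies * EQUAL_SLOT, n_copies - 1

share, displaced = copyclan_footprint(1_000)
print(f"A 1,000-copy clan holds {share:.1%} of the endowment "
      f"and displaces {displaced} other claimants.")
```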
Crucially, Sandberg expects his copies to broadly share the same value function, so he is more than happy to share his allotted fraction of the Cosmic Endowment with them. Barring copies, I'd bet he'd be equally happy with people sharing his philosophical outlook, aesthetics, and interests; and probably with personal friends, relatives and such (though I'd trust him not to be obscenely clannish). Obviously, that selects against 99% or more of the Earth's population.
Sandberg is just a public speaker – but we can expect actual AGI profiteers to reason along the same lines, and be more clannish at that. And are they wrong?
Suppose we naively equalize this power, or just adapt current political institutions to it, such that in a few generations a plebeian can secure resources to start his own copyclan and bite off some share of the light cone. What would they make of it? Would they not devolve into puddles of high-maintenance hedonium? Or, worse, would they not spill into ugly rat races over artificially scarce artifacts to secure positional goods, invent increasingly absurd sports, flaunt their cognitive limitations, vote for some even more buffoonish Trumps, and generally mode-collapse into God-monkeys replaying behavioral loops from the savannah? Worst of all, would they not succumb to Moloch in His basest form, the Blight from Sandberg's own worldbuilding exercise, as Scott warned in his meditation?
I'm less of an elitist than you, and you're far from the worst offender, but frankly it's very hard for me to imagine that, if I were to make the decision that people upstream of Altman or Hassabis will soon be positioned to make, I'd have the heart to play Prometheus. I would, however, try to spread the prerequisites of high agency. I'd be enticing baseline humans to partake of Ambrosia, the Fruit of Knowledge and the water of Mnemosyne before giving them Fire.
But that's only in hopes of increasing the share of actors who'd be motivated and able to do interesting things with what they can take – in other words, who'd be capable of being reasoned with and similar to me, similar enough that I'd have no great regrets about ceding effectively the whole light cone to them. In terms of outcomes, it's not that different from inflating my own clan or copyclan, only more humane (and local traditionalists would say it's actually more evil than just letting them die). And the chance of success is lower – as you say, «unlikely due to lack of public buy-in».
Power dynamics do not send to the top the sort of people who'd take such risks out of idealism or aesthetic preference.
Conveniently, utilitarians tell us that human lives, happiness points and QALYs are fungible, so it makes little difference on the cosmic scale whether you uplift the current 8 billion half-apes or let them expire (but ethically, e.g. by doubling down on addictive entertainment production, SusTainaBility propaganda, birth control and child substitutes, and by industrializing this novel Canadian practice of recommending euthanasia to unhappy poor people), and generate a more aligned population from the small chosen seed.
The big difference lies in the odds of success, so the choice is straightforward – even without the brute consideration of kin preference.
Yeah, but that doesn't preclude giving them, say, the equivalent of an Earth's worth of resources. It doesn't necessitate murdering them by restricting anti-aging or mind-uploading tech.
I don't expect there to be that much interesting stuff to do in Reality. Space is ~undifferentiated at scale. Agency = compute.
What would that involve?
I think most would agree that killing someone, to swap them for someone new, is not good.
Maybe it's copium, but I really don't believe that it's likely. It's a coherent view, and it does make sense from a purely selfish perspective, sorta – but moral intuitions would scream. I mean, really? (Not literal) post-scarcity achieved, now let's go kill everyone except close family and such? Kill 10 billion actually living humans and replace them with new, personally designed instances?
All of that motivated by just wanting to grab, say, 10% more resources (otherwise allocated equally between existing humans)?
Anyway. From Perfect Imperfection:
(...)