This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
After the Zizians and the efilist bombing, I have tried to pay more attention to the intersection of ethical veganism, rationalists, and nerdy utilitarian blogs.
A Substack post titled "Don't Eat Honey" was published. It argues that buying or consuming honey is unethical for insect-suffering-at-scale reasons. According to the essay, bees, like livestock, suffer quite a lot at the hands of beekeepers, and there are a lot of bees. Thus the title: don't eat honey.
This particular post is high on assumption and light on rigor, and it drew outrage. Another post on Bentham's blog about insect suffering struck me as higher-quality material for understanding the position. Did you know that composting is an unethical abomination? I'd never considered it!
'Suffering' presents an incommensurable problem. Suffering is a social construct. Suffering is the number and intensity of firing pain receptors over time. Suffering is how many days in a row I experienced boredom as a teenager. Still, science attempts to define and quantify suffering. An equation works out the math: how conscious a cricket is in relation to man, a cricket's assumed capacity to feel pain, the length of time it spends feeling pain, and so on. My prediction is we will figure out the consciousness part of the equation with stable meaning before we ever do so for suffering.
We will manage to rethink, remeasure, and find additional ways of suffering. People always have. Today, plants do not feel "pain", but tomorrow, pain may not be a prerequisite for suffering. Maybe starvation becomes a moral imperative. If the slope sounds too slippery, please consider that people have already built a (relatively unpopular) scaffolding to accept and impose costs at the expense of human comfort, life, and survival. Admittedly, that suffering may present an incommensurable problem doesn't negate any imperative to reduce it. Find more suffering? Reduce that, too. It does, however, give me reason to question the limitations and guard rails of the social technology.
According to Wikipedia, negative utilitarians (NU) are sometimes categorized as strong NUs and weak NUs. This differentiates what I'd call fundamentalists, who follow suffering-minimizer logic to whatever ends, from the milder "weak" utilitarians. The fundamentalist may advocate for suffering reduction at a cost that includes death, your neighbor's dog, or the continued existence of Slovenia, the honey bee capital of the world. Our anti-honey, anti-suffering advocate has previously demonstrated he values some positive utility when it comes to natalism, but much of his commenting audience appears to fall in the fundamentalist category.
One vibe I pick up from the modern vegans is that the anti-suffering ethics are the ethics of the future. That our great-grandchildren will look backwards and wonder how we ever stooped so low as to tolerate farming practice A or B. I don't doubt we'll find cost effective, technological solutions that will be accepted as moral improvements in the future. I am not opposed to those changes on principle. Increase shrimp welfare if you want, fine.
My vague concern is that this social technology doesn't appear limited to spawning technological or charitable solutions. With things like lab meat showing up more frequently in the culture war, I'd expect the social technology to spread. So far, however, vegans remain a stable population in the US. Nerdy utilitarian bloggers have yet to impose their will on me. They just don't think I should eat honey.
Of all the things I did not expect to see in a "J'Accuse!" post, composting would have been high on the list, had I ever contemplated the ethical and moral issues involved in letting worms break down food scraps to create soil, like they've been doing ever since the first worms crawled through soil breaking down humus.
When I read stuff like that (if your food scraps are already fly-infested, be sure to humanely kill the insects before disposing of your rubbish), I have to wonder: are these people living in the world of nature at all? Like, they're writing as though they were all born and raised on a space station that contained not a crumb of non-metallic, non-artificial surface in all their born days.
I swear, I am getting N.I.C.E. vibes from this attitude of "nature, ugh, organic life is so gross and icky" about, well, every darn natural process in the world of animal life. From "That Hideous Strength":
Anyone else reading that excerpt and thinking 'Based'? Wouldn't it be excellent to carve out a new artificial world, make better animals and plants according to one's wishes? Live as long as one likes without regard for age?
Not the specifics of perfectly cleaning the world, that could take many angles. One might make a jungle of talking animals, or an endless lived-in leafy suburbia or a Willy Wonka wonderland or all of those things simulated within a ball of computronium. But isn't that the logical endpoint of ever increasing mastery and control of the world? What's the alternative, stasis?
I can sense that many people don't like this vision, but isn't this what we're doing, regardless of objections? Unless you think 'no, people mustn't live forever' or 'we mustn't have children' or 'technological advancement must stop', then you endorse indefinite growth in numbers and in power of worldshaping and knowledge ("All stable processes we shall predict. All unstable processes we shall control"), so eventually something like this will happen.
That is why he wrote it that way. He's describing a character, a type of character even, not just a caricature.
I'm all for building artificial worlds. I'm skeptical "better" plants and animals are possible; we've altered plants and animals before, and we can doubtless alter them far more radically in the future, but what makes those alterations "better"? "Living as long as one wants, regardless of age" used to be something I was very excited for, less so after contemplating the downsides. All the pathways to serious immortality I'm aware of involve making the sum of me fully legible, and the risks of that very likely outweigh any possible benefit, assuming it's even possible.
The alternative is thinking that our mastery is not ever-increasing in the way you seem to mean. Technology can and has greatly increased, and maybe it will greatly increase even more, but technology is not the same thing as mastery. If you want a highly reductive example of the difference between the two, compare the original Snow White film to the remake. The people who made the remake had vastly more technology, vastly more resources, vastly more experience in filmmaking to draw on; more "mastery", right? So why was the original a masterpiece, and the remake a trash disaster? Again, that's a highly reductive example, but it seems to me that the principle generalizes quite widely.
I don't think we are moving toward ever-increasing mastery. I don't think we have to stop tech advancement either. I think what will happen next is pretty similar to what has happened before: we'll build something wondrous, and then the contradictions will assert themselves and it will all fall apart.
Technology is the concentration of power. Concentrated power is individual power. There is almost certainly a level of individual power that society, as we understand the term, can't contain or channel, and once that level is achieved society will simply fail. Society maintains technology; when society fails, likely the technology will fail as well, and then it's back down the curve for the survivors.
Maybe this time will be different. I wouldn't bet on it, though.
Would you rather be "fully legible" or fully dead? Easy choice as far as I'm concerned.
Fully dead, and it is indeed an easy choice.
The immortality you pine for would open you up to the most perfect and degrading form of slavery conceivable.
While a very nice scifi story, there's very little reason to think that reality will pan out that way.
It suffers from the same failure of imagination as Hanson's Age of Em. We don't live in a universe where it looks like it makes economic sense to have mind uploads doing cognitive or physical labor. We've got LLMs, and will likely have other kinds of nonhuman AI. They can be far more finely tuned and optimized than any human upload could be while remaining recognizably human, while costing far less in terms of resources to run. While compute estimates for human brain emulation are all over the place, varying by multiple OOMs, almost all such guesses are far, far larger than a single instance of even the most unwieldy LLM around.
I sincerely doubt that even a stripped down human emulation can run on the same hardware as a SOTA LLM.
If there's no industrial or economic demand for Em slaves, who is the customer for mind-uploading technology?
The answer is obvious: the person being uploaded. You and me. People who don't want to die. This completely flips the market dynamic. We are not the product; we are the clients. The service being sold goes from "cognitive labor" to "secure digital immortality." In this market, companies would compete not on how efficiently they can exploit Ems, but on how robustly they can protect them.
There is no profit motive behind enslaving and torturing them. Without profit, you go from industrial-scale atrocities to bespoke custom nightmares, which aren't really worth worrying about. You might as well refuse to have children or other descendants, because someone can hypothetically torture them to get back at you. If nobody is making money off enslaving human uploads, then just about nobody but psychopaths will seek to go through the expense of torturing them.
I'm inclined towards your skeptical take - I think we as humans always fantasize that there are powerful people/beings out there who want to spend resources hurting us, when the real truth is that they simply don't care about you. Sure, the denizens of the future with access to your brainscan could simulate your mind for a billion subjective years without your consent. But why would they?
The problem is that there's always a risk that you're wrong, that there is some reason or motive in post-singularity society for people to irreversibly propagate your brainscan without your consent. And then you're at the mercy of Deep Time - you'd better hope that no beings that ever will exist will enjoy, uh, "playing" with your mind. (From this perspective, you won't even have the benefit of anonymity - as one of the earliest existing minds, it's easy to imagine some beings would find you "interesting".)
Maybe the risk is low, because this is the real world we're dealing with and it's never as good or bad as our imaginations can conjure. But you're talking about taking a (small, you argue) gamble with an almost unlimited downside. Imagine you had a nice comfortable house that just happened to be 100m away from a hellmouth. It's inactive, and there are guard rails, so it's hard to imagine you'd ever fall in. But unlikely things sometimes happen, and if you ever did, you would infinitely regret it forever. I don't think I'd want to live in that house! I'd probably move...
That is a far more reasonable take, but once again, I'd say that the most likely alternative is death. I really don't want to be dead!
There are also ways to mitigate the risk. You can self-host your uploads, which I'd certainly do if that were an option. You could have multiple copies running: if there are 10^9 happy, flourishing self_made_humans out there, it would suck to be the couple dozen being tortured by people who really hate me because of moderation decisions made on an underwater basket weaving community before the Singularity, but that's acceptable to me. I expect that we would have legal and technical safeguards too, such as some form of tamper-protection and fail-deadly in place.
Can I guarantee someone won't make a copy of me that gets vile shit done to it? Not at all, I just think there are no better options even given Deep Time. It beats being information-theoretically dead, at which point I guess you just have to pray for a Boltzmann Brain that looks like you to show up.