This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
After the Zizians and the efilist bombing, I have tried to pay more attention to the intersection of ethical veganism, rationalists, and nerdy utilitarian blogs.
A Substack post titled "Don't Eat Honey" argues that buying or consuming honey is unethical, for insect-suffering-at-scale reasons. According to the essay, bees, like livestock, suffer quite a lot at the hands of beekeepers, and honey production involves a staggering number of bees. That's a lot of suffering. Thus the title: don't eat honey.
This particular post is high on assumption and light on rigor, and it was received with outrage. An earlier post on Bentham's blog about insect suffering struck me as higher-quality material for understanding the position. Did you know that composting is an unethical abomination? I'd never considered it!
'Suffering' presents an incommensurable problem. Suffering is a social construct. Suffering is the number and intensity of firing pain receptors over time. Suffering is how many days in a row I experienced boredom as a teenager. Still, science attempts to define and quantify suffering. An equation works out the math: how conscious a cricket is in relation to man, a cricket's assumed capacity to feel pain, the length of time it spends feeling pain, and so on. My prediction is we will figure out the consciousness part of the equation with stable meaning before we ever do so for suffering.
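To give a sense of the shape of that math, here is a toy version; every name and number below is invented for illustration, not taken from any particular post:

```python
# Toy expected-suffering estimate, in the rough style these blogs gesture at.
# Every input is a made-up placeholder, not a real estimate.
p_sentience = 0.1       # assumed probability a cricket is conscious at all
welfare_range = 0.01    # assumed intensity of cricket pain relative to a human's
pain_hours = 24         # assumed time each cricket spends in pain
n_crickets = 1_000_000  # assumed number of crickets affected

expected_suffering = p_sentience * welfare_range * pain_hours * n_crickets
print(expected_suffering)  # "human-equivalent pain-hours", on these assumptions
```

Every term in that product is contestable, and the consciousness term is arguably the only one with a hope of acquiring a stable meaning.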
We will manage to rethink, remeasure, and find additional forms of suffering. People always have. Today, plants do not feel "pain", but tomorrow, pain may not be a prerequisite for suffering. Maybe starvation becomes a moral imperative. If the slope sounds too slippery, please consider that people have already built a (relatively unpopular) scaffolding to accept and impose costs at the expense of human comfort, life, and survival. Admittedly, that suffering may present an incommensurable problem doesn't negate any imperative to reduce it. Find more suffering? Reduce that, too. It does give me reason to question the limitations and guard rails of the social technology.
According to Wikipedia, negative utilitarians (NU) are sometimes categorized as strong NUs and weak NUs. This distinguishes what I'd call fundamentalists, who follow suffering-minimizer logic to whatever ends, from the milder "weak" negative utilitarians. The fundamentalist may advocate for suffering reduction at a cost that includes death, your neighbor's dog, or the continued existence of Slovenia, the honey bee capital of the world. Our anti-honey, anti-suffering advocate has previously demonstrated he values some positive utility when it comes to natalism, but much of his commenting audience appears to fall more in the fundamentalist category.
One vibe I pick up from the modern vegans is that anti-suffering ethics are the ethics of the future: that our great-grandchildren will look back and wonder how we ever stooped so low as to tolerate farming practice A or B. I don't doubt we'll find cost-effective technological solutions that will be accepted as moral improvements in the future. I am not opposed to those changes on principle. Increase shrimp welfare if you want, fine.
My vague concern is that this social technology doesn't appear limited to spawning technological or charitable solutions. With things like lab meat showing up more frequently in the culture war, I'd expect the social technology to spread. So far, however, vegans remain a stable population in the US. Nerdy utilitarian bloggers have yet to impose their will on me. They just don't think I should eat honey.
Of all the things I did not expect to see in a "J'Accuse!" post, composting would have been high on the list, had I ever contemplated the ethical and moral issues involved in letting worms break down food scraps to create soil. Like they've been doing ever since the first worms crawled through soil breaking down humus.
When I read stuff like that (if your food scraps are already fly-infested, be sure to humanely kill the insects before disposing of your rubbish), I have to wonder: are these people living in the world of nature at all? They're writing as though they were all born and raised on a space station and never saw anything non-metallic or non-artificial in all their born days.
I swear, I am getting N.I.C.E. vibes from this attitude of "nature, ugh, organic life is so gross and icky" about, well, every darn natural process in the world of animal life. From "That Hideous Strength":
Anyone else reading that excerpt and thinking 'Based'? Wouldn't it be excellent to carve out a new artificial world, make better animals and plants according to one's wishes? Live as long as one likes without regard for age?
Not the specifics of perfectly cleaning the world; that could take many angles. One might make a jungle of talking animals, or an endless lived-in leafy suburbia, or a Willy Wonka wonderland, or all of those things simulated within a ball of computronium. But isn't that the logical endpoint of ever-increasing mastery and control of the world? What's the alternative, stasis?
I can sense that many people don't like this vision, but isn't this what we're doing, regardless of objections? Unless you think 'no, people mustn't live forever' or 'we mustn't have children' or 'technological advancement must stop', you endorse indefinite growth in numbers and in power of worldshaping and knowledge ("All stable processes we shall predict. All unstable processes we shall control"), so eventually something like this will happen.
That is why he wrote it that way. He's describing a character, a type of character even, not just a caricature.
I'm all for building artificial worlds. I'm skeptical that "better" plants and animals are possible; we've altered plants and animals before, and we can doubtless alter them far more radically in the future, but what makes those alterations "better"? "Living as long as one wants, regardless of age" used to be something I was very excited for; less so after contemplating the downsides. All the pathways to serious immortality I'm aware of involve making the sum of me fully legible, and the risks of that very likely outweigh any possible benefit, assuming it's even possible.
The alternative is thinking that our mastery is not ever-increasing in the way you seem to mean. Technology has greatly increased, and maybe it will increase even more, but technology is not the same thing as mastery. If you want a highly reductive example of the difference between the two, compare the original Snow White film to the remake. The people who made the remake had vastly more technology, vastly more resources, vastly more experience in filmmaking to draw on; more "mastery", right? So why was the original a masterpiece, and the remake a trash disaster? Again, that's a highly reductive example, but it seems to me that the principle generalizes quite widely.
I don't think we are moving toward ever-increasing mastery. I don't think we have to stop tech advancement either. I think what will happen next is pretty similar to what has happened before: we'll build something wondrous, and then the contradictions will assert themselves and it will all fall apart.
Technology is the concentration of power. Concentrated power is individual power. There is almost certainly a level of individual power that society, as we understand the term, can't contain or channel, and once that level is achieved society will simply fail. Society maintains technology; when society fails, likely the technology will fail as well, and then it's back down the curve for the survivors.
Maybe this time will be different. I wouldn't bet on it, though.
I think they could've made a better Snow White film than the original; it's just that they didn't want to. They wanted to make a bad film and did so.
Mastery isn't the problem; it's bad people using great resources to achieve bad goals. Now that I see it, there's a pleasing symmetry between our tags, "Just build nuclear plants" and "nuclear levels of sour", and what we're saying.
However, I do agree that there are serious risks with progress and power concentration; it will probably end in tears for the vast majority of us, for the same fundamental reason: bad people wanting bad things.
I don't see a collapse pathway though, only greater acceleration. Technology forms society. Writing and agriculture enabled settled states; steam engines enabled modern society. Powerful AI will enable a transhuman or posthuman society. Maybe that does look more like an oligarchy where a few enjoy limitless technological power and can suppress everyone else. It may well be bad for those who aren't among the chosen few, or the singular one. Nevertheless, I expect that it'd be much more highly developed than modern civilization in technological sophistication and scale.
Even if there's a full nuclear exchange induced by destabilizing technology, would the survivors really give up on securing more wealth, more power, more security through technological superiority? I believe they'd think 'damn, we should've struck first' or 'this time let's hide our schemes more effectively' or 'at least we've got the most remaining resources, we can try again'. They'd still know everything we know, and they'd be back at it again sooner or later, probably sooner and with a more ferocious sense of determination. A full nuclear exchange isn't certain either; it's hard to foresee what happens. I agree that there will be ever-greater instability and disruptions, but that's just part of the transition from one kind of society to the next. The general trend is that even occasional setbacks (usually rooted in social decline) are overcome: the Bronze Age Collapse, the fall of Rome, and the Black Death only temporarily inhibited a larger trend of acceleration. Ideally acceleration would be channelled in a more pro-social way than it is, but it seems an irresistible trend. Only if this time is different should we expect it to fail.
I'm pretty sure no one involved in the process actually said "Our goal is to make a bad film". I'm pretty sure a lot of people involved in the process were trying as hard as they possibly could to make a blockbuster. Maybe all of them. And again, they had orders of magnitude more technology than Walt Disney had, but the technology didn't actually solve the problem of making a good movie even a little bit.
Just so. Humans inevitably human, for good or ill. They'll human with sticks and rocks, and they'll human just as hard with nanocircuitry and orbital launch vehicles and nuclear fusion.
Are you familiar with Bostrom's Vulnerable World Hypothesis? If not, I'd recommend it. The standard assumption is that tech advancements proceed in a stable fashion, that the increase in individual/breaking power is balanced by an increase in communal/binding power. I don't think that assumption is valid, not only for future tech, but very likely for tech that already exists. What we have available to us at this moment is probably enough to crash society as we know it; all that is required is for the dice to come up snake-eyes. Adding more tech just means we roll more dice. Maybe, as you say, some future development jacks the binding power up, and we get stable dystopia, but honestly I'd prefer collapse.
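As a back-of-the-envelope illustration of the "more dice" point (the per-roll probability here is purely hypothetical), the chance of at least one catastrophic roll climbs quickly:

```python
# Hypothetical: treat each new technology as an independent roll of the dice,
# each with a small chance of coming up "snake-eyes" (catastrophe).
p_per_roll = 1 / 36  # made-up per-technology chance of catastrophe
for n_rolls in (1, 10, 50, 100):
    p_at_least_one = 1 - (1 - p_per_roll) ** n_rolls
    print(n_rolls, round(p_at_least_one, 3))  # ~0.028, 0.245, 0.756, 0.94
```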
You're correct that we bounced back from the Black Death and so on. But consider something like Bostrom's "easy nukes" example. There, the threat is baked into the tech itself. There's no practical way to defend against it. There's no practical way to live with it. You can suppress the knowledge, likely at grievous cost, but the longer you keep it suppressed, the more likely someone rediscovers it independently. Bostrom's example is of course a parable about AI, because he's a Rationalist and AI parables are what Rationalists do. It seems to me, though, that their Kurzweilian origins deny them the perspective needed to see the other ways the shining future might be dismayed.