Scott has posted a discussion of the conversation about eugenics, framed as an actual conversation. I found it thought-provoking, as he made better arguments for both sides than I am used to seeing from either.
A: Given that mild, consensual forms of eugenics have historically led to extreme, horrifying versions, we have reason to believe the topic is a slippery slope which ought to be avoided outright.
B: This proves too much, as there are plenty of other ideas with similar history but much higher body counts. Thus eugenics ought to be carefully investigated rather than tabooed outright.
In the footnotes, he also presents C: Ehrlich did nothing wrong, and sometimes expected-value calculations simply fail to account for long tails. Democracy, as a form of distributed consent, is our best way to square this circle. This (correctly, IMO) leaves Scott uncomfortable. I appreciate that he included it.
I was not at all familiar with Ehrlich’s work, or with the quintessentially-McNamara history of Indian aid programs. Both add some valuable context for the argument. Oh, and I guess Scott talks about HBD a little bit; that’ll be catnip for this community, but it’s really secondary to the main thrust. Seriously, just read the article for a better version than anything I can write.
A brief argument that “moderation” is distinct from censorship mainly when it’s optional.
I read this as a corollary to Scott’s Archipelago and Atomic Communitarianism. It certainly raises similar issues—especially the existence of exit rights. Currently, even heavily free-speech platforms maintain the option of deleting content, whether for legal or practical reasons. But doing so is incompatible with an “exit” right to opt back in to the deleted material.
Scott also suggests that if moderation becomes “too cheap to meter,” it’s less likely to be conflated with censorship. I’m not sure I see it. Assuming he means something like free, accurate AI tagging/filtering, how does that remove the incentive to call [objectionable thing X] worthy of proper censorship? I suppose it reduces the excuse of “X might offend people,” requiring more legible harms.
As a side note, I’m curious whether anyone else browses the moderation log periodically. Perhaps I’m just engaging with outrage fuel. But it also seems like an example of unchecking (some of) the moderation filters in order to stay calibrated.