
Culture War Roundup for the week of January 16, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Following up on a discussion with @drmanhattan16 downthread:

I keep hearing about fascist infiltration or alt-right infiltration into spaces, including themotte, but no one seems to actually be showing examples...

But now I find myself wondering if this has happened in more progressive spaces that were open to debate.

I think the answer is usually going to be "yes."

A couple months ago, during some meta-discussion of disappearing threads, I wrote up my thoughts on conspiracy theories as countersignaling. As long as there's incentive to appear cool, independent, unique, there is incentive to push the boundaries of acceptability. It's called "edgy" for a reason.

One of the common cultural touchstones for edge is forbidden knowledge. As a result, anywhere you find edgy status games, you'll find someone claiming to know whatever it is They don't want you to know. Except...if one can just say it out loud, how cool and secret can it really be? The theorist is incentivized to play up their edge, posing as a rebel who won't be cowed rather than an attention-seeker. As an aside, antisemitism is past its heyday because it's not very good for this. Enough people pattern-match it to "attention-seeker" that it loses its edge. This is the result of decades of memetic immune response to those status games. Of course, given that one very definitely can get banned for it, it retains edgy credentials...sometimes.

(Note that I'm not claiming the antisemites here are just edgy. I understand you're pretty serious about the subject. The motte is a weird place and has other status games; personally, I think that COVID skepticism has a grip on more of the edgelords.)

In the end, some people will find themselves drawn to signal their edge. Those who do so overtly will usually end up banned, unless they signal something really milquetoast, in which case they're probably "cringe." Those with a little more tact, though...they are incentivized to find something under the radar. To maintain that sweet, sweet plausible deniability while still getting a rise out of the opposition. They need something that will prove their status as an independent free-thinker who doesn't fall for the party line.

And they take the black pill.

Note that I'm not claiming the antisemites here are just edgy. I understand you're pretty serious about the subject. The motte is a weird place and has other status games

I hope you'll forgive me for ignoring the main thrust of your post to go off on this tangent, but I've had this rolling around in my head for a while. I hope the rest of you will forgive me for poking fun at things that I'm often guilty of myself.

How to Win Friends and Influence People: the Rationalist Edition
  1. Extreme (emotional) Decoupling. Emotion is weakness, rationality is strength. Utilitarianism and consequentialism are our gods; the more gruesome the morally correct action you're willing to undertake, the greater the human good you're able to invoke, the better. Examples: Eliezer 'melt all GPUs' Yudkowsky, Abortion/forced sterilization/[policy harming black people] is eugenic and therefore net good, censorship is worthwhile when you're bearing the lofty weight of future quadrillions of the pan-galactic human-AI-hybrid diaspora on your shoulders. Humility is for normies and low-status chuds, not you, you beautiful prism of rarefied logic, you.

  2. Long, internally consistent logical chains based on premises with monstrous error bars/uncertainty (see previous points). The longer and harder to follow, the easier it is to obscure and deflect criticism, and the greater your boost in status.

  3. Literature references. Point score is directly correlated with obscurity; actually having read the work in question is optional. Bonus points for linking SSC pieces, double bonus points if they're from 2016 or earlier.

  4. Write like a high-schooler who just discovered the wonders of a thesaurus. IQ is life. Everyone knows that vocabulary size is correlated to IQ, which is correlated to g, which determines your worth as a human being and position in the hierarchy. What better way to give your stock a little bump than to sprinkle in a few five syllable words that fell out of common use somewhere in the 19th century?

  5. Why post a succinct list with references when you can write a 30,000 character multipost that is a struggle to get through? This (1) gives you wriggle room to claim any characterization of your thesis is a strawman and (2) allows you to...

  6. ...respond to people with a half dozen links to your corpus of 10,000 word posts amounting to a small novella for them to read! Remember, the goal is gaining status, not clear communication of ideas or mutually working towards a model of the world. Obscurantism serves the former, brevity will only hurt you. And potentially get you in hot water with the mods.

  7. Complain about the normies in academia, MSM, HR, government, your life, etc. vocally and frequently. This communicates that you're smarter than them, and remember, criticism is always easier than defending a thesis or building something worthwhile, and thus a disproportionately easy route to status.

How to Win Friends and Influence People: the Rationalist Edition

My reaction at this point:

At this point, reading a comment like this one, you already know what the next “narrative beat” has to be.

The fact that this has to be the next narrative beat in an article like this should raise red flags. Another way of phrasing “this has to be the next narrative beat” is that it’s something we would believe / want to believe / insert at this place in our discourse whether it was true or not. That means we need to be on extra special good epistemic behavior when we try to consider whether it’s true in this individual case, understanding that we’ll have a strong bias towards assuming “yes” that needs to be counteracted.

So I checked your points against both the latest 7 top-levels in the thread 1, 2, 3, 4, 5, 6, 7, and my memory of popular comments.

Extreme (emotional) Decoupling.

Not true of any of the top-levels. Some popular comments are like this, but some are also earnest presentations of a situation that doesn't fit any major narrative, or are just vocally angry at the outgroup.

Long, internally consistent logical chains based on premises with monstrous error bars/uncertainty

Not true of any top-levels or popular comments. Long arguments tend to make their points in more detail rather than make more points: including multiple examples for something, adding visceral details to a situation you're asking people to consider, countering first-order objections, even just repeating yourself in different words. Very long comments mostly invest in parallel rather than serial argumentation.

Literature references.

Not true of any of the top-levels. Occasionally in popular comments.

Write like a high-schooler who just discovered the wonders of a thesaurus.

True of one of the top-levels, and I would guess a similar rate for popular comments. I think the author of that one top-level is ESL and that probably contributes. Then again, so am I, so maybe comments that don't seem pretentious to me (or my own) do seem that way to natives.

Why post a succinct list with references when you can write a 30,000 character multipost that is a struggle to get through?

Not true of any of the top-levels. Popular comments are sometimes very long and a struggle to get through, but it's not clear that they could be effectively shortened into a list of references. The last multipost I remember is this, if you want to try your hand at compression. It's also built around a literature reference and has a pretty decoupled premise, but it doesn't seem bad to me.

...respond to people with a half dozen links to your corpus of 10,000 word posts amounting to a small novella for them to read!

OK, I think at this point there are two people doing this, and you actually find only one of them annoying.

Complain about the normies in academia, MSM, HR, government, your life, etc. vocally and frequently. This communicates that you're smarter than them, and remember, criticism is always easier than defending a thesis or building something worthwhile, and thus a disproportionately easy route to status.

Depending on how you count it, up to 4 of the top-levels. I'd say one of them is mainly about that. It seems hard to avoid criticising academia, MSM, HR, or government in CW posts, and "normies" doesn't restrict the description very much. I think this is actually a bit less common/intense in popular comments.

Based.

I am disappointed that this post didn't include the word "quokka", though point 1 covers part of it--it's basically applying #1 to yourself. So much high decoupling that you not only don't recognize when your words seem monstrous to others, you no longer recognize other people acting monstrously to you.

Literature references.

Or anime.

I am disappointed that this post didn't include the word "quokka"

Quincy is too cute, didn't have it in me.

...respond to people with a half dozen links to your corpus of 10,000 word posts amounting to a small novella for them to read

please do this more, @everyone! If someone finds your writing interesting, they'll often want to read more, and conveniently-located links are much nicer on this platform than scrolling through a comment history.

Abortion/forced sterilization/[policy harming black people] is eugenic and therefore net good

this doesn't win friends among rationalists. this wins friends among a subset of the alt right / neoreactionaries. those don't have much overlap. Most HBD-believing rationalists are either left-leaning, centrist, or libertarian, and even most who lean far-right don't want to forcibly sterilize black people. Even if that was the right move because Nietzsche or nature or w/e, it's still not something any rationalists believe.

Assume that any professional field that doesn't involve code, protein folding, or pure math is trivially understood.

This is generally my strategy and it works pretty well. It's landed me ~1000% returns in the stock market in 6 months, a new house, and a great career, essentially because I picked a professional field and put 1-2 days of effort into researching it rather than assuming that the experts would outcompete me. It's truly shocking how incompetent the average professional is.

Is this not your experience? Do you not live in a world where professionals constantly get obvious things wrong? It seems to me like most people in all fields basically just glide through life with a bare minimum of understanding necessary to do their day-to-day work, with very little understanding of even the field to which they've devoted their lives.

To be clear, I'm sure there are plenty of fields that take ages to learn, with professionals much smarter than I could ever be even given 100 years of study. But for me, that assumption--that I can gain a competitive edge over an experienced professional given only a couple of days' study of their field--has yielded spectacular results wherever I've applied it.

10x returns in 6 months has to be mostly luck, right?

For programming, my guess is you're just very smart and competent (as an SSC offshoot we select for that a bit), so you're better than most via that, and then selected a niche that's relatively underexplored but is still profitable. I'm pretty sure you wouldn't be able to, in a few days, come up with a string searching or GPU matrix multiplication algorithm that significantly improves on currently used ones.

Yeah, I admit that I'm being a bit contrary by even presenting this as disagreeing with you when really I'm just making my own point here. The stock returns were probably mostly luck (though it wasn't a particularly high-risk strategy) and the programming was probably mostly a result of me being good at math. I absolutely agree I couldn't do string searching/GPU matrix multiplication, or probably even things 1/10 that hard.

I guess a better (less argumentative) way of phrasing my point would be that domain-specific knowledge is extremely important, but there are also more base-level skills (such as math, critical reasoning, charisma, etc.) that feed into many different professions. I think there are many "experts" in fields like programming, sales, psychology, etc. that lack those more base-level skills and thus can be outperformed by people who have them.

You're arguing against the EMH, not professional expertise

I'm arguing against the expertise of stock market traders and programmers (the two fields I studied for a few days). I'm also arguing against expertise more generally, but programming especially is the field where just a few days' study was enough to be in, say, the 99th percentile of programmers in the (admittedly fairly obscure) language I was studying. Stock market trading too, but that can easily be ascribed to luck.

It's well known that doctors make terrible returns doing technical analysis while on break and investing in IBM rather than drug companies where they have a comparative edge

Sure, but I bet there are some doctors who consistently outperform the market too, even when doing technical analysis for companies in unrelated fields. I never claimed that the average person could outcompete experts, just that some people can and do, and that assuming that you can't is a great way to ignore $100 bills lying on the ground.

this doesn't mean you could perform open heart surgery, land a 747, or design a bridge after reading up about it on Wikipedia.

747s can generally land themselves, and I absolutely could design a bridge without reading about it at all--just perhaps not a very large or efficient one. I get your point though--there are certainly things experts are 10^10x better at than I will ever be. I just think that generally, if you are a fairly intelligent person, treating expert claims with skepticism will often yield great results.

But code is just math, math is just code, and protein folding is just the intersection of the two!

I feel personally called out, but in my defense I'll say that all of that is more justifiable than

  • Flat-out lying and especially gaslighting to advance your political agenda

  • Pretending to have amnesia about previous rounds of the discussion

  • Fearmongering, sneering, concern-trolling and going for other emotionally manipulative tactics because you lost the argument and don't wanna admit it

  • Manipulating procedural outcomes by doxxing, vote-brigading, reporting technicalities, attacking the infrastructure and so on, plus the whole Alinsky rulebook.

So long as rats/mottizens, generally speaking, do not commit these sins (perhaps on account of lacking the psychopathic aptitude), whereas their opponents stick to them religiously, I'll say a sperging-out chud is more deserving of attention than a person endowed with such common decency.

I had to reread this post a few times because I forgot the overall topic of the discussion: these bullet points are a near-perfect guide to preventing a misleading/deceptive Wikipedia page that you personally agree with from ever being fixed by good-faith editors trying to make factual corrections. Getting the article in question into the preferred slanted state is a different and often more difficult undertaking, usually requiring a fair amount of clique-building and clout-amassing within the community, but once it's where you want it, this is basically the playbook for defending bad-faith edits on Wikipedia from any principled sieges. As a long-time contributor over there, this so perfectly describes a great deal of the other editors I've known over the years that it's appalling. Well done.

I feel personally called out

No no, of course not. Everyone is guilty of some of the things, but it's not a caricature of one person who is guilty of all of the things. And as Naraburns points out, mostly weakmen.

So long as rats/mottizens, generally speaking, do not commit these sins (perhaps on account of lacking the psychopathic aptitude), whereas their opponents stick to them religiously, I'll say a sperging-out chud is more deserving of attention than a person endowed with such common decency.

For Reasons, I was part of a minority that was a bit rootless in North America and didn't really fit in anywhere. Insofar as I'd identify with any community, it would probably be some flavor of rationalism, and if I found something closer to my heart I'd vote with my feet and leave. But people here articulate a worldview that, for years, I had been struggling to explain to friends and family in a much more inchoate manner.

I still can't help but find some habits and norms oscillating between amusing and irritating.

It's pretty annoying that 16 years ago Yudkowsky wrote a blog post that was deliberately counterintuitive, trading on scope insensitivity (seemingly as some sort of test to spark discussion), and as a result there are people who to this day talk about it without considering the implications of the contrary view. In real life we embrace ratios that are unimaginably worse than 1 person's torture vs. "3↑↑↑3 in Knuth's up-arrow notation" dust specks. People should read OSHA's accident report list sometime. All human activity that isn't purely optimized to maximize safety - every building designed with aesthetics in mind, every spice to make our food a bit nicer, every time we put up Christmas decorations (sometimes getting up on ladders!) - is built at the cost of human suffering and death. If the ratio was 1 torturous work accident to 3↑↑↑3 slight beneficiaries, there would never have been a work accident in human history. Indeed, there are only about 10^86 atoms in the known universe; even if each of those atoms were somehow transformed into another Earth with billions of residents, and this civilization lasted until the heat death of the universe, the number of that civilization's members would be an unimaginably tiny fraction of 3↑↑↑3, and thus embracing a ratio of 1 to 3↑↑↑3 would almost certainly not result in a single accident throughout that civilization's history.
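For anyone who hasn't internalized the notation, here's how that number is built up (these are just the standard Knuth up-arrow identities, not anything specific to Yudkowsky's post):

3↑3 = 3^3 = 27
3↑↑3 = 3^(3^3) = 3^27 = 7,625,597,484,987
3↑↑↑3 = 3↑↑(3↑↑3) = a power tower of 3s stacked 7,625,597,484,987 levels high

A tower of just four 3s, 3^(3^(3^3)) = 3^7,625,597,484,987 ≈ 10^(3.6 × 10^12), already dwarfs the 10^86 atoms figure; the full tower is beyond any physical comparison.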

A more intuitive hypothetical wouldn't just throw out the incomprehensible number and see who gets it; it would make the real-life comparisons, or try to make the ratio between the beneficiaries and the cost more understandable. The easiest way to do this with such extreme ratios is with very small risks (though using risks is not actually necessary). For instance, let's say you're helping broadcast the World Cup, and you realize there will shortly be a slight flicker in the broadcast. You can prevent this flicker by pressing a button, but there's a problem: a stream of direct sunlight is on the button, so pressing it will expose the tip of your finger to sunlight for a second. This slightly increases your risk of skin cancer, which risks getting worse in a way that requires major surgery, which slightly risks one of those freak reactions to anesthesia where you're paralyzed but conscious and in torturous pain the whole surgery. (You believe you have gotten sufficient sunlight exposure for benefits like Vitamin D already, so more exposure at this point would be net-negative in terms of health.) Is it worth the risk to press the button?

If someone thinks there's something fundamentally different about small risks, the same scenario works without them; it just requires a weirder hypothetical. Let us say that human civilization has created and colonized Earth-like planets on every star in the universe, and further has invented a universe-creation machine, created a number of universes like ours equal to the number of atoms in the original universe, and colonized at least one planet for every star in every universe. On every one of those planets they broadcast a sports match, and you work for the franchised broadcasting company that sets policy for every broadcast. Your job consists of deciding policy for a single question: if the above scenario occurs, should franchise operators press the button despite the tiny risk? You have done the research and know that, thanks to the sheer number of affected planets, it is a statistical near-certainty that a few operators will get skin cancer from the second of finger sunlight exposure and then have something go wrong with surgery such that they experience torture. Does the answer somehow change from the answer for a single operator on a single planet, since it is no longer just a "risk"? Is the morality different if instead of a single franchise it's split up into 10 companies, and it works out so that each company has a less than 50% chance of the torture occurring? What if instead of 10 companies it's a different company on each planet making the decision, so for each one it's no different from the single-planet question? Even though the number of people in this multiverse hypothetical is still a tiny fraction of 3↑↑↑3, I think a lot more people would say that it's worth it to spare them that flicker, because the scale of the ratio has been made more clear.
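The "statistical near-certainty" step is just the standard rare-event calculation. A toy sketch, with completely invented numbers (the scenario never commits to any), in case the mechanics aren't obvious:

```python
import math

# All figures below are hypothetical, chosen only to illustrate the shape of
# the argument: a tiny per-operator risk times an enormous operator count.
p_torture = 1e-15      # invented per-operator chance of the full bad chain:
                       # sunlight -> skin cancer -> surgery -> torturous complication
n_operators = 10**22   # invented number of broadcast planets/operators

expected_cases = p_torture * n_operators
# P(at least one case) = 1 - (1 - p)^N. Computing (1 - 1e-15)**1e22 directly
# loses precision, so use the standard approximation (1 - p)^N ~ exp(-p*N).
p_at_least_one = 1 - math.exp(-p_torture * n_operators)

print(expected_cases)   # ~1e7: about ten million expected torture cases
print(p_at_least_one)   # 1.0: the "statistical near-certainty"
```

The exact numbers don't matter; the point is that once the operator count is large relative to 1/p, "risk" stops being the right word.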

In real life we embrace ratios that are unimaginably worse than 1 person's torture vs. "3↑↑↑3 in Knuth's up-arrow notation" dust specks. People should read OSHA's accident report list sometime. All human activity that isn't purely optimized to maximize safety - every building designed with aesthetics in mind, every spice to make our food a bit nicer, every time we put up Christmas decorations (sometimes getting up on ladders!) - is built at the cost of human suffering and death. If the ratio was 1 torturous work accident to 3↑↑↑3 slight beneficiaries, there would never have been a work accident in human history.

This isn't a fair assertion because you neglect the difference your hypothetical makes in "slight beneficiaries". Dust specks truly do have no noticeable effect on people. Things like aesthetic buildings, time saved putting up Christmas decorations, spice in food, etc. can easily have enough of an effect on people to change their lives. I would never choose torture above dust specks, but (depending on knock-on effects) I could easily be convinced to allow someone to be injured for enough time saved elsewhere, or for an aesthetic building.

Also, real life is nowhere near as clean as these hypotheticals, and focusing more on safety has many negative knock-on effects elsewhere. It's not so simple as just "we prefer aesthetic buildings to safe people" because there are SO MANY principles in play in real life, from economics (the harm of mandating safety everywhere--can we give the government that much power?) to technology (if we care that much about safety then we'll never reach immortality etc.) to philosophy (maybe God put us here for a reason and that includes suffering sometimes) to X-risk (why worry about workplace accidents when we could worry about nukes?) to pragmatism (resources better spent elsewhere) to game theory (if you focus on safety, some other country will outcompete you, or some business rival will) and honestly I could go on and on with other considerations which immediately take precedence above safety when you try to make real life into a thought experiment.

In short, life isn't a thought experiment, and in this case it doesn't work to say it proves something about the dust specks.

More importantly, moral intuition doesn't generally need to be built to account for such enormous numbers. I expect that anyone calculating their risk of skin cancer is losing far more utility to the calculation itself than they are to the risk of skin cancer. Genuinely, even going so far as to write out a company policy for that ridiculous scenario (where 3^^^3 people risk skin cancer) would mean asking all of your employees to familiarize themselves with it, which would mean wasting many lifetimes just to save one lifetime from skin cancer.

The other thing is, your last example is still much more mathematically favorable towards the "dust specks" side than the original question was. Many people enjoying a game is (imo) much more significant than many people getting dust specks, while a few people getting skin cancer is much less significant than one person getting tortured for 50 years.

I realize I'm fighting the hypothetical here, but at some point when the numbers are so absurd you kind of have to fight it. The whole point (which I disagree with when the numbers are this big) is that "shut up and multiply" just works, so here's a counter-experiment for you:

  1. Define "Maximally Miniscule Suffering" as something like: "One iota of a person's field of vision grows a fraction of a shade dimmer for a tiny fraction of a second. They do not notice this, but their qualia for that moment is reduced by an essentially imperceptible amount. This suffering has no effect on them beyond the moment. Do this for 3^^^3 people."

  2. Define "Maximal suffering" as something like:

a. Stretch out a person's nerves to cover an entire planet. Improve their brain so that they can feel all of these nerves. Torture every single millimeter of exposed nerve. Do similar things for emotional, psychological torture, etc.

b. Do (a) until the heat death of the universe

c. Do (b) for 100 trillion people

d. Repeat (c) once for each time any member of (c) experienced any hope

e. Repeat (d) until nobody experiences any hope

f. Find a new population and repeat (a-e)

g. Repeat (f) 10^100 times. Select one person from each repetition of (f) who has suffered the most out of their cohort, and line them up randomly.

h. If all 10^100 people from (g) aren't lined up in order of height, repeat (g).

Would you choose Maximal Suffering above Maximally Miniscule Suffering? Because mathematically, and in terms of EY's point, I don't see how this differs from the original dust speck thought experiment.

Also, real life is nowhere near as clean as these hypotheticals, and focusing more on safety has many negative knock-on effects elsewhere.

Sure, that's the cost of using real-life comparisons, but do you really think that's the only thing making some of those tradeoffs worthwhile? That in a situation where it didn't also affect economic growth and immortality research and so on, it would be immoral to accept trades between even minuscule risks of horrific consequences and very small dispersed benefits? We make such tradeoffs constantly and I don't think they need such secondary consequences to justify them. Say someone is writing a novel and thinks of a very slightly better word choice, but editing in the word would require typing 5 more letters, slightly increasing his risk of developing carpal tunnel, which increases his risk of needing surgery, which increases his risk of the surgeon inflicting accidental nerve damage that inflicts incredibly bad chronic pain for the rest of his life equivalent to being continuously tortured. Yes, in real life this would be dominated by other effects like "the author being annoyed at not using the optimal word" or "the author wasting his time thinking about it" - but I don't think that's what is necessary to make it a reasonable choice. I think it's perfectly reasonable to say that on its own very slightly benefiting your thousands of readers outweighs sufficiently small risks, even if the worst-case scenario for the edit is much worse than the worst-case scenario for not editing. And by extension, if you replicated this scenario enough times with enough sets of authors and readers, then long before you got to 3↑↑↑3 readers enough authors would have made this tradeoff that some of them would really have that scenario happen.

While the number 3↑↑↑3 is obviously completely irrelevant to real-life events in our universe, the underlying point about scale insensitivity and tradeoffs between mild and severe events is not. Yudkowsky just picked a particularly extreme example, perhaps because he thought it would better focus on the underlying idea rather than an example where the specifics are more debatable. But of course "unlikely incident causes people to flip out and implement safety measures that do more damage than they solve" is a classic of public policy. We will never live in a society of 3↑↑↑3 people, but we do live in a society of billions while having mentalities that react to individual publicized incidents much like if we lived in societies of hundreds. And the thing about thinking "I'd never make tradeoffs like that!" is that they are sufficiently unavoidable in public policy that this just means you'll arbitrarily decide some of them don't count. E.g. if the FDA sincerely decided that "even a single death from regulatory negligence is too much!", probably that would really mean that they would stop approving novel foods and drugs entirely and decide that anyone who died from their lack wasn't their responsibility. (And that mild effects, like people not getting to eat slightly nicer foods, were doubly not their responsibility.)

Many people enjoying a game is (imo) much more significant than many people getting dust specks, while a few people getting skin cancer is much less significant than one person getting tortured for 50 years.

But it isn't nullifying their enjoyment of the game, it's a slight barely-noticeable flicker in the broadcast. (If you want something even smaller, I suppose a single dropped frame would be even smaller than a flicker but still barely noticeable to some people.) If you're making media for millions of people I think it's perfectly reasonable to care about even small barely-noticeable imperfections. And while the primary cost of this is the small amount of effort to notice and fix the problem, this also includes taking minuscule risks of horrific costs. And it isn't a few people getting skin cancer, it's the fraction of the people who get skin cancer that then have something go wrong with surgery such that they suffer torture. I just said torture during the surgery, but of course if you multiply the number of planets enough you would eventually get high odds of at least one planet's broadcast operator suffering something like the aforementioned ultra-severe chronic pain for a more direct comparison.

Genuinely, even going so far as to write out a company policy for that ridiculous scenario (where 3^^^3 people risk skin cancer) would mean asking all of your employees to familiarize themselves with it, which would mean wasting many lifetimes just to save one lifetime from skin cancer.

Feel free to modify it to "making a design tradeoff that either causes a single dropped frame in the broadcast or a millisecond of more-than-optimal sunlight on the broadcast operator", so that it doesn't consume the operator's time. I just chose something that was easily comparable between a single operator making the choice and making the choice for so many operators that the incredibly unlikely risk actually happens.

Would you choose Maximal Suffering above Maximally Miniscule Suffering?

Sure. Same way that if I had a personal choice between "10^100 out of 3↑↑↑3 odds of suffering the fate you describe" and "100% chance of having a single additional dropped frame in the next video I watch" (and neither the time spent thinking about the question nor uncertainty about the scenario and whether I'm correctly interpreting the math factored into the decision), I would choose to avoid the dropped frame. I'm not even one of the people who finds dropped frames noticeable unless it's very bad, but I figure it has some slight but not-absurdly-unlikely chance of having a noticeable impact on my enjoyment, very much unlike the alternative. Obviously neither number is intuitively understandable to humans but "10^100 out of 3↑↑↑3" is a lot closer to "0" than to "1 out of the highest number I can intuitively understand".

To be clear here, I have two main points:

  1. Some categories of pain are simply incomparable to others (either because they're qualitatively different or because no amount of the one will ever equal or surpass the other)

  2. Moral reasoning is not really meant for such extreme numbers

Say someone is writing a novel and thinks of a very slightly better word choice, but editing in the word would require typing 5 more letters, slightly increasing his risk of developing carpal tunnel, which increases his risk of needing surgery, which increases his risk of the surgeon inflicting accidental nerve damage that inflicts incredibly bad chronic pain for the rest of his life equivalent to being continuously tortured.

Has anyone ever experienced such nerve damage as a result of a decision they took? Do we know that it's even theoretically possible? I can't imagine that really any amount of carpal tunnel is actually equivalent to many years of deliberate torture, even if 3↑↑↑3 worlds exist and we choose the person who suffers the worst carpal tunnel out of all of them. So I'd probably say that this risk is literally 0, not just arbitrarily small. I have plenty of other ways to fight the hypothetical too--things like time considering the choice (which you mentioned), the chance that a better word choice will help other people or help the book sell better, etc.

The point in fighting the hypothetical is to support my point #2. At some point hypotheticals simply don't do a very good job of exposing and clarifying our moral principles. I generally use "gut feelings" to evaluate these thought experiments, but these gut feelings are deeply tied to other circumstances surrounding the hypothetical, like the (much, much greater) chance that a better word choice will lead to better sales or a substantially better reader experience for someone.

Common sense says you shouldn't worry about carpal tunnel when typing. It's easy to say "ok ignore the obvious objections, just focus on the real meat of the thought experiment" but hard to convince common sense and ethical intuition to go along with such a contrived experiment. I'll try and reverse it for you, so that common sense/ethical intuition are on my side but the meat of the argument is the same.

Let's go back to my original scenario of Maximally Miniscule Suffering vs. Maximal Suffering. You are immortal. You can either choose to experience all of the suffering in Maximal Suffering right away, or all of the suffering in Maximally Miniscule Suffering right away.

I think this gets to the heart of my point because

  1. If you sum up all of the suffering and give it to a single person, IMO the minimal suffering will add up to a lot less than the maximal suffering. The former is simply a different type of suffering that I don't think ever adds up to the latter. I would much rather see in black and white for a practically infinite amount of time than experience a practically infinite amount of torture.

  2. By the time you're finally through with maximal suffering in 10^10^100 years or so you will basically be totally insane and incapable of joy. But let's ignore that and assume that you'll be fine. I bring this up because I think even though I say "let's ignore that", when it comes to ethical intuition, you can't really just ignore it, it will still play a role in how you feel about the whole scenario. The only way to really ignore it is to mentally come up with some add-on to the thought experiment like "and then I'm healed so that I am not insane", which fundamentally changes what the thought experiment is.

It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you don't, if the ranking of bad experiences from most mild to most severe has one considered infinitely worse than the one that came before, then your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are. You seem to be taking the fact that the risks in these hypotheticals are not worth actual consideration as a point against these hypotheticals, but of course that is the point the hypotheticals are making.

Moral reasoning is not really meant for such extreme numbers

Nothing in the universe will ever be 3↑↑↑3, but 7 billion people is already far beyond intuitive moral reasoning. We still have to make decisions affecting them whether our moral reasoning is meant for it or not. Which includes reacting differently to something bad happening to one person out of millions of beneficiaries than to one person out of hundreds of beneficiaries.

Has anyone ever experienced such nerve damage as a result of a decision they took? Do we know that it's even theoretically possible? I can't imagine that really any amount of carpal tunnel is actually equivalent to many years of deliberate torture, even if 3↑↑↑3 worlds exist and we choose the person who suffers the worst carpal tunnel out of all of them. So I'd probably say that this risk is literally 0, not just arbitrarily small.

In some percentage of cases the cancer spreads to your brain, you get surgery to remove the tumor, and the brain surgeon messes up in precisely the right way. Both "locked-in syndrome" and chronic pain are things that happen, it's hardly a stretch to think a combination of both that paralyzes you for 50 years while you experience continuous agony is physically possible. And of course even if you were uncertain whether it was physically possible, that's just another thing to multiply the improbability by. It's not that rounding the probability down to 0 doesn't make sense in terms of practical decision-making, it's that "1 in 3↑↑↑3" odds are unimaginably less likely, so you should round them down to 0 too.

If you sum up all of the suffering and give it to a single person, IMO the minimal suffering will add up to a lot less than the maximal suffering.

I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility" but we can't literally "add up" multiple people's experiences within a single person. It doesn't have a coherent meaning so we are free to arbitrarily imagine whatever we want. That said, to the extent that its meaning can be nailed down at all, I think it would favor avoiding the 3↑↑↑3 option. My understanding is that a single pain receptor firing once is not noticeable. If a form of suffering is instead barely noticeable, it is presumably "bigger" than a single pain receptor firing. There are only 37 trillion cells in the human body, so the number of pain receptors is something smaller than that. So the first step in multiplying barely-noticeable suffering by 3↑↑↑3 is that it goes from "worse than a pain receptor firing" to "worse than every pain receptor firing continuously for an extended period". And that doesn't make a dent in 3↑↑↑3, so we multiply further, such as by making it last unimaginably longer than merely 10^100 times the lifespan of the universe.
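To put rough numbers on the "doesn't make a dent" step, here's a back-of-the-envelope sketch with my own deliberately generous figures (none of these come from the comment above except the cell count):

```python
# A scale check: even an absurdly generous budget of "receptor-firings"
# is nothing next to 3↑↑↑3. Every constant is a deliberate overshoot.
receptors          = 3.7e13   # upper bound: ~37 trillion cells in a human body
universe_lifetimes = 1e100    # "10^100 times the lifespan of the universe"
seconds_per_life   = 4.35e17  # ~13.8 billion years, in seconds
firings_per_second = 1e3      # generous firing rate per receptor

total_firings = receptors * universe_lifetimes * seconds_per_life * firings_per_second
print(f"{total_firings:.2e}")  # ~1.61e+134: still effectively 0% of 3↑↑↑3
```

10^134 sounds enormous, but dividing 3↑↑↑3 by it leaves a number that is, to any humanly expressible precision, still 3↑↑↑3.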

That is a pretty arbitrary and meaningless matter of interpretation, though. A more meaningful measure would be the Rawlsian veil of ignorance: you're a random member of a population of 3↑↑↑3; is it better for you that 10^100 of them be tortured, or that all of them experience a dropped frame in a video? This is equivalent to what I answered in my previous post: that it would be foolish to sacrifice anything to avoid such odds.

It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you don't, if the ranking of bad experiences from most mild to most severe has one considered infinitely worse than the one that came before, then your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are.

Yes, this is essentially how I think morality and decision-making should work. Going back to your word choice example, the actual word choice should matter not at all in a vacuum, but it has a chance of having other effects (such as better book sales, saving someone's life from suicide, etc.) which I think are much more likely than the chance that typing in the extra word causes chronic torturous pain.

In real life, small harms like stubbing a toe can lead to greater harms like missing an important opportunity due to the pain, breaking a bone, or perhaps snapping at someone important due to your bad mood. If we could ignore those side effects and focus on just the pain, I would absolutely agree that

your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are

With the appropriate caveats regarding computation time and other side effects of avoiding those extreme consequences.

I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility" but we can't literally "add up" multiple people's experiences within a single person.

See this is kind of my point. I don't think we can just say that there's "net utility" and directly compare small harms to great ones. I agree that it doesn't necessarily make much sense to just "add up" the suffering though, so here's another example.

You're immortal. You can choose to be tortured for 100 years straight, or experience a stubbed toe once every billion years, forever. Neither option has any side effects.

I would always choose the stubbed toe option even though it adds up to literally infinite suffering, so by extension I would force infinite people to stub their toes rather than force one person to be tortured for 100 years.

edit: One more thing, it's not that I think there's some bright line, above which things matter, and below which they don't. My point is mainly that these things are simply not quantifiable at all.

This is an interesting thought experiment and I'm glad you've brought it to my attention. I appreciate it and think this place could use more of these.

Be kind, don't weakman... I'm a little conflicted because it's presumably healthy for the Motte and adjacent spaces to be introspective and self-critical, but we're still a group, the rules still apply. "Poking fun" is always risky business under the rules anyway, but the criticism you've assembled here barely rises above the level of pure, vapid sneer. Allowing that it also applies ("often") to you doesn't really change the fact that you're essentially framing certain behaviors as low status without effortfully addressing the relative merits of those behaviors. I appreciate that you refrained from literally calling out neckbeards and fedoras, but even so what you've mostly succeeded at here is just textbook nerd-bashing. So, please don't do that.

Strong disagree -- I'd say this is lighthearted enough to even rise to the level of 'kind' (or at least 'not unkind'), but it is surely true and necessary.

I'd definitely rather read this than a bunch of posts about how leftists are a bunch of pussies (even though I am personally sympathetic to the underlying complaint) -- this is a bad warning.

it is surely true and necessary

Yeah, I disagree that it is true, and strongly disagree that it is necessary.

I'd definitely rather read this than a bunch of posts about how leftists are a bunch of pussies

How about neither? Because you know, "leftists are a bunch of pussies" is also something we would moderate.

The post is not quintessentially awful. It didn't get a ban. I expressed my own reservations in the warning. But it drew multiple reports and I felt like it was worth my time to point out that this is not really a good example of people who disagree having a fruitful discussion about that disagreement. This is more like a good example of how to playfully signal to someone that you regard them as low-status. I might even be persuaded that it is "not [at least entirely] unkind," but the rule isn't "be not unkind."

How about neither? Because you know, "leftists are a bunch of pussies" is also something we would moderate.

Just downthread of another marginal mod warning currently on the front page, for your reading pleasure:

https://www.themotte.org/post/317/culture-war-roundup-for-the-week/55391?context=8#context

I thought the policy was "tone over content"? CPAR's tone here is lighthearted and funny (also self-deprecating; "I hope the rest of you will forgive me for poking fun at things that I'm often guilty of myself."), and the content is something we could all take to heart. (ie. "necessary")

Yeah, I disagree that it is true

There are literally several responses to the effect of 'I feel seen' -- obviously the post is engaging in hyperbole, but most of the points are reformulations of classic complaints about the rational-o-sphere.

I'm disturbed that something that feels like it could be lifted from a c. 2012 Scott-post is attracting reports, and moreso that the correct response is not seen as "screw 'em if they can't take a joke".

I am concerned too; it is blowing my mind that that post was reported enough to get a warning. And yeah, it might not be facts, but it has a lot of truth to it. I wonder if it's hitting some people harder than others? Or maybe it's strategic, a retaliation against raptr or left-wingers in general for some slight.

I'd say the "WEF conspiracies are an IQ test" post crosses the line a lot more than this.

The WEF post spends 7 paragraphs @ 1k words on non-accusatory exposition about the details of the WEF itself and the history of right-wing beliefs about the WEF, limiting the 'attacking people' part to the title and a short 100-word conclusion that makes a valuable strategic point.

OP by contrast is peppered with unfair generalizations and jabs all the way through, with no evidence to back it up.

Now, I don't really care about personal attacks or unfairness or jabs; my only issue with chris's post is that it's in large part wrong. But it is much more 'rule-breaking' than rafa's when we weigh usefulness against bite. "necessary, true, kind: pick two".

The WEF post literally says "you're dumb for believing this", calls people "an embarrassment", and contains politically coded slurs like "rightoid". Anyone posting anything similar about the Blues would get banned, and I doubt anyone would bother defending it.

This is lighthearted poking fun at people. I don't particularly like it either but nowhere close to the other.

it's (vaguely) "criticism of your team" - "rightoids believing the WEF conspiracies is an embarrassment for us, and makes us less effective / likely to accomplish anything"

It's not. "Rightoids believing WEF conspiracies" are not part of his team, and this place is about discussion, not being effective or accomplishing anything.

The WEF post spends 7 paragraphs @ 1k words on non-accusatory exposition about the details of the WEF itself and the history of right-wing beliefs about the WEF, limiting the 'attacking people' part to the title and a short 100-word conclusion that makes a valuable strategic point.

I won't try to defend my post; if people take it as bullying and mean-spirited it's not my place to argue, only knock it off. That being said - I could have written seven paragraphs on each point, but would that have changed the fundamental argument I was trying to make or just obscured it? Was that length beneficial to the WEF post, or could detail have been cut in the interest of clarity and efficiency?

I've read the rationale behind making post length the low bar to be cleared for many posts, and I even agree with it to an extent. That being said, it's still a kludge and should be treated as such rather than exalted as a terminal value or a virtue. It advantages the verbose and eloquent without improving their arguments; it encourages bad writing habits and degrades the quality of discourse as discussions fragment and people get hung up on minor points non-central to your argument. The purpose of writing is to entertain or convey information, and while there should be latitude for the former, many trying to do the latter write far too much. In my opinion, for what that's worth.

I won't try to defend my post; if people take it as bullying and mean-spirited it's not my place to argue, only knock it off.

Say what? You wrote the post, man; you know better than anyone else how bullying and mean-spirited you meant it to be. You copped a warning for the OP because it got a lot of reports, apparently - what would you do if it was reported by people who have decided you are a leftist and therefore should be shut up? Would you still knock it off to accommodate them?

Say what? You wrote the post, man; you know better than anyone else how bullying and mean-spirited you meant it to be.

I generally believe the onus is on the writer to craft something for their audience to appreciate. If the audience doesn't like it or find it useful, either find a new audience or change your style. Telling them that they're wrong seems to be a bit futile.

I've also just adopted a general heuristic of 'if enough people are telling you you're being an asshole, you're probably being an asshole.' I recognize that can be particularly dangerous and opens you up to manipulation by bad actors, but it also transformed my life in college from unhappy friendless loner to being a relatively popular and successful guy.

But I am impressed that people cared enough to argue with mods on my behalf...

what would you do if it was reported by people who have decided you are a leftist and therefore should be shut up? Would you still knock it off to accommodate them?

Well, yeah, probably. If the community wanted to be an echo chamber, who am I to say otherwise?

If you are trying to avoid being an asshole, why did you write a snarky list of people's faults?

Well, yeah, probably. If the community wanted to be an echo chamber, who am I to say otherwise?

A member of the community.

But I am impressed that people cared enough to argue with mods on my behalf...

Yes, other members of the community defended you. It seems we made a mistake.

This matters a lot to me, because some people around here interpret everything I write in the most mean-spirited way possible, when I haven't tried to be mean-spirited on the motte in years. Which doesn't mean I can't be mean-spirited by accident, of course, but take the phrase "You son of a bitch!" for example - depending on your mood, you might read that as anger. But it isn't necessarily angry; it could be excited - "You (magnificent) son of a bitch!" - or it could be dismayed - "You son of a bitch! (I can't believe you've done this)" - and so on. But if you are in a mood to read it as anger, it will change the tone of the whole post. That doesn't make everyone who interpreted it in the spirit you meant it wrong, though. Nor does it make those who interpreted it as anger right. What makes them right is you then saying that the way you meant it doesn't matter compared to what they think. You gave them that power, and as a result made yourself irrelevant.


I agree entirely with all of that*, don't mind arbitrarily harsh, short, or slur-filled posts if they are interesting, and as said above only dislike your OP to the extent I materially disagree (rationalists aren't bad because they're willing to do dangerous promethean moral acts, and malaria nets aren't very dangerous or promethean). (*except the part about cutting detail improving the WEF post; the detail was nice)

I would say about the same amount, but yeah. Both posts are pretty much just sneering at people.

deleted

Oh. Alright, my apologies.

On the one hand, I too dislike arguing with Mods. On the other hand, I object to people apologizing for greatness. It's a real conundrum.

I thought your post was funny and didn't seem mean-spirited. I could see how someone would take umbrage, of course. (+2 cents)

All of his criticisms are on point, though. Those are all bad habits, and they're all endemic, and by framing it as a personal statement, he leaves people free to apply the statements to themselves as they personally consider it appropriate. A lot of posts I've written are obviously guilty of the things he's pointing out, and are the worse for them.

"Poking fun" is always risky business under the rules anyway, but the criticism you've assembled here barely rises above the level of pure, vapid sneer.

I strongly disagree, as one of the people the post was most obviously aimed at. Cogent criticism is valuable, and this is, in fact, cogent criticism.

All of his criticisms are on point, though.

Surely you don't actually mean that?

I was a bit hesitant on the mod button, for all the reasons I already mentioned. I recognize that there is some hyperbole there, and some humor, and some self-deprecation, and I always feel a bit schoolmarmish wagging a finger at that sort of thing. But like--

Literature references. Point score is directly correlated with obscurity; actually having read the work in question is optional. Bonus points for linking SSC pieces, double bonus points if they're from 2016 or earlier.

What's the "on point" criticism, here--that we quote Scott too much? What's the "bad habit"--that we don't actually read the books we quote from and talk about? (This seems clearly false!)

The fact that we have our own status games is interesting, and worth talking about. And there are surely times and places to enjoy an amusing roast. But a lot of the stuff in this list is not actually bad, and most of the rest is unobjectionable if stripped of the pejoration and mockery. To treat e.g. complex vocabulary as a signal of low status is textbook anti-intellectualism. Yes, some people use big words strictly to appear smart, but treating people that way without further evidence requires an uncharitable take on their motives. Writing lengthy posts is frequently mocked in many places on the internet, but some problems are complex and demand extended reflection--assuming you want to do more than make a joke at someone else's expense. While many of the attitudes called out in this post are indeed counterproductive or otherwise objectionable, most of the behaviors are not in themselves problematic, particularly given a charitable interpretation of the writer's intent. If we're going to criticize such behaviors, we should do it in a thoughtful way--not by resorting to mockery that seems crafted to shame others away from effortful participation and thoughtful discussion.

It's not very fair if it's taken as a generalization about what we're doing here. It's absolutely on point as a description of how we do it wrong – not the only failure mode of this community, especially after the move, when we've gained some cocksure low-effort right-wingers, but the most prevalent one among the old guard. I agree 100% with @FCfromSSC that this list is a self-improvement opportunity for me.

Yes, some people use big words strictly to appear smart, but treating people that way without further evidence requires an uncharitable take on their motives.

Which is itself a problem with rationalists. In order to properly deal with people, you need to be able to conclude bad faith, and you need to be able to do this based on less than 100% clear evidence, because false negatives are as damaging as false positives. This is where quokkas come from--rationalists refusing to realistically consider the possibility that someone is acting in bad faith. We say "be charitable" because a lot of people aren't charitable enough, but there are also people who are too charitable and should ignore that advice (and it's hard to aim advice at only the people who need it).

And here, it's not even just about bad faith. When someone uses big words that aren't needed for his point, he may be acting in bad faith, or he may just be bad at communicating. But even if he's an honest person who's just bad at communicating, he's still bad at it; it's not behavior we want to emulate, and it still deserves criticism. If doing things poorly is low status, then yes, this is low status--communicating poorly is something we want to avoid.

Quokka is "rationalist who doesn't question progressive ideas like universal love and tolerance, gender and race equality, .....", not "rationalist who argues with trolls because they might be good faith". Having extended arguments with bad faith trolls doesn't really hurt you beyond wasting small amounts of time, whereas earnestly believing in universal love and sacrifice-for-all-humans-equally means your fortune or life is spent helping Open Philanthropy buy malaria nets instead of some other worthier cause.

edit: I might be wrong about the use of the term quokka, but still pretty sure 'arguing with bad faith trolls' isn't particularly bad.

A quokka is a creature that doesn't realize that people might want to hurt it. The metaphor from there is fairly direct.

You love this... hard-to-pin-down pattern of reasoning, and I don't love having to keep asking you not to do it. Nevertheless, here we go again.

Quokka is "rationalist who doesn't question progressive ideas like universal love and tolerance, gender and race equality, .....", not "rationalist who argues with trolls because they might be good faith".

The explicit definition of quokka as a mental archetype is the guy who does not account for bad faith of other parties. It's not about wasting time on trolls on anonymous forums, per se. But it absolutely is about a robust mode of engagement with bad actors.

Here's the original thread by 0x49fa98. Here are the most relevant parts:

The quokka, like the rationalist, is a creature marked by profound innocence. The quokka can't imagine you might eat it, and the rationalist can't imagine you might deceive him. As long as they stay on their islands, they survive, but both species have problems if a human shows up.

In theory, rationalists like game theory; in practice, they need to adjust their priors. Real-life exchanges can be modeled as a prisoner's dilemma. In the classic version, the prisoners can't communicate, so they have to guess whether the other player will defect or cooperate. ...

The problem is, this is where rationalists hit a mental stop sign. Because in the real world, there is one more strategy that the game doesn't model: lying. See, the real best strategy is "be good at lying so that you always convince your opponent to cooperate, then defect"

Rationalists = quokkas, this explains a lot about them. Their fear instincts have atrophied. When a quokka sees a predator, he walks right up; when a rationalist talks about human biodiversity on a blog under almost his real name, he doesn't flinch away ...

The main way that you stop being a quokka is that you realize there are people in the world who really want to hurt you. There are people who will always defect, people whose good will is fake, whose behavior will not change if they hear the good news of reciprocity.
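
To make the quoted game-theory point concrete, here's a minimal sketch (my own illustration, not from the thread) of a one-shot prisoner's dilemma with a cheap-talk phase before the move. The strategy names are hypothetical labels for the archetypes above: a "quokka" takes the opponent's stated intention at face value, while a "liar" claims cooperation and then defects anyway.

```python
# One-shot prisoner's dilemma with a pre-play "cheap talk" phase.
# Payoffs follow the standard ordering: temptation > reward > punishment > sucker.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # player 1 is the sucker
    ("D", "C"): (5, 0),  # player 1 pockets the temptation payoff
    ("D", "D"): (1, 1),  # mutual defection
}

def quokka(opponent_claim):
    """Trusts cheap talk: cooperates with anyone who claims they will cooperate."""
    return "C" if opponent_claim == "C" else "D"

def liar(opponent_claim):
    """Claims "C" before the game (see the calls below), but always defects."""
    return "D"

def play(strat1, claim1, strat2, claim2):
    """Each player hears the other's claim, then both move simultaneously."""
    return PAYOFF[(strat1(claim2), strat2(claim1))]

print(play(quokka, "C", quokka, "C"))  # (3, 3): honesty between quokkas works fine
print(play(quokka, "C", liar, "C"))    # (0, 5): the false claim extracts the sucker's payoff
```

The textbook game has no entry for "convince the other player you'll cooperate, then don't"; once that move exists, taking claims at face value is strictly exploitable, which is the thread's whole point.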

I think Bostromgate is a good illustration.

Quokka is "rationalist who doesn't question progressive ideas like universal love and tolerance", not "rationalist who argues with trolls because they might be good faith".

It's both, actually. I've seen the latter argued numerous times by right-wingers, specifically about right-wing trolls.

Right-wingers arguing that rationalists ... shouldn't listen to right-wing trolls?


Surely you don't actually mean that?

I can and do. I assure you that my next effort-post will be better if, before I post it, I compare it to that list and edit accordingly.

What's the "on point" criticism, here--that we quote Scott too much? What's the "bad habit"--that we don't actually read the books we quote from and talk about? (This seems clearly false!)

That here, too, one's reference game being on-point can cover for a startling lack of engagement with the concepts behind those references. Further, that style trumping substance is always a danger, and one way it happens is by cribbing from better authors to provide gravitas to an argument that it cannot generate under its own power. There are a number of writers here who possess above-average rhetorical style, but style is not truth, and forgetting that is a constant danger for all of us.

While many of the attitudes called out in this post are indeed counterproductive or otherwise objectionable, most of the behaviors are not in themselves problematic, particularly given a charitable interpretation of the writer's intent.

...I see it exactly flipped. The behaviors are not in and of themselves problematic, but when combined with a poor attitude or mindset, they're counterproductive and objectionable. And while it might be uncharitable to accuse individual posters, noting the problem in aggregate seems like a reasonable way to express what is, at the end of the day, a complaint about general atmosphere. General atmosphere matters here; our rules are drafted explicitly to protect it, and changes for the worse are worth noting and pointing out.

General atmosphere matters here; our rules are drafted explicitly to protect it, and changes for the worse are worth noting and pointing out.

Yes--but with kindness, and charity.

Insofar as general atmosphere matters here, "you should be ashamed of your vocabulary, verbosity, and valuing of intellect over emotion" is not a vibe that should be cultivated.

If your vocabulary is being used poorly--and excessive wordiness is using it poorly--you should be ashamed of it, at least to the extent that you should be ashamed of doing things badly at all.

Insofar as general atmosphere matters here, "you should be ashamed of your vocabulary, verbosity, and valuing of intellect over emotion" is not a vibe that should be cultivated.

All I can say is that I did not read it as a general condemnation of those traits, and still don't. It probably helps that I have been very clearly guilty of several of these, and agree that they are problems, so it strikes me as less an attack and more just necessary truth delivered with some humor.

Write like a high-schooler who just discovered the wonders of a thesaurus. IQ is life. Everyone knows that vocabulary size is correlated to IQ, which is correlated to g, which determines your worth as a human being and position in the hierarchy. What better way to give your stock a little bump than to sprinkle in a few five syllable words that fell out of common use somewhere in the 19th century?

Maybe this would have been true pre-2017 or so, but there is considerable disagreement about supposed intellectual supremacy in rationalist communities. I think what unites these communities is a general skepticism of mainstream narratives from the media, credentialed experts, or mainstream science, and also considerable self-critique and introspection. Posts which express skepticism of rationalist beliefs, like EA, are as upvoted as posts which endorse them, maybe even more so, which you typically don't see in other communities. This clearly goes against the stereotype of single-mindedness you describe.

Also, an affinity for making up new terms. Maybe we could call it neologophilia.

Yes, this is terrible, especially when they're puns.

Thanks man.

Oof, I feel this one.

I think you're missing

8: Go meta. If someone has an idea, you should try applying that idea to itself. It's fine if it's a stretch; the act of applying an idea to itself means that you probably read Gödel, Escher, Bach, which means that you are Smart and therefore Good. Or at least it proves that you read a summary of it. Or interacted a lot with people who did.

(I say this as someone who does (8) way too much, including arguably right now)

It's true! That also reminds me, I was expecting exponentially more meta threads with the move here. I've been sorely disappointed so far.

I...even though the quote gives an excuse for why you’ve posted this in response to me, I have a visceral feeling as if I were being called out in particular. Is this what the kids call feeling “seen”?

An amusing post, but it critiques itself, doesn't it? What are you doing, if not criticizing instead of defending a better thesis? Where are these better norms?

It's true, and it's also awash in other hypocrisies. I could use the Jon Stewart "I'm just a comedian, bro" defense, because I mostly was just trying to entertain, but if you want:

  1. An expectation of more citations and sources for claims being made, or if the data doesn't exist/can't be collected, acknowledgement of that fact.

  2. Embracing brevity, concision and clear communication as terminal values rather than long manifestoposts (obviously some leeway for people writing personal stories or stream of consciousness rants).

  3. Some self-awareness when mocking others for status-signaling.

  4. Embracing intellectual humility (something akin to the old 'epistemic status: xyz...')

To some extent, this is just me imposing my values on others which is why I tried not to be explicitly prescriptive. The community should be what the community wants to be. Hopefully someone out there laughed.

Blue Tribe is never going to forgive the Rats for shattering the illusion that you have to be a dumb redneck to disagree with them, are they?

I think it's more of the "I feel bad for you" / "I don't think about you at all" situation.

Who doesn’t think about the other one here?

In the context of the show, Don spent all day thinking about the guy he told he didn't think about at all. He even went out of his way to sabotage him, leaving Ginsberg's ad pitch behind so that only his own could be presented, because he knew his own was inferior.

Blues aren't even aware of us. To them, we are merely conservatives/fascists/whatever-boo-term with tech jobs.

Ehhh, given the number of NYT/New Yorker articles about "the dark secret of tech reactionaries: they could be hiding in your boardroom!", they clearly have some kind of hangup.

Like, ignoring tone entirely, we could compare the number of articles about furries to nrx/ssc/etc., and I think furries get less media attention despite being a significantly bigger thing in tech.

There literally is a whole community that is explicitly dedicated to sneering at us. They have their own forum, where they do nothing but repost things from here and other rat-adjacent spaces and mock them.

I meant almost all blue tribers, not that minuscule number of sneer clubbers.

(Most) blues don't think about sneerclub either.

I think blues think about fascists/conservatives quite a bit, as evidenced by Twitter.

But that could be thinking about just about anyone ;)

Are we thinking about specific progressives all that often?

That doesn't really answer the question. They get plenty angry about conservatives/fascists/whatever-boo-terms.

Blues don't think about the grey tribe, or even know it exists. "Grey tribe" is just conservatives with tech jobs from their point of view.

If the Blues freak out about conservatives, and to them we're just conservatives with tech jobs, are they not thinking about us at all, or are they freaking out? They even have a subreddit devoted to trolling us.

I dunno, they seem historically to have been aware that ‘climate change deniers’ weren’t personally stupid rednecks even if they tended to round the difference off to being evil.

"Shattering the illusion" is a bit of a strong phrase when I'd estimate >95% of the population has never heard of rationalists, and I don't think it's the source of my amusement at these habits, but if it makes you feel better: I, ChrisPrattAlphaRaptor, high pope of the Church of the Blue Tribe, absolve you of your sins. Go forth and live in virtue, my son.