Culture War Roundup for the week of December 11, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Three months ago, LessWrong admin Ben Pace wrote a long thread on the EA forums: Sharing Info About Nonlinear, in which he shared the stories of two former employees of an EA startup who had bad experiences and left determined to warn others about the company. The startup is an "AI x-risk incubator," which in practice seems to look like a few people traveling around exotic locations, connecting with other effective altruists, and brainstorming new ways to save the world from AI. Very EA. The post contains wide-ranging allegations of misconduct, mostly centering on the treatment of the two employees, who were hired and began traveling with the founders, and it ultimately concludes that "if Nonlinear does more hiring in the EA ecosystem it is more-likely-than-not to chew up and spit out other bright-eyed young EAs who want to do good in the world."

He, and it seems to some extent fellow admin Oliver Habryka, mentioned spending hundreds of hours interviewing dozens of people over the course of six months to pull the article together, ultimately paying the two main sources $5,000 each for their trouble. It made huge waves in the EA community, torching Nonlinear's reputation.

A few days ago, Nonlinear responded with a wide-ranging tome of a rebuttal: 15,000 words in the main post, plus a 134-page appendix. I had never heard of either Lightcone (the organization behind the callout post) or Nonlinear before a few days ago, since I don't pay incredibly close attention to the EA sphere, but the response bubbled up into my sphere of awareness.

The response provides concrete evidence in the form of contemporary screenshots against some of the most damning-sounding claims in the original article:

  • accusations that when one employee, "Alice", was sick with COVID in a foreign country, nobody would get her vegan food, so she barely ate for two days, turned into "There was vegan food in the house and they picked food up for her, but on one of the days they wanted to go to a Mexican place instead of getting a vegan burger from Burger King."

  • accusations that they promised another employee, "Chloe", compensation of around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1,000 monthly with all expenses covered, which we estimated would add up to around $70,000."

  • accusations that they asked Alice to "bring a variety of illegal drugs across the border" turned into "They asked Alice, who regularly traveled with LSD and marijuana of her own accord, to pick up ADHD medicine and antibiotics at a pharmacy. When she told them the meds still required a prescription in Mexico, they said not to worry about it."

The narrative the Nonlinear team presents is that one employee, with mental health issues and a long history of making accusations against the people around her, came on board, lost trust in them over a series of largely imagined slights, and ultimately left and spread provable lies about them, while another employee, hired to be an assistant, was never quite satisfied with being an assistant and left frustrated as a result.

As amusing a collective picture as these events paint of what daily life at the startup actually looked like, they also make it pretty clear that the original article contained multiple demonstrable falsehoods, mixed in among claims that went unrebutted. Moreover, Nonlinear emphasized that they had been given only a few days to respond to the claims before publication, and that when they asked for a week to compile hard evidence against the falsehoods, the writers told them the article would come out on schedule no matter what. Spencer Greenberg, the day before publication, warned the authors of a number of misrepresentations in the article and sent them screenshots correcting the vegan-food claim; they corrected some of the misrepresentations but, by the time he sent the screenshots, said it was too late to change anything.

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

From a long conversation with Habryka, my impression is that a lot of EA community members were left scarred and paranoid after the FTX implosion, correcting towards "We must identify and share any early warning signs possible to prevent another FTX." More directly, he told me that he wasn't too concerned with whether they shared falsehoods originally, so long as they were airing out the claims of their sources and making their level of epistemic confidence clear. In particular, Nonlinear threatened a libel suit shortly before publication, which the authors took as a threat of retaliation that meant they should, and must, hold to their original release schedule.

My own impression is that this is a case of rationalist first-principles thinking gone awry, applied to a domain where it can do real damage. Journalism doesn't have the greatest reputation these days, and for good reason, but his approach contrasts starkly with its aspiration to prioritize accuracy and verify information before releasing it. I mention this not to claim that journalists consistently live up to that standard, but because his approach is a conscious deviation from it: an assertion that if something is important enough, it's worth airing allegations without closely examining the contrary information other sources are asking you to pause and consider.

I'd like to write more about the situation at some point, because I have a lot to say about it even beyond the flood of comments I left on the LessWrong and EA mirrors of the article and think it presses at some important tension points. It's a bit discouraging to watch communities who try so hard to be good from first principles speedrun so many of the pitfalls broader society built guardrails around.

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

People occasionally ask whether the ratsphere is just reinventing the wheel of philosophy (my response then). I suspect that EA is similarly reinventing the wheel of non-profit profiteering.

This is something I've been thinking about a lot lately, but so far all I have to show for it is a scattered mess of loosely-connected (as though by yarn and pushpins) thoughts. Some of them are even a bit Marxist--we live in a material world, we all have to eat, and if you aren't already independently wealthy then your only options for going on living are to grind, or to grift (or some combination of the two). And the Internet has a way of dragging more and more of us into the same bucket of crabs. AI is interesting stuff, but 99% of the people writing and talking about it are just airing views. MIT's recent AI policy briefs do not contribute any technical work to the advancement of AI, and do not express any substantive philosophical insight; all I see there is moralizing buzzwords and wishful thinking. But it is moralizing buzzwords and wishful thinking from top researchers at a top institution discussing a hot issue, which is how time and money and attention are allocated these days.

So for every one person doing the hard work of advancing AI technology, there seem to be at least a hundred grasping hands reaching out in hopes of being the one who gets to actually call the shots, or barring that at least catches some windfall "crumbs" along the way. For every Scott Alexander donating a damn kidney to strangers in hopes of making the world an ever-so-slightly better place to live, there are a hundred "effective altruists" who see a chance to collect a salary by bouncing between expenses-paid feel-good conferences at fancy hotels instead of leveraging their liberal arts degree as a barista. And I say that as someone with several liberal arts degrees, who works in academia where we are constantly under pressure to grift for grants.

The cliche that always comes to my mind when I weigh these things is, "what would you do, if money were not an issue?" Not in the "what if you had unlimited resources" sense, but like--what would the modal EA-AI acolyte do, if they got their hands on $100 million free and clear? Because I think the true answer for the overwhelming majority of them is something like "buy real estate," not "do more good in the world." And I would not condemn that choice on the merits (I'd do the same!) but people notice that kind of apparent hypocrisy, even if, in the end, we as a society seem basically fine with non-profits like "Black Lives Matter" making some individual persons wealthy beyond their wildest dreams. Likewise, someone here did a deep dive into the Sound of Freedom guy's nonprofit finances a while back (the post has since been deleted, and I can't find the link now, though I thought it was an AAQC?), and he was making a lot of money:

So if you want to dig in, the 2020 return is here and the 2021 is here.

As far as the most concerning stuff goes, there is a pretty large amount of money flowing out to Ballard and his wife: $335,000 in salary to Ballard in 2021 and $113,858 in salary to his wife. These aren't super eye-popping numbers, but it's still a pretty high amount.

The second thing is that they seem to be hoarding a lot of cash. They have something like $80 million cash on hand and are spending much less than they raise. This isn't inherently an issue if they're trying to build an organization that's self-sustaining, but it does mean that, as a donor, your money is not likely going to actual stuff in the short or medium term.

Speaking of that actual stuff, they don't seem to spend most of what goes out the door on their headline-generating programs. A pretty big chunk of their outflow is just grants to other 501(c)(3)s, which is not something you need to be spending millions in executive compensation to do. As best I can figure, in 2021 they made just shy of $11 million in grants to other nonprofits. It's a little tricky to suss out their spending on program expenses versus admin, but they claim a total of just shy of $8 million in program expenses outside the US.

Legal expenses are also very high, at over $1.5 million. Not sure if they're involved in some expensive litigation or what's going on there. Travel is also really high at $1.9 million, but given the nature of their organization, a good chunk of that is likely programmatic.

Now it looks like, even if maybe he did (?) save some kid(s) from trafficking along the way, it was mostly a grift? Anyway, the point is, stories like this abound.

So it would be more surprising, in the end, if the rationalist community had actually transcended human nature in this case. And by "human nature" I don't even mean greedy and money-grubbing; I just mean that anyone who isn't already independently wealthy must, to continue existing, find a grind or a grift! As usual, I have no solutions. This particular case is arguably especially meta, given the influence AI seems likely to have on the grind-or-grift options available to future (maybe near-future) humans. And maybe this particular case is especially demonstrative of hypocrisy, given the explicit opposition of both effective altruism and the ratsphere to precisely the kind of grind-or-grift mentality that dominates every other non-profit world. But playing the game one level higher apparently did not, at least in this case, translate into playing a different game. Perhaps, so long as we are baseline homo sapiens, there is no other game available to us.

there are a hundred "effective altruists" who see a chance to collect a salary by bouncing between expenses-paid feel-good conferences at fancy hotels instead of leveraging their liberal arts degree as a barista

Yeah, I think that's so. If you're in the geographical bubble, there's a good chance that if you can't parlay your way into some kind of Silicon Valley start-up with free money from venture capitalists, the next best thing is to hop aboard the EA train. Especially if you knock together something about AI-risk. There's money in that, now (see Microsoft, Altman, and OpenAI). Put together a convincing pitch that you're working on existential risk, and there are donors, grant-makers, and a lot of deep pockets who will listen attentively.

Right now this makes it fertile ground for hucksters, scammers, and the like.

Right now this makes it fertile ground for hucksters, scammers, and the like.

Or also (I imagine, I'm not actually familiar) relatively sincere people, who do care about the goals in question, but also care about living well, or social status, or whatever else.