
Culture War Roundup for the week of December 11, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Three months ago, LessWrong admin Ben Pace wrote a long thread on the EA forums: Sharing Info About Nonlinear, in which he shared the stories of two former employees of an EA startup who had bad experiences and left determined to warn others about the company. The startup is an "AI x-risk incubator," which in practice seems to look like a few people traveling around exotic locations, connecting with other effective altruists, and brainstorming new ways to save the world from AI. Very EA. The post contains wide-ranging allegations of misconduct, mostly centering on the treatment of the two employees who joined them and started traveling with them, ultimately concluding that "if Nonlinear does more hiring in the EA ecosystem it is more-likely-than-not to chew up and spit out other bright-eyed young EAs who want to do good in the world."

He, and it seems to some extent fellow admin Oliver Habryka, mentioned they spent hundreds of hours interviewing dozens of people over the course of six months to pull the article together, ultimately paying the two main sources $5000 each for their trouble. It made huge waves in the EA community, torching Nonlinear's reputation.

A few days ago, Nonlinear responded with a wide-ranging tome of a response: 15,000 words in the main post, plus a 134-page appendix. I had never heard of either Lightcone (the organization behind the callout post) or Nonlinear before a few days ago, since I don't pay incredibly close attention to the EA sphere, but the response bubbled up into my sphere of awareness.

The response provides concrete evidence in the form of contemporary screenshots against some of the most damning-sounding claims in the original article:

  • accusations that, when one employee, "Alice", was sick with COVID in a foreign country, nobody would get her vegan food and she barely ate for two days turned into "There was vegan food in the house and they picked food up for her, but on one of the days they wanted to go to a Mexican place instead of getting a vegan burger from Burger King."

  • accusations that they promised another, "Chloe", compensation around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1000/monthly with all expenses covered, which we estimated would add up to around $70,000."

  • accusations that they asked Alice to "bring a variety of illegal drugs across the border" turned into "They asked Alice, who regularly traveled with LSD and marijuana of her own accord, to pick up ADHD medicine and antibiotics at a pharmacy. When she told them the meds still required a prescription in Mexico, they said not to worry about it."

The narrative the Nonlinear team presents is that one employee, with mental health issues and a long history of making accusations against the people around her, came on board, lost trust in them due to a series of broadly imagined slights, and ultimately left and spread provable lies about them, while another, hired to be an assistant, was never quite satisfied with being an assistant and left frustrated as a result.

As amusing a collective picture as these exchanges paint of what daily life at the startup actually looked like, they also make it pretty clear that the original article contained multiple demonstrable falsehoods alongside its unrebutted claims. Moreover, they emphasized that they'd been given only a few days to respond to the claims before publication, and that when they asked for a week to compile hard evidence against the falsehoods, the writers told them it would come out on schedule no matter what. Spencer Greenberg warned the writers of a number of misrepresentations in the article the day before publication and sent them screenshots correcting the vegan portion; they corrected some of the misrepresentations, but by the time he sent the screenshots they said it was too late to change anything.

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

From a long conversation with Habryka, my impression is that a lot of EA community members were left scarred and paranoid after the FTX implosion, correcting towards "We must identify and share any early warning signs possible to prevent another FTX." More directly, he told me that he wasn't too concerned with whether they shared falsehoods originally so long as they were airing out the claims of their sources and making their level of epistemic confidence clear. In particular, the organization threatened a libel suit shortly before publication, which they took as a threat of retaliation that meant they should and must hold to their original release schedule.

My own impression is that this is a case of rationalist first-principles thinking gone awry and applied to a domain where it can do real damage. Journalism doesn't have the greatest reputation these days and for good reason, but his approach contrasts starkly with its aspiration to heavily prioritize accuracy and verify information before releasing it. I mention this not to claim that journalists do so successfully, but because his approach is a conscious deviation from that aspiration: an assertion that if something is important enough, it's worth airing allegations without closely examining the contrary information others are asking you to pause and examine.

I'd like to write more about the situation at some point, because I have a lot to say about it even beyond the flood of comments I left on the LessWrong and EA mirrors of the article, and because I think it presses at some important tension points. It's a bit discouraging to watch communities that try so hard to be good from first principles speedrun so many of the pitfalls broader society built guardrails around.

This was a weird one.

I remember reading the original callout post and thinking: "If this were coming from literally anyone else I'd call bullshit, but I trust the LessWrong guys to not massively screw up something like this." The fact that they did in fact massively screw up something like this is a big update.

I also think it was correct to make some huge updates on the FTX collapse. Ben and Habryka just updated too much on "calling people out is good," whereas most of my updating was on "Benthamite utilitarianism is bad".

The narrative the Nonlinear team presents is that one employee, with mental health issues and a long history of making accusations against the people around her, came on board

28 people!

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

The same way they got suckered into thinking AI x-risk is an "effective" altruist cause?

At least the callout post came with testimony from people who had actually worked at Nonlinear. It had quotes and screenshots and other forms of evidence of the kind that convince us of many things every day. It turns out these statements did not reflect reality and the screenshots were carefully curated to present a particular narrative. This is a risk we run any time we trust someone's testimony about a situation we don't have first hand experience with. This is an ordinary, and probably unavoidable, epistemic failure mode.

By contrast, what is the state of evidence for AI x-risk research being an effective cause area? If I'm making a $5k donation, should I make it to the Against Malaria Foundation (who I'm reasonably confident will save a life with that money) or to some AI x-risk charity? What's the number of lives, in expectation, that the donation to the AI x-risk charity will save? What was the methodology for determining that number? The error bars on it? As best as I can tell these numbers are sourced to the same place: their ass. If you think AI x-risk is an "effective" cause area, you have bad epistemic standards! Not good ones!

At least the callout post came with testimony from people who had actually worked at Nonlinear. It had quotes and screenshots and other forms of evidence of the kind that convince us of many things every day. It turns out these statements did not reflect reality and the screenshots were carefully curated to present a particular narrative. This is a risk we run any time we trust someone's testimony about a situation we don't have first hand experience with. This is an ordinary, and probably unavoidable, epistemic failure mode.

Not good enough.

Yes, the callout post came with all of those things. Here's what else it came with:

  • An emphatic warning from a trusted community member that he had reviewed the draft the day before publication and warned of major inaccuracies, only one of which got corrected.

  • The subjects of the post claiming hard evidence that many of the claims in the post were outright false and begging for a week to compile and send that evidence while emphasizing that they'd had only three hours to respond to claims that took hundreds of hours to compile.

  • A notice at the top, treated as exculpatory rather than damning, that it would be a one-sided post brought about by a search for negative information.

Any one of those things, by itself, was a glaring red flag. All three of them put together leave absolutely no excuse for the post to have been released in the state it was in, or for an entire community that prides itself on healthy epistemics to treat it as damning evidence of wrongdoing. If it had been published in the New York Times rather than the effective altruism community, every single rationalist would—rightly—be cursing the name of the news outlet that decided to post such a piece.

This is ordinary in Tumblr fandoms. It's ordinary in tabloids. It's jarring and inexcusable to see the same behavior dressed up in Reasonable, Rational, Sensible language and cheered by a community that prides itself on having better discourse and a more truth-seeking standard than others.

This is an ordinary, and probably unavoidable, epistemic failure mode.

I'm not sure what's so difficult about saying "It sounds bad if it's true, but I'll reserve judgement until I hear the other side's case." I say it all the time, to the endless frustration of friends and family, but still. I'd expect rationalists to get that one right.

Some people can make accusations, and people will say "That sounds really bad but we're probably not hearing the full story". They then will make no attempt to hear the full story, and just dismiss the accusations on those grounds. Other people can make accusations and they are gospel truth and questioning them simply compounds the offense. It is all about who and whom.

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

People occasionally ask whether the ratsphere is just reinventing the wheel of philosophy (my response then). I suspect that EA is similarly reinventing the wheel of non-profit profiteering.

This is something I've been thinking about a lot lately, but so far all I have to show for it is a scattered mess of loosely-connected (as though by yarn and pushpins) thoughts. Some of them are even a bit Marxist--we live in a material world, we all have to eat, and if you aren't already independently wealthy then your only options for going on living are to grind, or to grift (or some combination of the two). And the Internet has a way of dragging more and more of us into the same bucket of crabs. AI is interesting stuff, but 99% of the people writing and talking about it are just airing views. MIT's recent AI policy briefs do not contribute any technical work to the advancement of AI, and do not express any substantive philosophical insight; all I see there is moralizing buzzwords and wishful thinking. But it is moralizing buzzwords and wishful thinking from top researchers at a top institution discussing a hot issue, which is how time and money and attention are allocated these days.

So for every one person doing the hard work of advancing AI technology, there seem to be at least a hundred grasping hands reaching out in hopes of being the one who gets to actually call the shots, or barring that at least catches some windfall "crumbs" along the way. For every Scott Alexander donating a damn kidney to strangers in hopes of making the world an ever-so-slightly better place to live, there are a hundred "effective altruists" who see a chance to collect a salary by bouncing between expenses-paid feel-good conferences at fancy hotels instead of leveraging their liberal arts degree as a barista. And I say that as someone with several liberal arts degrees, who works in academia where we are constantly under pressure to grift for grants.

The cliche that always comes to my mind when I weigh these things is, "what would you do, if money were not an issue?" Not in the "what if you had unlimited resources" sense, but like--what would the modal EA-AI acolyte do, if they got their hands on $100 million free and clear? Because I think the true answer for the overwhelming majority of them is something like "buy real estate," not "do more good in the world." And I would not condemn that choice on the merits (I'd do the same!) but people notice that kind of apparent hypocrisy, even if, in the end, we as a society seem basically fine with non-profits like "Black Lives Matter" making some individual persons wealthy beyond their wildest dreams. Likewise, someone here did a now-deleted deep dive into the Sound of Freedom guy's nonprofit finances a while back (I can't find the link right now, but I thought it was an AAQC?), and he was making a lot of money:

So if you want to dig in, the 2020 return is here and the 2021 is here.

As far as the most concerning stuff goes, there is a pretty large amount of money flowing out to Ballard and his wife: $335,000 of salary to Ballard in 2021 and $113,858 of salary to his wife. These aren't super eye-popping numbers, but it's still a pretty high amount.

The second thing is that they seem to be hoarding a lot of cash. They have something like $80 million cash on hand, and are spending much less than they raise. This isn't inherently an issue if they're trying to build an organization that's self-sustaining, but it does mean that, as a donor, your money is not likely going to actual stuff in the short or medium term.

Speaking of that actual stuff, they don't seem to spend most of what goes out the door on their headline-generating programs. A pretty big chunk of their outflow is just grants to other 501(c)(3)s, which is not something you need to be spending millions in executive compensation for. As best I can figure, in 2021 they did just shy of $11 million of grants to other nonprofits. It's a little tricky to suss out their spending on program expenses versus admin, but they claim a total of just shy of $8 million in program expenses outside the US.

Legal expenses are also very high (at over $1.5 million). Not sure if they're involved in some expensive litigation or what is going on there. Travel is also really high at $1.9 million, but given the nature of their organization, a good chunk of that is likely programmatic.

Now it looks like, even if maybe he did (?) save some kid(s) from trafficking along the way, it was mostly a grift? Anyway, the point is, stories like this abound.

So it would be more surprising, in the end, if the rationalist community had actually transcended human nature in this case. And by "human nature" I don't even mean greedy and grubbing; I just mean that anyone who isn't already independently wealthy must, to continue existing, find a grind or a grift! As usual, I have no solutions. This particular case is arguably especially meta, given the influence AI seems likely to have on the grind-or-grift options available to future (maybe, near-future) humans. And maybe this particular case is especially demonstrative of hypocrisy, given the explicit opposition of both effective altruism and the ratsphere to precisely the kind of grind-or-grift mentality that dominates every other non-profit world. But playing the game one level higher apparently did not, at least in this case, translate into playing a different game. Perhaps, so long as we are baseline homo sapiens, there is no other game available to us.

there are a hundred "effective altruists" who see a chance to collect a salary by bouncing between expenses-paid feel-good conferences at fancy hotels instead of leveraging their liberal arts degree as a barista

Yeah, I think that's so. If you're in the geographical bubble, there's a good chance that if you can't parlay your way into some kind of Silicon Valley start-up with free money from venture capitalists, the next best thing is to hop aboard the EA train. Especially if you knock together something about AI-risk. There's money in that, now (see Microsoft, Altman, and OpenAI). Put together a convincing pitch that you're working on existential risk, and there are donors, grant-makers, and a lot of deep pockets who will listen attentively.

Right now this makes it fertile ground for hucksters, scammers, and the like.

Right now this makes it fertile ground for hucksters, scammers, and the like.

Or also (I imagine, I'm not actually familiar) relatively sincere people, who do care about the goals in question, but also care about living well, or social status, or whatever else.

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

Very simple. The "rationalist community" is embedded in the SF zeitgeist and questioning callouts from women is anathema. This has happened before (e.g. Kathy F) and will happen again.

I can't really see how the "rationalist community" can be such a thing when it's utterly compromised. Progressive wokism, or whatever you want to call it, is the preeminent irrationalist philosophy of modernity, and the "rationalist community" is one of its vassals--all too often a willing one at that. From the outside, they're so absurd. Maybe they'll evolve into something worthy of their name eventually, but I see little sign of that. They would have been eugenicists 100 years ago.

Aren't a non-negligible number eugenicists now?

What do you mean by eugenics and their support for it? I’m not familiar with Bay Area progressives.

Yeah, until you fundamentally change the way non-profits are organized, the supposed goals of the organization will always come second to the grift.

One problem is that the world belongs to those who show up. Tolerance for boring, six-hour-long meetings determines who gets leadership positions.

And tolerance for those meetings goes up a lot when there's money involved. Few people are willing to do it out of the kindness of their hearts. But if they get a $300,000/year paycheck they will do it. So the grifter will naturally rise.

which in practice seems to look like a few people traveling around exotic locations, connecting with other effective altruists, and brainstorming new ways to save the world from AI

While snarky, this is indeed my impression of the (current) EA movement. At the start, with the mosquito nets, this at least was practical, boots-on-the-ground charity, and they could be forgiven for their slightly smug 'we're doing charidee right (unlike the mugs who went before we appeared fully-formed from the head of Zeus)' attitude, because they were indeed helping the poor and deprived.

But helping the poor and deprived wasn't the full gamut of EA activity and philosophy, and the crank stuff (sorry, people, I do not care if insects suffer) was there from the start. However, it was a minor part. But AI risk was one of the bugbears of LessWrong and the other rationalist/rationalist-adjacent circles, and because of the cross-over between EA and the rationalist bubble, that was there too.

And it was sexy! and modern! and interesting! in a way that plain, bread-and-butter, 'help the poor with an ongoing problem that, despite all the fancy technical attempts to solve it, looks to remain intractable: malaria by mosquito-borne transmission' wasn't, because all the former mugs had been doing 'missions to Africa' and the likes for decades, so what makes you so special?

And it involved flying around and going to conferences and hob-nobbing with Big Names and getting yourself known in those circles, and was way more appealing to the SF nerd in us all (c'mon, if we're hanging round these parts, even if we're not rationalists or EA, we're SF nerds).

So EA the movement seems, to me at least, looking in from the outside, to have subtly but definitely transformed into 'making a living by taking in each other's washing' - going to conferences to network about getting an internship to get into a programme about signing people up to attend EA conferences.

(Here's where I mention the manor house in Oxford).

That's why, while I understand Scott doing an apologia for EA and appealing to all the lives it (presumably/allegedly) has saved, I don't think he has yet entirely grappled with the criticisms from the outside about 'travelling around exotic locations and brainstorming for projects which are not practical, boots-on-the-ground, charity'. If (and it's one hell of a big if) AI is going to Doom Us All unless it's perfectly aligned with nice, liberal, 21st century middle-to-upper-middle-class San Franciscan values, then their work is important.

If AI screws us over because (deep breath) the free market capitalist system incentivises greed and the gold rush is on to get to market first and grab the majority share with your product, and just ignore that right now the product you're peddling makes shit up and is totally unreliable but people are being sold on the notion that it's super-ultra-mega-accurate, just believe all it says but the thing is never going to become self-aware and have its own goals and I highly doubt even smarter than human intelligence (exhale) - then all the fancy conferences mean nothing. Except pleasant trips to Oxfordshire manor houses for EA talking sessions where you pretend to be doing something meaningful - junkets, in other words.

And if you've reached the point of junkets, you are not "doing charidee right unlike those other mugs".

As for the rest of it? Sounds like the typical EA over-sensitivity/scrupulousness where small things get blown up into microaggressions, unfulfilled promises, and 'you said I'd get X and then I never got X' pouting where all kinds of accommodations for neurodiversity, gender diversity, I don't know what diversity, are expected implicitly.

EDIT:

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

I think, and this is only a vague impression so don't take it as Gospel, that it's a case of the pendulum over-correcting and swinging too far to the other side. There have been previous internal scandals among rationalist groups, and subsequent accusations of cover-ups and people in charge not taking the complaints seriously/not acting quickly enough/doing their best to hush it up.

So I think there's a sensitivity around being seen to 'victim blame' and not immediately strike while the iron is hot when you hear people accusing EA/EA-aligned groups of wrongdoing, and this perhaps led in this instance into jumping the gun. Fact-checking could be seen as denying the truth, trying to delay embarrassing revelations, and even a form of harassing the victims by making them respond to little, nit-picky details.

The whole "my vegan diet/my money that I was promised" and so on sounds exactly like what I've come to expect from these types, to be frank (and a little mean) about it. Wanting a whole specific vegan product from one place and kicking up about not getting it. If you're sick with Covid, you're likely not to be eating much anyway, and if you can eat to the point that you're fussy about "I only eat this not that", then you're not that sick. One of my siblings got Covid and couldn't even keep down water because she vomited everything she consumed straight back up, so I was genuinely worried about her getting dangerously dehydrated; that's not at all the same as "I didn't get my vegan din-dins".

EDIT EDIT: To be fair, if she was that sick, and her stomach was sensitive, it may well have been that she could only eat that one particular Burger King vegan burger; Mexican food does sound like it would be too much. But building it up into The Persecution of the Vegan Joan of Arc is the kind of overly dramatic, self-regarding, navel-gazing that a lot of the writing by EA and LessWrongers and lesser lights exhibits. That's one of the attractions of Scott's writing for me - he's never (or barely ever) indulged in that slightly whiny "I have this entire laundry list of Things wrong with me and I need and demand these special accommodations and I continuously gaze into the mirror of my soul and you lot get the reports from the frontier on that every five minutes and any criticism no matter how mild is hate speech".

My own impression is that this is a case of rationalist first-principles thinking gone awry and applied to a domain where it can do real damage. Journalism doesn't have the greatest reputation these days and for good reason, but his approach contrasts starkly with its aspiration to heavily prioritize accuracy and verify information before releasing it.

On the other hand, I don't remember any journalists giving Nick Sandmann or Brett Kavanaugh a chance to respond. The Official Book Of Rules Of Journalism might have some good ideas in it, but it seems to have been gathering dust to the point where deliberately breaking some of the rules yields a better result.

Concrete note on this:

accusations that they promised another, "Chloe", compensation around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1000/monthly with all expenses covered, which we estimated would add up to around $70,000."

The "all expenses" they're talking about are work-related travel expenses. I, too, would be extremely mad if an employer promised me $75k / year in compensation, $10k of which would be cash-based, and then tried to say that costs incurred by me doing my job were considered to be my "compensation".

Honestly, most of what I take away from this is that nobody involved seems to have much of an idea of how things are done in professional settings, and also there seems to be an attitude of "the precautions that normal businesses take are costly and unnecessary since we are all smart people who want to help the world". Which, if that's the way they want to swing, then fine, but I think it is worth setting those expectations upfront. And also I'd strongly recommend that anyone fresh out of college who has never had a normal job should avoid working for an EA organization like Nonlinear until they've seen how things work in purely transactional jobs.

Also it seems to me based on how much interest there was in that infighting that effective altruists are starved for drama.

It all seems very dodgy, and I think that the company was one of those set-ups that many people encounter at least once in their working lives: a very extrovert/charismatic boss who spouts a lot of the right stuff about ideals and appeals to your better nature, and convinces you that working with them is going to change the world/improve the lives of many.

When you're young and inexperienced, you're vulnerable to all this because you don't have enough time put in to know what work is like. And the use of unpaid interns and so forth is common in all kinds of businesses.

So in this case - were the people volunteers, travelling on their own dime along with the Nonlinear people, and getting room and board and expenses with an allowance on top, or were they employees? Since they don't seem ever to have been formally employed or given contracts, it sounds like the 'volunteer/unpaid intern' type of taking advantage.

If I take the initial story at face value, the Nonlinear lot are EA-adjacent, swimming in the same waters, but sharks not dolphins (or behaving like sharks, at least). The kind of exploitative set-up that, as I said, most of us hit up against at least once when we're out there working for a living, but covering it all over with the language of volunteering and idealism and changing the world, etc. It's possible that Nonlinear aren't that bad, but it's also possible that this Emerson guy is that sort of charming psychopath that top management roles attract. And the mess of overlapping romantic/family/employee (or pseudo-family? who knows?) roles didn't help.

Then the sort of people who are in the EA bubble are exactly the over-sensitive, rather credulous, inexperienced sorts who believe all the clap-trap about idealism and also expect a ton of accommodations for their lifestyle choices (e.g. veganism).

Put the two together, and that's putting fire and tinder together.

They never promised $75k/year in compensation, $10k of which would be cash-based. This was the compensation package listed in their written, mutually agreed upon employment contract:

As compensation for the services provided, the Employee shall be paid $1,000 per month as well as accommodation, food, and travel expenses, subject to Employer's discretion.

They included a text message in evidence where they restated part of it:

stipend and salary mean the same thing. in this instance, it's just $1000 a month in addition to covering travel, food, housing etc

The only apparent mention of $70,000 as a number happened during a recorded interview (edited for clarity, meaning retained):

We're trying to think about what makes sense for compensation, because you're gonna be living with us, you're gonna be eating with us. How do you take into account the room and the board and stuff and the travel that's already covered? What we're thinking is a package where it's about the equivalent of being paid $70k a year in terms of the housing and the food, and you'll eat out every day and travel and do random fun stuff. And then on top of that, for the stuff that's not covered by room and board and travel is $1000 a month for basically anything else.

I would not personally take a job offering this compensation structure, but they were fully upfront about what the comp package was and it came pre-agreed as part of the deal. I see no grounds for complaints about dishonesty around it.

It's not a job, it's more like being an au pair: the 'employer' or 'host family' provides room and board and an allowance in return for light domestic/child-minding work.

The selling point here seems to be "wouldn't you like to travel and have fun overseas on our dime? we'll pay for travel, and room and board, and even give you a stipend on top! and all you have to do is help us with our fun, impactful, altruistic projects!"

As a gap year thing, sure. Maybe. Were I a parent, I'd still be "but who are these people and what happens if you're overseas and get sick or something?" But it's not a job job, it's volunteer work or voluntourism or the likes, and that may be what Nonlinear are relying on.

Yeah, that’s the same as things like teach abroad programs and Peace Corps. It’s nice when you’re young and single.

Sounds like Nonlinear are relying on those blurry boundaries; one person says you're an employee, here's your contract, you'll be working for me but the head boss says oh no all these are volunteers and we pay for travel, room and board plus we throw in a stipend, but they're independent contractors (as it were). So when they have you there, you think you're working in a job, but any trouble and you find out nope no that's not what we said and that's not what is written down, not our problem.

Speaking of blurry boundaries, Nonlinear almost certainly violated either federal tax law or minimum wage laws, and potentially immigration law too.

I read the same doc you did, and like. I get that "Chloe" did in fact sign that contract, and that the written contract is what matters in the end. My point is not that Nonlinear did something illegal, but... did we both read the same transcript? Because that transcript reads to me like "come on, you should totally draw art for my product, I can only pay 20% of market rates but I can get you lots of exposure, and you can come to my house parties and meet all the cool people, this will be great for your career".

I don't know how much of it is that Kat's writing style pattern matches really strongly to a particular shitty and manipulative boss I very briefly worked for right after college. E.g. stuff like

As best as I can tell, she got into this cognitive loop of thinking we didn’t value her. Her mind kept looking for evidence that we thought she was “low value”, which you can always find if you’re looking for it. Her depressed mind did classic filtering of all positive information and focused on all of the negative things. She ignored all of my gratitude for her work. In fact, she interpreted it as me only appreciating her for her assistant work, confirming that I thought she was a “low value assistant”. (I did also thank her all the time for her ops work too, by the way. I’m just an extremely appreciative boss/person.)

just does not fill me with warm fuzzy feelings about someone's ability to entertain the hypothesis that their own behavior could possibly be a problem. Again, I am probably not terribly impartial here - I have no horse in this particular race, but I once had one in a similar race.

While I don't endorse "come on, you should totally draw art for my product"–type behavior, I do think the position would have been appealing and appropriate for a certain type of person I am not far from. My monthly salary on top of room and board was significantly larger as a military enlistee, but I also wasn't traveling the world. I think they were realistically underpaying for what they wanted but also think "don't take the job" is an adequate remedy to that.

I take your point about the writing style, but for me it's secondary to the core impression that the investigation was very badly mishandled in a way that makes examining things now feel unfair. The initial report should not have been released as-is and it reflects poorly on the whole EA/LW-rationalist community that it was. Given the poor choices around its release, I don't feel inclined to focus too much on what really looks like mundane and predictable workplace/roommate drama.

I agree that it was badly mishandled. I think it's valuable to tell EAs "people will try to get you to take a job where they say you'll be paid in experience/exposure, so be mindful of that dynamic," but singling out a single organization to that degree makes it sound like it's a problem specific to that organization (which it is not; even within the EA space I personally know of another org with similar dynamics, and I'm not even very involved with the space).

I personally still wouldn't work for Nonlinear, but then I also would have noped out in the initial hash-out-the-contract phase.

The problem is that even if Nonlinear is pure as the driven snow (and there seems to be some grounds to doubt that), it's operating in the EA sphere where 'put the majority of the money you earn to good causes, live sparely so you can give even more' is an acceptable community value, and where there are a lot of idealists willing to save the world if they can, and willing to be emotionally guilt-tripped into volunteering, doing way more work than they should be doing, and living on fresh air while doing that. Where scrupulosity is a known problem, and people do tie themselves into knots over paperclip maximisers.

It's not sustainable for anybody and it's very open to abuse.

Yeah. And honestly, there are worse things than being paid in exposure. I'd describe that as the primary compensation for my podcast job (my bosses pay me a perfectly fair hourly wage, but I'm certainly not doing it for the money). It's just worth being clear-eyed about precisely what that entails and when it's appropriate.

Because that transcript reads to me like "come on, you should totally draw art for my product, I can only pay 20% of market rates but I can get you lots of exposure, and you can come to my house parties and meet all the cool people, this will be great for your career".

See all the reputable media companies, including the New York Times at one time, that use(d) unpaid interns for the same thing - this is helping you get your foot in the door, it pays in exposure. Lots of places rely on the unpaid/voluntary labour of hopefuls to carry them through backlogs, or the busy period, or rush orders. The wonders of the gig economy, where there will be no such thing as guaranteed employment but it's your responsibility to be flexible, available, and constantly re-skilling/upskilling to meet demand.

Sounds like a good learning experience about the world of work, but I imagine since this is all within the EA bubble, the expectations about being treated super-specially and not being taken advantage of and getting all sorts of loving, caring, treatment were sky-high.

Everyone involved sounds narcissistic at best and absolute pricks at worst, and I'm not going to single out one person from the lot.

Lots of places rely on the unpaid/voluntary labour of hopefuls to carry them through backlogs, or the busy period, or rush orders.

There is a certain narrative that this is common, but I'm not sure I buy it. Maybe it's just software engineering, but interns have never made sense as a free labor prospect to me; they cost more in senior dev time spent training than they could possibly be alleviating. It only makes sense as a junior talent pipeline tool.

Because that transcript reads to me like "come on, you should totally draw art for my product, I can only pay 20% of market rates but I can get you lots of exposure, and you can come to my house parties and meet all the cool people, this will be great for your career".

Sounds like a mercifully inexpensive lesson about the nonexistence of free lunches. What an offer like that translates to is

You could work for $4k/month cash, or, OR! You could work for $1k/month, and every month you get to pull a prize out of the Mystery Box! Wooo! The Mystery Box! Who knows what's in there? There might be all kinds of cool stuff!

If someone willingly agrees to work for a pathetic salary with "all expenses paid," or draw art for "exposure," and they get what they signed up for, it's really not shitty or manipulative, it's just an unremarkable business agreement, regardless of what unrealistic hopes one party may have had.

Ask questions, get everything in writing, in a contract, and if it sounds too good to be true, walk away. I'm continually amazed at how some of my colleagues and acquaintances just take others' word and then get disappointed when their own expectations let them down. Classic example: "We can't give you a raise this year, but I'm sure we'll be able to do something for you when the next performance cycle rolls around." Okay cool, write me a bonus offer right now for next year and sign it, otherwise I'm hopping on LinkedIn tonight.

If someone willingly agrees to work for a pathetic salary with "all expenses paid," or draw art for "exposure," and they get what they signed up for,

I worked a few seasonal jobs in my youth that included room and board, which was taken out of the already-modest paychecks. You really could get away with not spending anything for months if you wanted, although many of my coworkers would occasionally find a restaurant or bar at which to spend money.

I don't regret doing these because the job itself was pretty enjoyable. I make better money now, but I didn't feel exploited by the arrangement: as long as you're saving enough for retirement and such, gigs that cover "expenses" in-kind can be an option, although probably not the most interesting one.

I have no idea how so many of these so-called "well adjusted" human beings fall for things as simple as the Mystery Box. Like, people used to call me borderline autistic, and even in my worst moments I would never ever have fallen for "I'm sure we'll be able to do something for you when the next performance cycle rolls around".

OTOH, and maybe it's my backwater upbringing talking, $1K cash on top of room, board, and other expenses doesn't sound like a bad deal if you're still young.

Combine that with world travel within an existing social circle and without grinding repetitive labor, and it sounds good to me right now, at over 30. Maybe negotiate some sort of bonus if my living/travel costs come in way under the estimate.

My own impression is that this is a case of rationalist first-principles thinking gone awry and applied to a domain where it can do real damage. Journalism doesn't have the greatest reputation these days and for good reason, but his approach contrasts starkly with its aspiration to heavily prioritize accuracy and verify information before releasing it.

It seems like the opposite to me. Running with the baseless callout post to show how seriously you take wrongdoing in your community is extremely normal behavior. Normal people tend to assume accusations are true, without appreciating how easily they can be dominated by a small percentage of delusional or malicious people. Normal people tend to take a "if there's smoke there's fire" attitude rather than nitpicking individual claims to see if the accuser is credible. Normal people are more interested in punishing or warning about wrongdoers than the impact of false accusations, and don't think about the second-order consequences of incentivizing false accusations by taking even weak accusations seriously. Indeed, I wonder if one reason the claims weren't questioned enough is because those doing so wanted to act normally and being skeptical would have pattern-matched onto negative stereotypes about EAs: defending an EA organization accused of abusive behavior would be cult-like, while nitpicking the truth of individual claims by an alleged victim would be cold and emotionless. Now, normal people can be skeptical, especially after a response like the one Nonlinear has now posted, and obviously they aren't as bad as SJW-inclined communities with ideological antibodies against failing to "Believe Victims". But the behavior you're attributing to rationalism seems very typical. Sadly this includes large sections of mainstream journalism, regardless of what the SPJ ethical guidelines say they should be doing.

I'm not sure how much of this is specific to post-FTX behavior -- there were a good few explosions in the rat-tumblr sphere during its height, and even when (probably) correct, there was very little interest in or ability to consider how one knows a thing as distinct from knowing the thing, or to separate the halo-and-horns effect of one demonstrated bad act from every alleged one. The community often doesn't have the tools to do serious first-party investigation or to evaluate conflicting claims, nor the dedication to track them down. The final collapse of su3su2u1 sticks with me for the extent to which people turned SHLevy's (and Scott's) reveal of the sock-puppeting into 'doxxing', but it's relatively light-weight in terms of spreading around the shit.

For a more serious one, I'm still not sure where to place the numbers on any one specific allegation about Vassar, and that's as someone who considered the conversation nonproductive back in the Golden Age of LessWrong.

((On the flip side, I've not been impressed by the guardrails in general society, not least of all because many of the people and methods were the same.))

I won't claim it's entirely discontinuous from the past, but I think it's notable that, e.g., Ben expressed fury at the lack of changes since FTX, and that the EA community as a whole has recent memories of being dragged through scandal after not being suspicious enough.

EDIT: Oliver, too, mentions being intimidated by FTX and not sharing his concerns as one of the worst mistakes of his career.

Rationalists have the flaw that they assume anyone interested in rationalism as a movement will be well-meaning. Which is probably true when rationalism is a couple of nerds debating whether eating oysters is moral, but in a world where "rationalism" is worth money and social cachet, it attracts hangers-on who are motivated by other considerations.

Any movement seeking power has to consider what will happen when someone who is less interested in their principles than attaining power tries to join it.

Rationalists have the flaw that they assume anyone will be well-meaning.

Fixed.

There are no well-meaning people, only cynics, liars and the self-deluded. Altruism is the first lie.

Any movement seeking power has to consider what will happen when someone who is less interested in their principles than attaining power tries to join it.

Almost like it's some sort of iron law. Combining this with the sibling comment should enable any movement to derive the conclusions of the "neutral vs. conservative" thing, since neutral can't resist entryism by the evil but conservative can.

Geeks, MOPs, and Sociopaths remains the classic diagnosis of this.