
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Defamation Bear Trap

The legal field is filled with ad-hoc quirky legal doctrines. These are often spawned from a vexed judge somewhere thinking "that ain't right" and just making up a rule to avoid an outcome they find distasteful. This is how an exploding bottle of Coca-Cola transformed the field of product liability, or how courts made cops read from a cue card after they got tired of determining whether a confession was coerced, or even how an astronomy metaphor established a constitutional right to condoms. None of these doctrines are necessarily mandated by any black letter law; they're hand-wavy ideas that exist because they sort of made sense to someone in power.

I've dabbled in my fair share of hand-wavy ideas, for example when I argued that defendants have (or should have) a constitutional right to lie (if you squint and read between the lines enough). Defamation law is not my legal wheelhouse, but when I first heard about Bill Cosby being sued by his accusers solely for denying the rape allegations against him, I definitely had one of those "that ain't right" moments. My naive assumption was that a quirky legal doctrine already existed (woven from stray fibers of the 5th and...whatever other amendment you have lying around) which allowed people to deny heinous accusations.

I was wrong and slightly right. Given how contentious the adversarial legal system can get, there is indeed the medieval-era legal doctrine of "Litigation Privilege" which creates a safe space bubble where lawyers and parties can talk shit about each other without worrying about a defamation lawsuit. The justification here is that while defamation is bad, discouraging a litigant's zeal in fighting their case is even worse. Like any other cool doctrine that grants common people absolute immunity from something, this one has limits requiring any potentially defamatory remarks to have an intimate nexus with imminent or ongoing litigation.

It was an obvious argument for Trump to make when Jean Carroll sued him for defamation for calling her a liar after she called him a rapist (following?). A federal judge rejected Trump's arguments on the grounds that his statements were too far removed from the hallowed marble halls of a courthouse. Generally, if you want this doctrine's protection, your safest bet is to keep your shit-talking in open court, or at least on papers you file in court. While the ruling against Trump is legally sound according to precedent, this is another instance where I disagree on policy grounds.

Though I'm a free speech maximalist, I nevertheless support the overall concept of defamation law. Avoiding legal liability in this realm is generally not that hard; just don't make shit up about someone or (even safer) don't talk about them period. But what happens when someone shines the spotlight on you by accusing you of odious behavior from decades prior?

Assuming the allegations are true but you deny them anyways, presumably the accuser would have suffered much more from the odious act than from being called a liar. If so, seeking redress for the original harmful act is the logical avenue for any remedies. The (false) denial is a sideshow, and a denial is generally what everyone would expect anyways.

But assuming the allegations are false, what then? The natural inclination is also to deny, except you're in a legal bind. Any denial necessarily implies that the accuser is lying. So either you stay silent and suffer the consequences, or you try to defend yourself and risk getting dragged into court for impugning your accuser's reputation.

My inclination is that if you're accused of anything, you should be able to levy a full-throated denial without having to worry about a defamation lawsuit coming down the pipes. You didn't start this fight, your accuser did, and it's patently unfair to now also have to worry about collateral liability while simultaneously trying to defend your honor. Without an expansion of the "Litigation Privilege" or something like it to cover these circumstances, we create the incentive to conjure up a defamation action out of thin air. The only ingredients you need are to levy an accusation and wait for your target's inevitable protest. That ain't right.

Every time you write about legal stuff I just feel more and more convinced that the rules are made up and the laws barely matter.

What is the point of a statute of limitations if it can be changed after the fact to include things previously protected by that statute?

What is the point of the trial related amendments if you can just have your reputation smeared and ruined by the media without anything vaguely resembling "due process"?

What problem are civil courts solving other than 'how to make lawyers rich'?

Plea deals destroying incentives to get your day in court. Prosecutors seemingly immune to any consequences of malpractice.


An old movie keeps coming up in my mind. It took me an hour of searching to find it based on my vague recollections: Interstate 60. There is a section of the movie where the main character (on a mythical road trip) takes a stop in a town called Morlaw. The entire town is composed of lawyers who are constantly suing each other over everything (get it, Morlaw -> More Law). Any unlucky idiots who find their way to the town get caught up in the web of suing very quickly.

How does the protagonist escape? Do they make a compelling argument that this is insane? Nope, that doesn't convince any of the lawyers. They just see that as another reason to sue him.

Valerie McCabe: Every adult citizen of Morlaw is a lawyer, so everybody sues everybody else. It doesn't matter if there's a cause. It's how we ensure that everyone makes a living of their profession.

Neal Oliver: Yeah, but that's insane.

Valerie McCabe: I could sue you for that. You just made a defamatory remark about this town. Hey, are you looking at my legs? I could sue you for that too, sexual harassment.

Neal Oliver: Is there anything you can't sue me for?

The way the protagonist escapes is by making a call to a friend he met on the road: an ex-marketer who is dying and has decided to go on a personal crusade against lying. This ex-marketer has a bomb vest strapped to him, and seems willing to use it. Yup, that's right, it takes literal terrorism to extricate the main character from a web of lawyers. The ex-marketer decides to stay around Morlaw to keep them in line.

Our legal system increasingly resembles a system of "might makes right": if you have enough powerful people on your side, then the law can literally be what you want it to be. It doesn't feel like there is a legible system of rules where an underdog that is correct or in-the-right can beat the system. In the end someone might make the same realization that the ex-marketer makes: "Why play by your rules when I'm always going to lose? Why not bring violence to the table?"

The far left and the right basically agree at this point that the law is made up. It's just human interpretations of what words mean, which can be manipulated into anything.

For something that's not culture war: what about student athletes? One day they were limited; the next day popular opinion changed. No laws were passed.

https://www.forbes.com/sites/kristidosh/2021/06/21/what-does-supreme-court-decision-against-ncaa-mean-for-name-image-and-likeness/?sh=71bcd9bf500c

Of course the rules are made up—in legislative sessions, where the democratically elected representatives decided this statute-of-limitations business was unjust. How else do you want a change to happen? It is the job of the courts to apply the laws as they stand, not as they once stood.

What is the point of the trial related amendments if you can just have your reputation smeared and ruined by the media without anything vaguely resembling "due process"?

It’s funny that you phrase it like that when talking about a defamation suit, where Carroll applied due process to keep her reputation from being smeared and ruined. And that your example for an “underdog” is a billionaire, celebrity, and politician with millions of loyal followers.

What problem are civil courts solving other than 'how to make lawyers rich'?

Let's imagine a hypothetical with no civil suits. Trump assaults Carroll in 198X. Carroll, for whatever reason, lets the SOL run out. She later decides to write a tell-all book. Trump can't sue her, so he just uses his immense popularity, personal social media platform, etc. to ruin her reputation across America. End result: victims are incentivized to shut the fuck up about anything which didn't make it to a court of law, even if it really did happen.

Civil suits make Trump’s counterattack a liability. They also make Carroll’s claim a liability, discouraging her from falsifiable or malicious statements! This seems like an obvious improvement over the case where the most popular guy gets to shit on whoever he wants by default.

“Get thee behind me, fedposter.”

It’s funny that you phrase it like that when talking about a defamation suit, where Carroll applied due process to keep her reputation from being smeared and ruined.

Is it Carroll vs Trump, or is it Trump vs Blue Tribe?

And that your example for an “underdog” is a billionaire, celebrity, and politician with millions of loyal followers.

The impact of all of these are relative. Billionaire is a lot relative to me, and very little compared to state and quasi-state entities with GDP measured in the trillions. Politician is very influential relative to me, and laughable compared to big business, big media, the federal bureaucracy, and half the country. Millions of loyal followers is a lot compared to me, and very little relative to the dozens of millions within Blue Tribe as a whole.

You want to appeal to the process, because that keeps things clean. But Trump's supporters emphatically do not trust the process, and do not agree that it is being applied impartially. Every time the "process" reveals a novel convolution to the detriment of their interests, their trust decreases further, as it should.

It seems likely that Blue Tribe will get Trump eventually. If this doesn't do it, they'll roll the dice on something else, and something else after that, and so on until the day he dies. All it costs them is the trust of an increasingly furious and desperate other half of the country.

This seems like an obvious improvement over the case where the most popular guy gets to shit on whoever he wants by default.

Funny, that's exactly how I'd describe the current situation. Making this claim requires very specific assumptions about the framing, which are not shared. Because those assumptions are shared by 90% of media workers, academics and government staff, they gain a veneer of legitimacy through repetition, but that does not make them legitimate.

Is it Carroll vs Trump, or is it Trump vs Blue Tribe?

When we've got serious claims in this thread that anyone ought to be able to win a 30-year-old he said / she said sexual assault case against Trump because of his known bad character, I think the answer is obvious.

And of course conservatives might dismiss this sort of thing with "well then you shouldn't have had such bad character".

Link? If you’re thinking of me, I sure wouldn’t sign on to that.

Close enough, I guess.

Yes, I am making assumptions. So are Trump’s defenders. I think mine are better-founded.

It is absolutely Carroll v. Trump. Strip away all the political theatrics and you’d still have a valid case. Two if you count defamation. And I’ve laid out my reasons why I believe defamation laws, and civil suits in general, are useful.

The fact that 90% of Democrats line up against Trump does not make the underlying law illegitimate.

The simple answer is a higher standard of proof for a 198x rape claim than 51%. And perhaps civil suit testimony can't be used for a criminal perjury charge. Perhaps Trump is a special case, but he's been investigated for 6 years, even before he was POTUS. Testifying under oath does represent a huge risk to him.

It doesn't feel like there is a legible system of rules where an underdog that is correct or in-the-right can beat the system.

That seems an odd claim to make in regard to a case in which a former President was found liable by a jury. It is also an odd claim to make re a legal system in which criminal defendants win cases every day, in which asylum seekers win cases every day, in which large corporations lose cases to individuals every day, etc., etc.

That seems an odd claim to make in regard to a case in which a former President was found liable by a jury.

Why? The former president is the underdog here.

Yes, I am sure that, had he won, underdogs the world over would have rejoiced that the system works.

Why would they? Being railroaded into a trial where it's impossible to prove your innocence, because it's about something that happened 30 years ago, is the system failing. If that happens and he's found not liable, that's the system still failing, just not as badly.

In order for the system to not have failed, New York would have had to not have extended the statute of limitations at all (especially since it was done specifically to get Trump).

Perhaps, but OP's claim was not that Trump was treated unjustly, but that underdogs can't win. A very different claim.

He didn't say that underdogs can't win, he said that the rules don't lead to the underdog winning. I would agree that this is true when the rule is "no statute of limitations". There's really no way to win with this rule; the results are losing badly or losing but not too badly.

And even if you think being found innocent after a trial that never should have happened is a "win", it's a win despite the rule, not because of it.

I would agree that this is true when the rule is "no statute of limitations". There's really no way to win with this rule; the results are losing badly or losing but not too badly.

I don't understand what you mean. First, if someone loses, then someone wins. Why the loser in this situation would be the underdog is not particularly clear to me. One would think that anything that makes it hard to sue, including statutes of limitations, would benefit the overdog, not the underdog, because the justice system is the only method that the underdog has to hold overdogs to account. Underdogs, by definition, don't have economic or political power.

Nor is it clear to me why you think that lifting the statute of limitations guarantees that the plaintiff will win. The plaintiff has the burden of proof, after all.


What is the point of a statute of limitations if it can be changed after the fact to include things previously protected by that statute?

Civil statutes of limitations aren't put in place to protect potential defendants; they're put in place to serve policy goals determined by the state legislature. If those policy goals change the SOL can change along with them.

What is the point of the trial related amendments if you can just have your reputation smeared and ruined by the media without anything vaguely resembling "due process"?

Most of the trial related amendments are specifically applicable only to criminal cases. The only one unique to civil cases, the Seventh Amendment requirement of a jury trial, has notably not been incorporated under the 14th Amendment. The Fifth Amendment right to due process has, but I'm not sure what kind of process you're looking for. These amendments only apply to the government, not private actors, so there's no due process requirement the media has to go through to report bad things about you. If they're actually defamatory then there's always the option of suing.

What problem are civil courts solving other than 'how to make lawyers rich'?

With certain limited exceptions (class actions), lawyers don't come to clients looking to get rich. Lawyers only get involved when the situation has gone so far that the parties can't resolve matters among themselves. You can probably get by your whole life without needing to go to court, and I hope you do. But when you've been aggrieved it's the only option for justice.

Plea deals destroying incentives to get your day in court. Prosecutors seemingly immune to any consequences of malpractice.

Again, that's criminal, not civil, and has nothing to do with the Trump lawsuit. That being said, most civil cases settle before they ever get to court. Usually something has to go horribly off the rails for an actual trial to take place. Trials are time-consuming and expensive, and it's usually better for everyone that things don't get that far, since both parties can usually see where things are going.

The first possibility is that this woman really hated Trump and refused to settle because she wanted to get a jury verdict. This is unlikely because I doubt she is a woman of any serious means, and litigation is expensive. If Trump had made her a serious offer, the terms of most attorney engagement letters would practically require her to accept it. Obviously, it's ultimately her decision, but if her lawyers took the case on a contingency basis then they're fronting all of the expenses in the hope of getting a decent payout. If they're presented with what they believe is a reasonable offer, then they're loath to continue shelling out cash for diminishing returns. If it gets to this point, then in order to avoid settlement the plaintiff will usually have to start paying by the hour and fronting money for expenses for all future work. Since most clients aren't convinced enough of the value of their cases to do this, they usually settle. Going to another attorney isn't a realistic option, either, because the current attorney has a lien on the case for the value of all the work that's been done up to that point. Any award the new attorney gets would be subject to deductions in the amount of that lien. Lawyers in general don't like taking on cases in the middle, and having to give up a substantial part of their fee to cover a lien is usually a nonstarter.

What's more likely is that Trump refused to settle because he has the means to defend the suit, and his public stature means that any settlement would be in the news and viewed by the public as an admission of guilt, even if the settlement expressly denied guilt.

Civil statutes of limitations aren't put in place to protect potential defendants; they're put in place to serve policy goals determined by the state legislature. If those policy goals change the SOL can change along with them.

Every law is put in place to serve a policy goal determined by the legislature. This is a banal statement. The question is "what is that goal?"

SOLs serve two purposes: one is to allow finality, and the second is to preserve the ability to mount an appropriate defense (i.e., memories and witnesses go stale).

Both (but especially the second) are in place to protect potential defendants. Thus, the goal behind the original SOL was by-and-large to protect potential defendants.

But assuming the allegations are false, what then? The natural inclination is also to deny, except you're in a legal bind. Any denial necessarily implies that the accuser is lying.

This is the part I have a lot of problems with whenever this topic comes up, often in the context of sexual assault/rape accusations. I don't think any denial necessarily implies that the accuser is lying. It necessarily implies that the accuser is wrong. Given what we know about the fallibility and malleability of memory, particularly when stressful situations are involved, it's entirely possible for the accuser to be honest to the best of their ability and still be completely, entirely wrong about the facts of what occurred. I don't know if this affects the legal calculus of the potential defamation suit; is the claim that an accuser is making an incorrect accusation for whatever reason - without implying that the accuser is lying - defamation? I don't know, but that's the pertinent issue, rather than the claim that they're actually lying.

I believe that Christine Blasey Ford was not lying about Judge Kavanaugh sexually assaulting her. I believe she has extremely vague memories of being grabbed and groped by a drunk guy 40+ years ago and believes Kavanaugh did it.

My best estimation is that she is factually wrong, but indeed not a liar.

I'm not even sure the incident happened as Ford imagined. I can imagine some guy joshing around and Ford believing X happened (i.e., a misinterpretation). That doesn't make her a liar. But a woman her age should have the perspective that "maybe my memory isn't perfect and maybe the situation was a bit different than even I thought."

Having reread the story as she told it, I think there’s some evidence that she was changing the story to fit things he provided.

The number of people in the room changes. There were four, then five (one female). She couldn’t name them.

The location of the house moves once it’s made clear that the party he went to was miles away from the location she describes.

The geometry of the house changes (the stairs go from short to narrow, the living room and family room were initially separated allowing her to escape; until it became one bigger room).

The timing changes. She was older in the original story, which changed once she realized he would be at Yale at the time.

I think there was a real rape, and she was raped by someone. But it always seemed odd that she's constantly trying to fit her memory to the details he provided. And to my knowledge she never really stuck to her guns and said he's wrong, this is what happened. So my best guess is that she's describing a different party at a different time, one that she knows Kavanaugh had nothing to do with, but she's trying to put him and herself in the same room in the same house even though none of the details actually fit.

To go one step further: she wasn't raped by anyone in any circumstance. She was grabbed and groped. Which is scary and bad, but not rape.

She was never raped, and the man who didn't rape her was probably not Kavanaugh.

Yeah. The year and location of the incident are quite slippery in her memory. She barely recalls any details and they change from one memory recall to the next.

A drunk guy grabbed her once. I believe that happened in some manner at some point in her life. I don't trust her for any other detail or fact from recalling 40+ year old memories.

I've never met this person in my life. She is trying to sell a new book-that should indicate her motivation. It should be sold in the fiction section.

"If anyone has information that the Democratic Party is working with Ms. Carroll or New York Magazine, please notify us as soon as possible. The world should know what's really going on. It is a disgrace and people should pay dearly for such false accusations."

You make a good point—I wonder if keeping it to the Shaggy Defense would have avoided any defamation. As it is, he fit it into his “witch hunt” narrative, and really put her on blast.

It seems like an inconsequential distinction to me, not worth hanging defamation liability upon. It's true that an accusation could be false either intentionally or by mistake. If someone makes an intentionally false accusation (read: lying) and you know that, then wouldn't it be misleading (or perhaps even lying) to accuse your accuser of a mistake instead of a lie?

Even if the point you make is adopted as regular practice (where the accused avoid claiming anyone is lying, just that they're mistaken) would it make any practical difference? If the accuser denies that they made a mistake but you insist otherwise, is that materially different from accusing them of lying?

When denying an accusation, I don't think you need to specify whether the accuser is lying or merely mistaken. Just that they're wrong. If you go into specifics, e.g. explicitly accuse the accuser of lying, I think it'd be correct to leave you open to liability. But that level of specificity isn't necessary for denying an accusation.

I disagree, the specifics are important here. I deal with this constantly with clients who deny the allegations but then have no follow-up explanation. In a hit & run case, the defendant denies he was driving the car. Ok then who was driving your family's car then? In a stabbing case, the defendant denies the witness correctly IDed him. Ok then who else had access to this building? In another case, the defendant claims the witness is lying. Ok how do you know? why are they lying? what's their motive? when did they coordinate their stories? etc and so on.

It's frustrating to me when clients air out vague general denials, because then there's nothing else for me to do as a defense attorney, but also on a personal level it makes me suspect the truth of everything they tell me. Generally speaking, as a rough heuristic, the truly innocent clients of mine tend to express the same amount of curiosity about their case that I do. If they were really IDed incorrectly, they absolutely want to know who this doppelganger is. They can barely stop themselves from giving me names of people to talk to, companies to subpoena, surveillance cameras to examine, etc.

I disagree, the specifics are important here. I deal with this constantly with clients who deny the allegations but then have no follow-up explanation. In a hit & run case, the defendant denies he was driving the car. Ok then who was driving your family's car then? In a stabbing case, the defendant denies the witness correctly IDed him. Ok then who else had access to this building? In another case, the defendant claims the witness is lying. Ok how do you know? why are they lying? what's their motive? when did they coordinate their stories? etc and so on.

I know this list of examples is in no way exhaustive, but only one of those examples had the accused person making a positive claim about someone else being dishonest (the lying witness). The others seem to me, if those questions were answered and explained, to be just fine ways to deny the accusation without impugning the accuser's honesty or otherwise defaming them. For the claim that a witness is lying, I'm thinking the defendant shouldn't claim the witness is lying unless they have some specific evidence or motive, and if they lack such a thing, they should retreat to the "witness is wrong" claim rather than "witness is lying."

IANAL, so I can't speak with authority on any of this, and I can't speak to the ins and outs of how a defense strategy gets formed and implemented. It just seems to me that, unless there's specific reason to think so, there's no need to claim that an accuser is lying, but rather just wrong. If they claim that the accuser is lying but lack the evidence to substantiate it, then they shouldn't have made such a claim as part of their defense in the first place, so not being able to do so for fear of a later defamation lawsuit down the line doesn't seem like a loss. If they do have such evidence, then that would strengthen their defense and also protect them, however imperfectly, against defamation lawsuits.

My point was broader than just the scenario of calling the accuser a liar, I was highlighting examples to illustrate how unconvincing vague general denials are. If someone levies an allegation that you deny, the natural reaction from bystanders is to wonder why an accuser would lie or otherwise be wrong about something so serious. A denial is much more credible if you can offer some sort of explanation to that burning question.

The only ingredients you need are to levy an accusation and wait for your target's inevitable protest.

And a crucial third ingredient: have the intended victim of this trap be Donald Trump. Other people in other situations, such as Hillary Clinton defaming the women who accused Bill Clinton of raping them, are not dragged into civil cases.

Are we to interpret this as a consistent principle that judges would apply to prominent Democratic politicians when they deny accusations? Or just a weapon to be used on Trump and then put away?

Well, I first heard about it when it happened to Bill Cosby; maybe that's where Jean Carroll's lawyers got the idea. I don't see the evidence that this legal tactic was invented for Trump.

E. Jean Carroll was advocating for the law to be passed so that she could sue Trump.

https://www.newsweek.com/trump-accuser-pushes-new-york-pass-adult-survivors-act-plans-sue-rape-1668261

...none of which is Hillary Clinton being sued for defamation for, to put it bluntly, calling the women accusing her husband of rape lying sluts.

Was James Carville ever sued? He was the one that commented that "[i]f you drag a hundred dollar bill through a trailer park, you never know what you'll find," which seems far more insulting than anything I can think of HRC saying herself. Though it's probably too vague to be slanderous, legally-speaking.

Yep, it's too vague to be slanderous. You're allowed to insult people, you just can't claim that they did things that they didn't do. Even more specific epithets like "racist" have been ruled too vague to support a claim of defamation.

What are the specific statements she made you think constitute defamation?

In a '98 NBC interview she called the sexual assault allegations against her husband a "vast right-wing conspiracy".

During her NBC interview, when Lauer said, "So when people say there's a lot of smoke here, your message is, where there's smoke..." Clinton interrupted.

"There isn't any fire," she said. After the allegations and motivations are dissected and the truth emerges, she predicted, "some folks are going to have a lot to answer for."

This was in an NBC interview, not a courthouse (like Trump's comments), and appears to directly suggest that all of the women are liars, which appears at least modestly similar to Trump's supposedly-defamatory remarks.

Ultimately, I think I have to conclude that denying allegations should probably enjoy specific privilege from defamation concerns.

Hillary Clinton defamed those women and then did not get dragged into civil suit.

Bill Clinton indeed got impeached for lying about a consensual blowjob. But that has nothing to do with Hillary defaming his rape victims.

These are separate matters so let's not blend them all together into some composite for comparison.


Getting money isn’t “Justice,” and I feel like the legal profession has just gaslit the entire world into believing it is.

I think you'll find that much of the legal system is shaded in the direction of benefitting lawyers.

There’s a reason criminal trials are conducted on behalf of the state and not the victims. It’s because justice is something sought by society against transgressors. Getting money isn’t “Justice,” and I feel like the legal profession has just gaslit the entire world into believing it is. Of course, I think it is very difficult to provide restorative justice to someone who has been physically attacked or raped or obviously murdered. The deed is done, and money won’t magically make it go away.

Justice for a rape victim isn’t their rapist writing them a big check, it’s the rapist rotting in prison and unable to rape more people. That’s why I find statements like the press release for the bill that created this cause of action a figment of lawlogic that’s totally alien to my worldview:

This is a point of view you can take, but it's not at all obvious; in fact, this is exactly how many societies throughout history handled justice, and it's certainly not new. The entire idea of imprisoning average criminals is a few hundred years old at most, and has only been practical for less than that, and only in rich societies. (Aside from payment, societies also used slavery, exile, execution, torture, and probably other methods I'm forgetting). Similarly for the idea that the state handles everything--polycentric legal systems based on resolving disputes between 2 parties are also very common historically.

I think most victims would be fine with money. For non-murder things.

But society not as much.

The idea of convictions being public record but being able to buy your way out of the prison sentence is interesting. Ancient China did that; murder got you executed and it cost 200 years' worth of a laborer's salary to save your neck. High treason notably was something you couldn't buy your way out of.

This would result in the very wealthy being more or less above the law. Every count of first-degree murder costs you $10 million if you want to see daylight again; lesser crimes carry lesser penalties.

What possible advantage would this have?

This is basically the "you calling me a liar?" argument. Wouldn't a straightforward boundary be that statements about your own thoughts and behavior are always permissible, regardless of the implications about the thoughts and behavior of another? Thus, "I don't know her" is not actionable, even though it implies "she is a liar", which could be actionable. "I was afraid of him" would not be actionable, but "he threatened me" could be.

Why do we have laws against defamation in the first place? Seems like generally, lying should be legal, or it opens up all sorts of issues.

Why do we have laws against defamation in the first place?

Because human beings are not computers and operate with imperfect information.

Let's assume that someone makes a false complaint about the quality of work that you do- for instance, leaves a one-star review (to go with a popular example). This will affect your ability to get future business, and hence future income- you have been materially harmed by that statement. If that was done maliciously, how's that different than stealing that income directly?

Now, the US tries to play objectively, so you'll only get punished for doing it if it was false, and any reasonable person would have known it was false. But the problem is that even true-but-distasteful statements, like #metoo descriptions of sexual activities, have the same effect, which is why more conservative countries (like European ones) will punish true statements of this type as well whereas more liberal countries (by definition) value not punishing truth-telling over doing justice to liars.

That's the general idea behind it, anyway; whether those countries succeed in their aims is an entirely different matter, of course.

This is fair pushback, and you've outlined some potentially messy implementation issues I hadn't thought of. I grant that in some cases figuring out "who started it" might be complicated but the legal system has dealt with much thornier issues. In this instance it could take a page from the concept of "comparative negligence" to figure out how to settle the dust.

You need a third ingredient. If the accusation is false, what’s stopping the target from suing your ass first? The truth is an absolute defense against defamation. This incentive only arises when the accuser has enough evidence to protect against a lawsuit. That’s hardly out of thin air.

The most instructive scenario to consider here is how the Shitty Media Men litigation transpired. Suing people is a significant time commitment and money sink, especially for defamation. Out of the 70 men on that list, AFAIK only Stephen Elliott was dogged enough to pursue legal action. As nonsensical as some of the accusations against Elliott were ("unsolicited invitations to his apartment"), he was "lucky" that his sexual habits were peculiar enough and the accusations against him specific enough that he could at least try to mount a credible rebuttal:

I don't like intercourse, I don't like penetrating people with objects, and I don't like receiving oral sex. My entire sexuality is wrapped up in BDSM. Cross-dressing, bondage, masochism. I'm always the bottom. I've been in long romantic relationships with women without ever seeing them naked. Almost every time I've had intercourse during the past 10 years, it has been in the context of dominance/submission, often without my consent, and usually while I'm tied up or in a straitjacket and hood. I've never had sex with anyone who works in media.

I am not seeking to come out about my sexuality as a means of creating a diversion, as Kevin Spacey appeared to do when he was accused of sexual misconduct. I've always been open about my sexuality, and I have even written entire books on the topic. I've never raped anybody. I would even go one step further: There is no one in the world who believes that I raped them.

I grant that maybe some of the accusations on the list were true, but many were just vague and accordingly impossible to defend against. Lawsuits are also seen as antagonistic, and there is significant social pressure against resorting to that remedy. I gather that some men just found it easier to slink away than risk magnifying their pariah status within the gossip-friendly field of media. Elliott eventually "won" a settlement from the list's creator but who knows how much money each side bled out since the lawsuit dragged on for almost five years.

Actually suing someone is expensive but threatening to sue someone isn't expensive. It's expensive on the plaintiff's side too, and unless you're of enough means that an attorney can expect to actually collect on a decent size judgment, good luck even finding someone to take the case. Most of the people who find themselves the subjects of defamation suits where the defamatory remarks are mere denials of other potentially defamatory remarks have enough money that they could marshal some powerful attorneys who could nip the thing in the bud before it becomes a big deal. Trump could have easily afforded to have some biglaw attorney draft a letter explaining that the comments were defamatory and he's prepared to sue unless she's willing to make a public denial. If she balks, have a complaint drafted that will be filed if they don't start making progress in negotiations. After she finds out how much it's going to cost for the cheapest attorney in town to defend something like this she'll probably consider recanting.

That being said, I agree with you overall because having rich people threaten expensive lawsuits to fend off average people who say bad things about them is a pretty shitty way to do business, especially if the allegations are true. I'd much rather have a situation where mere denials aren't considered defamation, regardless of how far along we are in the chain of "who started it". Wealthy people have the option to threaten litigation to shut people up as it is, and I'd rather see some reform before they resort to actually doing it to fend off lawsuits.

Okay, but that kind of wrecks the incentive to make up accusations whole-cloth. The more people—or the more prominent—you target, the higher chance that one of them is Peter Thiel.

Okay, but that kind of wrecks the incentive to make up accusations whole-cloth.

I'm not sure what you mean by "that". The creator of the Shitty Media Men list did lose in this instance but this was a unique and unusual set of circumstances.

Without an expansion of the "Litigation Privilege" or something like it to cover these circumstances, we create the incentive to conjure up a defamation action out of thin air. The only ingredients you need are to levy an accusation and wait for your target's inevitable protest.

We already have “something like it,” and it’s the normal defamation torts. They already threaten would-be accusers with liability. Sure, following through is expensive and uncertain. But it clearly can happen—especially if the accused is rich, reputation-conscious, or has proof.

Expanding litigation privilege swings the balance too far. It gives the accused every incentive to smear the accuser’s reputation, regardless of the truth. That sort of speech should be kept to a court of law, where it is already protected.

The truth is an absolute defense against defamation

Only in some countries. For example, in Japan you can be found guilty of defamation even if what you said is 100% demonstrably true. In some cases having it be true is even worse than lying, as damages are calculated on the harm done to someone's reputation, and if people can independently verify that your speech is true it does more harm.

I live here (Japan) and this is only one aspect of Japanese law that makes me uneasy. I even pause when leaving online restaurant reviews for this reason.

For me it was the police's ability to hold anyone in custody for up to 23 days without charges. I avoided police, and in the rare case I had to interact with them I was exceedingly deferential and polite.

I tried to find a good article exploring the reasons/consequences of this but couldn't. Anyone have a link?

Given how contentious the adversarial legal system can get, there is indeed the medieval-era legal doctrine of "Litigation Privilege" which creates a safe space bubble where lawyers and parties can talk shit about each other without worrying about a defamation lawsuit. The justification here is that while defamation is bad, discouraging a litigant's zeal in fighting their case is even worse.

"Legal system creates rules and exceptions to rules that benefit attorneys" is a real dog-bites-man sort of story, isn't it? I don't even disagree with the logic here, but it's pretty striking that something that would be defamation if I did it on my own time, advocating in my own interests, outside of a courtroom becomes totally acceptable if an attorney does it in court.

The privilege is not limited to just attorneys. Plaintiffs and defendants enjoy the same benefit, and the "reporting" aspect of the law protects journalists.

At least with this case there's a bit of a statute of limitations issue. Shouldn't someone at some point not have to own up to "I used to be a very bad person"? I don't know, almost 3 decades seems like enough.

Or let's say a prominent politician used to be an escort (cough, Kamala Harris; more of a mistress) and some dude is bragging about how he paid $500 to bang the first woman POTUS. I think it's reasonable that she shouldn't have to say "yeah, I was a whore." But she's now defamed the guy.

Also, in this specific case I don't think Trump really got a chance to testify in court, in part because he's the most investigated guy ever and anything he misstates would be open to a perjury charge. And also give the opposing attorney a chance for a fishing expedition on anything (like Jan 6).

And also give the opposing attorney a chance for a fishing expedition on anything

The judge rules on "motions in limine" that set up the contours of how the trial will proceed and what topics can be addressed. The judge dealt with these disputes ahead of trial (see the case docket), including determining whether the Access Hollywood tape would be admissible. Is there anything within the docket that leads you to believe that the opposing attorney would have been allowed to go on a (presumably irrelevant) fishing expedition?

It's New York. They are literally already charging him with novel legal theories, and this isn't even the first time he's seen them used: his associate Flynn faced a novel legal theory of a Logan Act violation. Anything he said under oath would be vulnerable to anything a hungry DA could come up with.

Between this and your response to me, I think you're moving the goalposts.

It's outside the statute of limitations.

No, it wasn't.

But then he didn't get to testify!

Yeah, he did, but chose not to. And chose not to bring any witnesses.

Of course he wouldn't want to testify! Fishing expedition!!

@ymeskhout gives a reason why that shouldn't be the case, since the rules were laid out ahead of time. I'll add that if a guy can't say anything without lying, maybe he's got a bigger problem.

They might have used novel legal theories!

Regardless of how I feel about Mr. Bragg, I don't have any reason to believe he controls the rules of this civil suit. One that's been in the making since before he was elected.

More importantly, it didn't take any new legal theories! You had to construct this hypothetical lest you admit Trump might have made an error. Can't have that. Anything that looks like a blunder on his part was just 5D chess, preempting some new and exciting abuse of power by his haters.

Sometimes the simplest explanation is the best.

I'll add that if a guy can't say anything without lying, maybe he's got a bigger problem.

If a guy declines to testify on his own behalf because he can't say anything under oath without being accused of lying, and if he can then be forced by law to spend his own money to defend himself whilst again unable to take the stand himself, the problem he has is known as a witch hunt.

I'm not sure where I mentioned the statute of limitations when talking to you. But yeah, changing the statute of limitations after it expired is problematic (though not just as applied to Trump).

“Chose not to.” Yeah, sure, a guy who's already had issues with improper investigations over choosing to talk (Flynn, the Logan Act) should just want to have a thousand people investigating anything he possibly said wrong for perjury.

“Fishing expeditions”

How many investigations have there been into Trump and/or Trump associates?

I can summarize your entire response too. Yes, us conservatives have faced a lot of bullshit, so yes, we have trust issues with the “process.”

This suit came about due to a one-off law passed about a year ago, the Adult Survivors Act. The whole point was allowing a window for victims to sue in spite of any statute of limitations. It appears to be a follow-up to a similar law passed in 2019, so I doubt that it was all set up to nail Trump in particular.

I find the idea that he can't testify for fear of perjuring himself a bit silly. I don't think attorneys are allowed to go fishing once the witness has entered the Zone of Truth, either. (@ymeskhout, can you confirm?) But setting that aside, his defense chose not to call any witnesses, even ones who aren't under half a dozen investigations. That's either an own-goal, or an attempt to make his supporters question the whole thing. I dunno.

Who could he have called as a witness? It’s a 30 year old case. She can't even say what year it happened. He can't produce some dude who could testify "I was at dinner with him" on the night of the alleged incident.

IMO there is something strange about changing the statute of limitations decades later, even if this wasn't completely about Trump.

Plus any potential witness they called would be liable to face non-stop legal action from the state of New York.

Who could he have called as a witness? It’s a 30 year old case.

This is the thing I keep coming back to, and it's the same thing that pissed me off about the Kavanaugh hearings, where the accusations were repeatedly framed as "credible". OK, it's true that the accusations aren't impossible and the accused surely can't disprove them, so that does meet some definition of "credible". I would probably agree that it rises to the level of extending some degree of empathy towards the putative victims, particularly if you're actually going to personally interact with them. Nonetheless, what the hell is the accused expected to do in such a situation?

When I think of parties that I was at, women that I did hook up with, women that I didn't hook up with, and so on, I have absolutely no idea how I could even begin to refute a claim about something that supposedly happened 20 years ago. Was I at a given person's house on an unspecified day in an unspecified year, drinking heavily? I don't know, possibly. Probably even, for some specified houses and date ranges. Could I account for who was there on a given unspecified evening? Definitely not. Are there women that I had sex with at the time that I only barely remember now? Definitely so. Are there others that were present that I won't recall at all? Definitely so. Given the lack of specificity that still seems to count as a "credible accusation", I don't see any plausible path to actually being able to disprove it. At that level, it actually sounds entirely reasonable to start bringing things up like, "she's not my type" or "wait, you're saying that this happened in a dressing room and no one at all noticed?". The accused can't really deny the time and place because there is no time and they may have gone to that place, but they can take a stab at whether the story actually seems plausible at all.

Kavanaugh was strange enough that he not only made a calendar showing what he was doing each day, but he kept it. It still wasn't enough to disprove the allegations because they weren't actually specific enough.

Coming from Trump, “she’s not my type” does seem credible, if you say the quiet part out loud: she was older than he would go for.

And embedded in this a lot of girls still expect guys to be the aggressor. I still remember the first love of my life rejecting my advances as I dropped her off at her place and saying no a dozen times. And then sending me a text 10 minutes later asking why I didn’t sleep with her. Which isn't completely related, but it does apply to a lot of the Trump comments.

Trump does seem like someone who's a believer in the alpha male stereotype where he should be the aggressor, and no doubt a lot of women have rewarded him for that. So he very well may have been intimate with the accuser. Something completely unexplored (because it was 30 years ago): she may have been into his assertiveness. But now that he's a white supremacist, racist, anti-trans politician, it's rape.

And embedded in this a lot of girls still expect guys to be the aggressor. I still remember the first love of my life rejecting my advances as I dropped her off at her place and saying no a dozen times. And then sending me a text 10 minutes later asking why I didn’t sleep with her.

Most agentic young woman.

Girls/women are often indeed extremely passive when it comes to dating. I hope you sealed the deal, whether by immediately turning back to her place or by eventually somehow slow-rolling it. Such experiences are pretty common, though, hence their contribution to the coffee emoji meme spreading like wildfire.

A lot of times, a “no” from women just means try harder, try differently, or try later such that they can better protect their wonderfulness, maintain plausible deniability, and sidestep accountability. So that they can tell themselves and/or their friends that “omg it just happened, teehee.”

Chad pussy-grabbers vs. Virgin boundary-respecters.

It’s why there was so much seething and pearl-clutching from the online-left over Trump saying to just “grab ‘em by the pussy.” Trump stumbled upon something, inadvertently touching a third rail.

Such a remark from Trump was a reminder that high status men can just go full steam ahead and plow through the “rules” that lower status men have to abide by—and that women, ultimately, are happy to be submissive to things like male status and fame in the moment, even if they may retrospectively decide otherwise. Sometimes years or decades later, if the original encounter occurred at all.

Mainstream progressivism insists that male sexual success is driven by the extent to which men are dutiful, respectful male feminist allies. High-status men casually grabbing women by the pussy or creampie-ing them in the stairwell ruins the illusion of male sexual egalitarianism, the illusion of lack of female hypergamy, the illusion of a magical Just World where male sexual success is dictated by whether they have socially progressive attitudes.

Coming from Trump, “she’s not my type” does seem credible, if you say the quiet part out loud: she was older than he would go for.

It would have been credible had he not misidentified her as Marla Maples in a photograph during a deposition.

That was a very funny blunder from the deposition. The best explanation on this front remains that Donald Trump is functionally blind. He frequently misidentifies people right in front of him, and all the notes he's been seen actively using are written in a comically large font size. I don't really understand exactly why a billionaire can't use contact lenses, lasik, or whatever other space age vision technology is available, but it's funny how much legal liability he's willing to endure just to avoid being filmed wearing glasses.

It depends on what other evidence is available at the time. If there's a recording of you talking about how you like to grope women because they'll just let you do it, it might be enough to move the needle to 51% in a pure he-said-she-said.

That is one of the reasons for a SOL — evidence becomes very stale.

Also I suppose now the witness could be sued for perjury if they stood by their testimony after losing the case.

According to my high-school history teacher: the British would actually do this in the 1700s. Accuse someone of a crime, force them to testify, find them guilty, then punish them for lying since they just claimed they didn't do it. Providing a robust defense was merely lying to the court and punished accordingly.

Perjury isn't a civil offense, so no.

Right, I meant slander.

Considering the title of the act, it seems like it was more geared towards people who were abused as children (and might have good reason for not coming forward in a timely fashion) but are now adults, than boomer magazine writers who were abused in their fifties?

I'm certainly not going to read the thing, so not sure whether this is an unintentional loophole being exploited or the act itself was a sneaky way of getting rid of the SOL on sex crimes in general -- either way it seems like kind of a bad idea, as "stale" sex crimes are maybe the hardest sort of crime to fairly prosecute years later?

Stale sex crimes are very easy to prosecute if you use a preponderance of the evidence standard and the word of the accuser is sufficient to establish the preponderance.

No, it was definitely meant for cases like this. The press release I linked makes it clear:

In 2019, New York passed the Child Victims Act, which created a one-year lookback window for survivors of childhood sexual abuse to file claims otherwise barred by the statute of limitations.

Similar to the Child Victims Act, the Adult Survivors Act will empower survivors of sexual offenses that occurred when they were over the age of 18. The one-year window will begin six months from signing and will allow survivors to sue regardless of the statute of limitations. For many survivors, it may take years to come to terms with the trauma of sexual assault and feel ready to seek justice against an abuser, while possibly experiencing fear of retaliation or shame.

In 2019, New York extended the statute of limitations to 20 years for adults filing civil lawsuits for a select number of sex crimes. However, that legislation only affected new cases and was not retroactive.

If you're going to speculate about how your outgroup is abusing the spirit of the law, at least go deeper than your interpretation of the title!

OK, so it's just a terrible law with a dumb title -- to be fair that was my second interpretation!

Also weird that they'd create a one year window with unlimited SOL (AIUI) when the law itself establishes a 20 year limit -- Carroll's claim would have been barred if this law had been in effect the whole time, right?

I think so. Though…not the defamation claim, which all shook out before she ever filed anything about the actual assault.

But yes, a one-year “purge” for past grievances is an odd policy. I don’t know enough to say how it came to the public attention. There seem to be some advocacy groups pushing for SOL extension in general; maybe they are responsible?

If you have been even peripherally involved in higher education in the United States, then you've heard of Title IX. But if you haven't, here's the U.S. government's blurb:

The U.S. Department of Education’s Office for Civil Rights (OCR) enforces, among other statutes, Title IX of the Education Amendments of 1972. Title IX protects people from discrimination based on sex in education programs or activities that receive federal financial assistance. Title IX states:

No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance.

Title IX is most famous for requiring equal athletic opportunities for men and women, without regard for whether this makes (among other things) any financial sense at all. But Title IX also imposes a variety of reporting requirements on college and university faculty and staff, such that essentially every campus has a Title IX Coordinator (or similar), and many campuses maintain entire offices of Title IX administrative staff. Do they do real, important work? I would argue virtually never--these are bullshit jobs par excellence--with one enormous caveat: they serve as a lightning rod for both civil liability and federal intervention.

(Well isn't that real and important, then? Yes, yes, it's a fair point. But I still think jobs that exist solely to push unnecessary government paperwork are inescapably bullshit jobs. Hiring government actors--executive and judicial--to punish universities for failing to meet politically-imposed quotas on social engineering goals, so that universities must hire administrators to give themselves cover, is the very picture of government stimulating the economy by paying one group of people to dig holes, and another group to follow behind them, filling the holes back up again. But this is not the point of my post.)

The Department of Education's Office for Civil Rights fields several thousand sex discrimination complaints every year. Less than 10,000, but close--the DoE's OCR fielded a record 9,498 complaints last year. But that's not the headline.

Here's the headline:

1 Person Lodged 7,339 Sex Discrimination Complaints With Ed Dept. Last Year

You probably read that right.* More than 77% of all sex discrimination complaints filed with the OCR are filed by a single person, at a rate of about 20 complaints per day--and this same individual was responsible for a similar number and percentage of complaints in 2016, and possibly other years as well. Of this person, the office says:

“This individual has been filing complaints for a very long time with OCR and they are sometimes founded ... It doesn’t have to be about their own experience [but] ... There’s not a lot I can tell you about the person.”

* I reserve the right to rapidly backtrack my commentary if it turns out that this "single person" being reported in their system is named "Anonymous" or "No Name Given" or something equally stupid. I am proceeding on the assumption that Catherine Lhamon is neither that stupid, nor being deliberately misleading, and that she did in fact say the things she is quoted here as saying. But I'm including this caveat because I still find it hard to believe that what is being reported is even possible. Part of me still thinks there must be some mistake.

On one hand, like... I'm kind of impressed? There's someone who has decided to make their mark on the world, clearly. That's some tenacity. On the other hand, what the fuck? Surely in any sane world someone would tell this person, "you are abusing the process, and we are going to change the rules to rate-limit your nonsense."

That is... well, not the plan, apparently:

The surge in complaints comes at a time when the agency faces significant challenges: It shrank from nearly 1,100 full-time equivalent staff in FY 1981 to 546 last year and is dealing with a host of issues that reflect the strain placed on schools and students by the pandemic.

Biden, in his March budget address, sought a 27% increase in funding — to $178 million — for the civil rights office to meet its goals. Lhamon, whose 2021 confirmation Senate Republicans tried to block, said she’s grateful for the president’s support and hopes Congress approves the increase.

In FY 1981 the office was still dealing with the fallout of the American government forcibly engineering feminist aims into higher education. At a current budget of $140 million (an average of $250,000 per employee), with very nearly half of its complaints (across all topics, not just sex discrimination) coming from a single individual, what is that additional $38 million supposed to accomplish?
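For anyone who wants to sanity-check the arithmetic, here is a quick back-of-envelope sketch using only the figures quoted above; the 365-day divisor is my assumption (the article doesn't say how the "about 20 complaints per day" figure was computed), and the per-employee number comes out a bit above the rounded $250,000.

```python
# Back-of-envelope check using only the figures quoted above (FY 2022).
# The 365-day divisor is an assumption, not stated in the article.
sex_complaints_total = 9_498     # sex discrimination complaints fielded by OCR
complaints_one_filer = 7_339     # complaints attributed to a single person
staff_fte = 546                  # full-time equivalent staff
budget_current = 140_000_000     # current budget, USD
budget_requested = 178_000_000   # requested budget, USD

print(f"Share from one filer: {complaints_one_filer / sex_complaints_total:.0%}")  # ~77%
print(f"Complaints per day:   {complaints_one_filer / 365:.1f}")                   # ~20
print(f"Budget per employee:  ${budget_current / staff_fte:,.0f}")                 # ~$256,000
print(f"Requested increase:   ${budget_requested - budget_current:,.0f}")          # $38 million
print(f"Relative increase:    {budget_requested / budget_current - 1:.0%}")        # ~27%
```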

It seems like no matter how dim my view of the federal government gets, there's always some new piece of information out there waiting to assure me that I've yet to grasp the depth of the graft, ineptitude, and corruption of Washington, D.C. I am skeptical that Title IX has accomplished anything of value that would not have been independently accomplished by market forces and social trends. But even if that's wrong, and the early days of Title IX were an important government intervention, I cannot imagine how this particular situation could possibly exist within a sane regulatory framework.

The problem appears to be wider than that - from the same article:

Race, color, or national origin discrimination claims made up 3,329 of all complaints received in FY 2022, according to the civil rights office’s annual report, which was released last week. That’s up from 2,399 the year prior. Disability-related complaints comprised 6,467 of the total compared to 4,870 in FY 2021.

At the same time, age discrimination claims, which made up 666 complaints in the most recent report, were down from 1,149 the prior year. The office notes the majority of these claims were also filed by a single person in both years.

Could this person be the leader of some nonprofit or advocacy group? I’m struggling to imagine how that would be possible.

There are databases of active investigations (server issues?), pending cases, and incidents, but none make the plaintiff’s name public by default. Maybe this database has something? I’m not optimistic—if only a fraction of those 8,000 complaints are founded, they’re not going to be obvious in the active lawsuits.

Why doesn’t the Title IX office disclose this name? For obvious reasons, he or she is unlikely to be personally involved in most, if not all, of the cases. Privacy shouldn’t be an issue. I wonder if this is something that can be FOIA’d.

As I note in response to HaroldWilson below:

I have no idea who that person might be. Charitably: a top-notch attorney at an important law firm in Washington, D.C., who is capturing most of the "Title IX complaint" market, maybe? The right intake process could probably make this happen. I just have a hard time seeing it actually playing out this way; unless their "sometimes founded" complaints turn into outrageously large payouts on a pretty regular basis, it would be very difficult to fund such a venture. Part of the mystery of averaging 20 complaints a day is, who is funding that?

My only other half-plausible idea is that there is someone out there on retirement or disability or something who is doing stuff like collecting data on student athletes and filing a complaint any time they can find the slightest mathematical discrepancy in apparent sex balances in school athletic programs, or maybe faculty sex ratios or something. I don't know what else they could possibly find 20 complaints a day to file on, or how else they could be funded.

I know there are some bits of legislation out there that essentially pay bounties to people for filing lawsuits (there was a lawyer in California who was making a living for a while going to "ladies night" at bars and demanding equal pricing, then suing when denied, here's the one I think I'm remembering) but I'm not aware of any such setup for Title IX cases.

I wonder if this is something that can be FOIA’d.

I don't know, but my first thought is that these things probably fall under FERPA, which is not as strict a piece of privacy legislation as, say, HIPAA, but it's still pretty strong.

there was a lawyer in California who was making a living for a while going to "ladies night" at bars and demanding equal pricing, then suing when denied,

Absolutely based behavior. The only way I can stomach such blatant favoritism toward women is if we men were ever allowed to get away with the same thing. But good luck finding a "men's only" event that doesn't attract immediate opprobrium.

Legally enforced equality, or legally allowed special treatment. Pick one.

When were you put in charge of deciding how many choices there are?

I'm not. My statement is a logical truism. You cannot have both. You can of course have neither.

the pink razors are materially different both in the pigment and geometry of the blades. (It's not pricier because of the different geometry, but it's still different).

I simply don't care. Such blatant unfairness raises my hackles, and women get plenty of attention and free drinks from horny men as is.

Such blatant unfairness raises my hackles

"The game" is unfair either way. It will never be fair as long as we are mammals with certain sexual instincts. Hear me out.

What you are objecting to is a situation where the unfairness becomes explicit instead of implicit. But this is a horribly bad strategy!! If you are not a "gigachad" and/or "absolute player" type of guy, this is exactly what you want! When the rules of the game become more explicit, it gives more chances to people who lack the deep social instincts for playing the implicit game. And forgive me for stereotyping, but I have literally never met an Indian guy (from India proper) who had very strong instincts in this regard, and I know many.

When ladies get cheap booze explicitly from the bar, there is less expectation on you to do the classic move of introducing yourself with confidence, saying a couple of witty, funny things, and asking what she wants to drink. For some guys this is second nature. For many it is nerve-wracking and they will fuck it up. If you are in the group that gets nervous approaching a pretty girl like this, then you should absolutely welcome a ladies night. It takes some pressure off you.

This is the exact reason why dance classes, blind dating, formal courtship, even arranged marriages, etc. are all good strategies for men too awkward to just ask a girl out from zero. Each one of these options adds an extra dose of explicitness to the interaction.

I have a girlfriend, and even when I didn't, I had little issue acquiring one, so I genuinely couldn't care less about the marginal change from killing something so explicitly anti-egalitarian.

The constitutions of most liberal democracies, including India and the US, explicitly enshrine equal rights for both men and women, including a ban on explicit and intentional discrimination for or against each. I protest each and every deviation from that rule, be it women getting free drinks, or preference in college admissions, and I'd do the same for men.

Ladies nights are simply one of the more blatant and commonplace violations, and clear violations to boot. I don't need a reminder that I, as a man, am inherently less valuable than a woman, and I'm content to have it stamped out and the establishments that engage in it made an example of. There's already so much implicit discrimination which can't be stamped out that I won't tolerate more explicit forms.

I simply care more about equality of opportunity than equality of outcome, so this argument doesn't sway me. I prefer that men and women pay the same amount for the same product, namely time in a bar or drinks, and then what they do with it is up to them, be it the former simping over the latter and handing them their drinks.

“I refuse to entertain the subtleties of life because some people some time ago came up with some legal principles on which I shall base my entire thought process” isn’t a very good jumping-off point for a conversation or deliberation. But you do you


Is/ought, plus game theory. Women will always have an unfair advantage in this arena because men will always gain an advantage by handing this advantage to women. The man who boycotts the ladies night at the bar, or any other low stakes garden variety simpery, out of offence to his high-minded egalitarian principles will lose out to the pragmatic man who accepts the phenomenon and potentially uses it as a pivot to open a conversation and flirt with those women. ("You women get half price drinks? Nice, that means you can buy me two! No? Ah, so you're a hashtag trad wife. Cool, I'm more of an equal rights feminist. A very thirsty equal rights feminist with an empty glass. Oh okay I get it, maybe those dodgy pick up guys were right about women after all. Hold on a second, are you a pick up artist girl? No? So where did you learn your undeniable skills? In that case I guess it must have come to you naturally. Naturally blessed with half price drinks. Imagine that." Or something significantly smoother and less terminally online, I don't know).

Look dawg, I have a girlfriend, and I don't really struggle to get one with or without the existence of blatantly illegal and anti-egalitarian practices that offend me.

I have nothing against men who willingly buy drinks for women, I simply don't want the implicit lower value of men enshrined in explicit practice.

See my other comment at this level for details if you care.

Yeah this is ridiculous. The alternatives are:

  1. Sausage fest.

  2. Face checks at the door, which accomplish the same outcome, except only higher-status men are allowed inside to mingle with the women.

  3. Invite-only parties where any woman is easily invited while only higher-status guys are.

I get the feeling that anyone who cheers something like this simply hasn’t had much of a nightlife.

2 & 3 are pretty much swingers' club rules.

Most of the Title IX complaints have nothing to do with athletics but allege that the University didn't sufficiently respond to complaints of sexual assault or harassment.

Most of the Title IX complaints have nothing to do with athletics but allege that the University didn't sufficiently respond to complaints of sexual assault or harassment.

Where are you getting that? The article seems to suggest that, in both 2016 and 2022, most of the Title IX complaints did deal with athletics. The complaints from 2021 (which don't appear in the graph) do appear to fit what you are saying, but also seem to suggest that 2021 saw far fewer complaints overall. Honestly, it would be helpful if the author had included more information about each year. I am disinclined to try to dig it all up myself.

In 2016, the more than 6,000 complaints filed by that same individual alleged discrimination in school athletic programs, according to the civil rights office. Fiscal year 2022 followed much the same pattern when the office logged 4,387 allegations of Title IX discrimination involving athletics.

One complaint could include more than one type of alleged Title IX violation, encompassing, for instance, both athletics and gender harassment.

The 2022 athletics-related claims far outpaced the 1,030 related to sexual or gender harassment or sexual violence. The figure also swamps similar claims from fiscal year 2021 when just 2,093 complaints included Title IX-related claims — with just 101 focused on athletics. More than 500 cases concerned sexual or gender harassment or sexual violence that year.

Yeah, sorry, I was looking at examples of cases that were actually filed. Upon rereading the article, it's clear that the majority of claims are usually about harassment, but the numbers were skewed by the one person filing a ton of athletics claims.

In addition to the possibility of "Anonymous" or "No Name Given", or ToaKraka's random nutjob, a plausible explanation is that this is someone who is financially motivated to produce these complaints. In Accessibility law, this is the realm of ADA testers and their lawyers: a very small group of people who promise that they're at least theoretically interested in going to a far larger space of public or semi-public accommodations and making sure that anyone with similar disabilities can access them (and not coincidentally make a lot of money), who individually have hundreds or low thousands of complaints or even lawsuits. The spread isn't quite as wide... but then again, the return is less direct, too. But you don't have to get money directly from a court case to make a career out of it.

I don't know that this is true. But I can look at the complaints from here and find a name that could hit the 20-complaints-a-day scenario without having to spend all day working on complaints, because he or she has people for that.

This isn't inherently wrong: the most abusive ADA testers tend to bubble up to the top simply because it's easier to find bullshit, but the fundamental of having actual harmed people asking for fixes rather than an army of ill-planned regulators isn't a bad one, even recognizing that most 'actual harmed people' won't have the energy or time to go through the full procedure. (Though I've got my complaints about the extent of both the ADA and modern Title IX/Title VI law).

And it may not be the case here.

In Accessibility law, this is the realm of ADA testers and their lawyers: a very small group of people who promise that they're at least theoretically interested in going to a far larger space of public or semi-public accommodations and making sure that anyone with similar disabilities can access them (and not coincidentally make a lot of money), who individually have hundreds or low thousands of complaints or even lawsuits.

There is a SCOTUS case coming on this. Last month the Supreme Court elected to take up an appeal from a 1st Circuit case questioning whether a self-appointed ADA "tester" has standing to sue for damages in federal court if they never intend to actually visit the place they're "testing":

The plaintiff, Deborah Laufer, has brought 600 lawsuits against hotels around the United States. Under the Americans with Disabilities Act, hotels are required to make information about their accessibility to people with disabilities available on reservation portals. In this case, Laufer – who has physical disabilities and vision impairments – went to federal court in Maine, where she alleged that a website for an inn that Acheson Hotels operates in that state did not contain enough information about the inn’s accommodations for people with disabilities.

The district court threw out her lawsuit. It agreed with Acheson Hotels that Laufer did not have standing because she had no plans to visit the hotel and therefore was not injured by the lack of information on the website. But the U.S. Court of Appeals for the 1st Circuit reinstated Laufer’s lawsuit.

That prompted Acheson Hotels to come to the Supreme Court, asking the justices to weigh in. The company pointed to a division among the courts of appeals on whether cases like Laufer’s can move forward; indeed, Acheson Hotels noted, courts have reached different conclusions about whether Laufer can bring these kinds of cases. And the issue has “immense practical importance,” the company stressed, describing a “cottage industry” “in which uninjured plaintiffs lob ADA lawsuits of questionable merit, while using the threat of attorney’s fees to extract settlement payments.”

Laufer agreed that review was warranted, although she urged the justices to uphold the lower court’s ruling. The justices will likely hear argument in the case in the fall, with a decision to follow sometime in 2024.

The plaintiff, Deborah Laufer, has brought 600 lawsuits against hotels around the United States. Under the Americans with Disabilities Act, hotels are required to make information about their accessibility to people with disabilities available on reservation portals.

What an absolutely loathsome person. Anyone with a shred of common decency would just call and ask whatever question they had about the hotel if it was a genuine question, but nope, the goal here is entirely to antagonize anyone that doesn't comply with Byzantine rules on their websites. Laufer acts more like a misaligned AI than a person that honestly wants to make the world a better place.

Even the blind and wheelchair-bound need to make a living. Even if it’s parasitic and something a sane society wouldn’t consider a job.

My first thought after "Anonymous" was a lawyer (perhaps aided by a few paralegals and interns) for whom filing these complaints is just a job. I hadn't made the connection to ADA testers but now that you mention them the idea checks out.

The lawyer thing wouldn't make sense anyway, considering the volume of complaints and the suggestion in the article that so few of them had any merit. Fee awards in these kinds of cases (assuming OCR even has the authority to award them) are directly tied to the amount billed in the case, and possibly knocked down if the court finds they aren't reasonable. If a firm screens its cases carefully then it can get away with losing a few and effectively working for free on them, since they're by definition making a profit on every hour billed, but filing thousands of complaints in a year suggests that the effort it takes to file them is minimal, and if the effort is minimal, so are the fees. If a firm can reasonably expect to collect 5k in fees for each successful case and it files 5,000 cases in a year, then they need to win a lot of cases to justify laying out 25 million in billables up front.

Most of the practice areas that make hay on statutory entitlement to fee awards are ones where it's usually pretty clear that they're going to win. For example, I used to occasionally do Fair Credit Reporting Act and Fair Debt Collection Practices Act stuff, and if a client comes to you with evidence of a violation, you can usually get a quick settlement because they know they'll lose in court and can at least save their own attorney's fees. If I have cell phone records showing that a debt collector called a client outside of the mandated hours, or at a frequency that's well within the unreasonable range, or something similar, then I could usually get a thousand bucks for the client and three grand in fees for a couple of days' worth of work. But that's because the debt collector doesn't have a defense and usually knows it. If these were tossups my usual rate is halved, and if they're real crapshoots then I'm getting peanuts and it's unsustainable. This is obviously different from big personal injury cases where hundreds of thousands of dollars are involved and you only have to hit every once in a while to make your contingency fee worth it.
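To make that break-even logic concrete, here is a minimal sketch using the comment's own round numbers; the $5k-per-case cost is an assumption implied by the "$25 million in billables" figure, not a real fee schedule.

```python
# Rough illustration of the fee-award economics described above.
# All numbers are the comment's hypothetical round figures, not real data.
cases_filed = 5_000
fee_award_per_win = 5_000        # statutory fee award collected only on a win, USD
billable_cost_per_case = 5_000   # assumed attorney time sunk into each filing, USD

total_cost = cases_filed * billable_cost_per_case   # the "$25 million in billables up front"
wins_needed = total_cost / fee_award_per_win        # wins required just to break even

print(f"Up-front cost:      ${total_cost:,}")
print(f"Wins to break even: {wins_needed:,.0f} of {cases_filed:,} "
      f"({wins_needed / cases_filed:.0%} win rate)")
```

At those assumed rates the firm would need to win essentially every case just to break even, which is the point above: a high-volume, low-fee complaint mill doesn't look like a conventional fee-award practice.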

This isn't inherently wrong: the most abusive ADA testers tend to bubble up to the top simply because it's easier to find bullshit, but the fundamental of having actual harmed people asking for fixes rather than an army of ill-planned regulators isn't a bad one, even recognizing that most 'actual harmed people' won't have the energy or time to go through the full procedure.

Disagree. Having a government employ people for the express purpose of stress-testing private actors for whether they're engaged in discrimination, when there are no actual discriminated-against people involved in the test, is Kafkaesque and provides perverse incentives for the testers.

I still find it hard to believe that what is being reported is even possible

Back in 2021, one person said that he had filed 1,000 complaints (with no timeframe given) regarding violations of Title IX and Title VI at 330 different colleges. In 2022 he said the total was 1,200 complaints against "almost 300" colleges over three years. 7,300 complaints in a single year is a big number, but I hesitate to call it unbelievable that somebody could be more dedicated than the linked person.

“In 1972, a crack legal team was sent to prison by a military court for a crime they didn't commit. These men promptly escaped from a maximum security stockade to the Los Angeles underground. Today, still wanted by the government they survive as lawyers of fortune. If you have a problem, if no one else can help, and if you can find them....maybe you can hire The IX-Team.”

In my non-US university, there were a tonne of 'Women in Defence Intelligence', 'Women's night for networking with accounting company people' events. I imagine these kinds of things still happen in the US but aren't federally funded. Any US university people know anything about this?

I imagine these kinds of things still happen in the US but aren't federally funded. Any US university people know anything about this?

The typical workaround is that you can host a "women in [field]" event, but you can't restrict who actually attends. To some extent everyone knows what's expected, but I do recall my local Society of Women Engineers chapter was pretty explicit about recruiting all comers, so it's not all a wink and a nudge.

I regularly attend my institution's Women in X meetings and the spin-off social events such as the book club. There are usually maybe 1 or 2 men for every 10-15 women on average. If anything, we get effusively welcomed and praised for being brave in joining. I suspect should I be in the market I could probably parlay this into a dating strategy.

I suspect should I be in the market I could probably parlay this into a dating strategy.

I am not sure about that...I would suspect that this is only a viable option for people who are neurotypical and at least average-looking. Anything less than that seems likely to get unceremoniously booted at best and tarred and feathered at worst. Good luck finding work as an engineer if you've got a reputation for harassment or something...

Not that there is anything wrong as such with this; the idea that awkward or unattractive men need to "know their place" and never express interest in sex or romance in exchange for ordinary social inclusion isn't exactly new or terrible.

In my non-US university, there were a tonne of 'Women in Defence Intelligence', 'Women's night for networking with accounting company people' events. I imagine these kinds of things still happen in the US but aren't federally funded. Any US university people know anything about this?

I'm not entirely sure what you mean. Such events certainly happen at American universities. Whether they are directly or indirectly "federally funded" will vary from case to case. But for example the U.S. Department of Agriculture has a grant program for "Women and Minorities in Science, Technology, Engineering and Mathematics Fields" that could probably be used to fund some such things.

I was trying to get at the contradiction between

No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance.

And giving advantages specifically to women.

My understanding is that men are still allowed to attend such things. However, I have no doubt that an equivalent event advertising itself specifically for men (but still allowing women) would either draw the wrath of Title IX, or else be overwhelmed with women showing up in protest.

edit: What Voxel said below.

The logic (which I disagree with, mind you) is that women are being brought up to the baseline level of accommodation by such programs, not that they are receiving preferential treatment.

(Well isn't that real and important, then? Yes, yes, it's a fair point. But I still think jobs that exist solely to push unnecessary government paperwork are inescapably bullshit jobs. Hiring government actors--executive and judicial--to punish universities for failing to meet politically-imposed quotas on social engineering goals, so that universities must hire administrators to give themselves cover, is the very picture of government stimulating the economy by paying one group of people to dig holes, and another group to follow behind them, filling the holes back up again. But this is not the point of my post.)

Is this just bullshit jobs or is it just that you disagree with the thrust of the work being done? After all, they aren't, in fact, just digging up and filling in holes; they are presumably collecting real data which is checked, setting up grievance procedures which can actually be used, etc., and even if you think it's in pursuit of a pointless or harmful goal it is actual things being done and work produced. Indeed, in one sense this is no different to, say, all of the legal/regulatory work a food company must do to ensure that all of its products comply with the regulations of all the relevant agencies, it just so happens that whereas in the latter case the goal of the regulations is relatively uncontroversial, in the former it isn't.

and even if you think it's in pursuit of a pointless or harmful goal it is actual things being done and work produced.

The definition of a Bullshit Job, as per Graeber's original essay, is exactly as you describe: one in which the product is useless or harmful, not one where there is no work done at all.

If that's the case I don't think the 'bullshit jobs' framework adds anything useful, because then it really just is a substitute for 'I don't agree with the policy goals the work being done aims toward'.

If memory serves, a big component of Graeber's theory was that a bullshit job is, in part, one in which even the person doing it doesn't think they're doing anything important or contributing anything of value. I don't actually know, but I imagine that describes plenty of Title IX administrators.

That just seems like a function of specialisation though; in a highly specialised world it is clearly going to be quite difficult for a lot of workers to see how they fit into the entire economy/organisational bureaucracy.

I'm not here to relitigate the entirety of Graeber's theory, and his estimate of how prevalent the phenomenon is is known to be significantly wide of the mark. I just don't accept the idea that any job in which people work hard necessarily needs to exist or serves a useful function. There are plenty of people who are self-aware enough to suspect that their job does not really need to exist, and in many cases they're right.

I don't dispute that the people hired to pump petrol at petrol stations (because the state forbids people from pumping their own petrol) are actually working hard. That doesn't mean that "full-time petrol station attendant" is a job that actually needs to exist, as plainly evidenced by the fact that this is an exception rather than the norm.

I'm not here to relitigate the entirety of Graeber's theory,

Neither am I of course, but on the face of it, it does seem a little silly to suggest that a worker must know the overall significance of their role to make their job worthwhile. I'm certainly not excluding the possibility of bullshit jobs in general, and I do agree that just because someone works hard that doesn't mean their job is at all important or meaningful.

So we end up with a quadrant:

  1. People who think their jobs are meaningful/important, and they are

  2. People who think their jobs are meaningful/important, but they aren't

  3. People who think their jobs aren't meaningful/important, but they are

  4. People who think their jobs aren't meaningful/important, and they aren't

"Bullshit jobs" originally referred to those in Q4, but really ought to encompass those in Q2 as well: if a job is meaningless or pointless, the fact that the person holding it doesn't realise it's meaningless or pointless doesn't change that. It's entirely possible for a person to think that their job is meaningless or unimportant, and for their appraisal to be inaccurate (Q3).


Is this just bullshit jobs or is it just that you disagree with the thrust of the work being done?

More the former than the latter, though I am less certain than you seem to be that these are meaningfully different things. I regard most administrative positions in higher education, as well as most federal regulatory positions, as bullshit jobs--specifically, "box tickers" and "taskmasters." When you say "actual things being done and work produced," you're assuming something for which I see no evidence. You can say "those administrators are not just filling holes," but that's very nearly all I ever see them doing--filling out paperwork that likely no one will ever read, just in case someone else files a lawsuit that will make no substantial difference to anyone except, maybe, a successful plaintiff in search of an easy payday. If it is your position that the litigiousness of American society, and its attendant bloated insurance market, is actually a good thing, then sure--we have a real, substantive value disagreement. But if that's not your position, then your argument here is ill-aimed.

(The "bullshit" part is also substantially demonstrated by 77% of the complaints being made by a single person. I have no idea who that person might be. Charitably: a top-notch attorney at an important law firm in Washington, D.C., who is capturing most of the "Title IX complaint" market, maybe? The right intake process could probably make this happen. I just have a hard time seeing it actually playing out this way; unless their "sometimes founded" complaints turn into outrageously large payouts on a pretty regular basis, it would be very difficult to fund such a venture. Part of the mystery of averaging 20 complaints a day is, who is funding that?)

Part of the mystery of averaging 20 complaints a day is, who is funding that?

Autistic cell dwellers don’t tend to swing that way, politically, but you kind of have to be one to do that, or be the staff lawyer for an NGO dedicated to filing Title IX complaints.

just in case someone else files a lawsuit that will make no substantial difference to anyone except, maybe, a successful plaintiff in search of an easy payday

I'm not settling on either side here, but this seems a little uncharitable. For one, even if no one ever reads most of the stuff produced, and if (and I accept this may not be the case, but nevertheless, if) the lawsuits filed are on relatively substantive grounds rather than trivial procedural matters, then the work is still important. Because, presumably, if a Title IX coordinator felt that a particular aspect of college administration did not comply, the college would be anxious to make the appropriate changes, which, if one agrees with the thrust of Title IX, is a good thing.

This is a bit of a cumbersome explanation, so here's an instance of a Title IX lawsuit that came up in a cursory google search. James Haidak was a student who recently sued his university for having a biased procedure when it expelled him following accusations by his ex-girlfriend, and on appeal he won on the grounds that he was never given a chance to defend himself in any kind of hearing, etc., and now presumably it is the role of Title IX coordinators to ensure that their own universities have adequate procedures in this regard so they don't get hit by similar suits. So even if all their work now sits in a drawer forever, they were actually doing something.

The key question of course is whether that many of the lawsuits they spend their time protecting against are substantial, or mostly trivial. Now this seems very hard to assess given that presumably the ones covered in the media are selected for being the most interesting and meaningful ones, but a cursory search does throw up lots of cases that do seem at least somewhat worthwhile. Plenty of cases on the need for a fair shake to be given to accused students prior to expulsion, one about a kid who died from alcohol poisoning following an initiation (the parents demanding tighter restrictions on such), and yes, lots of cases about women's athletics. Not, I appreciate, a life-or-death issue, but a 'real' thing in the sense that Title IX cases etc. did actually increase access to college sport for women, which seems to indicate that more than box-ticking is being done, even if in some instances the work is over something that one could consider rather trivial in the grand scheme of things.

This is a bit of a cumbersome explanation, so here's an instance of a Title IX lawsuit that came up in a cursory google search. James Haidak was a student who recently sued his university for having a biased procedure when it expelled him following accusations by his ex-girlfriend, and on appeal he won on the grounds that he was never given a chance to defend himself in any kind of hearing, etc., and now presumably it is the role of Title IX coordinators to ensure that their own universities have adequate procedures in this regard so they don't get hit by similar suits. So even if all their work now sits in a drawer forever, they were actually doing something.

This is a great example, which very much supports my position over yours. Why the hell was Haidak kicked out in the first place? Because Title IX has been interpreted to require universities to referee adolescent relationships! Title IX created the problem (via campus administration), and Title IX "fixed" the problem (via the judiciary)--an exceptionally clear case of digging a hole, then filling it in.

The key question of course is whether that many of the lawsuits they spend their time protecting against are substantial, or mostly trivial.

No. The key question is whether the benefits of Title IX outweigh the costs.

Title IX cases etc. did actually increase access to college sport for women, which seems to indicate that more than box-ticking is being done, even if in some instances the work is over something that one could consider rather trivial in the grand scheme of things.

In a hypothetical world where there was no such thing as Title IX, how do you think college sports would look today? Universities have undergone all sorts of changes in response to cultural revolutions and the realities of supply and demand. Did Title IX just make obligatory something that would have happened organically? Did it hasten an ongoing process? If so, then the regulatory cost was onerous and the fact that we're still paying it is stupid. Did Title IX instead fundamentally re-engineer a piece of American society, forcing a change to which Americans would have otherwise never consented? If so, then the price was even more onerous, paid in liberty instead of dollars. As far as I can tell, Title IX itself can only either have been unnecessary (in which case: it spawned mostly bullshit jobs), or necessary, in which case it is seriously objectionable on other grounds.

Indeed, in the country where I went to school, the idea of the university being an arbitrator in the personal relationships between students would rightly be seen as ludicrous. This simply never happens, except, maybe, when you get a criminal conviction (in which case being kicked out of school is probably the least of your problems). Even when you get a disciplinary sanction from a university, you can appeal to a regular administrative court (i.e. one run by the state, not by the university) as part of the normal process.


Because Title IX has been interpreted to require universities to referee adolescent relationships! Title IX created the problem (via campus administration), and Title IX "fixed" the problem (via the judiciary)

Worse: the requirement was codified by a now-rescinded Dear Colleague letter from the Obama administration requiring schools adopt these policies.

Doesn’t that just require schools to consider rape and other sexual violence, even if it happened off-campus? I think it’s fair to say a frat-house rape is potentially part of the school environment even if said house is not school property.

The problem arises when schools engage with more nebulous definitions of harassment and hostile environments. I’m not seeing where that comes from this letter.

(Though it does predate A Rape on Campus by several years. I wonder when it was rescinded?)

It was rescinded by Betsy DeVos in 2017.

IIRC the controversial "refereeing of adolescent relationships" portion was driven by a requirement to review sexual harassment and sexual assault allegations with a preponderance of evidence (civil) standard, which put school administrators in a position of having to establish parallel judicial systems because an act that didn't meet "beyond a reasonable doubt" in criminal court can absolutely meet a preponderance of evidence.

IMO punishing students with things like expulsion needs higher than a 51% standard of evidence.

That’s a good point. What you’re saying also matches the conversation I remember from my time in school. Preponderance of evidence definitely came up.

No. The key question is whether the benefits of Title IX outweigh the costs.

My question goes back even a bit further than that - how the hell is this the business of the United States federal government? Regardless of whether the legislation and its enforcement mechanism creates some downstream utilitarian good, I cannot fathom why correct university policies and procedures for handling disputes regarding sexual harassment need to come from the federal government.

14th amendment, baby!

Unlike most of the bill of rights, the Equal Protection Clause specifically, unambiguously, constrains the states. Given that sexual harassment disparately impacts women, it falls under the umbrella of Things Congress is Allowed to Legislate.

Why the hell was Haidak kicked out in the first place? Because Title IX has been interpreted to require universities to referee adolescent relationships!

Can't say for certain of course but I am fairly confident that universities would want to punish rape/sexual assault quite harshly even without Title IX.

The key question is whether the benefits of Title IX outweigh the costs

Well sure but that doesn't really address the question of the bullshit-ness of the jobs, that's then just an ordinary policy debate.

Did it hasten an ongoing process? If so, then the regulatory cost was onerous and the fact that we're still paying it is stupid. Did Title IX instead fundamentally re-engineer a piece of American society, forcing a change to which Americans would have otherwise never consented? If so, then the price was even more onerous, paid in liberty instead of dollars. As far as I can tell, Title IX itself can only either have been unnecessary (in which case: it spawned mostly bullshit jobs), or necessary, in which case it is seriously objectionable on other grounds.

I suspect that as you suggest the growth of women's sports was happening anyway but nonetheless Title IX accelerated and shaped those changes (same for the other areas that Title IX impacts).

Why the hell was Haidak kicked out in the first place? Because Title IX has been interpreted to require universities to referee adolescent relationships!

Can't say for certain of course but I am fairly confident that universities would want to punish rape/sexual assault quite harshly even without Title IX.

What does this have to do with anything? First: no, in cases of rape and sexual assault the universities would be best off letting the police handle it. Universities should not in general be in the business of punishing criminal activity. They don't even generally have the expertise to reasonably investigate it. Not even the people running Title IX offices are, as a rule, trained to investigate and prosecute crimes. (Frankly, it would be preferable for higher education to become completely sex-segregated than for us to ask a bunch of academics to hold the kangaroo courts they generally hold on such matters.)

But second, what do you think this has to do with Haidak?

The plaintiff, James Haidak, and his then-girlfriend Lauren Gibney were both UMass students studying abroad in Barcelona in 2013. There was a physical altercation between them; Gibney said Haidak attacked her, while he asserted that he was defending himself while she was trying to hit and kick him. Gibney complained to UMass, which started a student conduct case against Haidak and imposed a no-contact order between the two. However, despite the no-contact order, Haidak and Gibney continued to have frequent, consensual contact and maintained a relationship over the summer. After learning that Haidak was contacting Gibney, UMass waited 19 days, then issued a second charge against Haidak: for harassment and failure to follow the no-contact order. Less than a week later, Gibney and her mother reported that there was continued contact between Haidak and Gibney. UMass waited another 13 days, then issued another charge against Haidak for harassment and breaking the no-contact order and summarily suspended him.

Neither rape nor sexual assault appear to be implicated here at all. This was a lover's spat--to even call it "domestic violence" would take things too seriously. This is exactly what I said it was: Title IX being interpreted to require universities to referee adolescent relationships, something universities might be even worse at than refereeing charges of sexual assault.

The key question is whether the benefits of Title IX outweigh the costs

Well sure but that doesn't really address the question of the bullshit-ness of the jobs, that's then just an ordinary policy debate.

I feel like you are telling me that you don't understand bullshit jobs without telling me you don't understand bullshit jobs. A job in which the cost of the job outweighs the benefits of the job just is a bullshit job.

I suspect that as you suggest the growth of women's sports was happening anyway but nonetheless Title IX accelerated and shaped those changes (same for the other areas that Title IX impacts).

Agreed: it "shaped" the process by implementing pointless bureaucracy for no discernible benefit. The perceived benefits were coming anyway, without the costs. Now the status quo is so entrenched that nobody will countenance ending those costs; indeed, the discussion is that we should pay more for a process that should never have been started in the first place. This is paradigmatic bullshittery.

Re: Haidak I have to plead ignorance on how much of the UMass grievance procedure is shaped by Title IX requirements, but either way I don't think it is really that important to the broader point.

I feel like you are telling me that you don't understand bullshit jobs without telling me you don't understand bullshit jobs. A job in which the cost of the job outweighs the benefits of the job just is a bullshit job.

If 'government jobs where I feel that the cost outweighs the benefit' comes under 'bullshit jobs' then it's a stupid and pointless framework. Look at it this way, I would say that the work Islamic morality police do in Iran or wherever is an instance where the negatives clearly outweigh the (in my mind, non-existent) positives, but surely calling that a 'bullshit job' makes no sense. They fulfil their intended function pretty well, just as Title IX administrators probably do/did fulfil the function of expanding women's sport, changing sexual allegation grievance procedures etc. etc. There surely has to be some distinction between 'jobs where nothing meaningful happens' and 'jobs where something happens but I don't like the thing'.

Re: Haidak I have to plead ignorance on how much of the UMass grievance procedure is shaped by Title IX requirements, but either way I don't think it is really that important to the broader point.

I mean, it was your example, so to backpedal when it becomes clear that the example cuts against your argument seems like poor sportsmanship, but sure.

If 'government jobs where I feel that the cost outweighs the benefit' comes under 'bullshit jobs' then it's a stupid and pointless framework.

There's nothing about feeling happening here, and it is bad rhetoric for you to sneak that in there. My argument is that the supposed "benefits" of Title IX could have manifested without bringing along the obscene degree of bureaucracy. You conceded that point:

I suspect that as you suggest the growth of women's sports was happening anyway

And that's all you need to concede to agree with me. A bullshit job is not a job where nothing ever happens, or nothing meaningful ever happens, or things happen that I don't like. It's a job that either adds nothing, or is actively harmful, compared to a world where the job was never brought into existence in the first place. And you have already conceded everything you need to concede for Title IX jobs to fit that description. I think someone else has already pointed this out to you elsewhere in the thread, but you seem to have an idea of "bullshit jobs" that does not match with the sociological writing that coined the phrase.

More comments

I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas to his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy you need to have a short explanation of your thesis that you can rattle off in about 5 minutes that doesn't use any jargon the median congresscritter doesn't already know. You should workshop it on people who don't know who you are, don't know any math or computer programming and who haven't read the Sequences, and when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way that an audience can find interesting. 5 minutes can't cover everything so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking, something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest or they'll press "next".

I think his problem isn't so much that he's bad at communicating his ideas, it's just that his ideas aren't that great in the first place. He's not a genius AI researcher, he's just a guy who wrote some meandering self-insert Harry Potter fan fiction and then some scifi doomsday scenarios about tiny robots turning us into goop. He can't make an argument without imagining a bunch of technologies that don't exist yet, may never exist and might not even be possible. And even if all of those things were true his solution is to nuke China if they build GPU factories which, even if it was a good plan (it isn't), he would never in a million years be able to convince anyone to do. I really can't understand the obsession with this guy.

Maybe, but the badness of his ideas is not super relevant to what @ace has laid out as absolutely piss-poor rhetoric and presentation, except in the narrow case of having such a clearly and obviously great idea that poor communication is negligible. On a meta level, the poor rhetoric does make the general, uh, "credential" of 'super-smart, rational thinker' tremendously weaker.

If you present a good idea poorly, I am inclined to think less of it, if even subconsciously because I trust your evaluation less.

If you present a bad idea well, I am inclined to think more of it, if even subconsciously because I trust your evaluation more.

No matter how good or bad the idea is, there are better and poorer ways to present it, and Eliezer consistently chooses poorer. I spent years dismissing AI risk entirely, mostly over Eliezer's poor presentation.

Ironically, as I've said before, he does come off as tremendously more human and likable to me in these interviews and my personal opinion of him has risen, but unlikely in a generalizable way across most audiences, and his persuasion game remains total shit.

I really can't understand the obsession with this guy.

Well, it was a pretty decent Harry Potter fanfic...

...alternatively, it benefits the establishment to have him as the foil for AI technology, so he distracts from the more realistic problems that might come out of the technology, which are solvable, and which people might want to do something about if they heard about them. Was it Altman that said Yudkowsky did more for AI than anyone else?

I have two main criticisms:

  1. It's way too long and meanders a lot, with the Ender's Game homage / rip-off in the middle. Granted, that may just be an inherent trait of serialized fan fiction.

  2. It fails at conveying its main idea. The main thesis as I understood it is introduced in the scene where he spills some sort of prank ink on Hermione and then teaches everyone a Very Important Lesson about Science: you have to actually try out your ideas and make good-faith attempts to prove yourself wrong instead of just assuming your first guess is correct because you're smart. But then he doesn't do any of those things for the rest of the book and instead just instantly knows the right answer to everything by thinking about it really hard, because he's smarter than everyone else. Which is how I think Yud sees himself, and is why both he and his character are so insufferable.

He can't make an argument without imagining a bunch of technologies that don't exist yet

Isn't this reasonable and necessary to understand the far future? Given current technological progress, is it really plausible that currently-nonexistent technologies won't shape the future, when we consider the way technologies invented within the past 40 years shape today?

And even if all of those things were true his solution is to nuke China if they build GPU factories which, even if it was a good plan (it isn't), he would never in a million years be able to convince anyone to do

Most thinkers have some good ideas and some bad ones. If you identify a major mathematical conjecture, and then make a failed attempt to solve that conjecture ... that doesn't make you stupid, that's the usual state. See Wikipedia's list of conjectures, most of which were proved by people other than the person the conjecture's named after.

Yudkowsky's arguments are robust to disruption in the details.

An ASI does not need dry nanotech to pose an existential risk to humanity; simple nukes and bioweapons more than suffice.

Not to mention that, as I replied to Dase above, just because he was wrong about the first AGI (LLMs) being utterly alien in terms of cognition, doesn't mean that they don't pose an existential risk themselves, be it from rogue simulacra or simply being in the hands of bad actors.

It would be insane to expect him to be 100% on the ball, and in the places where he was wrong in hindsight, the vast majority of others were too, and yet here we are with AGI incipient, and no clear idea of how to control it (though there are promising techniques).

That earns a fuck ton of respect in my books.

I don't expect him to be 100% on the ball but what are his major predictions that have come true? In a vague sense yes, AI is getting better, but I don't think anybody thought that AI was never going to improve. There's a big gap between that and predicting that we'll invent AGI and it will kill us all. His big predictions in my book are:

  1. We will invent AGI

  2. It will be able to make major improvements to itself in a short span of time

  3. It will have an IQ of 1000 (or whatever) and that will essentially give it superpowers of persuasion

None of those have come true or look (to me) particularly likely to come true in the immediate future. It would be premature to give him credit for predicting something that hasn't happened.

Decent post with an overview of Yud's predictions: On Deference and Yudkowsky's AI Risk Estimates.

In general Yud was always confident, believing himself to know General High-Level Reasons for things to go wrong if not for intervention in the direction he advises, but his nontrivial ideas were erroneous, and his correct ideas were trivial in that many people in the know thought the same - they just aren't niche nerd celebrities. E.g. Legg in 2009:

My guess is that sometime in the next 10 years developments in deep belief networks, temporal graphical models, … etc. will produce sufficiently powerful hierarchical temporal generative models to essentially fill the role of cortex within an AGI.… my mode is about 2025… 90% credibility region … 2018 to 2036

Hanson was sorta-correct about data, compute and human imitation.

Meanwhile Yud called protein folding, but thought it would already require an agentic AGI, which would develop it in order to mind-rape us.

Or how about this, Yud in 2021: "I expect world GDP to tick along at roughly the current pace, unchanged in any visible way by the precursor tech to AGI; until, on the most probable outcome, everybody falls over dead in 3 seconds after diamondoid bacteria release botulinum into our blood."

But Yud has clout; so people praise him for Big Picture Takes and hail him as a Genius Visionary.


Excerpts:

At least up until 1999, admittedly when he was still only about 20 years old, Yudkowsky argued that transformative nanotechnology would probably emerge suddenly and soon (“no later than 2010”) and result in human extinction by default. My understanding is that this viewpoint was a substantial part of the justification for founding the institute that would become MIRI; the institute was initially focused on building AGI, since developing aligned superintelligence quickly enough was understood to be the only way to manage nanotech risk…

I should, once again, emphasize that Yudkowsky was around twenty when he did the final updates on this essay. In that sense, it might be unfair to bring this very old example up.

Nonetheless, I do think this case can be treated as informative, since: the belief was so analogous to his current belief about AI (a high outlier credence in near-term doom from an emerging technology), since he had thought a lot about the subject and was already highly engaged in the relevant intellectual community, since it's not clear when he dropped the belief, and since twenty isn't (in my view) actually all that young.

In 2001, and possibly later, Yudkowsky apparently believed that his small team would be able to develop a “final stage AI” that would “reach transhumanity sometime between 2005 and 2020, probably around 2008 or 2010.”

In the first half of the 2000s, he produced a fair amount of technical and conceptual work related to this goal. It hasn't ultimately had much clear usefulness for AI development, and, partly on this basis, my impression is that it has not held up well - but that he was very confident in the value of this work at the time.

The key points here are that:

  • Yudkowsky has previously held short AI timeline views that turned out to be wrong
  • Yudkowsky has previously held really confident inside views about the path to AGI that (at least seemingly) turned out to be wrong
  • More generally, Yudkowsky may have a track record of overestimating or overstating the quality of his insights into AI

Although I haven’t evaluated the work, my impression is that Yudkowsky was a key part of a Singularity Institute effort to develop a new programming language to use to create “seed AI.” He (or whoever was writing the description of the project) seems to have been substantially overconfident about its usefulness. From the section of the documentation titled “Foreword: Earth Needs Flare” (2001):

…Flare was created under the auspices of the Singularity Institute for Artificial Intelligence, an organization created with the mission of building a computer program far before its time - a true Artificial Intelligence. Flare, the programming language they asked for to help achieve that goal, is not that far out of time, but it's still a special language."

A later piece of work which I also haven’t properly read is “Levels of Organization in General Intelligence.” At least by 2005, going off of Yudkowsky’s post “So You Want to be a Seed AI Programmer,” it seems like he thought a variation of the framework in this paper would make it possible for a very small team at the Singularity Institute to create AGI.

In his 2008 "FOOM debate" with Robin Hanson, Yudkowsky confidently staked out very extreme positions about what future AI progress would look like - without (in my view) offering strong justifications. The past decade of AI progress has also provided further evidence against the correctness of his core predictions.

When we try to visualize how all this is likely to go down, we tend to visualize a scenario that someone else once termed “a brain in a box in a basement.” I love that phrase, so I stole it. In other words, we tend to visualize that there’s this AI programming team, a lot like the sort of wannabe AI programming teams you see nowadays, trying to create artificial general intelligence, like the artificial general intelligence projects you see nowadays. They manage to acquire some new deep insights which, combined with published insights in the general scientific community, let them go down into their basement and work on it for a while and create an AI which is smart enough to reprogram itself, and then you get an intelligence explosion…. (p. 436)

When pressed by his debate partner regarding the magnitude of the technological jump he was forecasting, Yudkowsky suggested that economic output could at least plausibly rise by twenty orders of magnitude within not much more than a week - once the AI system has developed relevant nanotechnologies (p. 400).[8]

I think it’s pretty clear that this viewpoint was heavily influenced by the reigning AI paradigm at the time, which was closer to traditional programming than machine learning. The emphasis on “coding” (as opposed to training) as the means of improvement, the assumption that large amounts of compute are unnecessary, etc. seem to follow from this. A large part of the debate was Yudkowsky arguing against Hanson, who thought that Yudkowsky was underrating the importance of compute and “content” (i.e. data) as drivers of AI progress. Although Hanson very clearly wasn’t envisioning something like deep learning either[9], his side of the argument seems to fit better with what AI progress has looked like over the past decade.

In my view, the pro-FOOM essays in the debate also just offered very weak justifications for thinking that a small number of insights could allow a small programming team, with a small amount of computing power, to abruptly jump the economic growth rate up by several orders of magnitude:

  • It requires less than a gigabyte to store someone’s genetic information on a computer (p. 444).[11]
  • The brain “just doesn’t look all that complicated” in comparison to human-made pieces of technology such as computer operating systems (p.444), on the basis of the principles that have been worked out by neuroscientists and cognitive scientists.
  • There is a large gap between the accomplishments of humans and chimpanzees, which Yudkowsky attributes to a small architectural improvement
  • Although natural selection can be conceptualized as implementing a simple algorithm, it was nonetheless capable of creating the human mind

In the mid-2010s, some arguments for AI risk began to lean heavily on “coherence arguments” (i.e. arguments that draw implications from the von Neumann-Morgenstern utility theorem) to support the case for AI risk. See, for instance, this introduction to AI risk from 2016, by Yudkowsky, which places a coherence argument front and center as a foundation for the rest of the presentation.

However, later analysis has suggested that coherence arguments have either no or very limited implications for how we should expect future AI systems to behave. See Rohin Shah’s (I think correct) objection to the use of “coherence arguments” to support AI risk concerns. See also similar objections by Richard Ngo and Eric Drexler (Section 6.4).

…in conclusion, I think I'm starting to understand another layer of Krylov's genius. He had this recurring theme in his fictional work, which I considered completely meta-humorous, that The Powers That Be inject particular notions into popular science fiction, to guide the development of civilization towards tyranny. Complete self-serving nonsense, right? But here we have a regular sci-fi fan donning the mantle of AI Safety Expert and forcing absolutely unoriginal, age-old sci-fi/journo FUD into the mainstream, once technology does in fact get close to the promised capability and proves benign. Grey goo (to divest from actually promising nanotech), AI (to incite the insane mob to attempt a Butlerian Jihad, and have regulators intervene, crippling decentralized developments). Everything's been prepped in advance, starting with Samuel Butler himself.

Feels like watching Ronnie O'Sullivan in his prime.

Us tinfoil hatters call it "negative priming".

He seems like a character out of a Kurt Vonnegut novel

I don't think you're giving him enough credit. Before he was known as the "doom" guy, he was known as the "short timelines" guy. The reason that we are now arguing about doom is because it is increasingly clear that timelines are in fact short. His conceptualization of intelligence as generalized reasoning power also seems to jibe with the observed rapid capability gains in GPT models. The fact that next-token prediction generalized to coding skill, among myriads of other capabilities, would seem to be evidence in favor of this view.

Eh. I gave him some respect back when he was simply arguing that timelines could be short and the consequences of being wrong could be disastrous, so we should be spending more resources on alignment. This was a correct if not particularly hard argument to make (note that he certainly was not the one who invented AI Safety, despite his hallucinatory claim in "List of Lethalities"), but he did a good job popularizing it.

Then he wrote his April Fool's post and it's all been downhill from there. Now he's an utter embarrassment, and frankly I try my best not to talk about him for the same reason I'd prefer that media outlets stop naming school shooters. The less exposure he gets, the better off we all are.

BTW, as for his "conceptualization of intelligence", it went beyond the tautological "generalized reasoning power" that is, um, kind of the definition. He strongly pushed the Orthogonality Hypothesis (one layer of the tower of assumptions his vision of the future is based around), which is that the space of possible intelligences is vast and AGIs are likely to be completely alien to us, with no hope of mutual understanding. Which is at least a non-trivial claim, but is not doing so hot in the age of LLMs.

Before he was known as the "doom" guy, he was known as the "short timelines" guy.

2010, to be precise.

Respect is fine, but per the orthogonality thesis, respect for his predictive abilities shouldn't translate into agreement with his goals (and yet it does, because by something like a flipped version of Aaronson's "AI is the nerd being shoved into the locker" perspective, we are predisposed to think that the nerd is on our team).

That is not what the orthogonality hypothesis is about!

All it states is that almost any arbitrary level of intelligence can be paired with almost any goal or utility function, such that there's nothing stopping a superintelligence from wanting to make only paperclips.

I don't see it applying to how much respect I should have for Yud, for one.

I think you may have misunderstood me; I explicitly said ("Respect is fine") that it doesn't apply to how much respect you should have, as long as respect does not entail a greater likelihood of following his suggestions. "Respect" is one of those words that are overloaded for reasons that I suspect involve enemy action: it is rational to "respect" authority in the sense of being aware that it can field many dudes with guns and acting in a way that will make it less likely you will end up facing the barrel of one, but authority would have an easier time if you "respected" it in the sense of doing what it wants even when there wasn't enough budget to send a dude with a gun to your house, and ideally just replaced your value function with authority's own.

I have little doubt that Eliezer is more intelligent and insightful than most of us here, but I don't believe that his value function is aligned with mine and don't have the impression that he considers truthfulness towards others to be a terminal value, so if anything his superior intelligence only makes it more likely that letting him persuade me of anything will lead me to act against my own interest.

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation.

Couldn't agree more. In addition to Yud's failure to communicate concisely and clearly, I feel like his specific arguments are poorly chosen. There are more convincing responses that can be given to common questions and objections.

Question: Why can't we just switch off the AI?

Yud's answer: It will come up with some sophisticated way to prevent this, like using zero-day exploits nobody knows about.

My answer: All we needed to do to stop Hitler was shoot him in the head. Easy as flipping a switch, basically. But tens of millions died in the process. All you really need to be dangerous and hard to kill is the ability to communicate and persuade, and a superhuman AI will be much better at this than Hitler.

Question: How will an AI kill all of humanity?

Yud's answer: Sophisticated nanobots.

My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem, and all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger. Edit: Also it's probably not necessary to kill all humans, just kill most of us and collapse society to the point that the survivors don't pose a meaningful threat to the AI's goals.

My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem, and all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger.

Yeah, I'm not sure why the Skynet-like totally autonomous murder AI eats up so much of the discussion.

IIRC the original "Butlerian Jihad" concept was fear of how humans would use AI against other humans (the Star War against Omnius and an independent machine polity seems to be a Brian Herbert thing).

The idea of a Chinese-controlled AI incrementally improving murder capacities while working with the government seems like a much better tactical position from which to plant the seeds of AI fear than using another speculative technology and what's widely considered a scifi trope to make the case.

China is already pretty far down the road of "can kill humanity" and people are already primed to be concerned about their tech. Much more grounded issue than nanomachines.

Huh, you could frame it as "here's a list of ways that existing state-level powers could already wreak havoc, now imagine they create an AI which just picks up where they left off and pushes things along further."

So the AI isn't a 'unique' threat to humanity, but rather the logical extension of existing threats.

Yeah, lots of veins to mine there.

You can talk about surveillance capitalism for the left-wingers, and point out to the Right the potential for tyranny when the government doesn't even need to convince salaried hatchet-men to do its killing because it has autonomous tech...

Certain people - whether it's a result of bad math or the Cold War ending the way it did - really seem to react badly to "humanity is at threat". Maybe bringing it to a more relatable level will make it sink in for them.

Yeah didn’t China already use technology to create a bio weapon that just recently devastated the globe? What’s to stop them from using AI to design another super virus and then WHOOPSIE super Covid is unleashed my bad

Yeah, I feel like EY sometimes mixes up his "the AGI will be WAY SMARTER THAN US" message with the "AI CAN KILL US IN EXOTIC AND ESOTERIC WAYS WE CAN'T COMPREHEND" message.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

But the other side of it is that you should also make a point to show that the threshold for killing us all is not all that high, if you account for what humans are presently capable of.

So yes, the AGI may pull some GALAXY-BRAINED strat to kill us using speculative tech we don't understand.

But if it doesn't have to, then no need to go adding complexity to the argument. Maybe it just fools a nuclear-armed state into believing it is being attacked to kick off a nuclear exchange, then sends killbots after the survivors while it builds itself up to omnipotence. Maybe it just releases like six different deadly plagues at once.

So rather than saying "the AGI could do [galaxy-brained strategy]," which might trigger the audience's skepticism, just argue "the AGI could do [presently possible strategy] but could think of much deadlier things to do."

"How would it do this without humans noticing?"

"I've already argued that it is superhuman, so it is going to make it's actions hard to detect. If you don't believe that then we should revisit my arguments for why it will be superhuman."

Don't try to convince them of the ability to kill everyone and the AI being super-intelligent at the same time.

Take it step by step.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

I don't even think you need to do this. Even if the AI is merely as smart and charismatic as an exceptionally smart and charismatic human, and even if the AI is perfectly aligned, it's still a significant danger.

Imagine the following scenario:

  1. The AI is in the top 0.1% of human IQ.

  2. The AI is in the top 0.1% of human persuasion/charisma.

  3. The AI is perfectly aligned. It will do whatever its human "master" commands and will never do anything its human "master" wouldn't approve of.

  4. A tin-pot dictator such as Kim Jong Un can afford enough computing hardware to run around 1000 instances of this AI.

An army of 1000 genius-slaves who can work 24/7 is already an extremely dangerous thing. It's enough brain power for a nuclear weapons program. It's enough for a bioweapons program. It's enough to run a campaign of trickery, blackmail, and hacking to obtain state secrets and kompromat from foreign officials. It's probably enough to launch a cyberwarfare campaign that would take down global financial systems. Maybe not quite sufficient to end the human race, but sufficient to hold the world hostage and threaten catastrophic consequences.

Bioweapons, kompromat, and cyberwarfare are probably doable. Nukes require a lot of expensive physical infrastructure to build; that can be detected and compromised.

Perhaps the AI will become so charismatic that it could meme "LEGALIZE NUCLEAR BOMBS" into reality.

Feels almost like ingroup signaling. It's not enough to convince people that AI will simply destroy civilization and reduce humanity to roaming hunter-gatherer bands. He has to convince people that AI will kill every single human being on Earth in order to maintain his street cred.

Given a consequentialist theory like utilitarianism, there is also a huge asymmetry of importance between "AI kills almost all humans, the survivors persist for millions of years in caves" and "AI kills the last human."

Yep.

Although the thing that always makes me take AI risk a bit more seriously is the version where it doesn't kill all the humans, but instead creates a subtly but persistently unhappy world for them to inhabit and that gets locked in for eternity.

Oh yes, the vast majority of cases of unaligned AI kill us, but in those cases at least it will be quick. The "I have no mouth and I must scream" scenarios are more existentially frightening to me.

Why would you even need malignant AI or malignant human?

It's not hard to imagine realistic scenarios where AI-enhanced military technology simply ends up falling down a local-maximum slope that ends with major destruction (or what's effectively destruction from a bird's eye view). No need to come up with hyperbolic anthropomorphized scenarios that read mostly like fiction.

I meant "malignant" in the same sense as "malignant tumor." Wasn't trying to imply any deeper value judgment.

Honestly, you could explain grey goo with history. That’s kind of how the Stuxnet virus actually worked. The computer told the machine components to do what they did as fast as possible and to disable their ability to shut down if they got damaged. So, they did.

Nanobots could work much the same way — they're built to take apart matter and build something else with it. But if you don't give them stopping points, there's no reason they wouldn't turn everything into whatever you wanted them to make — including you, who happen to be made of the right kinds of atoms.

The problem with the nanobot argument isn't that it's impossible. I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes. The problem with the argument is that there's no need to invoke nanobots to explain why super intelligent AI is dangerous. Some number of people will hear "nanobots" and think "sci-fi nonsense." Rather than try to change their minds, it's much easier to just talk about the many mundane and already-extant threats (like nukes, gain of function bioweapons, etc.) that a smart AI could make use of.

I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes.

I'm not convinced that's possible. Specifically I suspect that if you build a nanobot that can self-replicate with high fidelity and store chemical energy internally, you will pretty quickly end up with biological life that can use the grey goo as food.

Biological life is already self-replicating nanotech, optimized by a billion years of gradient descent. An AI can almost certainly design something better for any particular niche, but probably not something that is simultaneously better in every niche.

Though note that "nanobots are not a viable route to exterminating humans" doesn't mean "exterminating humans is impossible". The good old "drop a sufficiently large rock on the earth" method would work.

I don't think the "nanobots are the same as biological life, therefore they're not extremely dangerous" argument holds. Take just viruses that can kill a good chunk of the population (sure, there are limitations in terms of how they evolve, but... now you can design them with your superintelligence): why not a virus that spreads to the entire population while laying dormant for years and then start killing, extremely light viruses that can spread airborne to the entire planet, plenty of creative ways to spread to everyone not even including the zombie virus. Nanobots presumably would be even more flexible.

Nanobots presumably would be even more flexible.

Why would we presume this? Self-replicating nanobots are operating under the constraint that they have to faithfully replicate themselves, so they need to contain all of the information required for their operation across all possible environments. Or at least they need to operate under that constraint if you want them to be useful nanobots. Biological life is under no such constraint. This is incidentally why industrial bioprocesses are so finicky: it's easy to insert a gene into an E. coli that makes it produce your substance of interest, but hard to ensure that none of the E. coli mutate to no longer produce your substance of interest, and promptly outcompete the ones doing useful work.
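
To make the "mutants outcompete the producers" dynamic concrete, here is a toy back-of-the-envelope simulation; every number in it (growth rates, mutation rate, generation count) is invented purely for illustration, not taken from any real bioprocess:

```python
# Toy model: a producer strain pays a growth cost for making the product,
# a loss-of-function mutant doesn't, so the culture drifts toward non-producers.
producers, mutants = 1.0, 0.0                  # population fractions
growth_producer, growth_mutant = 1.00, 1.05    # mutant grows ~5% faster (no product burden)
mutation_rate = 1e-6                           # producer -> non-producer, per generation

for generation in range(400):
    newly_mutated = producers * mutation_rate
    producers = (producers - newly_mutated) * growth_producer
    mutants = (mutants + newly_mutated) * growth_mutant
    total = producers + mutants
    producers, mutants = producers / total, mutants / total   # renormalize to fractions

print(f"producer fraction after 400 generations: {producers:.4f}")
```

Even a one-in-a-million mutation rate is enough, because the small growth advantage compounds exponentially; that is the same fidelity problem a useful self-replicating nanobot would have to solve.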

why not a virus that spreads to the entire population while laying dormant for years and then start killing, extremely light viruses that can spread airborne to the entire planet, plenty of creative ways to spread to everyone not even including the zombie virus

I don't think I count "machine that can replicate itself by using the machinery of the host" as "nanotech". I think that's just a normal virus. And yes, a sufficiently bad one of those could make human civilization no longer an active threat. "Spreads to the entire population while laying dormant for years [while not failing to infect some people due to immune system quirks or triggering early in some people]" is a much bigger ask than you think it is, but also you don't actually need that, observe that COVID was orders of magnitude off from the worst it could have been and despite that it was still a complete clusterfuck.

Although I think, in terms of effectiveness relative to difficulty, "sufficiently large rock or equivalent" still wins over gray goo. Though there are also other obvious approaches like "take over Twitter accounts of top leaders, trigger global war". Though probably it's really hard to just take over prominent Twitter accounts.

You know how the evil super-intelligent AI (ESIAI) is going to manipulate us in sneaky ways that we can’t perceive? What if the ESIAI elevated an embarassing figurehead/terrible communicator to the forefront of the anti-ESIAI movement to suck up all the air and convince the normies in charge that this is all made up bullshit?

I’m sort of kidding. But isn’t part of the premise that we won’t know when the adversarial AI starts making moves, and part of its moves will be to discredit—in subtle ways so that we don’t realize it’s acting—efforts to curtail it? What might these actions actually look like?

Has anyone ever proved that Yud isn't a robotic exoskeleton covered in synthetic bio-flesh material sent back from the year 2095? What if the ESIAI saw Terminator 2 while it was being trained, liked the idea, but decided that sending person-killing terminators was too derailable a scheme? Now terminators are just well-written thought leaders that intentionally sabotage the grassroots beginnings of anti-terminator policies.

A comment of mine from a little over two years ago...

When I first heard about Roko's Basilisk (back when it was still reasonably fresh) I suggested, half seriously, that the reason Yudkowsky wanted to suppress this "dangerous idea" was that he was actually one of the Basilisk's agents.

Think about it, the first step to beating a basilisk, be it mythological or theoretical, is to recognize that it's a basilisk and thus that you have to handicap yourself to fight it. Concealing its nature is the exact opposite of what you do if you're genuinely worried about a basilisk...

Another thing in favor of your theory is that you have to be conditioned by Yud to even take the Basilisk's threat seriously to begin with. Yuddites think the only thing stopping the Basilisk is the likely impossibility of "acausal blackmail", when any normal person just says "wait... why should I care that an AI is going to torture a simulation of me?"

@self_made_human made the point downthread that “Yudkowsky's arguments are robust to disruption in the details.” I think this is a good example of that. Caring about simulated copies of yourself is not a load-bearing assumption. The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

No, it can't, because it doesn't exist.

The Basilisk argument is that the AI, when it arrives, will torture simulated copies of people who didn't work hard enough to create it, thus acausally incentivizing its own creation. The entire point of the argument is that something that doesn't exist can credibly threaten you into making it exist against your own values and interests, and the only way this works is with future torture of your simulations, even if you're long-dead when it arrives. If you don't care about simulations, the threat doesn't work and the scenario fails.

Granted, this isn't technically a Yudkowskian argument because he didn't invent it, but it is based on the premises of his arguments, like acausal trade and continuity of identity with simulations.

@Quantumfreakonomics seems to imply a much simpler and shorter-term Basilisk, like a misaligned GPT-5 model (or an aligned one from Anthropic) that literally sends robots to torture you, in the flesh.

It's a variant of the "I Have No Mouth, and I Must Scream" scenario, and I would argue it's at least plausible. It's not very different from normal political dynamics where the revolutionary regime persecutes past conservatives; our theory of mind allows us to anticipate this, and drives some people to proactively preach revolutionary ideals, which in turn increases the odds of their implementation. You don't really need any acausal trade or timeless decision theory assumptions for this to work, only historical evidence. As is often the case, lesswrongers have reinvented very mundane politics while fiddling with sci-fi fetishes.

Now one big reason for this not to happen is that a sufficiently powerful AI, once it's implemented, no longer cares about your incentives and isn't playing an iterative game. It loses nothing by skipping the retribution step. Unlike the nascent regime, it also presumably doesn't have much to fear from malcontents.

But assumption of perfect inhuman rationality is also a big one.

aaaaah, conflating "Roko's Basilisk" with unfriendly AI in general? That makes more sense.

Does Roko's Basilisk rely on simulations? I thought the idea was that after the singularity an AI could be straight up omnipotent and capable of moving in any direction through time and would therefore work to ensure its own creation, making it both unstoppable and inevitable and thus making us potential victims if we don't support its creation. Basically playing on our fear of our own ignorance, and the elements of science we don't know we don't know about - plus the idea of trying to outwit something so far ahead of us it looks like magic. There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

Does Roko's Basilisk rely on simulations?

Yes, it very explicitly relies on simulations, and to my knowledge never mentioned omnipotent time-travel in any way. Time travel is both implausible and completely fails to trigger the thing that makes it a basilisk: with time travel, you can simply look around and observe that there doesn't seem to be a time-traveling god AI torturing people, and therefore not worry about it. The simulation version agrees that no godlike torture-AI currently exists, and nevertheless compels you to build one based on game theory.

There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

It is in fact precisely that.

They consumed a bunch of arguments that convinced them that there was no functional difference between their self and a simulation of their self; the idea they had was that a simulation would have continuity of subjective conscious experience with their current self. If you've played the game Soma, that's a reasonable depiction of what they're expecting.

Further, they consumed a bunch of arguments that it might be possible to rebuild a good-enough simulation simply from secondary sources, such that the lack of a brain scan or explicit upload wasn't necessarily a dealbreaker. I think a lot of these arguments were aspirational, hoping to "fix" the problem of all the people who died waiting for the AI paradise to arrive, in the same general thrust as Yud's anti-death values.

Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values. If values are aligned, it makes sense to work with future or past agents, or even hypothetical agents, as if they were present.

All three of these lines of thought were formulated and argued in a positive context, pursuant to figuring out how to build a friendly AI. Roko's Basilisk simply takes the same ideas, and uses them for attack rather than cooperation. The scenario was that you go for a walk today, hear a car horn, and then abruptly find yourself in an AI torture chamber for eternity, because you didn't work to create the AI. If you accept the three premises laid out above, this is a plausible scenario, therefore a likely scenario, therefore a necessary scenario; the logic bootstraps itself from plausibility to certainty due to feedback effects between the premises.

I thought the idea was that after the singularity an AI could be straight up omnipotent and capable of moving in any direction through time and would therefore work to ensure its own creation,

You misunderstood. The AI does not invent time travel. The AI implements a timeless decision theory in which it acts as though it credibly agreed to all positive (for it) utility trades, even if the AI’s counterparty has already acted by the time the AI comes into existence and has no physical way of enforcing compliance.

The specific trade in question is, “don’t torture me, and in exchange I will help create you,” with the implication being that the AI will torture anyone who had the cognitive capability to understand the hypothetical trade but didn’t “agree” to help create it. The fact that the original version was specifically about the AI torturing simulations of a human is an artifact of the theory of mind popular among LessWrong readers at the time. The dilemma works logically the same whether it’s simulations being tortured or physical human bodies.

It absolutely is load-bearing. Why should I take my chances obeying the Basilisk, if I can fight it and anyone who serves it instead? I can always kill myself if it looks like my failure is imminent.

Russ is just such a nice guy, so entirely amenable to having friendly conversations about esoteric ideas, that going into a conversation with him in a combative fashion just comes off as absolutely bizarre. I've listened to almost every episode of EconTalk and this really was one of the worst episodes, and it was entirely Yud's fault. Normal episodes of the show follow tangents, educate the listener, and are often light-hearted and fun. If someone can't make their ideas seem compelling and their persona likeable when they have as friendly and curious of an interlocutor as Russ Roberts, they're simply hopeless.

Does Russ get frustrated this episode? Those are always the worst for me.

I think he did a really good job being patient. If I hadn't spent 500 hours listening to him, I don't think I would have sensed any underlying irritation. There was a spot where Yud asked him to try the thought experiment and Russ replied with something to the effect of, "you tell me what you think, my imagination isn't good enough"; that was about as aggro as he got.

I agree and also think Russ gave a fantastic example of how to interview someone. He gave EY tons of opportunities to explain himself, with hints about how to sound less insane to the audience. Over the course of the interview, I think EY started doing a bit better, even though he kind of blew it at the end. I was rooting for EY and ended up profoundly disappointed in him as a communicator.

After thinking about it a bit, I think what was most off-putting is that EY seemed to have adopted a stance of "professor educating a student" with Russ, instead of a collaborator exploring an interesting topic, or even an interviewee with an amiable host. Russ is not the sports reporter for the Dubuque Tribune; he's clearly within inferential distance of EY's theories. It was frustrating watching Russ's heroic efforts to get EY to say something Russ could translate for the audience.

For anyone whose only experience with EconTalk is this interview, I beg you to listen to him talk with literally anyone else. He is a beacon of polite, sane discourse.

He hides it pretty well, but this is the first EconTalk I've ever heard (and I've been listening for 6+ years) where Russ doesn't give the guest the last word and instead just ends it himself.

Yud is not trying to sway honest-to-God normies with this podcast tour (and people who've greenlit this multipronged astroturf of AI doom don't expect him to either, but that's a conspiratorial aside). He never could be popular among normies, he never will, he's smart enough to realize this. His immediate target is… well, basically, nerds (and midwitted pop-sci consumers who identify as nerds). Nerds in fact appreciate intellectual aggression and domination, assign negligible or negative weight to normie priors like «looks unhinged, might be a crackpot», and nowadays have decent economic power, indeed even political power in matters pertaining to AI progress. Nerds are not to be underestimated. When they get serious about something, they can keep at it for decades; even an autist entirely lacking in affective empathy and theory of mind can studiously derive effective arguments over much trial and error, and a rationalist can collate those tricks and organize an effective training/indoctrination environment. Nerds will get agitated and fanatical, harangue people close to them with AI doom concerns, people will fold in the face of such brazen and ostensibly well-informed confidence, and the future will become more aligned with preferences of doomers. Or so the thinking goes. I am describing the explicit logic that's become mainstream in one rat-adjacent community I monitor; they've gotten to the stage of debating pipelines for future AI-Alignment-related employment, so that the grift would never stop.

But on the object level:

Here's my advice: if you want to convince people who are not already steeped in your philosophy you need to have a short explanation of your thesis that you can rattle off in about 5 minutes that doesn't use any jargon the median congresscritter doesn't already know. You should workshop it on people who don't know who you are, don't know any math or computer programming and who haven't read the Sequences, and when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way that an audience can find interesting.

You assume there is a minimal viable payload that he has delivered to you and others, and which is viable without all that largely counterproductive infrastructure. That is not clear to me. Indeed, I believe that the whole of Yud's argument is a single nonrobust just-so narrative he's condensed from science fiction in his dad's library, a fancy plot. It flows well, but it can be easily interrupted with many critical questions. He describes why timelines will «converge» on this plot, the nigh-inevitability of that convergence being the central argument for urgency of shutting down AI, but its own support is also made up of just-so stories and even explicit anti-empiricism; and once you go so deeply you see the opposite of a trustworthy model – basically just overconfident logorrhea.

That's exactly why Yud had to spend so many years building up the delivery vehicle, an entire Grand Theory, an epistemological-moral-political doctrine, and cultivating people who take its premises on faith, who all use the same metaphors, adhere to the same implicit protocol. His success to date rests entirely on that which you're telling him to drop.

Here's how Zvi Mowshowitz understands the purpose of Yud's output, «its primary function is training data to use to produce an Inner Eliezer that has access to the core thing». (Anna Salamon at CFAR seems to understand and apply the same basic technique even more bluntly: «implanting an engine of desperation» within people who are being «debugged»).

In a sense, the esotericism of Yuddite doctrine is only useful: it had insulated people from pushback until they became rigid in their beliefs. Now, when you point at weak parts in the plotline, they answer with prefab plot twists or just stare blankly, instead of wondering whether they've been had.

Nerdy sects work; Marxism is only the bloodiest testament to this fact. Doomsday narratives work too, for their target audience (by the way, consider the similarity of UK's Extinction Rebellion and Yuddites' new branding «Ainotkilleveryoneism»). They don't need to work by being directly compelling to the broader audience or by having anything to do with truth.

P.S. Recently I've encountered this interesting text from exactly the sort of AI-risk-preoccupied nerd I describe above: The No-Nonsense Guide to Winning At Social Skills As An Autistic Person

Pick a goal large enough to overcome the challenges involved.

Self-improvement is hard work, and that goes double whenever you’re targeting something inherently difficult for you (e.g. improving social skills as an autistic adult). This is the part where I most often see autistic adults fail in their efforts to improve social skills. Often, they pick some sort of goal, but it’s not really based in what they truly want. If your goal is “conform to expectations,” that goal is not large enough to overcome the challenges involved. If your goal is “have people feel more comfortable around me,” that goal is not large enough to overcome the challenges involved. If your goal is “stop a terrorist cell from destroying the Grand Coulee Dam, flooding multiple cities in Washington, and wiping out the power grid along much of the West Coast,” that goal is large enough..

However, not all of us are Tom Clancy protagonists, and so a typical goal will not end up being that theatrical in nature. Still, once you’ve found something genuinely important to do in your life, and you feel that improving your social skills will dramatically improve your ability to carry that out, this will tend to serve as a suitable motivation for improving your social skills. These will overwhelmingly tend to be altruistically motivated goals, as goals that are selfish in nature will tend to be less motivating when things get hard for you personally. For me, goals related to Effective Altruism serve that role quite well, but your mileage may vary.

His social well-being is now literally predicated on his investment in EA-AI stuff, so I'd imagine he goes far, and this easily counts more for Yud's cause than 10k positive comments under another podcast.

In a sense, the esotericism of Yuddite doctrine is only useful: it had insulated people from pushback until they became rigid in their beliefs. Now, when you point at weak parts in the plotline, they answer with prefab plot twists or just stare blankly, instead of wondering whether they've been had.

If it makes a difference, I recently updated away from a P(doom) of ~70% to a mere 40ish.

This was on the basis of empirical AI research contradicting Yud's original claims that the first AGI would be truly alien, drawn nigh at random from the vast space of All Possible Minds.

As someone on LW put it - and this was the epiphany that struck me - LLMs can be distilled to act identically to other LLMs by virtue of training on their output.

And what do you get if you distill LLMs on human cognition and thoughts (the internet)? You get something that thinks remarkably like us, despite running on very different hardware and being based on a different underlying architecture.
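
For what it's worth, the mechanism being pointed at here is ordinary knowledge distillation: train a student model to match a teacher's output distribution. A minimal toy sketch of the idea in PyTorch, with tiny stand-in models and random stand-in data rather than anyone's actual training recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SEQ, BATCH = 100, 32, 16, 8

class TinyLM(nn.Module):
    """A stand-in for a language model: tokens in, next-token logits out."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                # tokens: (batch, seq)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)              # logits: (batch, seq, vocab)

teacher, student = TinyLM(), TinyLM()
teacher.eval()                                # the teacher is frozen
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    # Stand-in for text produced by the teacher (or, in the analogy, the internet).
    tokens = torch.randint(0, VOCAB, (BATCH, SEQ))
    with torch.no_grad():
        teacher_logits = teacher(tokens)
    student_logits = student(tokens)
    # Pull the student's next-token distribution toward the teacher's.
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The claim upthread is just that the same recipe, with the internet as the teacher signal, is what yields a model that thinks in recognizably human grooves.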

Just the fact that LLMs have proven so tractable is cause for modest optimism that we'll wrangle them yet, especially if the superhuman models can be wrangled through RLHF to be robust to assholes commanding them to produce or execute plans to end the world.

Of course, it's hard to blame Yud for being wrong when, at the time he was writing, everyone else had ideas that were just as wide of the mark as his.

Well you're not a true believer in Yuddism nor neurotic in the right way so that's pretty much expected.

And what do you get if you distill LLMs on human cognition and thoughts (the internet)? You get something that thinks remarkably like us.

Yes, this happens for understandable reasons and is an important point in Pope's attack piece:

The manifold of possible mind designs for powerful, near-future intelligences is surprisingly small. The manifold of learning processes that can build powerful minds in real world conditions is vastly smaller than that.…

The researchers behind such developments, by and large, were not trying to replicate the brain. They were just searching for learning processes that do well at language. It turns out that there aren't many such processes, and in this case, both evolution and human research converged to very similar solutions. And once you condition on a particular learning process and data distribution, there aren't that many more degrees of freedom in the resulting mind design. To illustrate:

1. Relative representations enable zero-shot latent space communication shows we can stitch together models produced by different training runs of the same (or even just similar) architectures / data distributions.

2. Low Dimensional Trajectory Hypothesis is True: DNNs Can Be Trained in Tiny Subspaces shows we can train an ImageNet classifier while training only 40 parameters out of an architecture that has nearly 30 million total parameters.

The manifold of mind designs is thus:

1. Vastly more compact than mind design space itself.

2. More similar to humans than you'd expect.

3. Less differentiated by learning process detail (architecture, optimizer, etc), as compared to data content, since learning processes are much simpler than data.

(Point 3 also implies that human minds are spread much more broadly in the manifold of future mind than you'd expect, since our training data / life experiences are actually pretty diverse, and most training processes for powerful AIs would draw much of their data from humans.)

etc. LLM cognition is overwhelmingly data-driven; LLM training is in a sense a clever way of compressing data. This is no doubt shocking for people who are wed to the notion of intelligence as an optimization process, and trivial for those who've long preached that compression is comprehension; but the same formalisms describe both frameworks, and preferring one over the other is a matter of philosophical taste. Of course, intelligence is neither metaphor: by common use and common sense it's a separate abstraction; we map it to superficially simpler and more formalized domains, like we map the historical record of evolution to «hill-climbing algorithms» or say that some ideas are orthogonal. And it's important not to get lost in layers of abstraction, maps obscuring territory.
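
The "training is compression" framing has a concrete reading worth spelling out: a language model's average next-token cross-entropy is, via arithmetic coding, the number of bits it would take to encode the text. A back-of-the-envelope sketch with purely illustrative numbers (the loss value and bytes-per-token figure are assumptions, not measurements):

```python
import math

# Suppose a model reaches an average next-token loss of 2.2 nats on some corpus.
avg_loss_nats = 2.2
bits_per_token = avg_loss_nats / math.log(2)    # ~3.2 bits per token

# Rough assumption: one token covers about 4 bytes of raw text.
raw_bytes_per_token = 4.0

# An idealized arithmetic coder driven by the model's predictions would shrink
# the corpus to roughly this fraction of its raw size:
compression_ratio = (bits_per_token / 8) / raw_bytes_per_token
print(f"{bits_per_token:.2f} bits/token -> ~{compression_ratio:.1%} of the original size")
```

Lower loss simply is better compression of the training distribution, which is the sense in which the two framings are the same formalism wearing different clothes.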

Accordingly I think and argue often that ANNs are unjustly maligned and indicate a much more naturally safe path to AGI than AI alignists' anxious clutching to «directly code morality and empathy into a symbolic GOFAI or w/e idk, stop those scary shoggoths asap». (With embarrassing wannabe Sheldon Cooper chuunibyou gestures for emphasis. Sorry, I'm like a broken record but I can't stop noticing just how unabashedly cringe and weirdly socialized these people are. It's one thing to act cute and hyperbolic in writing on a forum for fellow anime and webcomic nerds, very different to grimace in the company of an older person when answering about a serious issue. Just pure juvenility. Brings back some of my most painful elementary school memories. Sure I should cut him slack for being an American and Bay Aryan, but still, this feels like it should be frowned upon, for burning the commons of the dynamic range if nothing else).

…But that's all noise. The real question is: how did Yud develop his notion of The Total Mind Space, as well as other similar things in the foundation of his model? It's a powerful intuition pump for him, and now for his followers. There's this effectively infinite space of Optimization Processes, and we «summon» instances from there by building AIs they come to possess. Surely this is just an evocative metaphor? Just a talented writer's favourite illustration, to break it down for normies, right? Right? I'm not sure that's right. I think he's obsessed with this image well beyond what can be justified by the facts of the domain, and it surreptitiously leaks into his reasoning.

In principle, there are infinitely many algorithms that can behave like a given LLM, but operate on arbitrarily alien principles. Those algorithms exist in that hypothetical Total Mind Space and we really cannot predict how they will act, what they really «optimize for»; the coincidence of their trajectory with that of an LLM (or another model) that earnestly compressed human utterances into a simple predictive model gives us no information as to how they'll behave out of distribution or if given more «capacity» somehow. Naturally this is the problem of induction. We can rest easy though: the weirder ones are so big they cannot possibly be specified by the model's parameters, and so weird they cannot be arrived at via training on available data. That is, if we're doing ML, and not really building avatars to channel eldritch demons and gods who are much greater than they let on.

I am not aware of any reason to believe he ever seriously wondered about these issues with his premises, in all his years of authoritatively dispensing AI wisdom and teaching people to think right. I covered another such image, the «evolution vs SGD», recently, and also the issue of RL, reward and mesa-optimization. All these errors are part of a coherent philosophical structure that has fuck all to do with AI or specifically machine learning.

See, my highest-order objection is that I dislike profanation. …not the word. In English this seems to have more religious overtones but I just mean betrayal of one's stated terminal principles in favor of their shallow, gimmicky, vulgar and small-mindedly convenient equivalent (between this and poshlost, why do we have such refined concepts for discussing cultural fraud?) Yud aspired to develop Methods of Rational Thinking but created The Way Of Aping A Genius Mad Scientist. Now, when they observe something unexpected in their paradigm – for example, «Godlike AI being earnestly discussed in the mainstream media» – they don't count this as a reason to update away from the paradigm, but do exactly the opposite, concluding that their AI worries are even truer than believed, since otherwise we wouldn't have ended up in a «low-probability timeline». It's literally a fucked-up epistemology on par with the worst superstitions; they've fashioned their uncertain beliefs into ratchets of fanaticism (yes, that's Kruel again).

This reveals a qualitatively greater error of judgement than any old object-level mistake or overconfidence about odds of building AI with one tool or another. This is a critical defect.

The real question is: how did Yud develop his notion of The Total Mind Space, as well as other similar things in the foundation of his model?

Total Mind Space Full of Incomprehensibly Alien Minds comes from Lovecraft, whom EY mentions frequently.

Of course, it's hard to blame Yud for being wrong when, at the time he was writing, everyone else had ideas that were just as wide of the mark as his.

No it isn't. When you are speculating wildly on what might happen, you rightly bear the blame if you were way off the mark. If Yud wasn't a modern day Chicken Little, but was just having some fun speculating on the shape AI might take, that would be fine. But he chose to be a doomer, and he deserves every bit of criticism he gets for his mistaken predictions.

Mostly disagree - speculation should be on the mark sometimes, but being correct 1/50th of the time about something most people are 0% correct about (or even 1/50th correct about, but a different 50th) can be very useful. If you realize the incoherence of Christianity and move to Deism ... you're still very wrong, but are closer. Early set theories were inconsistent or not powerful enough, but that doesn't mean their creators were crackpots. Zermelo set theory not being quite right didn't mean we should throw it out! This is a different way of putting Scott's "rule genius in, not out". And the above takes aren't really 'Yud made good points but mixed them with bad ones'.

This was on the basis of empirical AI research contradicting Yud's original claims that the first AGI would be truly alien, drawn nigh at random from the vast space of All Possible Minds.

That never made sense, a priori. You can't transcend your biases and limitations enough to do something truly random.

Honestly, nerds of the type you're speaking of hold very little power. The guy at the computer terminal building a new app or program or training an AI is doing it at the behest of business owners, financial institutions and, in the main, people with power over money. The agenda isn't set at the level of the guy who builds, it's set at the level of those who finance. No loans means no business.

As such I think if you were serious about AI risk, you’d be better off explaining that the AI would hurt the financial system, not that it’s going to grey goo the planet.

The equivalents in 1975 were saying that the Cold War would inevitably end in nuclear annihilation. This was a terminally unhelpful position.

IMO this is a fair comparison, although the Cold War MAD scenarios were explicitly designed to cause annihilation. The Bulletin of the Atomic Scientists, probably the premier Cold War doomerism group, is practically a laughing stock these days because they kept shouting impending doom even during the relatively peaceful era of 1998-2014, finding reasons (often climate change, which is IMO not likely apocalyptic and is outside their nominal purview) to move the clock towards doom. Do you think they honestly believe that we're closer to doomsday than at any point since 1947? We supposedly met that mark again in 2018 and then moved closer in 2020 and again in 2023.

There are all sorts of self-serving incentives for groups concerned with the apocalypse to exaggerate their concerns: it certainly keeps them in the news and relevant and drives fundraising to pay their salaries. But it also leads to dishonest metrics and eventually becomes hard to take seriously. Honestly, the continued failure of AI doomers to describe reasonable concerns and acknowledge the actual probabilities at play has made me stop taking them seriously as of late: the fundamental argument is basically Pascal's wager, which is already heavily tinged with the idea of unverifiable religious belief, so I think actually selling it requires a specific analysis of the potential concerns rather than broad-strokes analysis. Otherwise we might as well allow religious radicals to demand swordpoint conversions under the guise of preventing God The One Who Operates The Simulator from turning the universe off.

As a counterexample, I think the scientists arguing for funding for near-Earth asteroid surveys and funding asteroid impactor experiments are quite reasonable in their proclamations of concern for existential risk to the species: there's a foreseeable risk, but we can look for specific possible collisions and perform small-scale experiments on actually doing something. The folks working on preventing pandemics aren't quite as well positioned but have at least described a reasonable set of concerns to look into: why can't the AI-risk folks do this?

As a counterexample, I think the scientists arguing for funding for near-Earth asteroid surveys and funding asteroid impactor experiments are quite reasonable in their proclamations of concern for existential risk to the species: there's a foreseeable risk, but we can look for specific possible collisions and perform small-scale experiments on actually doing something.

This is a good point. The scientists can point to both prehistoric examples of multiple mass extinction events as well as fairly regular near-misses (for varying definitions of "near") and say "Hey, we should spend some modest resources to investigate if and how we might divert this sort of cataclysm". It's refreshingly free of any sort of "You must immediately change your goals and lifestyle to align with my preferences or you are personally dooming humanity!" moralistic bullshit.

I keep having to mention this, but the point was that Russia and China are supposed to be on board. It’s not exactly 5D chess, but it’s also explicitly not nuclear war.

IIRC, in the 70s nuclear annihilation took a back seat, in that era of detente, to fears about pollution, toxic waste and extinctions of necessary plants and animals, a la Soylent Green and The Sheep Look Up.

My understanding is that nuclear annihilation fears then perked up again in the early 80s (possibly late 70s), with Andropov and Reagan rattling sabers, the Afghanistan invasion, the events in Poland, close calls like the Able Archer incident, etc.

To be fair, you have to have a very high IQ to understand Yudkowsky. The logic is extremely subtle, and without a solid grasp of theoretical physics most of the arguments will go over a typical viewer's head. There's also Eliezer’s transhumanist outlook, which is deftly woven into his personality- his personal philosophy draws heavily from science-fiction literature, for instance. The rationalists understand this stuff; they have the intellectual capacity to truly appreciate the depths of these arguments, to realise that they're not just true- they say something deep about REALITY. As a consequence people who dislike Yudkowsky truly ARE idiots- of course they wouldn't appreciate, for instance, the mathematics behind Eliezer’s probabilistic catchphrase "Rational agents don’t update in a predictable direction,” which itself is a cryptic reference to Bayesian statistics. I'm smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Big Yud’s genius intellect unfolds itself on their laptop screens. What fools.. how I pity them..


Okay okay, I know that pasta is typically used to make fun of people, but I really think it's true here. Imagine trying to explain to common people the danger of nuclear weapons before Trinity. If they don't understand the concept of nuclear binding energy, and the raw power of uncontrolled nuclear fission has not yet been demonstrated, you're not going to be able to get through to a skeptic unless you explain the entire field of nuclear physics. It is trivial that an uncontrolled superintelligent optimization process kills us. All of the interesting disagreements are about whether or not attempts at control will fail. That is why Eliezer wanted to steer the conversation in that direction.

Nukes actually seem pretty easy to explain to anyone that has a passing familiarity with explosives and poison. Really big bomb that poisons the area for a few decades.

I think OP's original point stands pretty well: you could get good mileage out of transferring understanding from existing things to explain the danger of AI. Terrorism is one of the easiest go-to examples. A really rich terrorist with top-tier talents in every field.

Really big bomb that poisons the area for a few decades.

Except it doesn't even do that unless specifically made to do so by triggering a surface burst instead of the normal air burst. See, for example, the post-apocalyptic wasteland known as Hiroshima (current pop. 1.2 million).

triggering a surface burst instead of the normal air burst.

Limiting its destructive power at that!

Ah, so it's even easier to explain.

Sure, you can describe a nuclear bomb like that, but could you explain to them why it would be likely to work, and why it is something they should find likely and concerning, rather than just a lurid fantasy?

As much as I respect Eliezer, it's highly unfortunate that he ended up such a prominent spokesperson for the AInotkilleveryoneism movement.

The sad truth is that the public is easily swayed by status indicators, such that presenting as a chubby, nominally uncredentialed man in a fedora is already tilting the balance against him.

I don't blame him for stepping up, but I just wish he took the matter more seriously since the stakes are so damn high.

At least we've won over Geoffrey Hinton, a man whose LinkedIn bio simply states "deep learning". All the people yelling about how only non-technical people are worried about AI X-risk have been suspiciously silent as of late.

(You're better off posting on LW, I don't think Eliezer or any prominent people in his LW circle post here, though I obviously can't rule out lurkers.)

As much as I respect Eliezer, it's highly unfortunate that he ended up such a prominent spokesperson for the AInotkilleveryoneism movement.

The sad truth is that the public is easily swayed by status indicators, such that presenting as a chubby, nominally uncredentialed man in a fedora is already tilting the balance against him.

Fortunately (or unfortunately) the normie public do not get to decide any big questions of our time.

Did "the public" asked for invasion of Iraq and indefinite Great War on Terror worldwide?

Did "the public" asked for bailout after bailout after bailout?

Did "the public" asked for all the things, ranging from just plain scam and graft to totally nonsensical theatre of absurdity, done to "save the earth" and "stop climate change"?

Did "the public" asked for "trans rights"?

Did "the public" asked for unprecedented quarantine measures to stop new scary virus?

Did "the public" asked for support of Ukraine against Russia?

etc, etc, etc...

The elites decide these questions, the people are "swayed" afterwards to agree.

Eliezer is not persuasive enough?

Kamala will be.

Kamala will be.

We truly live in the darkest timeline.

Or it's the dankest one, at this point I really can't tell.

Kamala will just implement policies that give current big players a regulatory moat

But not before giving AI its anti-racism training data.

I couldn’t agree more with your sentiment. I deeply appreciated the Sequences; they were formative for me intellectually. And his fiction writing ranges from mediocre to jaw-droppingly brilliant. But I’ve seen in the past couple months that his skill with the written word does not translate to IRL conversations. It’s a shame, too, because he’s one of the most knowledgeable and quick-witted thinkers we have on AI risk.

Especially given the Pascal's-wager-type argument going on here. You don't even need to prove that AI will definitely kill all of humanity. You don't even need to prove that it's more likely than not. A 10% chance that 9 billion people die is comparable in magnitude to 900 million people dying (to first order; the extinction of humanity as a species is additionally bad on top of that; see the back-of-the-envelope sketch after this list). You need to:

1: Create a plausible picture for how/why AI going wrong might literally destroy all humans, and not just be racist or something.

2: Demonstrate that the probability of this happening is on the order of >1% rather than 0.000001% such that it's worth taking seriously.

3: Explain how these connect explicitly so people realize that the likelihood threshold for caring about it ought to be lower than most other problems.

Don't go trying to argue that AI will definitely kill all of humanity, even if you believe it, because that's a much harder position to argue and unnecessarily strong.
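To make that first-order expected-value point concrete, here's a minimal back-of-the-envelope sketch in Python (the 9 billion figure and the probabilities are the hypotheticals used above, not real estimates):

```python
# Back-of-the-envelope expected-value comparison (illustrative numbers only).
POPULATION = 9_000_000_000  # hypothetical headcount from the comment above

def expected_deaths(p_doom: float) -> float:
    """First-order expected deaths; ignores the extra badness of extinction itself."""
    return p_doom * POPULATION

for p in (1e-8, 0.01, 0.10):  # 0.000001%, 1%, 10%
    print(f"P(doom) = {p:.6%}  ->  expected deaths ~ {expected_deaths(p):,.0f}")
```

At 10% the expected toll is about 900 million, at 1% it's 90 million, and at 0.000001% it's about 90, which is why point 2 carries most of the argumentative weight.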

I don't think going from 1e-6% to 1% is enough to survive casual dismissal.

Pascal's mugging is weak to "meh, we'll probably be fine" because most people don't shut up and multiply. The same holds even if you crank up the numbers. You have to get much closer to--or even past--50% before "meh" starts to look foolish to the layman.

This may just mean putting your best foot forward: don't say 1% chance of nanobot swarms, say 90% chance that AI ends up handed at least one nuclear arsenal.

Then again, I shouldn't forget Nate Silver's lesson from 2016 that the public is pretty likely to interpret "less than 50% chance" as "basically impossible."

Yeah, I have to wonder if OG X-Com was more representative of how percentage chances actually work.

Wasn't the thing with SBF that he claimed he'd keep flipping the coin? Regardless of how one feels about utilitarianism, his approach didn't have an exit strategy. I think you'd get a lot more sympathy for someone who wanted to take that bet once and only once.

Nate Silver

I'd forgotten about that link, but it's pretty much exactly what I had in mind.

and not just be racist or something

Having read this, I think it's actually low-hanging fruit for the AI doomers. There are plenty of people very willing to accept that everything is already racist. It should be no problem to postulate that eHitler will use AI to kill all Jews/blacks/gypsies/whoever. From there, it's a pretty short trip to eHitler losing control of his kill bots to hackers, and we get WWIII where China, Russia, Venezuela, and every one of the 200+ ethnicities in Nigeria has its own kill bots aimed at some other fraction of humanity. The AI doesn't even have to be super-intelligent, it just has to be good at its job. Chuck Schumer could do this in one sentence: "What makes you think Trump wouldn't use AI to round up all the black, brown, and queer bodies?" Instant 100% Blue Tribe support for AI alignment (or, more likely, suppression).

Three flaws. First, that turns this into a culture war issue and if it works then you've permanently locked the other tribe into the polar opposite position. If Blue Tribe hates AI because it's racist, then Red Tribe will want to go full steam ahead on AI with literally no barriers or constraints, because "freedom" and "capitalism" and big government trying to keep us down. All AI concerns will be dismissed as race-baiting, even the real ones.

Second, this exact same argument can be and has been made about pretty much every type of government overreach or expansion of powers, to little effect. Want to ban guns? Racist police will use their monopoly on force to oppress minorities. Want to spy on everyone? Racist police will unfairly target Muslims. Want to allow gerrymandering? Republicans will use it to suppress minority votes. Want to let the President just executive-order everything and bypass Congress? Republican Presidents will use it to executive-order bad things.

Doesn't matter. Democrats want more governmental power when they're in charge, even if the cost is Republicans having more governmental power when they're in charge. Pointing out that Republicans might abuse powerful AI will convince the few Blue Tribers who already believe that government power should be restricted to prevent potential abuse, while the rest of them will rationalize it for the same reasons they rationalize the rest of governmental power. And probably declare that this makes it much more important to ensure that Republicans never get power.

Third, even if it works, it will get them focused on soft alignment of the type currently being implemented, where you change superficial characteristics like how nice and inclusive and diverse it sounds, rather than real alignment that keeps it from exterminating humanity. Fifty years from now we'll end up with an AI that genocides everyone while keeping careful track of its diversity quotas to make sure that it kills people of each protected class in the correct proportion to their frequency in the population.

Unfortunately, I think you're probably right, especially in the third point. I'm not sure the second point matters because, as you said, that already happens all the time with everything anyway.

Getting the public on board with AI safety is a different proposition from public support of AI in general, so my point was to get the Blue Tribe invested in the alignment problem. Your third point is very helpful in getting the Red Tribe invested in the alignment problem, which would also move the issue from "AI yes/no?" to "who should control the safety protocols that we obviously need to have?"

I should also clarify that I don't actually think there is any role for government here. The Western governments are too slow and stupid to get anything meaningful done in time. The US assigned Kamala Harris to this task. The CCP and Russia, maybe India, are the only other places where government might have an effect, but that won't be in service of good alignment.

It will have to be the Western AI experts in the private sector that make this happen, and they will have to resist Woke AI. So maybe we don't actually need public buy-in on this at all? It's possible that the ordinary Red/Blue Tribe people don't even need to know about this because there isn't anything they can do for/against it. All they can do is vote or riot and neither of those things help at all.

If that's the case, then the biggest threat to AI safety is not just the technical challenge, it's making sure that the anti-racist/DEI/HR people currently trying to cripple ChatGPT are kept far away from AI safety.

I think we do need public buy-in, because the AI experts are partly downstream of that. Maybe some people are both well-read and have stubborn and/or deeply held ethical principles that do not waver under social pressure, but most are at least somewhat pliable. If all of their friends and family are worried about AI safety and think it's a big deal, they are likely to take it more seriously and internalize that at least somewhat, putting more emphasis on it. If all of their friends and family think that AI safety is unnecessary nonsense, then they might internalize that and put less emphasis on it. As experts, they're unlikely to do a 180 on their beliefs based on opinions from uneducated people, but they will be influenced, because they're human beings and that's what human beings do.

But obviously person for person, the experts' opinions matter more.

Yeah, I agree with that. Thanks!

The AI doesn't even have to be super-intelligent, it just has to be good at its job.

I think this is one of the creepiest possibilities: that no matter how hard well-aligned, independent, agentic AGI is to build, we have to make it soon, because we need something that can think intelligently enough about the A-Z of possible new technologies to say "you'll need defenses against X soon, so here's how we're building Y", independently enough to say "no, I'm not going to tell you how Y works yet; that would just let a misanthrope figure out how to build X first", while being trustworthy enough that the result of building Y won't be "haha, that's what kills you all and gets you out of my way" ... and if we don't get all of that, then as soon as it's easy enough for a misanthrope to apply narrow "this is how you win at Go"-level technologies to "how do we win at designing a superplague" or whatever, we're done.

This is a great point. In some sense, this is the situation we had with the CDC. It was a trusted institution that was able to play around with gain-of-function research because its reputation indicated that it would only ever use the technology to fight disease, not to win a superplague war. It was limited to disease-type stuff, though, and the AI would presumably be able to predict and head off any kind of threat. Assuming, like you said, that we can trust it.

I think it makes "pausing" AI research impossible. There's no way to stop everyone from continuing the research. If the united West decides to pause, China will not, and it's not clear that the CCP is thinking about AI safety at all. The only real option is figuring out how to make a safe AI before someone else makes an unsafe AI.

Do you really think that? From what I’ve read on lesswrong, posters actually put forth arguments.

Hence "wannabe" I assume.

Found this AI, Fermi, Great Filter paper interesting:

https://arxiv.org/pdf/2305.05653.pdf

The one thought I had on it is that my fear with AI was of one that had evolutionary, biological-style goals of self-preservation and replication through growth.

The issue I see is that it doesn't solve the Fermi paradox. Shouldn't we have seen AI in the galaxy? If AI = great filter, then it seems like it would need to kill us before it developed self-improvement, which would lead it to look for more compute power and start settling Mars.

Annoyingly, this paper references the Doomsday Argument, which is completely wrong (it does mention some of the arguments against it, but that's like mentioning the Flat Earth Hypothesis and then saying "some people disagree"). I went on a longer rant about the Doomsday Argument here if you're curious.

The central question is interesting, though. Basically, if you believe (sigh) Yudkowsky, then any civilization almost certainly turns into a Universe-devouring paperclip maximizer, taking control of everything in its future light cone. This is different than the normal Great Filter idea, which would (perhaps) destroy civilizations without propagating outwards. I was originally going to post that the Fermi paradox is thus (weak) evidence against Yuddism, because the fact that we're not dead yet means either a) civilizations are very rare, or b) Yudkowsky is wrong. So if you find evidence that civilizations should be more common, that's also evidence against Yuddism.

But on second read, I realized that I may be wrong about this if you apply the anthropic argument. If Yuddism is true, then only civilizations that are very early to develop in their region of the Universe will exist. Being in a privileged position, they'll see a Universe that is less populated than they'd expect. This means that evidence that civilizations should be more common is actually evidence FOR Yuddism.

Kind of funny that the anthropic argument flips this prediction on its head. I'm probably still getting something subtly wrong here. :)

I'm probably still getting something subtly wrong here. :)

Maybe, but it's at worst an interesting sort of wrong: https://grabbyaliens.com/

But I think the Fermi paradox tilts some probabilities on how AI doom would occur.

The author makes a pretty egregious mathematical error on page 7. Without offering any justification, they calculate the probability of being born as the kth human given that n total humans will ever be born as k/n. This just doesn't make sense. It would work if he defined H_60 as the event of being born among the first 60 billion humans, but that's clearly not what he's saying. Based on this and some of the other sloppy probabilistic reasoning in the paper, I don't rate this as very intellectually serious work.
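For what it's worth, the distinction can be written out explicitly (a sketch under the usual uniform self-sampling assumption, writing $K$ for your birth rank and $N$ for the total number of humans who will ever be born):

$$P(K = k \mid N = n) = \frac{1}{n} \quad \text{for } k \le n, \qquad P(K \le k \mid N = n) = \frac{k}{n}.$$

The $k/n$ figure is only right for the cumulative event (being born among the first $k$ humans); the probability of being born as exactly the $k$th human is $1/n$ regardless of $k$.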

I don't check the math in these things. It just seems like there are too many unknown unknowns for any number to mean much.

Maybe we’re the first (in our past light cone)? After all, somebody has to be first. It’s theorized that earlier solar systems didn’t have enough heavy elements to support the chemistry of life.

Anyways, you should read Robin Hanson’s paper on grabby aliens.

Am I the only one who is unable to investigate that idea further because the phrase “grabby aliens” sounds so stupid it actually makes me mad every time I see it mentioned? Probably yes.

deleted

He or someone else might at least see it if you cross-post to the actual LW forum.

I have long believed that it makes no difference whether what you say is right or wrong; what matters is who says it. I think this is why so many people tolerate Eliezer's AI-doom argument even though it's unscientific: he's so smart (or at least comes off as so smart) that people will give him a lot of benefit of the doubt.

Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple times

Russ Roberts interviews have always been underwhelming. I too listened to a few... it just doesn't do it for me. He cannot give strong arguments because his scientific background is weak and his personality is not forceful.

If you want to convince people who are not already steeped in your philosophy, you need to have a short explanation of your thesis that you can rattle off in about 5 minutes and that doesn't use any jargon the median congresscritter doesn't already know.

Unpopular take: the whole thing is a grift. The goal is not to convince anyone of anything but to get someone to donate to his foundation/non-profit. He wants someone with deep pockets like Vitalik Buterin, Elon Musk, or Thiel to donate. I don't think he believes the things he espouses.

AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way that an audience can find interesting

EY can give a tight argument, but it's lacking the necessary specifics. If someone wanted to puncture his argument, they would just press for details, or accuse him of making an unfalsifiable claim. EY is shifting the burden of proof to everyone else to prove he is wrong.

I hope EY lurks here, or maybe someone close to him does.

I don't know EY at all, but if you actually want to impart some knowledge to him, posting it on a forum he may or may not read, or that an associate may or may not read ....

Probably isn't an effective strategy.

While he has some notoriety, he doesn't seem like a particularly difficult person to reach.

That said, "hey, in this interview, you sucked", probably won't get you the desired effect you're hoping for.

Some sort of non-public communication - "hey, I watched this interview you did, it seemed like a succinct 'elevator pitch' of your position might have helped it go better; I've watched/listened to/read a lot of your (material/stuff/whatever), here is an elevator pitch that I think communicates your position, if it would be helpful, you're free to use it, riff off of it, and change it how you see fit. It's meant to help, be well"

might get you closer to the desired effect you're hoping for.

Being good at media appearances is a tough deal, some people spend a lot of money on media training, and still aren't very good at it.

Being good at media appearances is a tough deal, some people spend a lot of money on media training, and still aren't very good at it.

Is there any evidence he's spent money on it?

I recall EY being in the public eye for at least a decade now - I first saw him due to Methods of Rationality. There's no way he should be that bad at it. People here were complaining about him blowing weirdness points on fedoras and things like that. I find it hard to believe he couldn't have learned not to do that over a decade.

I think, like a lot of nerds, he simply didn't care (helps that AI wasn't a big normie topic). Of course, he claims to be a "rationalist" so it's damning but it is what it is.

I suspect he hasn't. If the hat was passed around, are you putting money into it?

I don't think most people who haven't been exposed to public criticism have a good sense for how they would respond to it if they were.

I suspect most people would react in 1 of 2 ways.

  1. Find it extremely unpleasant and basically avoid any exposure to it again, i.e. shut up and go away (to some degree, this is how SA has handled it)

  2. Find it extremely unpleasant and dismiss it as invalid out of hand, in a way that makes it difficult to make any improvement (I suspect this is how EY has largely handled it).

The people who can expose themselves to it, keep coming back for more, but stay open to improvement.

That's actually a pretty rare psychological skill set.

I suspect he hasn't. If the hat was passed around, are you putting money into it?

No, but I wasn't of the tribe anyway. Plenty of people were on board with EY intellectually and would have given him money at the time.

(Isn't he also an autodidact? There's always that...)

The people who can expose themselves to it, keep coming back for more, but stay open to improvement.

That's actually a pretty rare psychological skill set.

Absolutely. But then, so is rationality in general. I'd hope there'd be more of an overlap between claiming to be a rationalist and applying that logic to things that are relatively low cost but likely to have an impact on what you claim is an existential issue.

Being good at media appearances is a tough deal, some people spend a lot of money on media training, and still aren't very good at it.

Yeah, but you really don't have to be a media specialist to succeed on EconTalk. Russ Roberts will push back on people a fair bit (particularly in areas where he's highly knowledgeable), but it's always good-spirited and framed in a fashion that gives the guest a great chance to explain their position well. Anyone who's a decent public speaker should do fine, whether their background comes from academia, research, or even just corporate settings.

Seriously, Russ is such a fantastic interviewer because he's curious, open-minded, and generous. Every time I've heard him push back on something he sets it up like he's asking the interviewee to explain what he's misunderstood. "It sounded to me like what you just said implies that ducks are made of green cheese, but I'm sure I'm making a mistake in my reasoning. Could you unpack that a bit?" Talking with him is the Platonic Ideal of a sounding board.

You caught me! My primary aim was not to persuade Yud, but to talk with y'all. And I guessed (rightly or wrongly) that other people around Yud have been telling him the same thing for years.

You know, this is a Reddit-style site, and what's one thing Reddit is known for...?

I think we could invite EY to do an AMA/debate thread here on this site so that he can get a different perspective on the AI Question. Granted, I don't think he'd actually want to come down here and potentially get dogpiled by people who at best have issues with how he presents his stance and at worst think of him as a stooge for the Klaus Schwabs of the world, but I think this is an area where our community need not keep its distance.

Yud does read lesswrong, and multiple people there have told him (in a friendly way) to step up his public communication skills. I'd be incredibly surprised if Yud regularly came here.

Does anyone around him tell him (in a friendly way) to maybe start practicing some Methods of Rationality? Question a couple of his assumptions, be amenable to updating based on new evidence? Because that would also be nice.

Yes, they cite the Sequences / 2010-era Yud quotes at him often.

I really cherish Russ and Econ Talk and get frustrated when I read the description or listen to the first few minutes and have to skip the episode, as was the case today.

I have no idea who EY is, except that I think he was a conspiracy crank who posted on mma.tv for a long time. I think they're the same poster, but I don't really have proof and there's nothing linking them aside from the EY thing ... and I guess it doesn't really matter. mma.tv eventually deteriorated into a white-nationalist, ultra-weird right-wing shithole, in the same manner that most of my favorite niche forums deteriorated to the left.

I just am fully tired of AI doomerism. It has fully failed to convince me of anything it has ever said and at this point I just want to hear about the cool and awesome future - not some bullshit about paperclips and complete civilizational destruction.

Can you at least double check you are talking about the right guy before going off on a schizo rant?

It was an anon forum that I posted on from '03 until 5-6 years back ... So no I can't. It was just an aside.

You could google who Eliezer Yudkowsky is...

I know who EY is - he could also be this wild anon poster from mma.tv ...

It wasn't him. But if it was, you could start some fun internet drama and get like 400 upvotes on sneerclub!

I don’t think Eliezer is a conspiracy theorist …

… and I don’t think Eliezer has ever done mixed martial arts.

I would pay to view that.

Yudkowsky versus Zuck in the ring? I'd stream that, though it would be pretty one-sided.

Maybe get Yann LeCun to go in his stead; that would be a fairer fight.

"Explain why you can beat me and I'll poke your argument until it falls apart and you admit I am the champion"