
Culture War Roundup for the week of June 3, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


How NOT to Regulate the Tech Industry

Hot on the heels of my comment describing the UK's effort to finally rid the IoT market of extremely basic vulnerabilities like "has a default password", Colorado jumps in like Leeroy Jenkins to show us how, exactly, tech regulation shouldn't be done. SB 205 is very concerned with "algorithmic discrimination", which it defines as, "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law."

Right off the bat, it embraces the absolute morass of "differential treatment or impact", with the latter being the more concerning, given how incomprehensible the analogous "disparate impact" test has proven everywhere else it applies. This law makes all use of algorithms in decision-making subject to that test. There are rules for developers, telling them how they must document all the things to show that they've done whatever magic must be done to ensure that there is no such discrimination. There are rules for deployers of those algorithms, too, because the job is never done when you must root out any risk of impacting any group of people differently (never mind that it's likely mathematically impossible to do so).
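The "mathematically impossible" point can be illustrated with a toy sketch (the numbers here are hypothetical, chosen only to show the mechanism): whenever two groups differ in the base rate of the outcome being predicted, any fixed, group-blind decision rule, however accurate, will select the groups at different rates.

```python
# Toy illustration with hypothetical numbers: a single group-blind
# classifier with identical error rates for both groups still produces
# different selection rates whenever the groups' base rates differ.

def selection_rate(base_rate, tpr=0.9, fpr=0.1):
    """Share of a group selected by a classifier with the given
    true-positive and false-positive rates."""
    return base_rate * tpr + (1 - base_rate) * fpr

# Same classifier, same TPR/FPR applied to both groups:
group_a = selection_rate(base_rate=0.5)  # 0.5*0.9 + 0.5*0.1 = 0.50
group_b = selection_rate(base_rate=0.2)  # 0.2*0.9 + 0.8*0.1 = 0.26

# Identical treatment, yet group B's selection rate is roughly half of
# group A's -- well under the EEOC's four-fifths "adverse impact" line.
print(group_a, group_b, group_b / group_a)
```

The only way to equalize the selection rates is to use different thresholds for different groups, which is differential treatment by construction.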

Their definitions for what types of algorithms this law will hit are so broad that they already know they captured far too much, so they go on a spree of exempting all sorts of already-existing things that they know about, including:

(A) ANTI-FRAUD TECHNOLOGY THAT DOES NOT USE FACIAL RECOGNITION TECHNOLOGY;

(B) ANTI-MALWARE;

(C) ANTI-VIRUS;

(D) ARTIFICIAL INTELLIGENCE-ENABLED VIDEO GAMES;

(E) CALCULATORS;

(F) CYBERSECURITY;

(G) DATABASES;

(H) DATA STORAGE;

(I) FIREWALL;

(J) INTERNET DOMAIN REGISTRATION;

(K) INTERNET WEBSITE LOADING;

(L) NETWORKING;

(M) SPAM- AND ROBOCALL-FILTERING;

(N) SPELL-CHECKING;

(O) SPREADSHEETS;

(P) WEB CACHING;

(Q) WEB HOSTING OR ANY SIMILAR TECHNOLOGY; OR

(R) TECHNOLOGY THAT COMMUNICATES WITH CONSUMERS IN NATURAL LANGUAGE FOR THE PURPOSE OF PROVIDING USERS WITH INFORMATION, MAKING REFERRALS OR RECOMMENDATIONS, AND ANSWERING QUESTIONS AND IS SUBJECT TO AN ACCEPTED USE POLICY THAT PROHIBITS GENERATING CONTENT THAT IS DISCRIMINATORY OR HARMFUL.

If your idea for a mundane utility-generating algorithm didn't make the cut two weeks ago, sucks to be you. Worse, they say that these things aren't even exempted if they "are a substantial factor in making a consequential decision". I guess they also exempt things that "perform a narrow procedural task". What does that mean? What counts; what doesn't? Nobody's gonna know until they've taken a bunch of people to court and gotten a slew of rulings, again, akin to the mess of other disparate impact law.

Don't despair, though (/s). So long as you make a bunch of reports that are extremely technologically ill-specified, they will pinky swear that they won't go after you. Never mind that they can probably just say, "We don't like the look of this one TPS report in particular," and still take you to court; many of the requirements are basically, "Tell us that you made sure that you won't discriminate against any group that we're interested in protecting." The gestalt requirement can probably be summed up by, "Make sure that you find some way to impose quotas (at least, quotas for whichever handful of groups we feel like protecting) on the ultimate output of your algorithm; otherwise, we will blow your business into oblivion."

This is the type of vague, awful, impossible regulation that is focused on writing politically correct reports and which actually kills innovation. The UK's IoT rules might have had some edge cases that still needed to be worked out, but they were by and large technically-focused on real, serious security problems that had real, practical, technical solutions. Colorado, on the other hand, well, I honestly can't come up with words to describe how violently they've screwed the pooch.

I’m not going to go full devil’s advocate, here, because this looks like a textbook case of reactionary, something-ought-to-be-done legislation. But I’d like to home in on one particular aspect.

Even without this law, I would expect AI decision making to fail disparate impact tests due to illegibility. Good luck proving business necessity!

But that cuts both ways. Once RLHF has beaten the racial slurs out of an AI, how are you going to prove that it was going for disparate treatment, even if it’s absolutely refusing to hire/promote/train a protected class?

No, seriously, I would like to see a standard which can distinguish between disparate treatment and impact. You can get a human on the stand and ask about intent, animus, whatever. I don’t think you can expect that to work for a computer program.

AI companies should be afraid of causing disparate treatment. It’s wrong, even when it makes more money. But an unregulated market doesn’t have much reason to care about right or wrong. Until we find a better way to draw the line, disparate impact is going to remain useful.

Without having thought about it super long, like letting myself gradually pick up examples over weeks/months, I can only think of a couple areas that have been able to resist a collapsing of disparate treatment and disparate impact. Credit scores and, currently hanging by a 6-3 thread, gerrymandering.

For gerrymandering, sigh. Honestly, it might just be that the Court is tired of these cases. They get stuck dealing with them over and over again, unlike most of the areas where the disparate treatment/impact distinction is collapsed.

For credit scoring, I think it's that there is soooo much money on the line for politically-powerful interests, plus a little historical "we've been using this for so long" factor. Would credit scoring have to fall under a strict interpretation of how these concepts work according to a radical (or even the otherwise dominant party line)? I think absolutely. Is the reason it's been able to persist that you can get a human on the stand and ask about intent, animus, whatever? Not at all. Credit scores are an algorithm: impersonal, just simple math, with inputs that may be subject to all the complaints people want to make, as in, "But if your data in is biased by a white supremacist patriarchy, then of course your algorithm is going to have racist and sexist disparate impact." Note that this Colorado law calls out that they're interested in:

THE DATA GOVERNANCE MEASURES USED TO COVER THE TRAINING DATASETS AND THE MEASURES USED TO EXAMINE THE SUITABILITY OF DATA SOURCES, POSSIBLE BIASES, AND APPROPRIATE MITIGATION

No, the reason credit scores are still allowed is because too many connected people would stand to lose too much money if we let the collapse of disparate treatment/impact culminate entirely in the way that it seems to be going in nearly every other domain.

Although the district had historically elected Republicans since 1980, in 2018 a Democrat, Joe Cunningham, won in an upset. Mace defeated him in 2020 by less than 1%.

…and then tried to shuffle the boundaries so those 49% would never win again. No wonder they go for Biden.

I can’t believe that “we did it to make their votes worth less” is considered a legitimate defense. That’s obviously against the spirit of the Constitution and (in my opinion) ought to be illegal. Them’s the rules, I guess.

As for credit—I think we’ve got to distinguish between the different laws governing disparate impact.

First, you have the employment restrictions downstream of Title VII and Griggs. I think these are most likely to apply disparate impact, but also have the most explicit protections. We just don’t think about them as much because age and sex discrimination aren’t as politicized today. Construction, warehousing, meat packing…they’re obviously going to favor young men, but their “business necessity” precludes disparate impact liability. Cue commenters explaining how this is totally a feminist agenda. But I digress.

Gerrymandering isn’t covered by employment law. The opinion makes it clear that it’s a constitutional question. So I’m not surprised that disparate impact doesn’t come into it.

Neither of those laws apply to housing, which imports disparate impact via the Fair Housing Act. Like Title VII, that law is explicitly interested in racial justice.

Where does this leave credit scores, insurers, and other actuarial pursuits? They’re certainly not mentioned in the Constitution. None of the titles of the 1964 CRA cover them. If there’s a later civil rights bill that does, I couldn’t find it. Instead, it appears that insurers are regulated by the states.

There is no third step. A neutral factor’s disproportionate impact on a protected class does not constitute unfair discrimination under any controlling state insurance law.

In other words, federal civil rights legislation doesn’t apply. There are guardrails on state regulations, but they date back to 1945 and use a narrower definition of discrimination. Congress recognized that they shouldn’t mandate a product while also forcing it to be insolvent. Coming on the heels of the New Deal, this is pretty wild stuff!

Does credit get a similar exemption? Hell if I know. My point is that there’s a legal, intentional basis. Sometimes it’s not actually a corrupt bargain.

AI companies should be afraid of causing disparate treatment. It’s wrong, even when it makes more money. But an unregulated market doesn’t have much reason to care about right or wrong. Until we find a better way to draw the line, disparate impact is going to remain useful.

Modern AI tools have been compared to magic oracles: we ask them a question and they synthesize vast amounts of information to give us an answer.

What this regulation will achieve isn't restricting the AI from having a disparate impact, it is restricting the AI from synthesizing that information and then telling the truth. Certain categories of question are impossible to ask, or impossible to get a correct answer about, without risking disparate impact.

Consider: take /r/rateme and turn it into a prediction algorithm. Go through the thousands upon thousands of posts and figure out how to spit out an approximation of how Reddit would rate your pictures, without posting your pictures to Reddit. A useful tool: now, instead of embarrassing myself by asking a bunch of strangers to rate my pictures, I can just do a couple clicks on an online tool and it will spit out what Reddit would have told me anyway. An advance for privacy! I can run the test iteratively, using different pictures for an online dating profile, or even different haircuts or physique choices edited in, based on the output, and figure out how to make myself more attractive.

But, such an algo would either instantly cross the line of acceptability, or it would need to be dishonest. Because it can't give black people lower ratings, it can't give Asian women higher ratings and Asian men lower ratings, it can't give trans people lower ratings. It's not even clear, based on the angles used to wedge queers into civil rights law intended to protect women, that it can ding effeminate men or butch women. It can't ding you for wearing a yarmulke, even though I can guarantee you that wearing a yarmulke will lower your dating odds. It would be impossible to create such an oracle and not have a disparate impact. So, we've created a tool to grow our knowledge, but that field is permanently restricted, some areas of knowledge must remain unknowable under Colorado law.

You’ve lost me.

I’m not arguing for this law. I’m arguing that disparate impact is useful, even necessary, to achieve goals about disparate treatment. Given that I find the latter legitimate, I’m willing to give more slack to the former.

Let’s say an unsavory developer made your hypothetical product with one change. Like Golden Gate Claude, it tries to work one topic into its answers, except instead of a bridge, it’s Puerto Ricans. This tool just hates ‘em. Any time real Reddit would have come up with a sick burn, it’s now directed at this one nationality. It never has anything good to say about them, and tries its best to convince users to hate and fear (looking like) them. Textbook disparate treatment, right?

Now prove it. How are you going to show that this oracle is a huge racist? Examples? Good luck. Statistical comparison to real Reddit? Thin ice. You’re left with “I know it when I see it,” which is a pretty rough standard for lawsuits.

Of course, the point is moot, because an edgy fashion AI isn’t doing any direct harm. It’s not illegal in the same way as denying a loan or a promotion. But that’s true of your hypothetical, too. Neither of them makes a “consequential decision,” so neither must remain unknowable under this law.

Once RLHF has beaten the racial slurs out of an AI, how are you going to prove that it was going for disparate treatment,

The whole point of "disparate impact" is that intent is irrelevant. It is proven simply by pointing out that the actual outcomes are different.

Well…yes.

I’m saying a disparate-treatment-only law might work for humans, since in theory, you can prove they’re doing it on purpose. So maybe dumping disparate impact is alright.

But the standard of proof for our language models is currently really bad. Removing the category of disparate impact, then, would give bad actors and lazy entrepreneurs a ton of plausible deniability.

If an algorithmic system is consistent and has limited inputs, then it doesn't really matter if it's completely blackboxed. You can just rerun the analysis with slightly-changed inputs to find what it decides. Hopefully that results in mostly-smooth results on simple categories, but an illegible AI might be more understandable than a lying human regardless.
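The rerun-with-changed-inputs approach can be sketched in a few lines (the model, its fields, and the zip-code proxy below are all hypothetical): hold every input fixed, vary one field, and compare the decisions.

```python
# Sketch of black-box counterfactual probing (hypothetical model and
# attributes): rerun a deterministic decision function on inputs that
# differ in exactly one field.

def opaque_model(applicant):
    # Stand-in for a blackboxed system; pretend we can't read this source.
    score = applicant["income"] / 10_000 + applicant["years_employed"]
    if applicant["zip_code"] in {"80202"}:  # hidden proxy variable
        score -= 3
    return score >= 8

def probe(model, applicant, field, alternatives):
    """Return the model's decision for each counterfactual value of `field`."""
    results = {}
    for value in alternatives:
        variant = dict(applicant, **{field: value})  # change one field only
        results[value] = model(variant)
    return results

applicant = {"income": 60_000, "years_employed": 3, "zip_code": "80202"}
decisions = probe(opaque_model, applicant, "zip_code", ["80202", "80301"])
print(decisions)  # the decision flips on zip code alone
```

This is exactly the kind of audit you can't run on a human decision-maker: the black box, unlike a witness, answers every counterfactual consistently.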

Does this mean insurance companies are forbidden from discriminating by age and sex e.g. for car insurance?

any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals

That particular form of discrimination has already cleared its legal hurdles, as far as I know.

Which is bizarre. If women paid higher rates it never would have stood up to challenge.

The law is saying you can't use all these actuarial ways to determine risk. In many states, you can't even use credit reports or arrest records.

Why is there this special carve out to discriminate against men?

Why is there this special carve out to discriminate against men?

Because men think it's gay to organize and demand things. Simple as.

Why is there this special carve out to discriminate against men?

Who cares?

That's a literal question: Which people care about men being discriminated against, how much do they care about it, and what can they do based on those feelings?

Men's Rights activism is a powerless joke, and equal rights activism has died off and been replaced by a dozen individual interest groups. The people that care don't matter, and the people that matter don't care.

Disparate impact is severely curtailed for insurance in general; it’s a topic of some annoyance in critical justice theory circles, and there are academic articles advocating for limits on “excessive” (defined broadly) disparate impact.

Insurance discrimination is not merely disparate impact. It is disparate treatment. In most (though not all) states you are straightforwardly charged more if you are male than if you are female. The most famous case of a person changing their legal gender to save money (over $1000) on their car insurance is Canadian but it would work in much of the US as well.

Yes, but the point is that disparate impact is explicitly tolerated in insurance more broadly. Insurance providers are not required to ensure that the average premium paid by or payout made to a black person is equal to that paid by or made out to a white person. Real-world indicators for sociocultural groups are obviously in many cases baseline parts of actuarial calculations even when insurers strictly ensure nothing is too egregious or obvious.

So... Anyone wants to sue Google for their search results, ads they serve, videos they recommend, etc?

Yeah, this does make me curious, can someone take Google to court over YouTube over this?

Disparate impact is going to be struck down. The GOP's pressing the inclusion of political party registration, veteran status, and religion (Christian etc.) in disparate impact laws is one of the smartest things they’ve done recently (not, admittedly, a long list), since it will accelerate those laws' demise.

But in the long term they’re just not viable. They are pushed because explicit quotas were ruled illegal by SCOTUS, but the more they contradict each other (e.g. disparate impact against hiring Republicans vs. disparate impact against hiring minorities), the more the courts are going to be overloaded with an endless series of these cases and SCOTUS is going to have to act. Even though companies may have legally sound defenses to why their new hires are 70% registered diverse Dems but retirees are, say, largely Christian Republicans (age and politics, changing racial demographics, whatever), the sheer onslaught of cases will become unmanageable.

Nybbler will undoubtedly have some kind of blackpilled spiel about why even this is doomed, but it seems to me that, uh, heightening the contradictions of disparate impact is the surest route to tearing it down.

The contradictions really are ridiculous.

  1. You can be sued if the employees you hire don't match the demographics of your applicant pool

  2. You are forbidden from using race-based quotas to achieve this

So it's essentially impossible for a large corporation to follow employment law.

the demographics of your applicant pool

I think even this is unclear and contradictory: is "your applicant pool" the demographics of the nation, the local area, or even the set of applications you received? If you have a bona fide requirement of, say, a college degree, can you restrict the previous demographics to the qualified subset of those populations? It's unclear, and as far as I can tell, the more caveats and conditions you apply, the harder it probably becomes to explain to a jury at trial.

If you have a bona fide requirement of, say, a college degree, can you restrict the previous demographics to the qualified subset of those populations?

I am not an employment lawyer, but I think yes. This probably kept things sane for a long time, but university degrees have ever less IQ signalling value over time.

One solution might be to physically locate your company in a place with favorable demographics.

John Roberts's court is not going to strike down disparate impact.

  1. Aside from Roe/Casey, he's not willing to strike down anything for real. The court will issue a decision, make it super-narrow or leave massive loopholes, and then consider the issue solved and refuse future cases, allowing the lower courts free rein.

  2. Congress put disparate impact in statutes, the 14th amendment specifies that Congress can enforce it by appropriate legislation, the conservative court will defer to Congress on the point that forbidding disparate impact is OK.

  1. The Republican College Professors’ Association sues on disparate impact grounds after political party affiliation is added to more such laws. They say that the college’s DEI recruitment policy explicitly favors classes of people among whom Republicans are extremely underrepresented (female academics, minorities), which is especially egregious when only 10% of faculty are Republicans and 70% are registered Democrats, a trend the new policies will only exacerbate. They claim the college’s internal DEI materials advocating for a ‘less white, male’ faculty show explicit and direct animus or hostility toward the Republican affiliation because of the above.

  2. The college counters that white men are overrepresented on the faculty, that women and people of color are underrepresented, and that because both the vast majority of academics are Democrats and almost all Republican academics are white men, attempting to correct the balance in the party affiliation protected characteristic category would have an extreme negative disparate impact on minority and women applicants.

  3. The Republican Professors’ Association responds that black people and women are also highly underrepresented among professors, particularly in STEM, and yet this makes no difference to the university’s extensive efforts to recruit and advance those applicants and candidates, and suggests that a special effort for Republicans would be no different.

  4. The university is located in a blue enclave in a red state in a circuit that leans red. The court rules that disparate impact rules mean the college must make every effort to hire and promote Republicans in its diversity programs aimed at increasing the representation of underrepresented faculty. This is a legally questionable ruling, but it is made.

  5. The college appeals to SCOTUS, arguing that the same rules would destroy all diversity programs everywhere if they went nationwide, at least in those entities subject to laws that allow disparate impact claims based on the political affiliation/registration category. How does Roberts rule?

"Political party affiliation is added to more such laws" is doing a lot of work here. There is not, to my knowledge, any serious push to do this on the Federal level. In states that have prohibitions on political party discrimination and disparate impact (i.e. California), I'm not aware of any attempt to challenge the doctrine.

I think Roberts gladly swallows that poison pill and strikes down all use of disparate impact analysis as a violation of the 14th Amendment. Maybe he goes through the Casey stare decisis factors, acknowledging the importance of aggressive civil rights legislation in the 60s and 70s, but explaining that the world is different now, and what potentially could have been justified back then is clearly now both unnecessary and unworkable. I think the decision would look a lot like Shelby County, to be honest.

He rules that the Republican Professors' Association lacks standing to challenge the rule, and that the case would have to be brought by an applicant that was specifically affected by the rule.

The Republican Professors' Association represents, as in many of these cases, a Republican professor not hired at the university after a final-round interview and mandatory diversity statement submission.

Roberts overturns the Circuit Court decision, on the grounds that race, as a Federally protected class, trumps political affiliation (which is not), and therefore the college's interest in not having a disparate impact on protected class members trumps the college's requirement to not have disparate impact on Republicans.

Suppose that a new regulatory bill affecting universities passes in Congress in which political affiliation is explicitly described as a protected class (as it has been in some federal bipartisan AI efforts). What then?

My first guess would be a ruling that, applying Strauder, the portions of the law that could negatively impact minorities based on "suspect classifications" would need to satisfy elevated or even strict scrutiny, and would fail to do so, but that the portions which allowed discrimination on other categories would satisfy rational basis and be permissible.

That's not going to happen. If it somehow did anyway, Roberts would uphold the circuit court decision for that case, but include language about how universities must balance the interests of the protected classes and rule that because they didn't make a good showing of doing so this time, they're getting slapped down. Universities would in the future do some lip service about how they did some balancing and really truly this helps more for race than it hurts for political affiliation, but otherwise change nothing.

I mean, I'd love to see TheNybbler's take, but I'll point to "Title VII Religious Freedom in California" here, or less recently, the Damore case under California law -- the very rules that prohibited the discrimination against these people instead were twisted to mandate it. It doesn't matter if there's explicit statutory protections, or SCOTUS caselaw: lower courts and the broader progressive branch will happily look at that obvious contradiction and massive onslaught of cases and happily invite them. The Reinhardt philosophy that SCOTUS can't catch them all is alive and well, and when the worst that happens to the rare losers is that they're temporarily embarrassed, why not roll the dice.

Cf. the recent ATC snafu. It's not just that there are no heads rolling at the top of the pyramid, or that the big civil case is looking at "Reply to Motion for Summary Judgment due by 6/26/2025" and actual trial might happen in 2026 if we're lucky.

Shelton Snow's LinkedIn says he's still working as an FAA supervisor!

It doesn't matter if there's explicit statutory protections, or SCOTUS caselaw: lower courts and the broader progressive branch will happily look at that obvious contradiction and massive onslaught of cases and happily invite them.

It does matter, for the same reason quota cases mattered in the run-up to that decision, or the same reason it took decades for various pro-gun parties to get the wins they sought; every case increases pressure on SCOTUS to draw narrow, readable boundaries on disparate impact (e.g. making clear exactly what can be used to show it) or do away with it altogether.

It doesn't matter, for the same reason it didn't matter for the gun issue. The conservative court creeps in one direction, never actually changing the situation on the ground. This can go on for decades. Eventually a less conservative court reverses a bunch of conservative jurisprudence (as with Grutter) and if the conservatives get another shot, they start over from near the beginning (or even further back)

the Damore case under California law

Damore sued because of that very law and settled with (it is rumored) a pretty substantial sum from Google because of it, did he not?

He sued over a smorgasbord of different laws and regulations. Google had an agreement with him (and other employees in the lawsuit) to dismiss the case, which prohibited further comment, but it's not clear how much Damore got. His LinkedIn does not look like that of a man with FU-money; rumor is 10k USD, and given the costs of getting the case to that point, that's a pretty rough stretch of the term 'substantial'.

But before that, he submitted an NLRB complaint, and got an answer to that complaint:

An employer’s good-faith efforts to enforce its lawful anti-discrimination or anti-harassment policies must be afforded particular deference in light of the employer’s duty to comply with state and federal EEO laws. Additionally, employers have a strong interest in promoting diversity and encouraging employees across diverse demographic groups to thrive in their workplaces. In furtherance of these legitimate interests, employers must be permitted to “nip in the bud” the kinds of employee conduct that could lead to a “hostile workplace,” rather than waiting until an actionable hostile workplace has been created before taking action.

rumor is 10k USD

Not even a month's salary post-tax. That's truly nothing. I'm wondering where these rumors come from.

Any lawyers care to comment on how true this is? I'm not very fluent in legalese, but that official legal document seems to be saying "companies should actively hurt 'problem people' for the good of diversity"?
I want to assume this is somehow out of context or I'm misunderstanding something, because the alternative is pretty horrifying.

That seems a fair characterization.

Hanania makes the case in his Origins of Woke, which seemed reasonable enough, that everything is illegal because there are no disparity-free decision-making procedures, and so it just comes down to who they (an agency stocked by lefty bureaucrats) decide to go after, for not following best practices (doing what they want). And yes, it's not hard to get in extremely serious trouble—see where Tesla had to pay 137 million to an employee for creating a hostile work environment (other black people used the n-word, the horror).

Yes, this state of affairs is horrifying. I really hope the next Republican administration scraps as much of this as they can.

Would always be good to get a second opinion. In case you don't, the steelman/charitable version is something like:

  • It's tortuous tortious to fire, or refuse to hire, people because of their race/religion/gender/sex.
  • What happens if you don't fire them, just give them specific job requirements that would any reasonable person would refuse (and might even be illegal on its own), because of their r/r/g/s? Well, now that's illegal.
  • What happens if the employer doesn't give them all the worst jobs, just wink-and-nods to other employees to make that employee's life miserable? Well, now that's illegal.
  • What happens if the employer just happens to hire a whole bunch of people who treat certain people like crap, and doesn't respond to it? Well, now that's illegal. (uh, is 'not sufficiently masculine' a sex? Well, it is now.)
  • Okay, what if it's a genuine coincidence, and the employer's actions to punish rude people are just insufficient? Well, now that's illegal. (wait, is 'being rude' the same thing as 'any reasonable person would refuse to work with'? Well, it is now.)
  • Okay, now you've got a different problem. Anything as small as a single person being slightly rude isn't individually tortious (uh, in theory). These aren't criminal-law illegal in the way that, say, sodomizing someone with a soap bar without their consent might be. Some of them are even (theoretically) protected the other way around: in Damore's case, federal labor law prohibits employers from acting against employees who engage in a very broad definition of organizing or arguing over workplace conditions. It's only in summary that these acts can become tortious. But the line between grains and a heap only shows up in retrospect. Well, now employers can (and, to avoid liability, must) have a neutral anti-discrimination policy that covers wide breadths of conduct, and that will be preemptively legal if it's used to fire someone.

In practice, this means that Google just sent Damore a note that said:

I want to make clear that our decision is based solely on the part of your post that generalizes and advances stereotypes about women versus men. It is not based in any way on the portions of your post that discuss [the Employer’s] programs or trainings, or how [the Employer] can improve its inclusion of differing political views. Those are important points. I also want to be clear that this is not about you expressing yourself on political issues or having political views that are different than others at the company. Having a different political view is absolutely fine. Advancing gender stereotypes is not.

Is this note pretextual? Is there any overlap between the arguments about inclusion of differing views and 'generalizing stereotypes'? Are there any First Amendment considerations? The NLRB can look at all these questions if they want to, but why would they want to here?

But you could imagine a bizarro!Googler who fits Darwin2500's parody, who wrote at length about how women suck and can't think or correctly perform leadership roles, and nothing else, or perhaps only with pretextual mentions of any speech with meaningful content. And while one of those wouldn't be too rough to deal with, a workforce with nothing but that would have a lot of people looking for somewhere else to work. It's not what the 1964 CRA was meant to handle, but it's not like it's bad as a policy.

And that's genuinely a hard problem to solve without either much more honest actors throughout the enforcement schema, or problems like Damore.

It's tortuous to fire, or refuse to hire, people because of their race/religion/gender/sex.

Anything as small as a single person being slightly rude isn't individually tortuous (uh, in theory).

It's only in summary that these acts can be become tortuous.

"Tortuous" is a word meaning "full of twists and turns; excessively lengthy and complex." I believe the word you wanted in all these cases was tortious: "of the nature of or pertaining to a tort"

yep. That's embarrassing. Thanks, fixed.

employers must be permitted to “nip in the bud” the kinds of employee conduct that could lead to a “hostile workplace,” rather than waiting until an actionable hostile workplace has been created before taking action.

I wish I could just... Send these rulings 20 years back in time and preemptively use them against their creators. How were people so stupid to let this happen?

Do you think this outcome was not predetermined 20 years ago? Everything was set up decades ago.

Because I think most people except the zealots sleepwalked into Current Year, and a great enough shock could have changed the course.

Everything that's happened was due to people falling for a constant creeping barrage of lies and gaslighting.

I'll refer to "most people except the zealots" as normies. What would normies have done, what could normies have done, what would normies have wanted to do if they had precognition twenty years ago? My answer to each question is nothing.

I don't see why regulation is a bad thing here. I don't want AI making hiring decisions, or monitoring what I write on the internet for wrongthink, or deciding verdicts in criminal trials. Anything that helps prevent that (even if imperfect and incomplete) is a good thing in my view.

(You may say that the use of AI in these domains is inevitable and cannot be prevented - but then, why get upset about the regulation in the first place? Why worry about something that you think will have no impact anyway?)

This is the type of vague, awful, impossible regulation that is focused on writing politically correct reports and which actually kills innovation.

I think there should probably be less innovation in this space.

What, specifically, are you worried about losing or missing out on?

I think that's totally fine, but the problem I have is that you've got politicians writing these laws with little to no outside consultation with experts on AI, so they end up being vague and applying to things that aren't AI.

The Qing dynasty also saw no need for disruptive innovations.

And lasted longer than most modern governments.

Went out with a bang, though.

I don't want AI making hiring decisions, [...] or deciding verdicts in criminal trials. Anything that helps prevent that (even if imperfect and incomplete) is a good thing in my view.

What's better about a person making those decisions? The criteria I can think of (accuracy, speed, interpretability/legibility, compliance with standards) don't always favor human decisionmakers.

(I don't want anyone monitoring what I write on the internet for wrongthink, so I'm with you there)

Keeping humans in the loop puts pressure on the processes to be more legible and comprehensible. If you dump everything into an inscrutable ML model, then the danger is that people will simply offload their thinking to the model and take its word as law. When your account gets banned at youtube, no one can actually say why (except in high profile cases) - it’s just, “The Algorithm said so, and we trust The Algorithm”. I don’t want society to work that way. I want there to be a person who has to take responsibility for the decision, and who can explain their reasoning. No hiding behind a binary blob of trillions of parameters.

Of course, humans can build labyrinthine, inscrutable bureaucracies too. And humans can be outright evil. But I’d still rather take my chances with humans. Unlike AI, they have skin in the game - they are conscious entities, they have desires and fears. They can be persuaded or bribed, they are subject to political and social pressures, they will grant exceptions under the right circumstances. These are not aberrant modes of operation - they are necessary to the functioning of a humane society.

(2) "ARTIFICIAL INTELLIGENCE SYSTEM" MEANS ANY MACHINE-BASED SYSTEM THAT, FOR ANY EXPLICIT OR IMPLICIT OBJECTIVE, INFERS FROM THE INPUTS THE SYSTEM RECEIVES HOW TO GENERATE OUTPUTS, INCLUDING CONTENT, DECISIONS, PREDICTIONS, OR RECOMMENDATIONS, THAT CAN INFLUENCE PHYSICAL OR VIRTUAL ENVIRONMENTS.

This is not limited to ML; this bill applies to any computer program.

The regulation is bad because even if you remove direct references to age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status and so on, you will still have disparate impact, either because the AI inferred them from oblique references (Latoya Washington living on MLK Boulevard) or because causes of disparate impact correlate with age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status and so on.

The bill opens the doors to non-stop litigation. When a real person or an expert system lowers the credit card limit of Latoya Washington living on MLK Boulevard, they leave behind a trail that shows their chain of reasoning, worded in a way that pointedly avoids any references to age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status and so on. Everyone knows this and isn't triggered by obvious disparate impact. When the same bank uses an ML model to do the same thing, there's an obvious way in for a lawsuit: disparate impact? Check. AI? Check. Time to sue, and good luck proving that your model didn't lower the credit limit because Latoya was black.
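The proxy-inference point above is easy to demonstrate in a few lines. Here is a minimal, purely hypothetical sketch (the group labels, neighborhood codes, and 80% correlation are all invented for illustration): the decision rule never sees the protected attribute, only a correlated proxy, yet approval rates still split sharply by group.

```python
import random

random.seed(0)

def make_applicant():
    """Simulate an applicant whose neighborhood correlates with group."""
    group = random.choice(["A", "B"])
    # 80% of group A lives in neighborhood 0, 80% of group B in neighborhood 1.
    if group == "A":
        neighborhood = 0 if random.random() < 0.8 else 1
    else:
        neighborhood = 1 if random.random() < 0.8 else 0
    return group, neighborhood

def model_decision(neighborhood):
    """Facially neutral rule: the model never looks at the group."""
    return neighborhood == 0

applicants = [make_applicant() for _ in range(10_000)]
rates = {
    g: sum(model_decision(n) for grp, n in applicants if grp == g)
       / sum(1 for grp, _ in applicants if grp == g)
    for g in ("A", "B")
}
print(rates)  # group A approved ~80% of the time, group B ~20%
```

Nothing in the rule references the group, but a disparate-impact audit would still flag it, which is exactly the litigation opening described above.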

I think "disparate impact" is a ridiculous standard. It seems vanishingly unlikely that any decision process will return the same results for two groups which differ on all sorts of socio-economic axes.

I mean, I could get behind the idea that some uses of ML in some fields might be unfair. For example, there might be rational economic reasons to discriminate against certain minorities. If Mormons are 10000x more likely to be killed by bears (because bears are murderist racists or something), then it might make economic sense to not hand out loans to Mormons in bear country. Even if the religion of the applicant is not explicitly present in the input data, a neural network could just learn to infer "applicant is a Mormon" from all sorts of proxies like name, place of birth and so on. If we disallow bank directors saying "no Mormons", we should also disallow such an NN for consistency. By contrast, just indirectly discriminating against Mormons because their financial situation is worse (perhaps due to all these bear-related funeral expenses) would seem fine to me even though it has a disparate impact.

While it may be useful to force the market away from the economic optimum in certain situations, the idea to apply this to everything seems profoundly silly. If I (male, 40, overweight) were to post nudes on OnlyFans (not that I intend to do so), I am sure that between user rankings and their recommendation engine, I would end up making a lot less than the median OF model. That is a disparate outcome from an algorithm right there. Should I be able to force OF to push my pictures more?

Or say someone decides to run their blog in French because they have "limited proficiency in the English language". Should Google search be allowed to filter that result if people search for English language websites? That is a disparate impact right there!

I think "disparate impact" is a ridiculous standard. It seems vanishingly unlikely that any decision process will return the same results for two groups which differ on all sorts of socio-economic axes.

"Disparate impact" and other "equity"-based arguments are farcical on their face. They only ever cut one way, and are deployed extremely selectively. They will never, ever, in a million years apply "disparate impact" arguments to which identity groups pay the most taxes or which identity groups are most likely to be victims of crime from outside their community. When non-whites outperform whites, they are just better than whites and should be celebrated. When whites outperform non-whites, it's racism and the thumb must be put on the scale.

Well, except when they did.

That summary honestly left me more confused about Title VII.

That was not a case where "disparate impact" was applied on behalf of whites or men. It's a case where "disparate treatment" against whites in order to remedy "disparate impact" on blacks was determined not to be acceptable.

I suppose you’re right. My mistake.