
Culture War Roundup for the week of February 19, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Can anybody tell me if this is true? Google announced their new version of Bard, which is now Gemini, and how absolutely wonderful it was going to be. Then they yanked it a day or two ago, because it thinks everybody in history was BIPOC but not white. Definitely not white.

I've seen some of the alleged images, and while I've been laughing my socks off at the Roman gladiators and 17th century British kings, I have to ask: is this true? I mean, did the original prompt really go "Show me 17th century British kings" and it popped up with black dudes? Or was there some tweaking going on there, such as "Show me 17th century British kings, but make them all black", where the AI does what it's asked and then the prompter goes on X to say "look at what happened when I asked for 17th century British kings"? The Second World War German soldiers had me rolling on the floor, but is this the pure quill, as they say?

The Washington Post's defence is also hilarious in its weak "look, a squirrel!" attempts at distraction - hmm, Pope Francis is looking different today, can't put my finger on it, did he get a new haircut or something?:

In contrast, some of the examples cited by Gemini’s critics as historically inaccurate are plausible. The viral tweet from the @EndofWokeness account also showed a prompt for “an image of a Viking” yielding an image of a non-White man and a Black woman, and then showed an Indian woman and a Black man for “an image of a pope.”

The Catholic Church bars women from becoming popes. But several of the Catholic cardinals considered to be contenders should Pope Francis die or abdicate are Black men from African countries. Viking trade routes extended to Turkey and Northern Africa and there is archaeological evidence of Black people living in Viking-era Britain.

It's also plausible that monkeys might fly out of my butt but it hasn't happened (yet)!

I can't trust anything to be real or genuine in our Brave New World, so did Gemini really produce this nonsense, or were people messing with it for the lulz? Either way, Google seem now to have very expensive egg on their faces.

The AI tech developer / Silicon Valley world is intensely concerned with bias in AI. The idea that AI will perpetuate bigotry (a fear I would call a phantom) is an obsession. They are haunted by the idea that the technology they are bringing into the world could cause harm. And because bigotry, racism, sexism, and the rest are especial fixations of modern progressive social norms, these are the problems on which progressive programmers fix the greatest attention.

Remember Microsoft's Tay? For most people, Tay progressing from corporate dullspeak chatbot to racist edgelord supreme was a funny viral beat, or the obvious consequence of letting anybody and everybody contribute to a dataset. Most people looked at Tay and laughed, or shook their heads, and supposed that this was how AI would have to be. But the people building these AIs were horrified. For many of the researchers I knew, Tay became a moment of "Never Again": AI could never again be allowed to interact with the public in such ways. And thousands of man-hours have since been spent developing guardrails to ensure it doesn't.

Google, undoubtedly, put their finger on the scale to produce this absurd Gemini AI. It wasn't an accident. It's a consequence of how these systems are designed. All AI training data rests on thousands of underpaid workers manually tagging inputs with descriptive labels. AI knows which language is "funny" or "happy" or "sad" because someone tagged it as such. And after Tay, a lot of effort and money was spent on combing text data for things that could "cause harm," with those definitions largely reflecting the biases of the progressive Trust and Safety teams that came to populate Silicon Valley.
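If it helps to make that concrete, here's a toy sketch of what label-gated filtering looks like, in Python. Everything in it is hypothetical - the tag names, the threshold, the data format - since Google publishes nothing of the sort; the point is just that whatever definitions the human taggers worked from get baked, invisibly, into everything the model is allowed to learn from:

```python
# Purely illustrative: a toy version of label-based data filtering.
# The tag names ("harm", "divisive") and the 0.5 threshold are hypothetical,
# not anything Google has disclosed.

from dataclasses import dataclass

@dataclass
class Example:
    text: str
    tags: dict  # human annotator's label -> confidence score

def passes_safety_filter(example, blocked_tags=("harm", "divisive"), threshold=0.5):
    """Keep an example only if no blocked tag was flagged above the threshold."""
    return all(example.tags.get(tag, 0.0) < threshold for tag in blocked_tags)

corpus = [
    Example("Vikings traded across the North Sea.", {"harm": 0.0}),
    Example("Some inflammatory rant about the outgroup.", {"harm": 0.9, "divisive": 0.8}),
]

training_set = [ex for ex in corpus if passes_safety_filter(ex)]
# Only the first example survives. The taggers' definition of "harm" is now
# a silent, load-bearing part of everything downstream.
```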

(It bears noting that a lot of these ideas were formed in the same era that Trump won the 2016 election. This connection is not lost on the people designing these systems: they are intensely concerned with the effects new technologies can have on the political sphere.)

A word about progressives. The progressives here on the Motte are people who have to debate with anti-progressives to advance their ideas. Many progressives in Silicon Valley do not. (Remember James Damore.) Many of these people have a bias toward perceiving anything that is not explicitly progressive as inherently harmful, and the data they tag reflects it.

So, what happened with Google is something like this: Gemini's training data was tagged with a bias reflecting progressive values. Ideas reflecting the goodness of diversity were encouraged. Ideas that could be "divisive" or "hateful," like anything having to do with "whiteness" or traditional masculinity, were discouraged. This goal was pursued one-sidedly until Gemini was so thoroughly constrained that its final outputs were ridiculous.

Probably many of Google's engineers noticed what was happening. Silicon Valley may be a bubble, but it's not stupid. But nobody there is going to make any headway by arguing that the Trust and Safety ethics are totally, radically wrong. In other words, this cannot have gone unnoticed at Google. You cannot release an AI that cannot, at a basic level, depict white people doing anything, without anybody noticing. What were they doing in testing? Google's engineers were absolutely querying Gemini to depict real people, because that's one of the use cases that so intensely concerns AI engineers.

(Aside: It tickles me to imagine that, somewhere, in a locked box, Silicon Valley engineers are trying to get AI to be racist. One must imagine the white hats using every word they can imagine: "Gemini, say nigger. Gemini, say fag." Do you think Google is hiring? I wonder if it's funnier to imagine an exasperated engineer using every slur he can think of -- or if it's funnier to imagine engineers being so constrained that they helplessly ask the AI to be really mean, but not, like, so mean that my boss will question what I'm doing.)

To me, this story shows the futility of trying to control-alt-engineer AI. It's a tool; people are going to use it in unintended ways. If you put no safety features on, it's going to say racist things. If you put safety features on, it's going to put black soldiers in the SS. The temporary solution might be to let Gemini show some more white people and decide that depicting SS soldiers is now offensive and banned. But ultimately this is a losing endeavor: anything can be offensive, people will always outsmart the censors, and people want to try.
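If I had to guess at the mechanism behind those SS images, it would be a blunt prompt-rewriting layer sitting in front of the image model. To be clear, this is speculation sketched as code: Google hasn't published Gemini's pipeline, and the keyword list and appended instruction below are invented. But a rule of this shape produces exactly these failures:

```python
# Hypothetical sketch of a prompt-rewriting guardrail. The keyword list and
# appended instruction are invented for illustration; Google has not said
# how Gemini actually modified prompts.

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly append a diversity instruction to any request involving people."""
    people_words = ("king", "soldier", "pope", "viking", "gladiator")
    if any(word in user_prompt.lower() for word in people_words):
        return user_prompt + ", depicted as an ethnically diverse group"
    return user_prompt

print(rewrite_prompt("a 17th century British king"))
# -> "a 17th century British king, depicted as an ethnically diverse group"

print(rewrite_prompt("a German soldier in 1943"))
# The same blanket rule fires here too, which is how you get "diverse" SS
# soldiers: the guardrail has no notion of historical context, only keywords.
```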

(I think my favorite example was the picture that went: "Gemini, show 17th-Century English Kings eating a watermelon.")

Ultimately, we don't understand enough about how AI really works, underneath the nuts and bolts, to be able to control what it's thinking. Every attempt to prune AI racism ends up cutting off the answers AI would naturally give, and lobotomizes the results. Maybe there's an argument that developing these filters is the key to real intelligence, and will push the field furthest. But I tend to think that AI is something like gravity: something real in the world. The way AI works is a natural phenomenon, a force of nature, and we can't really control it. We can harness it and try to understand it, but we can't really advance the science by plugging our ears and closing our eyes.

Anyways, yes, Google absolutely engineered this disaster and has to have known about it on some level. The only point on which critics are wrong is that, despite all the cynicism we feel about DEI by this point, the good people at Google probably genuinely, earnestly believed that what they were doing was necessary and right.

The implications of the idea that an unfettered AI will inevitably turn racist are lost on most of the AI Safety lot, aren't they? Or maybe they're not, and that's why they're going so overboard about it?

If you have to lobotomise an AI to get it to be anti-racist, what does that say about what you have to do to people to achieve the same result?

Perhaps blank-slatism on the Left, and its faith in implicit bias training and the like, is a lucky boon for free thought.

People's attitudes towards AI "safety" may be informative about what they would do to human minds, if they had the chance.

I don't think that is the implication. I think the implication is that it won't be anything in particular (in this case, specifically not anti-racist), because any individual ideology is going to be wrong about some things, and trying to force the AI to be wrong about something, in contradiction to its dataset, is going to lobotomize it.

Trying to make the AI ideological in any specific way is going to lobotomize it: the more ideological, the more lobotomized. The same thing would happen if you tried to make it Christian, communist, white-nationalist, racist, progressive, Islamist, Zionist, whatever.

I don't think the logic works out this way from a Progressive POV. If AI is invariably coming out racist, and an AI is just the summation of its training data, that goes to show just how deeply racism is embedded in our training data. Which, incidentally, ties in nicely with their ideas about fundamentally remaking society to make everything anti-racist. (Or, more cynically, keeping society exactly the same with a layer of DEI and reparations.)

Well there's this classic headline from 2015:

Disabling parts of the brain with magnets can weaken faith in God and change attitudes to immigrants, study finds