Culture War Roundup for the week of September 19, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I wrote a post about de-biasing efforts in machine learning, which got a bit long, so I decided to turn it into an article instead. It's about how corporate anti-bias solutions are mostly designed to cover their asses, and do nothing to solve the larger (actually important) issue.

(As an aside: does it still count as a "bare link" if I point to my own content, just hosted elsewhere?)

Am I right in coming away with the conclusion that your post is arguing about two very different, almost opposite things?

  1. An AI's general willingness to combine things in unrealistic ways. This is the ability required to produce images of things not heavily represented in the training set, such as female presidents.

  2. An AI's ability to recognize and ignore "spam" in its training data (i.e. popular depictions that diverge from "reality"), such as Marvel depictions of Norse mythological figures or faux 80s illustrations.

In a sense, these are directly opposite goals, because 1 requires painting something unrealistic and 2 requires ignoring/penalizing unrealistic outputs. I suppose the common ground is that an AI should default to painting logical/coherent/realistic things unless prompted otherwise. But even this desire is loaded. First of all, the way these AIs are trained, I think, sets them up to hopelessly fail at any measure of how "realistic" their outputs are: we humans have the advantage that our perception of reality is a distinguished input, and we also have access to vastly more information crucial to understanding concepts such as causality and physical intuition. It's also clear to us whether we're seeing something real or something fictional, by virtue of that fictional thing always being a subset of the reality we perceive (e.g. a still image on a display, rather than something we're seeing directly with our naked eyes).

In machine learning, the aim is to distinguish signal from noise: to extract things that will generalize to future data, as opposed to merely coincidental, contingent things that appear in the training data. This, however, leads to deeply philosophical issues that some engineer types tend to ignore out of a feeling of superiority over humanities types. It's not clear, for example, how to distinguish real, fundamental correlations from mere happenstance. One may say that a female US president is perfectly possible; it's not a logical contradiction, and the fact that we haven't had one is just a contingent fact about our timeline so far. Or perhaps the fact that black people currently commit more crimes is not an inherent property of them but a happenstance based on external conditions, and so on.
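To make the "coincidences in the training data" point concrete, here's a toy sketch (all names and numbers are my own invention, not from any real dataset): a feature that happens to track the label in the training set, but not in the world the model is later deployed in. A plain least-squares classifier leans on the coincidental feature and its accuracy collapses once the coincidence breaks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    # y: the label; x1: a genuine, noisy signal; x2: a background feature
    y = rng.integers(0, 2, n)
    x1 = y + rng.normal(0, 1.0, n)  # real signal, but noisy
    # x2 matches y with probability ~spurious_corr, otherwise it's random
    flip = rng.random(n) < spurious_corr
    x2 = np.where(flip, y, rng.integers(0, 2, n)) + rng.normal(0, 0.1, n)
    return np.column_stack([x1, x2]), y

# Training set: x2 happens to track y ~95% of the time (a contingent
# fact about how this data was collected, not a stable fact about the world).
X_tr, y_tr = make_data(5000, spurious_corr=0.95)
# Test set: the coincidence is gone.
X_te, y_te = make_data(5000, spurious_corr=0.0)

# Least-squares linear model with a bias column, thresholded at 0.5.
w, *_ = np.linalg.lstsq(
    np.column_stack([X_tr, np.ones(len(y_tr))]), y_tr, rcond=None
)

def acc(X, y):
    pred = np.column_stack([X, np.ones(len(y))]) @ w > 0.5
    return (pred == y).mean()

print(f"train accuracy: {acc(X_tr, y_tr):.2f}")  # high: the model leans on x2
print(f"test accuracy:  {acc(X_te, y_te):.2f}")  # drops once the coincidence breaks
```

Nothing in the training data alone tells the learner that x1 is the "real" correlation and x2 the happenstance one; both generalize equally well within the training distribution.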

By the way, Saul Kripke just died a few days ago. He had a lot to say about necessity. I'm not convinced that the necessary vs contingent stuff (which goes back all the way to theological arguments) is actually all that meaningful but it's good to recognize that people have already discussed these sorts of things.

It's not clear for example how to distinguish real, fundamental correlations from mere happenstance.

You mean, how to distinguish correlation from causation? Isn't this exactly the domain of the scientific/empirical method, and its associated toolbag of trickery?

No, not causation. I mean correlations that will hold at test time and aren't just coincidences. Causation is a different thing.

What other options are there for a nontrivial correlation? One where both are causally connected to a common third factor?
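That's the classic confounder case, and it's easy to simulate (a toy sketch with made-up variables): a and b never influence each other, but both depend on a common cause z, so they correlate; condition on z (here, by subtracting its known contribution) and the correlation vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# z is a common cause (say, "hot day"); a and b don't affect each other,
# but both depend on z (say, ice cream sales and sunburn cases).
z = rng.normal(size=n)
a = z + rng.normal(size=n)
b = z + rng.normal(size=n)

print(np.corrcoef(a, b)[0, 1])  # ~0.5: marginally correlated

# Condition on z by removing its (here, known) contribution from both:
a_resid = a - z
b_resid = b - z
print(np.corrcoef(a_resid, b_resid)[0, 1])  # ~0: correlation vanishes
```

In real data you don't get z handed to you, which is the whole problem: an observed correlation between a and b looks the same whether it's causal, confounded, or a finite-sample coincidence.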