Small-Scale Question Sunday for October 16, 2022

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

So, what are you reading? (Another thread with this question is over in the Fun Thread)

I'm still on Gray's Postmodern War. So far it's an interesting blend of history, analysis of the ideas behind military programs, and meditations on the nuances of war. Very quotable.

War explodes around the planet, relentlessly seeking expression in the face of widespread moral, political, and even military censorship, since the old stories of ancient tribal grievances and of the supremacy of male courage, and therefore war, don't sell everywhere.

I'm reading Cathy O'Neil's Weapons of Math Destruction. It has been on my TBR pile for... too many years, now, which makes some of her case studies particularly interesting in retrospect. I'm a little over halfway through, and so far she seems not to appreciate the difference between these two positions:

  • Automated, opaque data aggregation and processing is, by its nature, damaging to something important (e.g. rights, economies, society, mental health, whatever)

  • Automated, opaque data aggregation and processing should be used only to advance my political goals

It's not a bad book, exactly, but I'm concerned that by the time I finish reading it, I will just feel annoyed that it came so highly recommended. A lot of what she says seems basically right, but she essentially telegraphs the eventual capture of so-called "AI alignment" by progressive ideologues. Her aim does not appear to be (as, I think, advertised) to understand how the application of algorithms to human existence might be objectionable per se, but to find a way to make sure that algorithms apply to human existence only in ways that progressives like.

But in one sense O'Neil accomplished something interesting, at least: she successfully, if inadvertently, became a trendsetter for today. With art generators in the West specially trained not to produce nudity or violence, while art generators in China are trained not to produce pictures of the 1989 Tiananmen Square Massacre, "AI alignment" "experts" the world over are chattering about how we will avoid building bias into our AI tools by, apparently, building the right bias into our AI tools. In so doing, they appear to be channeling O'Neil.

Yeah, I recall being pretty disappointed in the book when I read it a few years ago, though I don't recall why. I seem to recall her making a lot of dubious assumptions.