
Culture War Roundup for the week of September 12, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A few followups to last week's post on the shifting political alignment of artists:

HN: Online art communities begin banning AI-generated images

The AI Unbundling

Vox: What AI Art means for human artists

FurAffinity was, predictably, not the only site to ban AI content. Digital artists online are in crisis mode, and you can hardly blame them -- their primary income source is about to disappear. A few names for anyone here still paying for commissions: PornPen, Waifu Diffusion, Unstable Diffusion.

But what I really want to focus on is the Vox video. I watched it (and its accompanying layman explanation of diffusion models) with the expectation it'd be some polemic against the dangers of amoral tech nerds bringing grievous harm to marginalised communities. Instead, what I got was this:

There's hundreds of millions of years of evolution that go into making the human body move through three-dimensional space gracefully and respond to rapidly changing situations. Language -- not hundreds of millions of years of evolution behind that, actually. It's pretty recent. And the same thing is true for creating images. So our idea that like, creative symbolic work will be really hard to automate and that physical labor will be really easy to automate, is based on social distinctions that we draw between different kinds of people. Not based on a really good understanding of actually what's hard.

So, although artists are organising a reactionary/protectionist front against AI art, the media seems to be siding with the techbros for the moment. And I kind of hate this. I'm mostly an AI maximalist, and I'm fully expecting whoever sides with Team AI to gain power in the coming years. To that end, I was hoping the media would make a mistake...

There's hundreds of millions of years of evolution that go into making the human body move through three-dimensional space gracefully and respond to rapidly changing situations. Language -- not hundreds of millions of years of evolution behind that, actually. It's pretty recent. And the same thing is true for creating images. So our idea that like, creative symbolic work will be really hard to automate and that physical labor will be really easy to automate, is based on social distinctions that we draw between different kinds of people. Not based on a really good understanding of actually what's hard.

This is definitely not as bad as it could have been, but I find the reasoning here really strange. Since when is the time evolution spent creating something a good measure of difficulty? Evolution has been "perfecting" tons of chemical reactions since before there were multicellular organisms, and it's trivial for us to cause chemical reactions.

I can't speak for Ted Underwood, and it's possible that he hasn't given it much thought.

But it's reasonable in this specific context, because evolution consists of a semi-random exploration of the fitness landscape, and neural net training is an attempt to discover the global minimum in the loss landscape; the length of training is trivially expected to contribute to the «polish» and optimization of a feature – some old things like ribosomes may well be approaching the thermodynamic limits of efficiency, and then there's... how we do arithmetic (I've made this point to darwin somewhere in this thread). Animals have been navigating 3D space for a long time; as a result, they're pretty good at it.
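To make the analogy concrete, here's a toy sketch (mine, not Underwood's) of the two search procedures side by side – random-mutation hill climbing standing in for selection, gradient descent standing in for training. The landscape and all the parameters are invented for illustration; the point is just that both creep toward an optimum, and more optimization time buys more polish:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    # A toy 1D "landscape" with its optimum at x = 3.
    return (x - 3.0) ** 2

def evolve(steps, sigma=0.5):
    # Random-mutation hill climbing: propose a mutation, keep it only if
    # it improves fitness. A cartoon of selection on a fitness landscape.
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.normal(0.0, sigma)
        if loss(candidate) < loss(x):
            x = candidate
    return x

def gradient_descent(steps, lr=0.1):
    # Neural-net-style training: follow the slope of the loss directly.
    x = 0.0
    for _ in range(steps):
        x -= lr * 2.0 * (x - 3.0)   # d/dx of (x - 3)^2
    return x

for steps in (10, 100, 1000):
    print(steps, loss(evolve(steps)), loss(gradient_descent(steps)))
# Residual loss shrinks with optimization time in both cases: old,
# long-optimized features end up "polished" in the same sense.
```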

Further, short-term evolution is necessarily dominated by simple changes – often just a few substitutions here and there affecting quantitative parameters, like the rate of expression of a protein which upregulates the secretion of some hormone, leading to general size change and allometric growth – in other words, unequal scaling of body parts as size changes. Or, even more commonly and to a greater extent, selection on «standing» variation, shifting the distributions of already-polygenic traits:

Quantitative selection is a lot easier than people think. If I kidnapped a year's worth of National Merit Scholars and dropped them on a deserted but fertile island, a new race with an average IQ around 130 would develop (unless those little brainiacs escaped. You have to watch them all the time). If I dropped a lot of NBA and WNBA players, you'd see the tallest race, if we could just get them to reproduce.

But… there are some subtle points here. Great Danes exist and persist, but they have a bundle of health problems, and they don’t live too long (8-10 years). Wolves last around 15-16 years in captivity, with a record of 20. If you wanted to create a new race with an average adult height of 7 feet, I’m sure you could, but I’d bet money they’d have bad knees.

On the other hand, if they stayed 7 feet tall for a couple of million years, they would not be particularly prone to bad knees. There would be gradual selection for tougher knees: changes in development, changes in bones and tendons and cartilage, eventually perhaps fundamental changes in the architecture of the knee. There would be lots of little changes that made development among those giants more robust, changes that reduced the incidence of many problems that centers fall heir to.

Brain size in ancient and archaic humans was plenty big, but we don’t really see signs of rapid innovation, art, and decent fast food until fairly recently, 50,000 years or so. [...]

So I think Kevin Mitchell (not the other two) has a point. It's possible, even likely, that the populations that have relatively high IQs today haven't had them for very long, and that they're not terribly well adapted to their new mental horsepower. Susceptible to various mental problems and illusions that would probably be a lot rarer if natural selection had had time to iron out the bugs.
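(As an aside: the National Merit Scholar island above is just the breeder's equation, R = h²S, in action. A back-of-envelope sketch – the heritability and the selection cutoff are my assumed figures, not Cochran's:)

```python
# Breeder's equation: response to selection R = h^2 * S, where S is the
# selection differential (mean of selected parents minus population mean).
# All constants below are illustrative assumptions, not Cochran's numbers.
from statistics import NormalDist

pop_mean, pop_sd = 100.0, 15.0   # conventional IQ scaling
cutoff = 145.0                   # assumed National-Merit-grade threshold
h2 = 0.6                         # assumed narrow-sense heritability

# Mean of a normal distribution truncated below at the cutoff:
nd = NormalDist()
z = (cutoff - pop_mean) / pop_sd
selected_mean = pop_mean + pop_sd * nd.pdf(z) / (1 - nd.cdf(z))

S = selected_mean - pop_mean     # selection differential
R = h2 * S                       # expected shift in the offspring generation
print(f"parents ≈ {selected_mean:.0f}, offspring ≈ {pop_mean + R:.0f}")
# parents ≈ 149, offspring ≈ 130 – right around Cochran's "around 130".
```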

Short bursts of evolution like those are simple to approximate with technology: once we achieve the very basic performance (at least using a somewhat analogous architecture, as with connectionist models), we can keep going and keep scaling, even if the exponent is more punishing than it was in the organic substrate.

Our higher-order cognition, including symbolic thought (probably necessary for art) and speech, is physically implemented on an array of almost homogeneous cortical columns (with some priority for the Wernicke and Broca areas in the case of speech), which has been scaled up by a factor of roughly two in the last 2 million years, depending on where you start assuming hominids had any semblance of speech; and Cochran argues even that was only part of the prerequisite, with the real hot stuff – including cave art – starting to happen tens of thousands of years ago. So the expectation is that the change was something even simpler.

Having (presumably) discovered the general trick to learning, particularly in the domain of image recognition, and shown it with decent machine vision and other achievements, we can reasonably expect to cover the rest of the ground very quickly with scaling and scientifically trivial tweaks – which is all there is to those generative models.
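(To gesture at how trivial the tweaks really are: the entire training loop of a DDPM-style diffusion model fits in a screenful. A minimal sketch – the tiny MLP here is a stand-in for the real U-Net, and the hyperparameters are just the standard illustrative ones:)

```python
import torch

# DDPM-style training step: noise an image to a random timestep in closed
# form, then train the network to predict the noise that was added.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative signal retention

D = 3 * 64 * 64                                  # flattened toy "image" size
model = torch.nn.Sequential(                     # toy stand-in for a U-Net
    torch.nn.Linear(D + 1, 256), torch.nn.ReLU(), torch.nn.Linear(256, D))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(x0):                              # x0: (batch, D) images
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps # forward noising, closed form
    pred = model(torch.cat([x_t, t.unsqueeze(1).float() / T], dim=1))
    loss = torch.nn.functional.mse_loss(pred, eps)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

train_step(torch.randn(8, D))                    # smoke test on random data
```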

We haven't yet shown equivalent mastery in tasks involving locomotion of real robots, though that's probably more an issue of iteration time.

This is all a very good description of how things that have long been under the thumb of evolution approach efficiency thresholds better than things that haven't, but I'm still not sure why we should expect that to be the relevant criterion for modern job replacement. Evolution spent a whole lot of time getting the absolute maximum out of a calorie while balancing many concerns, so its resulting 'design' of, say, our ability to jump very high is limited by many of those concerns. We're able to use technology to circumvent most of these tradeoffs; a rocket is incomparably better at moving things very high.

Physical job requirements needn't be 'move through 3D space making exactly the same tradeoffs as humans make'. Artificial constructs don't need to worry about protecting a fragile cranium, keeping a supply of oxygen handy, storing all the needed energy inside themselves, reproduction and many more things that are vitally important to humans. They're not solving the same problem set as humans, so why would we expect the optimization to be all that fit?

Well, that's an easy one: observation bias on the part of the commenters. Because everything we could do with neat, streamlined engineering, we've automated already or are in the middle of automating. Rockets are simple, do one very basic thing very well, and follow largely from first principles; so do cars and these boxes. In the end, what's left is tasks that genuinely require good spatial perception, mechanical understanding, free navigation in human-centric environments, articulated manipulators with many degrees of freedom, high-fidelity sensors, fast response and so on. Fundamentally, those are tasks whose complexity comes almost entirely from special context-dependent cases – the long tail of failures to apply generic solutions – like HVAC maintenance or repairing automated boxes in the warehouse. You can either leave those tasks to humans or create something on par with humans. And it turns out that for developing tools (software-wise, first of all) that solve hairy tasks like those, galaxy-brain engineering doesn't work that well compared with approaches leveraging stochastic trial and error – learning. So parallels with evolution, and inferences from evolutionary hardness of adaptation, are apt.

But again: it's more of an issue of data availability and iteration time. Training CLIP or SD is much easier, faster and cheaper than training robots.
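(CLIP's core objective is similarly compact: a symmetric contrastive loss over a batch of matched image/text embeddings. A sketch of just that loss – random tensors stand in for real encoder outputs:)

```python
import torch
import torch.nn.functional as F

def clip_loss(img_emb, txt_emb, temperature=0.07):
    # Symmetric InfoNCE: the i-th image should score highest against the
    # i-th caption (and vice versa); matched pairs lie on the diagonal.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.shape[0])
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

print(clip_loss(torch.randn(16, 512), torch.randn(16, 512)))  # smoke test
```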

Artificial constructs don't need to worry about protecting a fragile cranium, keeping a supply of oxygen handy, storing all the needed energy inside themselves, reproduction and many more things that are vitally important to humans.

True, but (non-fat) humans are remarkably well-built; most of the body is useful for mechanical performance. I don't know about you, but my balls are only a tiny fraction of my overall mass. You don't need to protect the cranium all that much, because the error rate is so low (if you're dropping heavy stuff or something, you're already failing at the primary task), and even if you do, a basic helmet would typically suffice. Local energy storage is handy because it simplifies the logistics of the workspace. There's only so much that can be trimmed off. The humanoid body really is close to the optimum for many of our tasks, and making a machine perform comparably well is in fact a big challenge.

Our actuators are also very, very good. This is probably the best we can do with current hobbyist tech. Invincible.jpg.

openDogv3 is a really impressive project, but it's also optimized for low cost and weight. There are a lot of better options out there than 8308s and a 3D-printed gearbox at the enthusiast or hobbyist level; they just blow the rest of your budget out of the water and dramatically increase the cost of entry.
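(For a rough sense of what that budget buys: a back-of-envelope for the joint torque of an 8308-class outrunner behind a printed gearbox. Every constant below is an assumed ballpark, not a datasheet value:)

```python
# Rough peak joint-torque estimate for a hobby quadruped actuator.
# All figures are assumed ballparks for illustration, not datasheet values.
kt = 0.09          # torque constant, Nm/A (assumed for an 8308-class motor)
i_peak = 30.0      # peak current the driver allows, A (assumed)
ratio = 8.0        # gearbox reduction (assumed)
efficiency = 0.85  # 3D-printed gearbox (assumed; machined gears run higher)

torque = kt * i_peak * ratio * efficiency
print(f"peak joint torque ≈ {torque:.0f} Nm")   # ≈ 18 Nm on these assumptions
```

Better motors or machined reductions move each of those constants up, which is exactly where the extra budget goes.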

That said, while the gap isn't as huge as you're suggesting, it's still pretty big. More efficient artificial approaches usually work by optimizing for entirely different purposes or environments.

I've been thinking today about how good (smaller, lighter, more efficient) openDog would become if they just replaced all the 3D-printed nonsense with CNC-machined or stamped metal and injection-molded polymers (and, of course, revamped the electronics). Maybe it'd really be on par with Spot then, or (with added sensors, brains, etc.) wipe the floor with Chinese knock-off dogs.

But that requires scale. I really hope somebody helps here: we need some sort of Stability for robotics.

If we don't optimize for low cost, at current costs those machines will be completely non-competitive.

What projects do you have in mind?

Yeah, there's a lot of low-hanging fruit available for improvements; even a simple drill press and 6061 aluminum could do a lot. But the toolchains for those processes are much more complicated and the processes themselves much messier, so they're not really in consideration. And, conversely, there's a lot of potential space for... more improvisational materials, where people are willing to design around them.

Scale is part of the problem, but you don't need that much scale. The FIRST FRC environment has a ton of devices being sold on scales of hundreds or low thousands that involve a lot of custom metal parts, and while they're not always good, they're definitely extant and productive. Part of that reflects the tax- and labor-advantaged nature of a situation where most customers and some sellers are non-profits or subsidiaries of non-profits, but that's ultimately a political choice: there's nothing that must favor FRC or Vex but not more productive matters.

The deeper issues... I think the big one is that there's simultaneously a big desire to build everything from 'scratch', but also to see some level of devices as indivisible, at least for this class of project. LEGO could make (arguably, does make, through Mindstorms) an injection-molded-polymer Spot knockoff, but the sort of people who want to build a LEGO kit aren't trying to put together a Spot variant. Even a lot of the Pi-and-cheap-servo posebots are largely marketed under the theory that they're an introduction to everything you'd need to learn for the project.

Some of this is just inevitable Pareto Principle stuff, but I think a lot of it's downstream of the death of manufacturing. The emphasis on and ease of access to bits makes it so easy to consider scaling and production as someone else's problem, because, for no small part, it has been. I think the extreme time constraints and very limited purchaser base have done a lot to keep the FIRST ecosystem around as long as it has.

What projects do you have in mind?

There are a few interesting takes on custom motors, like the DizzyMotors, but almost all of them have a step one that involves taking apart a larger, expensive motor. Moteus is getting closer, but it's (AFAIK) still at a prototype level, and it's very far from anything especially hitting the limits of the medium.