
Culture War Roundup for the week of October 3, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This week's revolutionary AI advance:

Imagen Video

It's not really revolutionary; as people have been pointing out, this has been the obvious next step for months now. But it is still a milestone worth noting.

As for this:

While our internal testing suggest much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter. We have decided not to release the Imagen Video model or its source code until these concerns are mitigated.

Google's made a habit of this. They announce an amazing advance, and then say no one can have access to it because it can be used for Evil. No matter: Stable Diffusion will have something comparable out in a couple months.

ETA:

Actually, this out of DeepMind might be the bigger advance today, if less flashy:

Press: Discovering Novel Algorithms with AlphaTensor

Paper: Discovering faster matrix multiplication algorithms with reinforcement learning
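
For anyone who hasn't followed the matrix-multiplication angle: what AlphaTensor searches for are decompositions that trade scalar multiplications for extra additions. A minimal sketch of the idea, using Strassen's textbook 2x2 scheme rather than anything AlphaTensor actually found:

    # Strassen's classic trick: multiply two 2x2 matrices with 7 scalar
    # multiplications instead of the naive 8. AlphaTensor searches for
    # schemes of this kind at larger block sizes.
    def strassen_2x2(A, B):
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4, m1 - m2 + m3 + m6]]

    def naive_2x2(A, B):  # 8 multiplications
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
    assert strassen_2x2(A, B) == naive_2x2(A, B)  # both give [[19, 22], [43, 50]]

Applied recursively to matrix blocks, shaving one multiplication per 2x2 step is what pushes the asymptotic exponent below 3; the DeepMind paper reports analogous schemes for larger blocks, such as multiplying 4x4 matrices over GF(2) in 47 multiplications where Strassen's recursion needs 49.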

My greatest fear for AI content generation is it being dominated by woke megacorps, with independent creators permanently locked out of contributing to culture. It looks like Google is investing heavily in that dystopia.

NovelAI and Stable Diffusion being mostly uncensored has been a big white pill so far, but it feels like the other shoe is about to drop.

There is an easier solution to this: AIs can be trained by anyone with enough computing power, and training an AI isn't that expensive. AI is actually fairly democratic, since everyone can make their own. Once the cat is out of the bag, it isn't hard for everyone to get it; information spreads naturally. The big risk is big corporations' and governments' access to data to use the AI on. That can give them a tremendous advantage.

Tuning is relatively cheap, but initial training is (currently) expensive. The furry StableDiffusion tweaks probably cost 50-400 USD depending on vendor and management, but the initial StableDiffusion model they're based on reflects ~300k USD at official prices (although probably got at least some bulk discounting).
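
For a rough sense of where numbers like these come from, a back-of-envelope sketch only; the GPU-hour counts and hourly rate below are my assumptions, not official figures:

    # Back-of-envelope only: every figure here is an assumption, not an official number.
    a100_hours = 150_000          # rough public estimate for the base Stable Diffusion run
    usd_per_a100_hour = 2.00      # assumed on-demand cloud price per A100-hour
    print(f"base model: ~${a100_hours * usd_per_a100_hour:,.0f}")      # ~$300,000

    finetune_hours = 100          # assumed scale of a hobbyist fine-tune
    print(f"fine-tune:  ~${finetune_hours * usd_per_a100_hour:,.0f}")  # ~$200, inside the 50-400 USD range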

Some of that'll go down as GPU prices decrease and newer equipment becomes available, but there are some costs for bandwidth and energy that are slower to change. This might go from 'old condo' to 'new car', but it's not likely to go to 'vacation' or 'a couple weeks' savings' for a few years, maybe even the better part of a decade, without dramatic changes to the underlying code.

For data, it varies more. LAION's a lot of bandwidth, curation, and drive space, but it's... actually not that incredible for a single (if slightly nuts) person. Other data sources, probably less so, either due to scale (e.g. video), to availability (e.g. privacy), or to more esoteric causes (AI music is a legal clusterfuck).

but the initial StableDiffusion model they're based on reflects ~300k USD at official prices (although probably got at least some bulk discounting).

I mean... this is cheap as hell in the scheme of things. It means you only need one startup with a medium-sized seed round that sees a strategic advantage in commoditizing that model, and presto, it'll be trained and released. In fact, that's exactly the story behind Stable Diffusion.

The reason we don't see a lot of open source models yet is... well, actually, we do see a lot of open source models. GPT-2 is publicly available, Facebook released a large language model roughly equivalent to GPT-3, and the Eleuther crowd also trained and released a large language model. OpenAI just released an open-source speech-to-text model, they already released CLIP as open source (which powers Stable Diffusion and Craiyon among others), StyleGANs 1, 2, 2-ADA and 3 are all publicly available and open source, etc. These models are just a year or two behind the current research papers. Which is about how long it probably takes to reproduce a research paper. Some of them are even better than that, even cutting edge -- like Nvidia's StyleGANs when they were released, like OpenAI's Whisper, like Nvidia's new Get3D.
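
To make "publicly available" concrete: most of these are a pip install away. A minimal sketch using the Hugging Face hub, which is just one of several ways to get at them; the audio filename is a placeholder:

    # Assumes `pip install transformers torch`; model names are standard Hugging Face hub IDs.
    from transformers import pipeline

    # GPT-2: OpenAI's 2019 language model, weights fully public.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Open source models are", max_new_tokens=20)[0]["generated_text"])

    # Whisper: OpenAI's open-source speech-to-text model.
    transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base")
    # transcriber("some_audio.wav") would return {"text": "..."} for a local recording.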

Yeah, that's fair. I do think it's meaningful if it requires a startup with a medium-sized seed round (or someone willing to risk their retirement) rather than a slightly nutty hobbyist or enthusiast, but it's not a FAANG-only thing, at least at a lot of common levels.

Well, there are a fair number of wealthy machine learning hobbyists out there. None of them have actually funded this type of thing to date as far as I know, but I could totally imagine some centimillionaire setting up a few-million-dollar charity to just train models and release them based on research as it emerges.