Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


How does this work? My understanding was that the only "learning" that took place is when the model is trained on the dataset (which is done only once, requiring a huge amount of computational resources), and any subsequent usage of the model has no effect on the training.

I'm far from an expert here.

If they want to make the AI 'smarter' at the cost of longer/more expensive training, they can add parameters (i.e. variables that the AI considers when interpreting an input and translating it into an output), and more data to train on to better refine said parameters. Very roughly speaking, this is the difference between training the AI to recognize colors in terms of 'only' the seven colors of the rainbow vs. the full palette of Crayola crayons vs. at the extreme end the exact electromagnetic frequency of every single shade and brightness of visible light.

My vague understanding is that the current models are closer to the Crayola crayons than to the full range of electromagnetic frequencies.
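To make the "add parameters" point concrete, here's a toy sketch (not any real model's architecture) showing how the parameter count of a single fully connected layer grows with its width. Scaling up real models multiplies many such layers, which is why training cost climbs so fast.

```python
# Toy illustration: parameter count of one fully connected layer.
# A layer mapping n_in inputs to n_out outputs has a weight for
# every input/output pair, plus one bias per output unit.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

small = dense_params(128, 128)     # narrow layer
large = dense_params(1024, 1024)   # 8x wider layer

print(small)  # 16512
print(large)  # 1049600 -- ~64x the parameters for 8x the width
```

Note that widening the layer 8x multiplies the parameters by roughly 64x, since the weight matrix grows in both dimensions; this quadratic blow-up is one reason bigger models are so much more expensive to train.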

Tweaking an existing model can also achieve improvements; think in terms of GANs.

If the AI produces an output and receives feedback from a human or another AI as to how well the output satisfies the input, and is allowed to update its own internals based on this feedback, it will become better able to produce outputs that match the inputs.

This is how a model can get refined without needing to completely retrain it from scratch.
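The feedback loop described above can be sketched with a deliberately tiny, hypothetical "model" (one weight, nothing like a real network): score the output with a feedback function, estimate which direction a small parameter change improves the score, and nudge the parameter that way. The function names and numbers here are made up for illustration.

```python
# Hypothetical sketch of feedback-driven refinement, not any
# specific system's training API.
def feedback(output, target):
    # Higher is better; here, negative squared error from the target.
    return -(output - target) ** 2

def refine(weight, x, target, lr=0.1, steps=50):
    """Repeatedly update a single 'model' parameter from feedback."""
    eps = 1e-6
    for _ in range(steps):
        # Finite-difference estimate: does nudging the weight up
        # improve the feedback score?
        base = feedback(weight * x, target)
        bumped = feedback((weight + eps) * x, target)
        grad = (bumped - base) / eps
        weight += lr * grad  # update internals based on feedback
    return weight

# Start from weight 0; feedback pulls it toward the value (3.0)
# that makes output == target.
w = refine(0.0, x=2.0, target=6.0)
print(round(w, 2))  # ≈ 3.0
```

The same shape of loop, scaled up to billions of parameters and with feedback coming from human raters or a second model, is roughly what fine-tuning from feedback amounts to: no full retraining, just incremental updates to an already-trained model.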

Although with diffusion models like DALL-E, outputs can also be improved by letting the model take more 'steps' (i.e. running the output through the model again and again) to refine it as far as it can.
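As a loose analogy for that multi-step refinement (this is not the actual diffusion math, just the iterate-to-improve idea): each pass moves the current output a fraction of the way toward a cleaner result, so more steps get you closer, with diminishing returns.

```python
# Toy analogy for multi-step refinement; the numbers and the
# 'strength' parameter are made up for illustration.
def refine_step(current, target, strength=0.3):
    # Move a fraction of the remaining distance toward the target.
    return current + strength * (target - current)

def run(noisy, target, steps):
    out = noisy
    for _ in range(steps):
        out = refine_step(out, target)
    return out

# Starting from a 'noisy' value of 10.0 aiming at 0.0:
print(run(10.0, 0.0, steps=5))   # ≈ 1.68
print(run(10.0, 0.0, steps=20))  # ≈ 0.008, much closer
```

Real diffusion samplers expose exactly this kind of knob (number of denoising steps), trading inference time for output quality without touching the trained weights at all.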

As far as I know there's very little benefit to manually tweaking the models once they're trained, other than to e.g. implement a NSFW filter or something.

And as we produce and concentrate more computational power, it becomes more and more feasible to use larger and larger models for more tasks.