
Small-Scale Question Sunday for April 2, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


That picture of Pope Francis in a puffer coat got me thinking:

AI generation of highly realistic images is a problem. Ideally, we would want a reliable way to distinguish truth from lies. So we train another AI to spot the difference. Then someone trains a different AI to fool both humans and AIs.

Will this be an endless arms race? Will one side win?

This is basically what the GAN architecture is--generative adversarial networks.

One, the generator, is being trained to generate e.g. photorealistic images. The other, the discriminator, is being trained to classify images as real or generated.

At first, the generator sucks and the discriminator is unsophisticated. But they co-evolve in an arms race. Afaik this architecture was developed in order to make it less costly to produce the generator (requiring less human grading of outcomes), but it turns out the trained discriminator might be handy as well.
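To make the arms race concrete, here's a minimal sketch of that adversarial training loop (assuming PyTorch; the tiny MLPs and the random "real" data are just placeholders, not how an actual image GAN would be built):

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs a logit for "this sample is real".
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real images
    fake = G(torch.randn(32, latent_dim))

    # Train the discriminator: label real as 1, generated as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: push the discriminator to call its output real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Each side's loss is literally the other side's success, which is why the two models drag each other upward as training goes on.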

To me, though, it seems that the discriminator's job is intrinsically harder. Even with infinite training resources, I don't see any way to avoid ending up with lots and lots of false positives--real images categorized as fakes.

Well, the question is: what is the difference between a real image and a fake image that is visually indistinguishable from a real one on a technical level? (Assuming you have no external knowledge about the subject matter in the image.)