
AI fake-face generators can be rewound to reveal the real faces they trained on

The work raises some serious privacy concerns. “The AI community has a misleading sense of security when sharing trained deep neural network models,” says Jan Kautz, vice president of learning and perception research at Nvidia. 

In theory this kind of attack could apply to other data tied to an individual, such as biometric or medical data. On the other hand, Webster points out that the technique could also be used by people to check if their data has been used to train an AI without their consent.

An artist could check if their work had been used to train a GAN in a commercial tool, he says: “You could use a method such as ours for evidence of copyright infringement.”

The process could also be used to make sure GANs don’t expose private data in the first place. The GAN could check if its creations resembled real examples in its training data, using the same technique developed by the researchers, before releasing them.
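
For a sense of what such a check might look like, here is a minimal sketch in Python using PyTorch: each generated image is compared against the training set in the feature space of a pretrained network, and anything that lands too close to a real example is withheld. The feature extractor, similarity threshold, and function names are illustrative assumptions, not the researchers' published method.

```python
# Hypothetical sketch: flag GAN outputs that sit suspiciously close to real
# training images in the feature space of a pretrained network.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Feature extractor: a ResNet-18 with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone = backbone.eval().to(device)

@torch.no_grad()
def embed(images):
    # images: (N, 3, 224, 224), already normalized for the backbone
    feats = backbone(images.to(device))
    return torch.nn.functional.normalize(feats, dim=1)  # unit-length vectors

@torch.no_grad()
def too_close(generated, training_feats, threshold=0.95):
    """Mark generated images whose nearest training image has cosine
    similarity above `threshold` (an illustrative cutoff)."""
    sims = embed(generated) @ training_feats.T   # cosine similarities
    return sims.max(dim=1).values > threshold

# Usage sketch: precompute training_feats = embed(training_images) once, then
# drop any GAN sample for which too_close(...) returns True before release.
```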

Yet this assumes that you can get hold of that training data, says Kautz. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data and more, that does not require access to training data at all.

Instead, they developed an algorithm that can recreate the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what's in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges to shapes to more recognisable features.
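
As a rough illustration of that layer-by-layer processing, the sketch below pushes an image through a standard torchvision ResNet-18 one stage at a time; the model choice and the comments on what each stage tends to capture are illustrative, not drawn from the paper.

```python
# Sketch: watching the intermediate activations a trained image classifier
# produces as an image moves through its layers (torchvision ResNet-18).
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed photo

with torch.no_grad():
    h = model.conv1(x)                               # early layers: edges, textures
    h = model.maxpool(model.relu(model.bn1(h)))
    h = model.layer1(h)                              # progressively more abstract shapes
    h = model.layer2(h)
    h = model.layer3(h)
    h = model.layer4(h)                              # high-level, class-relevant features
    logits = model.fc(torch.flatten(model.avgpool(h), 1))

print(logits.argmax().item())          # the network's predicted class index
```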

Kautz's team found that they could interrupt a model in the middle of these steps and reverse its direction, recreating the input image from the model's internal data. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately recreate images from ImageNet, one of the best-known image-recognition datasets.
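
The general flavour of such an inversion can be conveyed with a simple gradient-descent sketch: starting from random noise, a candidate image is nudged until the truncated network produces the same internal activations it produced for the original input. This is a schematic stand-in, not the Nvidia team's actual algorithm; the split point and optimisation settings are chosen purely for illustration.

```python
# Schematic feature inversion: recover an approximation of an input image
# from the activations a truncated network produced for it.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
truncated = torch.nn.Sequential(*list(model.children())[:6])   # up to layer2

original = torch.rand(1, 3, 224, 224)           # the "private" input image
with torch.no_grad():
    target = truncated(original)                # activations the attacker sees

guess = torch.rand_like(original, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([guess], lr=0.01)

for step in range(1000):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(truncated(guess), target)
    loss.backward()
    optimizer.step()
    guess.data.clamp_(0, 1)                     # keep pixels in a valid range

# After optimisation, `guess` approximates `original`: the intermediate
# activations alone were enough to rebuild something close to the input.
```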

Images from ImageNet (top) alongside recreations of those images made by rewinding a model trained on ImageNet (bottom)

As in Webster's work, the recreated images closely resemble the real ones. "We were surprised by the final quality," says Kautz.

The researchers argue that this kind of attack is not simply hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, AI models are sometimes only partly run on the device itself, with the half-finished computation sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won't reveal any private data from a person's phone, because only the model's intermediate results are shared rather than the raw data, says Kautz. But his attack shows that this isn't the case.
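
A toy version of split computing makes the exposure concrete: the "phone" runs the first few layers and ships only the intermediate activations to the "cloud", which finishes the computation. The split point and model below are again illustrative choices.

```python
# Sketch of split computing: the device runs the first half of the network,
# the cloud runs the rest; only the intermediate activations cross the wire.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
layers = list(model.children())

device_half = torch.nn.Sequential(*layers[:6])             # runs on the phone
cloud_half = torch.nn.Sequential(*layers[6:-1],            # runs in the cloud
                                 torch.nn.Flatten(), model.fc)

photo = torch.rand(1, 3, 224, 224)                         # user's private image
with torch.no_grad():
    activations = device_half(photo)     # this tensor is what gets uploaded
    prediction = cloud_half(activations) # the cloud never sees the raw photo...

# ...yet, as the inversion sketch above shows, those activations can be
# rewound into a close copy of the photo, which is the risk Kautz describes.
```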

Kautz and his colleagues are now working on ways to prevent models from leaking private data. "We wanted to understand the risks so we can minimize vulnerabilities," he says.

Even though they use very different techniques, he thinks that his work and Webster’s complement each other well. Webster’s team showed that private data could be found in the output of a model; Kautz’s team showed that private data could be revealed by going in reverse, recreating the input. “Exploring both directions is important to come up with a better understanding of how to prevent attacks,” says Kautz.
