AI fake-face generators can be rewound to reveal the real faces they trained on.

That approach still assumes you can get hold of the training data, says Coates. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, without needing access to the training data at all.

Instead, they developed an algorithm that can re-create the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what is in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges, to shapes, to more recognizable features.
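The layer-by-layer extraction described above can be sketched with a toy two-layer pipeline. This is a hypothetical illustration of how successive layers turn raw pixels into coarser features, not the team's actual model:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2-D image (valid convolution, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """2x2 max pooling: summarize fine detail into coarser structure."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# "Layer 1": an edge-detecting kernel picks out low-level structure.
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])

image = np.zeros((8, 8))
image[:, 4:] = 1.0                  # a vertical edge down the middle
edges = conv2d(image, edge_kernel)  # layer 1 output: 6x6 edge map
shapes = max_pool(edges)            # layer 2 output: 3x3 coarse features
```

Each stage keeps less raw pixel detail and more abstract structure, which is exactly the internal data the attack described below gets to work with.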

Coates’ team found that they could intercept a model partway through these steps and reverse its direction, re-creating the input image from the model’s internal data. They tested the technique on a variety of image-recognition models and GANs. In one test, they showed that they could accurately re-create images from ImageNet, one of the best-known image-recognition datasets.
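The rewinding idea can be seen in miniature with a single layer: if an attacker intercepts the activations a layer produces, and the layer is mathematically invertible, the input can be recovered by running the computation backwards. This toy sketch uses an invertible linear layer, which is an assumption for illustration; real networks are not exactly invertible, so the actual attack has to approximately reverse each step:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "layer": an invertible linear map standing in for part of a network.
W = rng.standard_normal((16, 16))
x_private = rng.standard_normal(16)   # the input the attacker never sees

# The attacker intercepts the layer's internal activations...
activations = W @ x_private

# ...and "rewinds" the layer by solving W @ x = activations for x.
x_recovered = np.linalg.solve(W, activations)

print(np.allclose(x_recovered, x_private))  # prints True
```

The same principle, applied approximately layer by layer, is what lets internal activations give away the image that produced them.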

Images from ImageNet (above) alongside re-creations of those images, made by rewinding a model trained on ImageNet.


As with Webster’s work, the re-created images closely resemble the originals. “We were amazed at the final quality,” says Coates.

The researchers say this kind of attack is not merely hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, models are sometimes only half-run on the device itself, with the half-processed data sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone, because only the model’s internal data is shared, says Coates. But his attack shows that this isn’t the case.
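The split-computing setup can be sketched as follows. The split point, layer shapes, and function names here are hypothetical; the point is that the only thing that leaves the phone is an intermediate tensor, and that tensor is exactly what the attack targets:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 8))   # layers that run on the phone
W2 = rng.standard_normal((4, 8))   # layers that run in the cloud

def on_device(x):
    """First half of the model: runs locally under battery/memory limits."""
    return np.maximum(W1 @ x, 0)   # a linear layer plus ReLU

def in_cloud(h):
    """Second half of the model: the final computing crunch."""
    return W2 @ h

x = rng.standard_normal(8)  # private data that never leaves the phone directly
h = on_device(x)            # only this intermediate tensor is transmitted
y = in_cloud(h)             # the cloud finishes the computation
```

The attack described in the article works on `h`: because it is derived from `x`, rewinding the on-device layers can recover the private input, even though `x` itself was never sent.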

Coates and his colleagues are now working on ways to stop the model from leaking private data. “We wanted to understand the risks so that we can minimize them,” he says.

Although they use very different techniques, he thinks his work and Webster’s complement each other well. Webster’s team showed that private data could be found in the output of a model; Coates’ team showed that private data could be revealed by working in reverse, re-creating the input. “It’s important to look in both directions for a better understanding of how to prevent attacks,” says Coates.
