Tandon research team reveals privacy-protecting GANs cannot completely protect identity

Generative adversarial networks (GANs) are machine-learning (ML) systems that can be used to “scrub” images of traces of personal identity. But after exploring the machine-learning frameworks behind these tools, a team of researchers at the NYU Tandon School of Engineering’s Department of Electrical and Computer Engineering suggests that these scrubbed images can leave a lot of identifying “residue” behind.

The team—Kang Liu, then a Ph.D. candidate in the department, along with Benjamin Tan, a Research Assistant Professor, and Dr. Siddharth Garg, Institute Associate Professor—recently conducted tests to see how effective tools such as privacy-protecting GANs (PP-GANs) actually are. The results, presented at the 35th AAAI Conference on Artificial Intelligence in a paper entitled “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” suggest that PP-GAN designs can, in fact, be subverted to pass privacy checks while still permitting extraction of secret information.
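To make the risk concrete: the attack is possible because a sanitizer can produce outputs that satisfy a privacy check while still carrying a recoverable signal. The paper's actual method trains the PP-GAN itself to encode the secret in its learned mapping; the Python sketch below is a deliberately simplified stand-in that reproduces only the end effect, pairing a hypothetical blur-based "scrubber" with a least-significant-bit (LSB) covert channel. All function names here are invented for illustration and do not come from the paper.

import numpy as np

# Toy illustration only. The paper's attack hides identity information
# inside the PP-GAN's learned mapping; this sketch mimics the end-to-end
# effect with a simple LSB covert channel in an apparently "scrubbed" image.

def sanitize_with_hidden_secret(image: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Return an image that looks scrubbed (heavily blurred), while
    smuggling secret_bits into the LSBs of the first pixels."""
    k = 15  # box-blur kernel size; destroys visually identifying detail
    padded = np.pad(image.astype(np.float32), k // 2, mode="edge")
    blurred = np.zeros(image.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    scrubbed = (blurred / (k * k)).astype(np.uint8)

    # Covert channel: overwrite the LSB of one pixel per secret bit.
    flat = scrubbed.flatten()  # flatten() copies, so scrubbed stays intact
    flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
    return flat.reshape(image.shape)

def extract_secret(sanitized: np.ndarray, n_bits: int) -> np.ndarray:
    """A colluding recipient recovers the hidden bits from pixel LSBs."""
    return sanitized.flatten()[:n_bits] & 1

# Demo: a random grayscale "face" and an 8-bit identity tag.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
identity_tag = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

sanitized = sanitize_with_hidden_secret(face, identity_tag)
assert np.array_equal(extract_secret(sanitized, 8), identity_tag)
print("Recovered hidden identity tag:", extract_secret(sanitized, 8))

A privacy check that only inspects whether the output still looks identifiable would pass the blurred image, yet the colluding recipient recovers the tag exactly; the paper shows that learned, DL-based privacy checks can be fooled in an analogous way.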

“Our experimental results highlighted the insufficiency of existing DL-based privacy checks, and potential risks of using untrusted third-party PP-GAN tools,” the paper states. “From a practical standpoint, our results sound a note of caution against the use of data sanitization tools, and specifically PP-GANs, designed by third parties,” Garg adds.

The study has been covered by several media outlets, including Science Daily and Help Net Security. The paper can be read in its entirety at https://arxiv.org/pdf/2009.09283.pdf.