A group of Norwegian scientists has created a new DeepFake method that swaps faces to protect the right to privacy in situations where it is needed. The work was published on arXiv, Cornell University's preprint repository, and presents a system named DeepPrivacy. Unlike other models, it can automatically replace a person's face with a non-existent one, generated from a dataset of 1.5 million images.
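The replacement step described above can be sketched conceptually: detect where the face is, then overwrite that region with a synthetic face from a generator. The snippet below is a minimal illustration with a hypothetical `anonymize_region` helper and a stand-in "generator" that produces random noise instead of a GAN output; it is not DeepPrivacy's actual interface.

```python
import numpy as np

def anonymize_region(image: np.ndarray, box: tuple, generator) -> np.ndarray:
    """Replace the pixels inside `box` (x0, y0, x1, y1) with a
    generated patch, leaving the rest of the image untouched.

    Hypothetical helper for illustration only; a real system like
    DeepPrivacy conditions its generator on pose and background.
    """
    x0, y0, x1, y1 = box
    out = image.copy()
    # The generator is assumed to produce a patch of the requested shape.
    out[y0:y1, x0:x1] = generator((y1 - y0, x1 - x0, image.shape[2]))
    return out

# Stand-in "generator": random noise in place of a GAN-produced face.
rng = np.random.default_rng(0)
fake_face = lambda shape: rng.integers(0, 256, size=shape, dtype=np.uint8)

img = np.zeros((64, 64, 3), dtype=np.uint8)   # dummy black image
anon = anonymize_region(img, (16, 16, 48, 48), fake_face)
```

Everything outside the bounding box is preserved, which mirrors the key property of this approach: only the identifying region is synthesized, while the scene remains usable.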
“We present a diverse set of human face data, including unconventional poses, occluded faces, and wide background variability,” the paper states. The publication also notes that this data can be used for “further training of deep learning models”.
Face swap sequence performed by DeepPrivacy. (Source: DeepPrivacy / GitHub)
The new system could then be used in cases where identity protection is required, for example in interviews with anonymous sources or witnesses. In some samples, however, it is possible to notice flaws in the generated face, which may appear under certain lighting conditions and movements.
Even so, DeepPrivacy offers an unprecedented automatic DeepFake system with a less controversial goal. Additional information about the model, such as its open-source code and further samples, can be found on GitHub.