Deepfakes have their roots in the 2016 Face2Face research project, but methods to swap faces onto different bodies have been evolving since. After an explosion of press coverage in 2017, the following year brought FakeApp, which simplified the process and built on Google’s TensorFlow framework. Alternatives have since sprung up, making it clear the practice won’t disappear any time soon. While the technique sounds fairly innocuous on first read, it could have a major impact in the future. The technology is already getting to the point where, given a good dataset, the results are close to indistinguishable from the real thing. This has implications for the spread of fake news, and early uses of deepfakes include placing celebrities’ heads on porn performers’ bodies.
Grants, Datasets, and Partnerships
Facebook and its partners, which include Microsoft, the Partnership on AI, and a number of universities, are offering up to $10 million in grants to those who can find new ways to detect altered video. To help, Facebook is releasing a public dataset that uses paid actors to create deepfakes in a consistent environment.

Current methods for automatically detecting fakes look for visual artifacts, such as duplicated eyebrows and other rendering errors. However, as the technology behind deepfakes is refined, these telltale flaws will fade, making such solutions ineffective.

“This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” said Facebook CTO Mike Schroepfer in a blog post. “To ensure the quality of the data set and challenge parameters, they will initially be tested through a targeted technical working session this October at the International Conference on Computer Vision (ICCV).”

The offering will help ensure Facebook, Microsoft, and other tech companies are able to police their platforms properly in the future. There have previously been calls for Facebook to better prevent fake news from spreading, remove violent content, and enforce copyright. Funding the tools to detect deepfakes before they become a wider issue is likely to save legislative headaches while protecting users.
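Neither Facebook nor its partners have published the internals of their detection tooling, but a toy illustration can make the artifact-based idea concrete. The sketch below, which assumes only NumPy and Pillow, measures how much of an image’s spectral energy sits at high frequencies, where generated imagery often differs from camera footage; the file name, the 0.75 band cutoff, and any decision threshold are hypothetical, not part of any announced system.

    # A minimal, illustrative sketch of one artifact-based check (not
    # Facebook's method): compare high- vs. low-frequency spectral energy,
    # since synthesized faces often leave periodic high-frequency traces.
    import numpy as np
    from PIL import Image

    def high_frequency_ratio(path: str) -> float:
        """Return the share of spectral energy in the highest-frequency band."""
        # Load the image as grayscale and compute its centered 2D power spectrum.
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

        # Distance of each frequency bin from the spectrum's center (DC term).
        h, w = spectrum.shape
        cy, cx = h // 2, w // 2
        y, x = np.ogrid[:h, :w]
        radius = np.hypot(y - cy, x - cx)

        # Energy in the outer band (cutoff of 0.75 is a hypothetical choice).
        high_band = spectrum[radius > 0.75 * radius.max()].sum()
        return float(high_band / spectrum.sum())

    if __name__ == "__main__":
        # Hypothetical usage: scores far outside the range observed on
        # known-real footage would flag a frame for human review.
        score = high_frequency_ratio("face_crop.png")
        print(f"high-frequency energy ratio: {score:.4f}")

Real detectors train classifiers over many such cues rather than a single hand-tuned statistic, which is precisely why they degrade as the generators improve.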