
In the battle against deepfakes, AI is being pitted against AI

Technology
AI & automated services
Insights
Digitization
Digitalized society

Published on 09.12.2019 CET

Our best hope for fighting deepfakes may be Facebook, Twitter and Google: they have the power to detect and stop deepfakes across huge networks.

Lying has never looked so good, literally. Concern is rising around the world over increasingly sophisticated technology able to create convincingly faked videos and audio, so-called 'deepfakes'. But even as the technology is being developed, technologists are fighting back against the falsehoods.

 


Image: ©Wired

 

 

“The concern is that there will be a growing movement globally to undermine the quality of the information sphere and undermine the quality of discourse necessary in a democracy,” Eileen Donahoe, a member of the Transatlantic Commission on Election Integrity, told CNBC in December 2018. She said deepfakes are potentially the next generation of disinformation.
Lawmakers in California are so concerned about the risks of deepfakes in the lead-up to the 2020 elections that they passed a law in October 2019 banning the distribution of "materially deceptive audio or visual media" within 60 days of an election.

However, political interference through deepfakes hasn't happened yet. The majority of deepfakes are being created in pornography, without the consent of the people in the videos. That doesn't rule out other potential uses, including faked criminal evidence, fraud or blackmail. The opposite possibility, that the cry of "deepfake!" might be used to wrongly dismiss real footage as fake, is equally worrying.

Even though California's 'Anti-Deepfake Bill' has multiple flaws, one of them raises an important question: who is responsible for proving that audio or video has been manipulated? Researchers around the world are working on building tools to do just that, by fighting AI with AI. The whole point of deepfakes is that they are convincing enough to fool a human audience. As the technology for creating deepfakes becomes ever more effective, and ever more easily accessible to bad actors, equally powerful technology for deepfake detection and analysis will become a crucial battleground for the truth.

 

 

Fighting counterfeits with AI

"Audio and visual deepfakes that are done well are hard to catch even for humans," says Ragavan Thurairatnam. Thurairatnam is the co-founder and Chief of Machine Learning for Dessa, a start-up which has built a tool to fake Joe Rogan's voice to demonstrate the capabilities of audio deepfakes. The company is currently working on developing AI for detecting audio deepfakes. "If we try a traditional software based approach, it would be very difficult to figure out the rules to write in order to catch deepfakes. On top of this, deepfake technology will constantly change and traditional software would have to be rewritten by hand every time," explains Thurairatnam. "AI, on the other hand, can learn to detect deepfakes on its own as long as you have enough data. In addition, it can adapt to new deepfake techniques as they surface even when detection is difficult to human eyes." Siwei Lyu of the University of Albany also believes that deep learning may hold the key, at least for now. "Data-driven deep learning methods seem to be the most effective methods so far. Because they learn classification rules from training data, they are more flexible and can be adapted to complex conditions in which the videos are spread, for example through video compression, social media laundering, and other counter-measures applied by the forgers."

 

 

The fight of good against evil

Training machine-learning models requires a lot of data, however, and a lack of training data has been a significant obstacle for researchers trying to build effective deepfake detection systems. A recent report by Deeptrace, an Amsterdam-based start-up that aims to counter deepfakes, identified 14,678 deepfake videos online, the overwhelming majority of which were porn. The rate of increase is alarming (the number of identified videos has almost doubled since the previous audit in December 2018, although it is unclear how much of this is due to better detection by Deeptrace), but in absolute terms this is still a relatively small number on which to train AI algorithms. Here the forgers have a structural advantage: while the good guys need huge numbers of deepfake videos to train on, a forger may only need to place one video in the right place at the right time to achieve their goal.

While the number of deepfakes is fairly small, the threat is being taken seriously. To help address the lack of training data, Facebook, Google, Amazon Web Services and Microsoft recently came together to announce the Deepfake Detection Challenge. The Challenge, due to launch next month, will release a specially created dataset of deepfakes made with paid actors, which researchers around the world can use as training data for their models. Developing effective deepfake detection systems is obviously in the public good, but it's not entirely an act of altruism: the tech giants are likely to be on the front lines of enforcing legislation like California's Anti-Deepfake Bill, and therefore have a strong incentive to find practical detection mechanisms.

 

 

Telltale digital fingerprints

Lyu and his colleague Yuezun Li have proposed an alternative detection method that is less data-hungry. Because current deepfake algorithms can only generate face images at limited resolutions, which then have to be warped to match the original video, deepfaked videos can be identified by measuring these face-warping artifacts. Training this model requires far less data than many other deep learning methods. The weakness, of course, is that forgers may find a way to reduce face warping, at which point detectors will be back at square one.
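As a rough illustration of the intuition only (Lyu and Li's actual detector is a trained neural network, not this heuristic), the hypothetical check below uses OpenCV to compare the sharpness of the detected face against the whole frame: a face synthesized at low resolution and warped into a higher-resolution frame tends to be blurrier than its surroundings. The face detector, function names and single-frame analysis are all assumptions made for the sketch.

```python
# Crude warping-artifact hint (illustrative only): is the face region
# conspicuously blurrier than the rest of the frame?
import cv2
import numpy as np

def sharpness(gray_patch):
    """Variance of the Laplacian: a standard, simple sharpness measure."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

def face_blur_ratio(frame_bgr):
    """Face sharpness relative to whole-frame sharpness.

    Returns None if no face is found. Markedly low values are one weak
    hint that a low-resolution face was warped into the frame.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return sharpness(gray[y:y + h, x:x + w]) / max(sharpness(gray), 1e-6)
```

A single hand-written cue like this is exactly what forgers can engineer around; the advantage of the published approach is that the warping artifacts it learns from can be simulated cheaply on ordinary images, which is what keeps its appetite for training data small.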

The same applies to other telltale signs of deepfakes. For example, Lyu and his team observed that people in deepfaked videos rarely blink, and built a model to detect deepfakes on that basis. In the paper presenting their research, however, they noted that "sophisticated forgers can still create realistic blinking effects with post-processing and more advanced models and more training data", and that this method is therefore unlikely to stay effective once the forgers figure out the process.
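A back-of-the-envelope version of the blinking cue might look like the sketch below (Lyu's published detector is a trained neural network operating on eye-state sequences, not this heuristic). It computes the standard eye aspect ratio from six eye landmarks per frame, counts blinks, and flags clips that blink implausibly rarely; the landmark source, thresholds and names are assumptions.

```python
# Illustrative blink-rate check. Per-frame eye landmarks are assumed to
# come from an external facial-landmark model (e.g. a 68-point predictor).
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for six (x, y) eye landmarks; drops toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = 2.0 * np.linalg.norm(eye[0] - eye[3])
    return vertical / max(horizontal, 1e-6)

def blinks_too_rare(per_frame_eyes, fps,
                    ear_closed=0.2, min_blinks_per_minute=5.0):
    """per_frame_eyes: one (6, 2) landmark array per frame, single eye."""
    ears = np.array([eye_aspect_ratio(e) for e in per_frame_eyes])
    closed = ears < ear_closed
    blinks = int(np.sum(closed[1:] & ~closed[:-1]))  # open -> closed edges
    minutes = len(ears) / fps / 60.0
    return blinks / max(minutes, 1e-6) < min_blinks_per_minute
```

People typically blink somewhere around fifteen times a minute, so a clip whose rate sits far below the (assumed) threshold would be flagged; and, as the researchers themselves warn, a forger who simply adds realistic blinking defeats the cue.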



 

Our great sense of curiosity at Vontobel means we follow scientific research in many key areas attentively. This helps us recognize new investment opportunities early on. That’s why our thematic portfolios and thematic investments also reflect megatrends such as digitization, as we consider companies that are making valuable contributions to solving global challenges.


 

Who are we? How do we live today? And how will digitization change our lives? How the future will unfold is preoccupying society more than ever, with engineers, doctors, politicians – each one of us, in fact – seeking answers. This report on deepfakes is one of many contributions that shed light on the theme “digitization” from a new, inspiring perspective. We are publishing them here as part of our series “Impact”.
