The dangers of 'deep fakes'

False and doctored media can be used for misinformation campaigns, and advanced technologies like artificial intelligence and machine learning will only make such fakes easier to create and more difficult to detect.

Deep fakes are fabricated images or videos that combine and superimpose different audio and visual sources to create entirely new (and false) footage that can fool even digital forensic and image analysis experts. A fake needs to appear credible for only a short window of time to affect an election, Sen. Marco Rubio (R-Fla.) warned at a recent Atlantic Council event.

"One thing the Russians have done in other countries in the past is, they've put out incomplete information, altered information and/or fake information, and if it's done strategically, it could impact the outcome of an [election]," Rubio said. "Imagine producing a video that has me or Sen. [Mark] Warner [D-Va., who also spoke at the event] saying something we never said on the eve of an election. By the time I prove that video is fake -- even though it looks real -- it's too late."

Rubio, who has warned about the impact of deep-fake technology in the past, is part of a growing group of policymakers and experts worried about the effect false or doctored videos could have on electoral politics. Earlier this year, comedian Jordan Peele and BuzzFeed released a now-viral video that used deep-fake technology to depict former President Barack Obama (voiced by Peele) uttering a number of controversial statements before warning the viewer about the inherent dangers such tools pose.

The technology is far from flawless, and in many cases a careful observer can still spot evidence of video inconsistencies or manipulation. But as Chris Meserole and Alina Polyakova noted in a May 2018 article for the Brookings Institution, "bigger data, better algorithms and custom hardware" will soon make such false videos appear frighteningly real.

"Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone," Meserole and Polyakova wrote. "However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it's difficult to distinguish manipulated files from authentic ones."

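To make the quoted technique concrete: a generative adversarial network (GAN) pits two models against each other, a generator that produces fakes and a discriminator that tries to flag them, so each improves against the other. Below is a minimal sketch of that adversarial loop, assuming PyTorch is installed; a 1-D Gaussian stands in for real media, and every name in it is illustrative.

```python
# Toy GAN: the generator learns to mimic "authentic" samples drawn from
# N(5, 2) instead of images, but the adversarial loop is the same idea.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # "authentic" samples
    fake = G(torch.randn(64, 8))            # generator's forgeries

    # Discriminator step: learn to tell real from fake.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(fake.mean().item(), fake.std().item())  # should drift toward 5.0 and 2.0
```

The same dynamic is what makes detection hard: any cue the discriminator learns to exploit becomes, in the next round of training, exactly the flaw the generator learns to erase.
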
As the authors and others have pointed out, the algorithmic tools regularly used to detect such fake or altered videos can also be turned around and used to craft even more convincing fakes. Earlier this year, researchers in Germany developed an algorithm to spot face swaps in videos. However, they found that "the same deep-learning technique that can spot face-swap videos can also be used to improve the quality of face swaps in the first place -- and that could make them harder to detect."

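In practice, that detection work is usually framed as binary classification over face crops. The hedged sketch below approximates it with a generic ResNet-18 (an illustrative stand-in, not necessarily the German team's architecture), and random tensors stand in for a labeled dataset of real and face-swapped crops.

```python
# Sketch of a face-swap detector as a two-class image classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # logits: [real, face-swapped]
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                  # placeholder training loop
    crops = torch.randn(8, 3, 224, 224) # stand-in for face crops
    labels = torch.randint(0, 2, (8,))  # stand-in for real/fake labels
    opt.zero_grad()
    loss = loss_fn(model(crops), labels)
    loss.backward()
    opt.step()
```

As the researchers found, a classifier like this doubles as a training signal for the forger: its gradients point directly at whatever artifacts give a face swap away.
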
Researchers at the National Institute of Standards and Technology and the Defense Advanced Research Projects Agency (DARPA) have been working to develop technology that can detect deep fakes.

In its Media Forensics Challenge, NIST aims to advance image and video forensics technologies so it's easier to determine whether an image or video was modified, which section was altered and where the "donor" parts of the image came from.

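NIST's challenge covers far more sophisticated methods, but one classic technique in the same spirit is error level analysis: recompress a JPEG at a known quality and diff it against the original, since regions pasted in after the original save carry a different compression history and stand out in the difference map. A minimal sketch with Pillow, assuming a hypothetical input file photo.jpg:

```python
# Error level analysis (ELA): a simple way to localize possible edits.
import io
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
buf = io.BytesIO()
original.save(buf, "JPEG", quality=90)             # recompress at a known quality
buf.seek(0)
recompressed = Image.open(buf)

# Spliced or edited regions tend to recompress differently, so they
# show up as bright patches in the difference image.
ela = ImageChops.difference(original, recompressed)
ela = ela.point(lambda v: min(255, v * 10))        # amplify for visibility
ela.save("ela_map.png")
```
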
DARPA's five-year Media Forensics (MediFor) program, launched in September 2015, attempts "to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform."

"We're now in the early days of figuring out how to scale [the system] so we can do things quickly and accurately to stop the spread of viral content that is fake or has been manipulated," Hany Farid, a Dartmouth College digital forensics expert who is participating in the MediFor program, said in a recent article in Communications of the ACM. "The stakes can be very, very high, and that's something we have to worry a great deal about."

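At its simplest, an "end-to-end" platform of the kind DARPA describes could be imagined as a fusion step that combines the verdicts of many independent detectors into a single integrity estimate. The sketch below is purely illustrative; the detector names and the naive averaging rule are assumptions, not MediFor's actual design.

```python
# Illustrative fusion of several forensic detectors into one score.
from statistics import mean
from typing import Callable, List

def assess_integrity(frame, detectors: List[Callable]) -> float:
    """Return a 0..1 integrity score; lower means more likely manipulated."""
    scores = [d(frame) for d in detectors]  # each detector returns 0..1
    return mean(scores)                     # naive fusion: simple average

# Usage with placeholder detectors:
detectors = [lambda f: 0.9,  # e.g., lighting-consistency check
             lambda f: 0.4,  # e.g., face-swap classifier
             lambda f: 0.7]  # e.g., compression-history analysis
print(assess_integrity(None, detectors))    # -> ~0.67
```
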
This article used portions of a story that was first posted on FCW, a sibling site to GCN.

About the Authors

Derek B. Johnson is a senior staff writer at FCW, covering governmentwide IT policy, cybersecurity and a range of other federal technology issues.

Prior to joining FCW, Johnson was a freelance technology journalist. His work has appeared in The Washington Post, GoodCall News, Foreign Policy Journal, Washington Technology, Elevation DC, Connection Newspapers and The Maryland Gazette.

Johnson has a bachelor's degree in journalism from Hofstra University and a master's degree in public policy from George Mason University. He can be contacted at djohnson@fcw.com or followed on Twitter @derekdoestech.

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG's Computerworld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia's Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA from West Chester University and an MA in English from the University of Delaware.

Connect with Susan at smiller@gcn.com or @sjaymiller.

Reader Comments

Thu, Jul 19, 2018

If only Congress had not ignored cybersecurity since the beginning of this decade, constantly failing to pass legislation and voting down bills since 2012, perhaps we wouldn't be in the predicament we are in now; we certainly would have had a way to better understand the Russian disinformation and spear-phishing attacks that have been going on since at least the summer of 2015.
