
Filter protects against deepfake photos and videos

In today’s complex media environment, people can struggle to separate fact from fiction online. A relatively new phenomenon is making that struggle even harder: deepfakes.

Deep neural networks, a machine learning technique, have made it increasingly easy to convincingly manipulate images and videos of people by doctoring their speech, movements, and appearance.

In response, researchers have created an algorithm that mounts an adversarial attack against facial manipulation systems, corrupting attempted deepfakes and rendering them useless.

The researchers’ algorithm allows users to protect media before uploading it to the internet by overlaying an image or video with an imperceptible filter.

When a manipulator uses a deep neural network to try to alter an image or video protected by the algorithm, the media is either left unchanged or completely distorted: the pixels render in such a way that the result is unrecognizable and unusable as a deepfake.
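The protective filter is, in effect, an adversarial perturbation. As a rough illustration of the white-box idea (the authors' exact losses and hyperparameters live in their paper and open-source code), the PyTorch sketch below iteratively nudges an image so that a hypothetical differentiable manipulation network, here called `manipulator`, produces a heavily distorted output, while a small L-infinity budget keeps the change imperceptible. All names and values are illustrative assumptions, not the authors' implementation:

```python
import torch

def disrupt(image, manipulator, epsilon=0.05, step=0.01, iters=10):
    """Overlay an (approximately) imperceptible adversarial filter on `image`
    so that `manipulator`, a facial-manipulation network, produces a distorted
    output instead of a convincing deepfake.

    White-box, iterative FGSM-style sketch: assumes gradient access to
    `manipulator` and an L-infinity budget `epsilon` that keeps the
    perturbation invisible to a human viewer. Illustrative only.
    """
    original_output = manipulator(image).detach()  # the deepfake as it would look
    perturbed = image.clone().detach()

    for _ in range(iters):
        perturbed.requires_grad_(True)
        # Push the manipulator's output as far as possible from its output
        # on the clean image (i.e., maximize the distortion).
        loss = torch.nn.functional.mse_loss(manipulator(perturbed), original_output)
        loss.backward()
        with torch.no_grad():
            perturbed = perturbed + step * perturbed.grad.sign()
            # Project back into the epsilon-ball around the original image
            # and into the valid pixel range.
            perturbed = image + torch.clamp(perturbed - image, -epsilon, epsilon)
            perturbed = torch.clamp(perturbed, 0.0, 1.0)
        perturbed = perturbed.detach()

    return perturbed  # looks like `image`, but breaks the manipulator
```

Because the perturbation never exceeds the small `epsilon` budget, the protected image looks unchanged to a human viewer, even though the manipulator's own gradients have been used to steer it toward a useless output.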

The researchers have made their open-source code publicly available. Their paper has not yet been peer-reviewed and is available on arXiv.

Nataniel Ruiz, a doctoral candidate in computer science at Boston University and coauthor of the paper, says that the idea for the project came to him after he got interested in the rapidly advancing techniques for creating deepfakes. He hit on the idea of disrupting deepfakes after talking with his doctoral advisor, Stan Sclaroff, dean of Boston University’s College of Arts & Sciences and professor of computer science, about the possible malicious uses of deepfake technology.

Deepfakes first rose to prominence with applications that realistically transpose one person's face onto another's body, though these require large numbers of images of the person being faked. Recent advances in the field now allow fake images and video of someone to be created from only a few images. It has also become easier for ordinary people to create deepfakes.

Last year, for instance, the iPhone app FaceApp entered the zeitgeist. Created by a Russian company, the app lets everyday users transform images of people into older versions of themselves, change an expression into a smile, and perform other tricks.

The relative ease with which internet users can create deepfakes could further muddy the waters of what is real and fake online, particularly in arenas like politics. Detecting deepfake images, audio, or video is one approach to this trust problem, though it may prove harder than expected. Facebook is currently holding a competition to find researchers who can effectively detect deepfakes.

Now, the researchers are pursuing even more sophisticated techniques for disrupting deepfakes.

“We covered what we call ‘white-box’ attacks in our work, where the network and its parameters are known to the disruptor,” says coauthor Sarah Adel Bargal, a research assistant professor of computer science.

“A very important next step is to develop methods for ‘black-box’ attacks that can disrupt deepfake networks [in ways] inaccessible to the disruptor… [and] we are currently working on making this a reality.”
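To make the distinction concrete: a black-box disruptor can only query the deepfake network's outputs, not read its parameters or gradients, so it cannot call `backward()` as in the sketch above. One standard, general-purpose workaround (a hedged illustration of the setting, not the authors' forthcoming method) is to estimate the gradient from output queries alone, in the spirit of natural-evolution-strategies estimators:

```python
import torch

def estimate_gradient(image, manipulator, original_output, sigma=0.001, samples=32):
    """Query-only (black-box) estimate of the disruption gradient.

    Instead of backpropagating through the manipulator (unavailable in the
    black-box setting), probe it with random perturbations and average each
    probe weighted by the distortion it caused. Illustrative sketch; all
    names and values are assumptions.
    """
    grad = torch.zeros_like(image)
    for _ in range(samples):
        noise = torch.randn_like(image)
        with torch.no_grad():
            loss = torch.nn.functional.mse_loss(
                manipulator(image + sigma * noise), original_output)
        grad += loss * noise  # weight each probe by the distortion it caused
    return grad / (samples * sigma)
```

The estimate can then drive the same sign-step update as the white-box loop, though each probe costs a forward query of the network, which is why black-box disruption is typically far more expensive than the white-box case.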

This article was republished from Futurity.

About the Author

Jeremy Schwab is the director of integrated marketing and communications for the College and Graduate School of Arts & Sciences at Boston University.
