deepfake (VectorMine/Shutterstock.com)

Real-time deepfake detection that keeps getting better

Can an artificial-intelligence-powered deepfake detector call out a doctored image or video quickly enough to prevent it from spreading?

Researchers with image-processing and cybersecurity expertise from the University of Missouri and the University of North Carolina at Charlotte have been awarded nearly $1.2 million from the National Science Foundation to find out. They're designing an AI program they believe will need only a small number of deepfake examples to begin building its knowledge base. As it learns, the program will be able to spot new deepfake techniques, improving detection accuracy and reducing misidentified content.

Relying on a small number of examples overcomes a key limitation of current algorithms, which typically need a vast number of labeled samples to learn from. By leveraging accumulated knowledge, the deepfake detector will also learn to prevent camouflaged or obscured visual content from being classified as genuine. The researchers say the project will additionally mitigate adversarial attacks, an unresolved problem in machine learning.
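The few-shot approach the researchers describe, learning from only a handful of labeled examples, can be illustrated with a simple prototype-based classifier. The sketch below is not the project's actual method; it is a minimal stand-in showing how a detector might average a few labeled feature embeddings per class ("real" vs. "fake") into prototypes and classify new content by nearest prototype. The 2-D feature vectors are purely illustrative.

```python
# Minimal few-shot classification sketch (nearest class mean over
# feature embeddings). Illustrative only -- not the NSF project's
# actual algorithm; feature vectors here are toy values.
import numpy as np

def class_prototypes(support_features, support_labels):
    """Average each class's few labeled examples into one prototype."""
    protos = {}
    for label in set(support_labels):
        idx = [i for i, lbl in enumerate(support_labels) if lbl == label]
        protos[label] = np.mean([support_features[i] for i in idx], axis=0)
    return protos

def classify(query, protos):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

# Toy 2-D "embeddings": two labeled examples per class (2-shot learning).
support = [np.array([0.9, 0.1]), np.array([0.8, 0.2]),   # real
           np.array([0.1, 0.9]), np.array([0.2, 0.8])]   # fake
labels = ["real", "real", "fake", "fake"]

protos = class_prototypes(support, labels)
print(classify(np.array([0.15, 0.85]), protos))  # -> fake
```

In a real system the embeddings would come from a deep network trained on image features, and the prototype set would be updated as new deepfake techniques are encountered, which is the kind of accumulated knowledge the article refers to.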

Besides providing a more trustworthy environment for billions of social network users, the deepfake detector will also help ensure the authenticity of visual content in digital forensics.

The project is scheduled to take four years to complete and will include a mobile app to alert smartphone users to the presence of deepfake content in their social media feeds.

