
Can Russian cyber meddling be stopped?

Agencies had better get used to online campaigns that spread disinformation, a social media expert told a Senate panel.

The massive numbers of fake email comments tagged to Russian IP addresses that flooded servers at the Federal Communications Commission, for example, might also swamp the servers of other federal agencies in the future, Clint Watts, a fellow at the Foreign Policy Research Institute, told a Jan. 17 Senate Commerce, Science and Transportation Committee hearing on social media and terrorism.

The flood of 500,000 comments into the FCC's public comment system is symptomatic of a larger effort to attack the integrity of the federal government's systems and sow suspicion in the democratic process, according to Watts, a former FBI special agent who served on the Joint Terrorism Task Force.

"It's a 'you can't trust the process'" approach, he said, adding that the Russian government has used similar techniques on its own population to make them apathetic and mistrustful of their own elections.

The Russian government has seen success with the low-cost program, which the U.S. government has yet to firmly address, according to Watts.

He predicted the program will continue in the U.S. and spread to other countries such as Mexico and Myanmar, where still-emerging technology environments mean many users are less technologically literate.

To address the problem, Watts recommended social media companies verify the authenticity of their users. "Account anonymity today allows nefarious social media personas to shout the online equivalent of 'fire' in a movie theater," Watts said in his written testimony.

He also suggested pulling the plug on "social bots" that can broadcast high volumes of misinformation. They "can pose a serious risk to public safety and, when employed by authoritarians, a direct threat to democracy," he said. Limits should also be placed on non-automated accounts so that they can make only a certain number of posts during an hour, day or week. Watts also suggested social media companies use human verification systems like CAPTCHA to reduce automated broadcasting.
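The per-account posting limits Watts describes could be enforced with a sliding-window rate limiter. The sketch below is purely illustrative and assumes hypothetical names (`PostRateLimiter`, `allow_post`) and thresholds; it is not drawn from any platform's actual implementation.

```python
import time
from collections import defaultdict, deque

class PostRateLimiter:
    """Sliding-window limiter: each account may make at most
    `max_posts` posts within any `window_seconds` interval."""

    def __init__(self, max_posts, window_seconds):
        self.max_posts = max_posts
        self.window = window_seconds
        self.history = defaultdict(deque)  # account -> recent post timestamps

    def allow_post(self, account, now=None):
        """Return True and record the post if the account is under its limit."""
        now = time.monotonic() if now is None else now
        timestamps = self.history[account]
        # Drop timestamps that have aged out of the window.
        while timestamps and now - timestamps[0] >= self.window:
            timestamps.popleft()
        if len(timestamps) < self.max_posts:
            timestamps.append(now)
            return True
        return False

# Example: allow at most 5 posts per hour per account.
limiter = PostRateLimiter(max_posts=5, window_seconds=3600)
results = [limiter.allow_post("acct_1", now=t) for t in range(6)]
print(results)  # first five posts allowed, sixth rejected
```

A sliding window avoids the burst-at-the-boundary problem of fixed hourly buckets, which matters when the goal is throttling high-volume automated broadcasting.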

Representatives from social media companies Facebook, Twitter and YouTube, meanwhile, told the committee they are increasingly implementing machine learning and artificial intelligence to detect terror recruitment and messaging on their platforms.

For instance, YouTube Director of Public Policy and Government Relations Juniper Downs told the panel that since June, her company has removed over 160,000 violent extremist videos and terminated some 30,000 channels for violation of policies against terrorist content.

As the next election looms, those companies said they are preparing to scrutinize political ads and other content that might be steered by questionable sources.

Downs said her company is working toward more transparency and verification of who is behind certain ads on its platform, and plans a transparency report that would provide more detail on that content.

Watts warned the Russian effort may be more insidious than those of other bad actors, such as terrorist groups, because the Russian agents "operate within the rules" of the platforms and don't use the inflammatory language and terms that AI and machine learning systems are trained to discern.

A version of this article was first posted to FCW, a sibling site to GCN.

About the Author

Mark Rockwell is a senior staff writer at FCW, whose beat focuses on acquisition, the Department of Homeland Security and the Department of Energy.

Before joining FCW, Rockwell was Washington correspondent for Government Security News, where he covered all aspects of homeland security from IT to detection dogs and border security. Over the last 25 years in Washington as a reporter, editor and correspondent, he has covered an increasingly wide array of high-tech issues for publications like Communications Week, Internet Week, Fiber Optics News, tele.com magazine and Wireless Week.

Rockwell received a Jesse H. Neal Award for his work covering telecommunications issues, and is a graduate of James Madison University.

Contact him at mrockwell@fcw.com or follow him on Twitter at @MRockwell4.

