How disinformation monitoring helps agencies break down attacks
- By Dan Brahmy
- Jul 16, 2021
As hacks, ransomware attacks and data breaches continue to make their way into the spotlight, it can be easy to forget about another more subtle, yet perhaps more sinister, aspect of cyberwarfare: disinformation and influence campaigns.
As we’ve seen in recent years, disinformation campaigns and cyberattacks targeting government agencies have increased, making monitoring tools vital in the fight against interference in elections, government initiatives, public health crises and more. Nefarious campaigns in these spaces can easily reach mainstream consumers, drawing more attention to false and even harmful narratives.
According to data from Facebook, the U.S. is believed to be the primary target of these efforts. The Justice Department recently seized 36 websites linked to Iranian news domains that were believed to be running disinformation campaigns against the U.S. With tensions already on the rise, now is the time for agencies to consider platforms and tools that can help them monitor and counter disinformation.
Disinformation detection platforms offer specific tools that help identify these attacks and break them down. An attack against a government agency will certainly affect the agency itself, but the impact on social media users and constituents could be even more damaging. As many across the U.S. saw last fall, false narratives about the election amplified by influential authors can take social media by storm. While Facebook, Twitter and YouTube all vowed to “clamp down on election misinformation,” false statements made by former President Donald Trump circulated on Twitter and were widely shared and engaged with, despite being flagged as “misleading.”
Through semi-supervised machine-learning algorithms, monitoring platforms can detect disinformation by defining suspicious behavior parameters and flagging unusual activity. Over time, the algorithm learns from itself and the communities it analyzes, adding new trends and behavior patterns to its consideration set. It can identify initial suspicious conversations that may lead to cyberattacks or influence campaigns. Platforms that offer cluster analysis add another layer of value, allowing agencies to examine similarities among online communities spreading disinformation and analyze the methods and effects of these attacks to better discern their origin.
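The two ideas in that paragraph -- flagging accounts whose behavior falls outside defined parameters, and clustering near-identical content to surface coordinated communities -- can be illustrated with a minimal sketch. This is a simplified, hypothetical illustration, not the method of any particular platform: real systems use far richer behavioral features than posting rate, and far more robust similarity models than word-overlap.

```python
import math


def flag_suspicious(posts_per_hour, threshold=2.0):
    """Flag accounts whose posting rate deviates strongly from the group norm.

    A crude stand-in for 'suspicious behavior parameters': an account is
    flagged when its rate is more than `threshold` standard deviations
    from the mean.
    """
    n = len(posts_per_hour)
    mean = sum(posts_per_hour) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in posts_per_hour) / n)
    if std == 0:
        return [False] * n
    return [abs((r - mean) / std) > threshold for r in posts_per_hour]


def jaccard(a, b):
    """Word-set overlap between two messages, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)


def cluster_messages(messages, sim_threshold=0.8):
    """Greedy clustering of near-duplicate messages.

    Groups of accounts posting nearly identical text are a common
    signature of coordinated amplification; each cluster holds the
    indices of matching messages.
    """
    clusters = []
    for i, msg in enumerate(messages):
        for cluster in clusters:
            if jaccard(messages[cluster[0]], msg) >= sim_threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

In practice the "learns from itself" step described above would mean periodically refitting these thresholds and similarity models on newly observed activity, so new trends and behavior patterns enter the consideration set.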
Government agencies must be able to quantify the spread of misinformation and examine a campaign’s reach on all fronts. Tools that track the impact of a campaign are crucial to a disinformation detection platform. They can provide real-time analysis of how disinformation against an agency, its leaders and the larger community is being viewed and handled. This tracking can help an agency understand where disinformation started, who participated -- including authentic and inauthentic authors -- and any shifts in behavior or sentiment it inspired. All of these insights help agencies stay ahead of future attacks.
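As a rough illustration of what "quantifying the spread" might look like, the hypothetical sketch below rolls a list of posts up into a few reach metrics, including the share of participating authors judged inauthentic. The field names and the notion of a per-post `authentic` label are assumptions for this example, not any vendor's actual schema.

```python
def campaign_reach(posts):
    """Summarize a campaign's reach from a list of post records.

    Each post is assumed to be a dict with 'author' (str),
    'authentic' (bool, from upstream bot/fake-profile detection)
    and 'impressions' (int) keys -- a hypothetical schema.
    """
    total = sum(p["impressions"] for p in posts)
    authors = {p["author"] for p in posts}
    inauthentic = {p["author"] for p in posts if not p["authentic"]}
    return {
        "total_impressions": total,
        "unique_authors": len(authors),
        # Fraction of participating authors flagged as inauthentic --
        # a high value suggests coordinated amplification.
        "inauthentic_share": len(inauthentic) / len(authors) if authors else 0.0,
    }
```

Tracking these numbers over time, rather than as a single snapshot, is what lets an agency see where a narrative started and how engagement shifted as it spread.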
Breaking down disinformation attacks by analyzing sentiment, author behavior and online community connections can help an agency understand how and why it was targeted, and help it establish new monitoring and security practices going forward. Regardless of how strong an agency’s security systems are, nefarious agents can create fake profiles, manufacture content and visuals, and engage with real users who can fall prey to false narratives. Once an inauthentic narrative is amplified by a disinformation campaign and captures the attention of social media consumers, an agency can rapidly lose the trust of the public.
Some platforms even provide an automated report that agencies can use for proactive monitoring and threat identification.
With the tools and insights offered by disinformation monitoring platforms, government agencies can more efficiently work through current disinformation attacks and prepare for future ones as well.
Dan Brahmy is the co-founder and CEO of Cyabra.