

The 2016 election: A lesson on integrity

Undoubtedly, much was -- and still is -- being revealed by the 2016 election. For those of us in the IT world, probably the most important lesson from the attacks on the election is that the very heart of our democracy relies on the integrity of our national data and systems. While cyber attacks that expose vital data can have serious consequences, the outcomes of cyber attacks that manipulate, change or delete data can be an even greater cause for concern.

Why? Decision-making by senior officials, corporate executives and others is impaired when they can no longer trust the accuracy of the information they are receiving or, even worse, when they have no way of verifying whether the data has actually been manipulated, deleted or changed from its original state. In the case of the 2016 election, while intelligence officials could see that Russian government-linked hackers breached election systems, proving that votes were -- or were not -- changed is very difficult. In other words, without validating integrity, it's very hard to prove a negative.

“Integrity” in the context of cybersecurity is defined by SANS as protecting data from modification or deletion by external sources, with the ability to make necessary corrections if damage is sustained. Integrity often challenges the traditional security mindset of “keeping the bad actors out,” because it addresses protections once the bad actors are in. For example, hackers who gain access as a local user may then escalate their privileges to administrator level, allowing them to make changes that appear legitimate, such as a change to an encryption setting. Or maybe the attackers make no change at all, but simply use their access to study how the system supports a critical process -- such as the voting process -- so that they will know how to make a change that negatively impacts the next election.

So why has the integrity of data and systems played such a small role in the cyber strategy that most agencies have relied on over the years? Historically, agencies have prioritized security controls surrounding the voting process that protect the "C" and "A" of the CIA (confidentiality, integrity, availability) triad. But successfully protecting the confidentiality of election data has created a false sense of security, because it does not matter whom people voted for if a change in the system ultimately impacts the final outcome. If change cannot be detected,  the integrity of the election process cannot be assured. This is one of the reasons why determining the impact of any Russian interference in the 2016 election has been so challenging.

Viewing the entire election process (not just data collection) with an integrity mindset is critical. “With the election process, there can be integrity violations anywhere,” said Ron Ross, computer scientist and fellow at the National Institute of Standards and Technology. 

“Some people think of a system as being hardware, software, firmware, etc., but an election system is larger than just these parts -- from paper-based ballot transactions to data reduction after ballots are counted, rolled up and tallied into a summary,” Ross said. “Any time these actions are supported by computers, the trustworthiness of the computer has to be of top priority. That’s why integrity is important in the election system.”

And that is why integrity must be monitored across the entire election process.

Proposed legislation prioritizes the protection of confidentiality and privacy surrounding election threat-sharing practices and what agencies should do when they become aware of a cyber incident. Agencies are referred to the NIST Cybersecurity Framework for further guidance on technical best practices. Unfortunately, direct guidance for integrity monitoring is not provided in the framework.

If integrity monitoring is not elevated as a policy and funding priority by legislators, future instances of a nation-state tampering with an election will cause tremendous damage, regardless of whether or not election data was actually changed. Simply casting doubt on the integrity of an election means that we must question the outcome.

So, what is the answer?

With strong integrity monitoring, IT managers can provide evidence that nothing was changed during the election process. There are four basic steps to ensure integrity:

1. Start with secure deployment. All organizations involved in the election should work to ensure they’re deploying systems that meet standards for risk acceptance. That means there must be risk assessment criteria, and IT managers must be able to apply those standards to servers, images, containers and any other system that gets deployed -- whether on-premises, virtual or in the cloud. All systems in the organization should get this treatment.

2. Baseline every system that’s deployed. The time to establish a baseline for a system is when it’s first deployed, before it is ever exposed to the network. That baseline is crucial for being able to identify changes and determine how they might affect the risk posture of a system. The baseline should be closely correlated with the standards for secure deployment of that type of system.

3. Monitor systems for change. Detecting change is at the heart of integrity management. Once secure systems have been deployed and baselined, IT managers must be able to detect changes that compromise the integrity of that system. This process requires a close connection between change detection, baselines and the change process for the organization.

4. Investigate and remediate changes. Not every change requires action. Implementing a reconciliation process that separates the wheat from the chaff is crucial. Changes that are business as usual and associated with change orders or planned updates always should be retained in case later analysis is needed, but they don’t require a response. Changes that can’t be reconciled or changes that impact risk must be investigated and remediated immediately. Sufficient detail about the changes is required to make decisions.
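Steps 2 through 4 can be sketched as a minimal file-integrity monitor. This is an illustrative example only, not any specific product's API; the function names, directory layout and approved-change list are assumptions for the sketch.

```python
# Minimal file-integrity monitoring sketch: baseline a directory,
# detect changes against the baseline, and reconcile them against
# an approved-change list. All names here are illustrative.
import hashlib
from pathlib import Path


def baseline(root: str) -> dict[str, str]:
    """Record a SHA-256 hash for every file under root at deployment time."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file()
    }


def detect_changes(root: str, base: dict[str, str]) -> dict[str, str]:
    """Compare the current state of root against the recorded baseline."""
    current = baseline(root)
    changes = {}
    for path in base.keys() | current.keys():
        if base.get(path) != current.get(path):
            if path not in current:
                changes[path] = "deleted"
            elif path not in base:
                changes[path] = "added"
            else:
                changes[path] = "modified"
    return changes


def reconcile(changes: dict[str, str], approved: set[str]) -> dict[str, str]:
    """Filter out expected changes (tied to change orders); the rest need investigation."""
    return {path: kind for path, kind in changes.items() if path not in approved}
```

The key design point is that the baseline is captured before the system is ever exposed to the network, so every later deviation is attributable: either it reconciles to a planned change, or it is flagged for immediate investigation.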

The Justice Department's July 13 indictments of Russian hackers did not conclude that votes were changed in the 2016 election, but reaching even that determination took national security resources that most agencies don’t have for monitoring change on their networks. They need automated systems that will tell them not just whether changes were made, but what changes were made.

“If you lose confidence in the election system, that gets to the heart of democracy and the freedom that we have cherished for over 200 years in this country," Ross said. "So the fact that we’re using computers today in a big way -- and a lot of them are not trustworthy -- is a huge problem that we have to solve in order to maintain both personal privacy and security.”

About the Author

Keren Cummins is federal director at Tripwire, where she works with federal customers on the complexities of managing critical security controls and on strengthening their risk management strategies.


