Moving IT security from human to machine speed
- By Bill Aubin
- Mar 26, 2018
Cybersecurity is a top priority for federal agencies. Since taking office, President Donald Trump has proposed a cyber funding increase of 4 percent across the government, including significant hikes for the Department of Homeland Security and the Pentagon. There are even larger cyber funding spikes at key agencies, with a 23 percent jump at the Energy Department, a 33 percent jump at the Nuclear Regulatory Commission and a 16 percent hike at the Department of Veterans Affairs.
IT analyst firm Gartner, too, has reported government CIOs will increase spending on cybersecurity. Its survey showed that CIOs at defense and intelligence agencies see artificial intelligence as a crucial technology investment -- more so than their counterparts in other industries. This highlights the need for AI investment at the government level, as well as an improved approach to defense and intelligence through technology.
We must move IT security from human to machine speed. Cyber criminals have certainly done so with automated tools and tactics. Much has been said about the AI arms race between the U.S., China and Russia, and to keep government secure, agencies must embrace AI and machine learning-powered cybersecurity measures.
For the purposes of this article, AI refers to “the broader concept of machines being able to carry out tasks in a way that we would consider ‘smart,’” and machine learning to “a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves,” as defined in this Forbes column.
The data challenges with federal cybersecurity
Federal agencies house reams of sensitive and classified data, making them a high-value target for cyber criminals. Major threats to the security of this data include the lack of visibility IT managers have into their networks and users’ behaviors, coupled with the amount of time it takes them to find threat actors using valid credentials inside their IT infrastructure.
That doesn't even account for the threats that originate on the inside. The number of people at the federal level who have access to top-secret information has increased from hundreds of thousands to almost a million across thousands of private companies and public agencies. Government agencies estimate there is one insider threat for every 6,000 to 8,000 employees because it is too difficult to monitor each individual and their behavior online.
Meanwhile, malicious and negligent insider attacks continue to catch organizations unprepared, according to a study by Mimecast. In fact, 43 percent of businesses need a month or longer to detect employees accessing unauthorized files, according to a Ponemon Institute study.
The location of sensitive and regulated data, who has access to it and what their behaviors are -- from the outside or from within -- are big blind spots that continue to make government agencies vulnerable to external and insider threats.
Additionally, the staggering talent shortages plaguing cybersecurity as a whole leave federal security teams understaffed, overtaxed and struggling to enact the right strategy to properly address cybersecurity. In August 2016, the Government Accountability Office reported that federal chief information security officers faced significant challenges in recruiting and retaining personnel with high-demand skills.
Where humans fall short
When it comes to protecting sensitive data, it’s important to keep sight of the ways big data, advanced behavioral analytics and machine learning can help, from detecting and responding to threats, to proactively fighting them. Machine learning enables organizations to apply mathematical models to make sense of copious amounts of data and to automatically take action based on those insights. Machine learning-based behavioral analytics and anomaly detection applied to cybersecurity data can vastly improve and automate the way organizations monitor for unusual behavior, detect threats and handle incident response.
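To make the idea of machine-driven anomaly detection concrete, here is a minimal, illustrative sketch of flagging unusual user behavior from activity counts. The data, thresholds and function names are assumptions for illustration; production behavioral analytics use far richer models and features.

```python
# Minimal sketch of behavioral anomaly detection, assuming we track
# per-user daily file-access counts. All values here are illustrative.
from statistics import mean, stdev

def anomaly_scores(counts):
    """Return a z-score for each observation against the series baseline."""
    mu, sigma = mean(counts), stdev(counts)
    return [(c - mu) / sigma for c in counts]

def flag_anomalies(counts, threshold=2.0):
    """Flag the days whose activity deviates more than `threshold` sigmas."""
    return [i for i, z in enumerate(anomaly_scores(counts)) if abs(z) > threshold]

# A user who normally touches ~20 files a day suddenly accesses 500.
daily_file_accesses = [18, 22, 19, 21, 20, 23, 500]
print(flag_anomalies(daily_file_accesses))  # the day-6 spike is flagged: [6]
```

The same pattern, scaled up to millions of users and many signals per user, is what lets a machine surface the handful of behaviors a human analyst should actually review.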
The Chelsea Manning case is a perfect example of how machine learning could have helped prevent a government employee from releasing classified information to the public. As a U.S. Army private and intelligence analyst, Manning released top secret information to WikiLeaks because she wanted to “spark a domestic debate on the role of military and foreign policy.” After the breach, President Barack Obama signed an executive order creating the National Insider Threat Task Force, which impacted nearly every federal department and required federal employees to monitor each other for suspicious activity. While monitoring for suspicious behavior can be effective, it is more efficient when that human effort is supplemented by a machine.
The role machines play
Machines can digest huge amounts of an organization's stored data and flag unusual behavior. The diverse information required to get a complete picture of what's happening and what might happen means gathering network traffic, endpoint data, cloud and identity data and other information from logs, and marrying that with contextual information such as threat intelligence, asset criticality measures and vulnerability data.
Today, security analysts are turning to security data lakes to easily store and access this critical data. Once agencies have collected the data, they must apply advanced analytics using machine learning algorithms to improve threat detection and reaction time. These tools help agencies spot potential threats, prioritize vulnerability remediation and architectural adjustments, and identify and understand attacks already in progress.
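A hedged sketch of what "marrying detections with context" might look like: scoring alerts by asset criticality and known exposure so the most consequential ones rise to the top. The scoring formula, asset names and scales below are assumptions, not a real product's method.

```python
# Illustrative sketch: prioritize alerts by combining detection severity
# with contextual data (asset criticality, open vulnerabilities).
# Hosts, scales and the scoring formula are hypothetical.

ASSET_CRITICALITY = {"hr-db": 9, "test-vm": 2, "mail-gw": 6}  # 1-10 scale
OPEN_VULNS = {"hr-db": 4, "mail-gw": 1}                       # open CVE counts

def priority(alert):
    """Score = base severity weighted by asset importance and exposure."""
    host = alert["host"]
    crit = ASSET_CRITICALITY.get(host, 1)
    vulns = OPEN_VULNS.get(host, 0)
    return alert["severity"] * crit * (1 + vulns)

alerts = [
    {"host": "test-vm", "severity": 8},
    {"host": "hr-db",   "severity": 5},
]
ranked = sorted(alerts, key=priority, reverse=True)
print([a["host"] for a in ranked])  # hr-db outranks test-vm despite lower severity
```

The design point is that raw severity alone misleads: a moderate alert on a critical, vulnerable system matters more than a loud one on a disposable test box.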
Speed and context matter. For example, most agencies have billions of security-related logs per day, and security analysts must be able to automatically and intelligently parse critical data. When security teams organize their logs and enrich them with important contextual information, they can more accurately and efficiently detect suspicious activity like insider threats and conduct incident investigation. Machine-speed analytics adds another layer of protection to the human effort.
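The parse-and-enrich step described above can be sketched as follows. The log format, field names and threat-intelligence feed are invented for illustration; real pipelines handle many formats and enrich from live intel sources.

```python
# Hypothetical sketch of parsing a raw auth-log line and enriching it
# with threat intelligence before analysis. Format and feed are assumed.
import re

LINE_RE = re.compile(
    r"(?P<ts>\S+) (?P<user>\S+) (?P<action>\S+) from (?P<ip>\S+)"
)
THREAT_INTEL = {"203.0.113.7": "known-bad"}  # hypothetical intel feed

def parse_and_enrich(line):
    """Turn one raw log line into a structured, context-enriched event."""
    m = LINE_RE.match(line)
    if not m:
        return None  # unparseable lines would be routed elsewhere
    event = m.groupdict()
    event["intel"] = THREAT_INTEL.get(event["ip"], "unknown")
    return event

event = parse_and_enrich("2018-03-26T09:15:02Z alice login from 203.0.113.7")
print(event["intel"])  # "known-bad"
```

Applied automatically across billions of daily log lines, this kind of structuring and enrichment is what turns an unreadable stream into data an analyst, or a model, can act on.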
To adequately address today’s (and tomorrow’s) threats, IT teams must augment their efforts. Machine learning-powered cybersecurity solutions can equip security analysts with the tools they need to automate much of their jobs, reducing the number of analysts federal agencies need to hire in the first place.
Cybersecurity will continue to be a huge priority at the federal level. As more agencies embrace machine learning-powered solutions, they can better thwart both external and insider threats, lighten the workload of security analysts, and stay technologically competitive against other nations.