

AI cybersecurity: Let's take some deep breaths

The concept of artificial intelligence in cybersecurity has taken on nearly mythic proportions over the past couple of years. Many articles have been written on the topic, breathlessly heralding the potential benefits and hazards posed by the rise of machine learning.

However, all the hype surrounding AI tends to obscure an important fact: the best defense against a potential AI cyberattack is rooted in maintaining a fundamental security posture that incorporates continuous monitoring, user education, diligent patch management and basic configuration controls to address vulnerabilities.

Let's take some deep breaths and explore how each of these four fundamental security practices can aid in the fight for cybersecurity’s future (and present).

Identifying the patterns

AI and machine learning are all about patterns. Hackers, for example, look for patterns in server and firewall configurations, outdated operating systems, user actions, incident response tactics and more. These patterns give them information about network vulnerabilities they can exploit.

Network administrators also look for patterns. In addition to scanning for patterns in the way hackers attempt intrusions, they are trying to identify potential anomalies like spikes in network traffic, irregular types of network traffic, unauthorized user logins and other red flags.

To identify a pattern, IT managers must first establish a baseline. Just as people monitor their resting heart rate to gauge overall health, administrators can only detect that something is out of the ordinary once they know what ordinary looks like.

By collecting data and monitoring the state of their network under normal operating conditions, administrators can set up their systems to automatically detect when something unusual takes place -- a suspicious network login, for example, or access through a known bad IP. This fundamental security approach has worked extraordinarily well in preventing more traditional types of attacks, such as malware or phishing. It can also be used very effectively in deterring AI-enabled threats.
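The baseline-then-alert approach described above can be sketched in a few lines. The numbers, blocklist addresses and alert messages below are hypothetical placeholders, not real agency data -- the point is only to show how a recorded baseline turns an observation into a yes/no anomaly decision:

```python
from statistics import mean, stdev

# Hypothetical hourly login counts recorded under normal operating conditions.
baseline_logins = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]

# Example blocklist using RFC 5737 documentation addresses.
known_bad_ips = {"203.0.113.7", "198.51.100.23"}

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag a reading that deviates from the baseline mean by more than
    `threshold` standard deviations -- a simple statistical alarm."""
    return abs(observed - mean(baseline)) > threshold * stdev(baseline)

def check_login(source_ip, hourly_count):
    """Combine both checks from the text: known bad IPs and traffic spikes."""
    if source_ip in known_bad_ips:
        return "block: known bad IP"
    if is_anomalous(hourly_count, baseline_logins):
        return "alert: traffic spike"
    return "ok"

print(check_login("203.0.113.7", 41))  # block: known bad IP
print(check_login("10.0.0.5", 400))    # alert: traffic spike
print(check_login("10.0.0.5", 41))     # ok
```

Real deployments would draw the baseline from continuous telemetry rather than a fixed list, but the logic is the same: no baseline, no anomaly detection.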

Educating the users

An agency could have the best monitoring systems in the world, but all of that work can be undermined by a single employee clicking on the wrong email. Social engineering continues to be a large security challenge for the government because workers can easily be tricked into clicking on suspicious attachments, emails and links. Indeed, employees are considered by many to be the weakest link in the security chain, as evidenced by a recent SolarWinds survey that found careless and untrained insiders represented the top source of security threats.

Educating users on what not to do is just as important as putting security safeguards in place. Experts agree that routine user testing reinforces training. Agencies must also develop plans that require all employees to understand their individual roles in the battle for better security. And don't forget a response and recovery plan, so everyone knows what to do and expect when a breach occurs. Test these plans for effectiveness. Don’t wait for an exploit to find a hole in the process.

Patching the holes

Microsoft’s “Patch Tuesday” schedule helps take the guesswork out of when the company's patches are released, and members of the open source community also regularly announce new patches that become available for their tools. Still, many teams fail to immediately download, test and deploy these patches, making their infrastructure an easy target for diligent hackers.

Hackers know when a patch is released, and in addition to trying to find a way around that patch, they will not hesitate to test whether an agency has implemented the fix. Failing to apply patches opens the door to potential attacks -- and if the hacker is using AI, those attacks can come much faster and be even more insidious.

Do not take this chance. When an applicable patch is released, test and implement it as quickly as possible. Maintain relationships with critical vendors and ensure that their patch management releases have been documented by the IT staff.
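Tracking which deployed software lags behind vendor releases is the first step of the test-and-implement cycle described above. The package names and version numbers below are invented for illustration; a real check would pull installed versions from a configuration management database and patched versions from vendor advisories:

```python
# Hypothetical data: versions currently deployed vs. the latest patched
# versions announced by vendors (e.g., on Patch Tuesday).
installed = {"openssl": "3.0.7", "nginx": "1.24.0", "postgres": "15.2"}
latest_patched = {"openssl": "3.0.13", "nginx": "1.24.0", "postgres": "15.6"}

def parse(version):
    """Turn '3.0.13' into (3, 0, 13) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def outdated_packages(installed, latest):
    """Return package names whose deployed version lags the latest patch."""
    return sorted(
        name for name, version in installed.items()
        if parse(version) < parse(latest.get(name, version))
    )

print(outdated_packages(installed, latest_patched))  # ['openssl', 'postgres']
```

A report like this, run on a schedule, turns "patch as quickly as possible" from a slogan into a measurable backlog.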

Checking off the controls

The Center for Internet Security has issued a set of controls designed to provide agencies with a checklist for better security implementations. While there are 20 actions in total, implementing at least the top five -- device inventories, software tracking, security configurations, vulnerability assessments and control of administrative privileges -- can eliminate roughly 85 percent of an organization’s vulnerabilities.
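The first of those controls, a device inventory, reduces to a simple comparison: what is authorized versus what is actually on the network. The hostnames below are hypothetical; in practice the "observed" set would come from a network scan or asset discovery tool:

```python
# Hypothetical inventories for the first CIS control (device inventory).
authorized = {"laptop-001", "server-db1", "printer-3f"}
observed = {"laptop-001", "server-db1", "laptop-rogue"}

# Set arithmetic surfaces both kinds of discrepancy.
unauthorized = observed - authorized   # on the network but not approved
missing = authorized - observed        # approved but not responding

print(sorted(unauthorized))  # ['laptop-rogue']
print(sorted(missing))       # ['printer-3f']
```

The same diff pattern extends naturally to the second control, software tracking, by swapping device names for installed-application names.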

All of these practices -- monitoring, user education, patch management and adherence to CIS controls -- can help agencies fortify themselves against even the most sophisticated AI attacks.

Reaching the “dream target”

Certainly, AI attacks will become more sophisticated over time. Machines will continue to probe for weaknesses, and they will learn, based on patterns, what does and does not work. As the machines learn, attacks will become harder to predict and defend against.

Fortunately, government agencies can achieve what has become known as the “dream target.” Using sophisticated network management and configuration tools, agencies are already able to automate attack response protocols and remediate issues in real time. This puts them ahead of the game and provides a solid base for preventing current and future AI cyberattacks, as well as responding with appropriate AI-based countermeasures.

In anticipating and preparing for that future, we must calm down, take some deep breaths, and work on building solid security fundamentals today. That may not be as awe-inspiring as the proverbial rise of the machines, but it can still be very effective.

About the Author

Destiny Bertucci is head geek at SolarWinds.


Reader Comments

Sat, Jan 6, 2018 Joseph P Rovira Evans

AI can speed things up, and a well-developed AI system depends on resources like a record of normal operating conditions, which is so important for knowing when things are not normal. Human presence and input is the key to making AI a formidable cybersecurity defense system. We just don't have the ability to program, describe or otherwise explain a human's ability to suddenly make sense of how all these different variables fit together and thus solve a problem -- the "light bulb" effect. It would actually be a bit scary if we could artificially replicate that in any usable way. I can think of so many times when one little thing suddenly made everything clear to me. Oh, that makes sense now, thank you for that; I just didn't see it until you came along with the last bit of the puzzle.
