cybersecurity

Where do you draw the line on securing critical infrastructure?

The National Institute of Standards and Technology released its Cybersecurity Framework for critical infrastructure this week, a set of voluntary standards and best practices that the administration would like to see widely adopted by operators of systems critical to the nation’s economy and security.

The framework is a good and necessary step toward improving the nation’s cybersecurity, but it would be a mistake to think that it can achieve real security by itself. Multistage attacks against high-value targets are exploiting upstream vulnerabilities to provide easy access to critical resources in government as well as in sensitive private-sector systems. 

Enforceable baseline standards for a much wider range of systems are necessary to prevent these attacks. 

The danger was brought home by the 2011 breach of RSA, which exposed critical data about the company’s SecurID authentication tokens. That attack began with a spear-phishing campaign against RSA’s parent company, EMC, deploying a zero-day exploit to give the attackers a foothold inside the company. From there they reached RSA, and data stolen from the security company later was used in an attack against defense contractor Lockheed Martin.

A more recent example is the theft of information about tens of millions of credit cards. The attackers apparently used a network link with a heating, ventilation and air conditioning contractor to penetrate card payment systems at Target stores and possibly other retailers. The attack did not use HVAC control systems; the initial compromise could have been in almost any type of connected system.

The interconnections among information systems today make it difficult, if not impossible, to set limits on what infrastructure should be designated critical for government and the private sector. Multistage attacks can be simple or sophisticated, but they all exploit weak links that might in themselves be of little value. These attacks can then escalate access to critical resources without having to penetrate a hardened perimeter. They can avoid setting off intrusion alarms and can make the breaches more difficult to detect.

This does not mean that critical systems should not get close attention when it comes to cybersecurity. Effective security needs to be risk-based, which means that the systems presenting the greatest risk get the most attention. But it does illustrate the risk of sharply defining the perimeters of critical, high-value systems without considering what those systems are connected to, what those secondary systems are connected to, and so on down the chain.

Cybersecurity is a big job, and when approaching a big job it makes sense to prioritize. But don’t be lulled into thinking the job is done when the top priority is completed. Priorities are like an old-fashioned rail fence: If you take off the top rail, you’ll find another top rail beneath it. Even if our critical infrastructure is protected, we cannot assume that we are secure until the infrastructure that connects to it is secure, down to the HVAC contractors if necessary.

Posted by William Jackson on Feb 14, 2014 at 11:52 AM


fingerprints

Approximate matching can help find needles in haystacks

Finding malicious code is not too difficult if you have a fingerprint or signature to look for. Traditional signature-based antivirus tools have been doing this effectively for years. But malware often morphs, adapts and evolves to hide itself, and a simple one-to-one match no longer is adequate.

The National Institute of Standards and Technology is developing guidance for a technique called approximate matching to help automate the task of identifying suspicious code that otherwise would fall to human analysts. The draft document is based on work of NIST’s Approximate Matching Working Group.

“Approximate matching is a promising technology designed to identify similarities between two digital artifacts,” the draft of Special Publication 800-168 says. “It is used to find objects that resemble each other or to find objects that are contained in another object.” 

The technology can be used to filter data for security monitoring and for digital forensics, when analysts are trying to spot potential bad actors either before or after a security incident.

Approximate matching is a generic term describing any method for automating the search for similarities between two digital artifacts or objects. An “object” is an “arbitrary byte sequence, such as a file, which has some meaningful interpretation.”

Humans can understand the concept of similarity intuitively, but defining the aspects of similarity for algorithms can be challenging. In approximate matching, similarity is defined for algorithms in terms of the characteristics of artifacts being examined. These characteristics can include byte sequences, internal syntactic structures or more abstract semantic attributes similar to what human analysts would look for.

Different methods for approximate matching operate at different levels of abstraction. These range from generic techniques at the lowest level that detect common byte sequences, to more abstract analyses that approach the level of human evaluation. “The overall expectation is that lower level methods would be faster, and more generic in their applicability, whereas higher level ones would be more targeted and require more processing,” the document explains.

Approximate matching uses two types of queries: resemblance and containment. Two successive versions of a piece of code are likely to resemble each other, and a resemblance query simply identifies two pieces of code that are substantially similar. With a containment query, two objects of substantially different size, such as a file and a whole-disk image, are examined to determine whether the smaller object, or something similar to it, is contained in the larger one.
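The draft does not appear to tie these queries to any single algorithm, and production approximate-matching tools use far more sophisticated schemes, but the two query types are easy to illustrate. The Python sketch below scores resemblance as the overlap between two objects’ byte n-grams and containment as the fraction of the smaller object’s n-grams found in the larger one; the n-gram size and the sample byte strings are arbitrary choices made purely for illustration.

```python
# Toy sketch of resemblance and containment queries using byte n-gram overlap.
# This is NOT the method defined in SP 800-168 or used by any production tool;
# it only illustrates the two query types described above.

def ngrams(data: bytes, n: int = 8) -> set:
    """Return the set of overlapping n-byte substrings of an object."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def resemblance(a: bytes, b: bytes, n: int = 8) -> float:
    """Jaccard similarity (0.0-1.0) for two objects of comparable size."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

def containment(small: bytes, large: bytes, n: int = 8) -> float:
    """Fraction of the smaller object's n-grams found in the larger object."""
    gs = ngrams(small, n)
    return len(gs & ngrams(large, n)) / len(gs) if gs else 1.0

# Two successive versions of an artifact should score high on resemblance;
# an artifact embedded in a disk image should score high on containment.
v1 = b"sample config, build 1, beacon to host-a.invalid every 60 seconds"
v2 = b"sample config, build 2, beacon to host-a.invalid every 90 seconds"
disk_image = b"unrelated filesystem bytes..." + v1 + b"...more unrelated bytes"

print(round(resemblance(v1, v2), 2))          # well above what unrelated data would score
print(round(containment(v1, disk_image), 2))  # 1.0 -- v1 is embedded verbatim
```

A function this naive would be easy for an adversary to evade, which is one reason the publication devotes attention to factors such as robustness and security.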

As described in the document, approximate matching usually is used to filter data, as in blacklisting known malicious artifacts or anything closely resembling them. “However, approximate matching is not nearly as useful when it comes to whitelisting artifacts, as malicious content can often be quite similar to benign content,” NIST warns.

The publication lays out essential requirements of approximate matching functions as well as the factors—including sensitivity and robustness, precision and recall, and security—that determine the reliability of the results.

Comments on the publication should be sent by March 21 to match@nist.gov with “Comments on SP 800-168” in the subject.

Posted by William Jackson on Feb 07, 2014 at 10:23 AM


Internet

Mobile, enterprise users drive US IPv6 growth

According to the latest quarterly State of the Internet report from Akamai, Western nations are leading the way in use of the next-generation Internet Protocols, with Asia surprisingly lagging behind.

The amount of IPv6 Internet traffic hitting Akamai’s global content distribution network grew sharply in the third quarter of 2013, and the United States and Europe appear to dominate in adoption of the next-generation Internet Protocols.

Only one Asian nation, Japan, was included among the top 10 countries generating IPv6 traffic, with 1.9 percent of its traffic using IPv6. The United States was in fifth place with 4.2 percent. 

“IPv6 uptake in Asia was not as high as we expected it to be,” said David Belson, Akamai’s senior director of industry and data intelligence and lead author of the report. “That was surprising, given the shortage of IPv4 addresses” in that region.

A limited number of IP addresses are available in Version 4 of the Internet Protocols, and those are beginning to run out. Increasingly, large allocations of addresses are being made from the much larger pool of IPv6 addresses. Because the two versions are not compatible and Internet-connected systems have to be readied for the new protocols, many vendors, carriers and infrastructure operators are tracking their adoption closely.

In the United States, federal agencies are required to accept IPv6 traffic on all public-facing systems. Agencies must upgrade applications that communicate with public Internet servers to use native IPv6 by the end of the 2014 fiscal year. 
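For administrators wondering where their own public-facing systems stand, a quick spot check is straightforward. The sketch below, which uses only Python’s standard socket module, asks DNS for a host’s IPv6 (AAAA) addresses and then tries to open a TCP connection over IPv6 alone; the hostname is a placeholder rather than any real agency system, and passing this check says nothing about whether back-end applications have been upgraded to native IPv6.

```python
# Minimal spot check: does a public-facing hostname publish IPv6 (AAAA) addresses,
# and can this machine reach it over IPv6? The hostname below is a placeholder.
import socket

def ipv6_addresses(hostname: str, port: int = 443) -> list:
    """Return the IPv6 addresses, if any, that DNS publishes for a host."""
    try:
        infos = socket.getaddrinfo(hostname, port,
                                   family=socket.AF_INET6,
                                   type=socket.SOCK_STREAM)
    except socket.gaierror:
        return []  # no AAAA records, or name resolution failed
    return sorted({info[4][0] for info in infos})

def reachable_over_ipv6(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Try a TCP connection to each published IPv6 address; True on first success."""
    for addr in ipv6_addresses(hostname, port):
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    host = "www.example.gov"  # placeholder; substitute a system you operate
    print(host, "AAAA:", ipv6_addresses(host) or "none found")
    print("reachable over IPv6:", reachable_over_ipv6(host))
```

Note that the reachability test also depends on the querying machine itself having working IPv6 connectivity.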

The reason for the higher rate of adoption in the Western countries appears to be leadership from mobile carriers as well as government. “It was good for them to put out a deadline” for enabling IPv6 in government systems, Belson said of the U.S. government. But the largest driver is adoption of the protocols by large mobile carriers. Because of quick market growth and a high turnover rate for devices, mobile users are at the forefront of IPv6 adoption, whether they know it or not.

Still, adoption of the new protocols in this country remains spotty. Comcast, the nation’s largest Internet service provider, reports that 25 percent of its customers are provisioned with dual-stack broadband connections supporting IPv6. But consumer hardware such as routers and cable modems tends to stay in place longer than mobile devices, slowing adoption of the new protocols.

One interesting pattern Akamai found in IPv6 traffic is that volumes drop each Saturday, suggesting that IPv6 adoption is higher on enterprise networks than on consumer ISPs.

Although Internet growth is expected to be in the IPv6 address space, IPv4 is not yet dead. Akamai identified almost 761 million unique IPv4 addresses hitting its network in the third quarter, an increase of 1.1 percent over the previous quarter and a surprising 11 percent over the past year.

The United States, which has the largest allocations of IPv4 addresses, saw the number of IPv4 addresses grow by 9.3 percent over the past year.

This growth and the slow, spotty uptake of IPv6 mask the fact that the pool of available IPv4 addresses continues to shrink. ARIN, the American Registry for Internet Numbers, is down to its last two /8 blocks of IPv4 addresses of 16.7 million each, making large pools of the addresses difficult to obtain. Inevitably, future growth will have to come in IPv6.
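The arithmetic behind those numbers gives a sense of scale. The snippet below simply works out the size of a /8 block and compares the total IPv4 and IPv6 address spaces; it adds nothing beyond the figures already cited.

```python
# Back-of-the-envelope sizes for the address pools discussed above.
ipv4_total = 2 ** 32     # entire IPv4 address space: 4,294,967,296 addresses
slash8 = 2 ** (32 - 8)   # one /8 block: 16,777,216 addresses (the "16.7 million")
ipv6_total = 2 ** 128    # entire IPv6 address space

print(f"IPv4 space:   {ipv4_total:,}")
print(f"One /8 block: {slash8:,}")
print(f"IPv6 space:   {ipv6_total:.3e}")  # roughly 3.4e38
```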

Posted by William Jackson on Jan 31, 2014 at 11:14 AM


Congress

After Target cyberattack, Congress votes to do precisely nothing

The latest cybersecurity bill to be introduced in Congress took a small step forward last week. After the legislation passed out of a House subcommittee, its co-sponsors released a statement saying that “the recent Target incident in which 110 million Americans’ personal information was compromised only underscores the very real and serious nature of the cyberthreat today.”

In response to the unprecedented attack, the National Cybersecurity and Critical Infrastructure Protection Act of 2013 (H.R. 3696) does precisely nothing. It is not just that the bill fails to do anything. Its purpose is actually to avoid doing anything and to codify the status quo, a policy posture that current events have repeatedly shown to be inadequate.

Recognizing that our national security inevitably is bound up with the security of the nation’s privately owned critical infrastructure, the Homeland Security Department has for some years been tasked with providing voluntary technical and operational assistance to the private sector. DHS supports these firms in cooperation with the agencies that have regulatory authority over specific sectors, such as financial services and energy.

But DHS never has had authority to go beyond just offering assistance and advice on best practices.

This is the situation that would be formalized under H.R. 3696. The bill, according to House Homeland Security Committee Chairman Rep. Michael McCaul (R-Texas), who introduced it in December, “prohibits new regulatory authority at DHS and is budget neutral.” That is, the department gets no power to do anything and gets no money to do it. Instead, it codifies existing efforts such as the National Cybersecurity and Communications Integration Center, the National Infrastructure Protection Plan and the National Cybersecurity Incident Response Plan.

There is nothing wrong with these programs, as far as they go, which is not far enough. But the nation’s critical infrastructure is increasingly networked and accessible through the Internet, which exposes it to the full range of threats across the globe.

The emergence of complex, multistage exploits that quietly penetrate critical targets by leveraging vulnerabilities several links away means that it is difficult to be sure any system is effectively isolated. Because of this level of complexity and interconnectivity, it is almost impossible to find a system that could not plausibly be rated critical.

Given Congress’ record on passing cybersecurity legislation, the specific provisions of H.R. 3696 probably aren’t important. But it is disappointing to see that so many in Congress still refuse to acknowledge that the nation needs a strong baseline of protections for the systems on which its security and economy depend.

The belief has been that the private sector will set up effective cybersecurity on its own because it is in its interest to do so. But it has been shown over and over that this is not adequate.

Effective security cannot be legislated, and the last thing this nation needs is a technology prescription from politicians. But regulations with teeth that define required outcomes and responsibilities could go a long way toward ensuring that industry does what is needed to protect its own systems – and gets the assistance it needs from government.

Posted by William Jackson on Jan 24, 2014 at 7:57 AM