Google Glass

The next security perimeter? You're wearing it.

The idea of wearable technology is not new to government. In the military, the concept of using hands-free technology to integrate soldiers in the field into mobile ad hoc networks is part of the Defense Department’s vision of network-centric warfare. But what happens when unmanaged personal or wearable devices are brought into the workplace to connect with the enterprise network?

The result is another layer of security concerns for agencies that still are struggling with the challenges presented by the bring-your-own-device movement.

Some of the challenges presented by products such as Samsung Galaxy Gear and Google Glass are not new. In many ways, “it’s just an alternative form factor,” said Paul Christman, Dell Software’s vice president of public sector. “They are fairly consumer oriented, and they tend to be fairly low tech,” mostly acting as sensors to gather data such as location and health metrics.

The challenge with these devices is not only to secure the data they gather and the connections they use but also to decide who owns and controls the data. 

Joggers who wear a fitness monitor might assume the data is theirs, but odds are they are sharing it with someone else, whether they know it or not. As devices become more sophisticated and are used to access data at work, they will have to be managed and the data they access secured.

Progress is being made in addressing the workplace security challenge in traditional BYOD, often by compartmentalizing the devices to create separate personal and work partitions. Typically, the user cedes a degree of control over the personal device so that workplace IT administrators can enforce policy in the partitioned workspace.

“The same model can apply” in wearable technology, Christman said. “But how do you compartmentalize Google Glass?”

Technologically, the challenge is not that great. Based on their experience with laptops and smartphones, IT pros can port existing security tools to the new form factors as the devices become sophisticated enough to accept them. The real hurdle is making the decision to do so and doing it early enough that administrators do not find themselves in an endless loop of catch-up as the new technology comes online.

Fortunately, the call for security is going out early. “There are a lot of people sounding the alarm from the get-go,” Christman said. “Geolocation data is getting a lot of attention now. That’s one of the things that needs to be addressed first.” The security of local wireless connection protocols used by small devices, such as Bluetooth and near field communication, also needs to be addressed.

And along with the technology fixes there will have to be “polite rules of society” for when and where we use technology and when it’s time to take the glasses off, Christman said. Rules such as “turn the camera off in the locker room” are probably a good idea.

The social and legal niceties of mobile devices are no trivial matters. A man was shot to death in Florida last month in an apparent argument over texting in a movie theater, and a California woman was ticketed late last year for driving with Google Glass. The charge against the woman was dismissed in January, but the questions about liability and legality remain unanswered.

Posted by William Jackson on Feb 21, 2014 at 12:31 PM


cybersecurity

Where do you draw the line on securing critical infrastructure?

The National Institute of Standards and Technology released its Cybersecurity Framework for critical infrastructure this week, a set of voluntary standards and best practices that the administration would like to see widely adopted by operators of systems critical to the nation’s economy and security.

The framework is a good and necessary step toward improving the nation’s cybersecurity, but it would be a mistake to think that it can achieve real security by itself. Multistage attacks against high-value targets are exploiting upstream vulnerabilities to provide easy access to critical resources in government as well as in sensitive private-sector systems. 

Enforceable baseline standards for a much wider range of systems are necessary to prevent these attacks. 

The danger was brought home by the 2011 breach of RSA, which exposed critical data about the company’s SecurID authentication token. The breach began with a spear-phishing attack against RSA’s parent company, EMC, that used a zero-day exploit to give the attackers a foothold inside the company. That exposed RSA, and data stolen from the security company later was used in an attack against defense contractor Lockheed Martin.

A more recent example is the theft of information about tens of millions of credit cards. The attackers apparently used a network link with a heating, ventilation and air conditioning contractor to penetrate card payment systems at Target stores and possibly other retailers. The attack did not use HVAC control systems; the initial compromise could have been in almost any type of connected system.

The interconnections among information systems today make it difficult, if not impossible, to set limits on what infrastructure should be designated critical for government and the private sector. Multistage attacks can be simple or sophisticated, but they all exploit weak links that might in themselves be of little value. These attacks can then escalate access to critical resources without having to penetrate a hardened perimeter. They can avoid setting off intrusion alarms and can make the breaches more difficult to detect.

This does not mean that critical systems should not get close attention when it comes to cybersecurity. Effective security needs to be risk-based, which means the systems presenting the greatest risk get the most attention. But it does illustrate the risk of sharply defining the perimeters of critical, high-value systems without considering what those systems are connected to, what those secondary systems are connected to, and so on down the chain.

Cybersecurity is a big job, and when approaching a big job it makes sense to prioritize. But don’t be lulled into thinking the job is done when the top priority is completed. Priorities are like an old-fashioned rail fence: If you take off the top rail, you’ll find another top rail beneath it. Even if our critical infrastructure is protected, we cannot assume that we are secure until the infrastructure that connects to it is secure, down to the HVAC contractors if necessary.

Posted by William Jackson on Feb 14, 2014 at 11:52 AM


fingerprints

Approximate matching can help find needles in haystacks

Finding malicious code is not too difficult if you have a fingerprint or signature to look for. Traditional signature-based antivirus tools have been doing this effectively for years. But malware often morphs, adapts and evolves to hide itself, and a simple one-to-one match no longer is adequate.

The National Institute of Standards and Technology is developing guidance for a technique called approximate matching to help automate the task of identifying suspicious code that otherwise would fall to human analysts. The draft document is based on work of NIST’s Approximate Matching Working Group.

“Approximate matching is a promising technology designed to identify similarities between two digital artifacts,” the draft of Special Publication 800-168 says. “It is used to find objects that resemble each other or to find objects that are contained in another object.” 

The technology can be used to filter data for security monitoring and for digital forensics, when analysts are trying to spot potential bad actors either before or after a security incident.

Approximate matching is a generic term describing any method for automating the search for similarities between two digital artifacts or objects. An “object” is an “arbitrary byte sequence, such as a file, which has some meaningful interpretation.”

Humans understand the concept of similarity intuitively, but defining it for algorithms can be challenging. In approximate matching, similarity is defined in terms of the characteristics of the artifacts being examined. These characteristics can include byte sequences, internal syntactic structures or more abstract semantic attributes similar to what human analysts would look for.
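To make the byte-sequence case concrete, here is a minimal sketch in Python of one way to characterize an object: reduce it to the set of short byte sequences (n-grams) it contains. The window size, the plain set representation and the file names are illustrative assumptions, not algorithms or parameters taken from SP 800-168.

```python
# Minimal sketch: characterize an object by the set of short byte
# sequences (n-grams) it contains. The window size is an illustrative
# choice, not a value specified in SP 800-168.

def byte_ngrams(data: bytes, n: int = 8) -> set:
    """Return the set of n-byte windows that characterize an object."""
    return {data[i:i + n] for i in range(max(len(data) - n + 1, 0))}

# Hypothetical usage with placeholder file names:
# features_a = byte_ngrams(open("sample_v1.bin", "rb").read())
# features_b = byte_ngrams(open("sample_v2.bin", "rb").read())
```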

Different methods for approximate matching operate at different levels of abstraction, ranging from generic techniques at the lowest level that detect common byte sequences to more abstract analyses that approach the level of human evaluation. “The overall expectation is that lower level methods would be faster, and more generic in their applicability, whereas higher level ones would be more targeted and require more processing,” the document explains.

Approximate matching uses two types of queries: resemblance and containment. Two successive versions of a piece of code are likely to resemble each other, and a resemblance query simply identifies two pieces of code that are substantially similar. With a containment query, two objects of substantially different size, such as a file and a whole-disk image, are examined to determine whether the smaller object, or something similar to it, is contained in the larger one.
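As a rough illustration of how the two queries differ, the sketch below computes both over pre-extracted feature sets such as the byte n-grams above. The symmetric Jaccard score for resemblance and the one-sided ratio for containment are common textbook formulations used here as assumptions; they are not the specific algorithms NIST surveys.

```python
# Minimal sketch of the two query types over pre-extracted feature sets
# (for example, byte n-grams). Any threshold applied to these scores
# would be a policy choice, not a value taken from SP 800-168.

def resemblance(features_a: set, features_b: set) -> float:
    """Symmetric (Jaccard) similarity: do two objects look alike overall?"""
    if not features_a and not features_b:
        return 1.0
    return len(features_a & features_b) / len(features_a | features_b)

def containment(small: set, large: set) -> float:
    """Fraction of the smaller object's features present in the larger one."""
    if not small:
        return 0.0
    return len(small & large) / len(small)

# Two successive versions of a file should score high on resemblance;
# a file buried in a whole-disk image should score high on containment
# even though the image as a whole barely resembles it.
```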

As described in the document, approximate matching usually is used to filter data, as in blacklisting known malicious artifacts or anything closely resembling them. “However, approximate matching is not nearly as useful when it comes to whitelisting artifacts, as malicious content can often be quite similar to benign content,” NIST warns.

The publication lays out essential requirements for approximate matching functions, as well as the factors that determine the reliability of the results, including sensitivity and robustness, precision and recall, and security.

Comments on the publication should be sent by March 21 to match@nist.gov with “Comments on SP 800-168” in the subject.

Posted by William Jackson on Feb 07, 2014 at 10:23 AM


Internet

Mobile, enterprise users drive US IPv6 growth

According to the latest quarterly State of the Internet report from Akamai, Western nations are leading the way in use of the next-generation Internet Protocols, with Asia surprisingly lagging behind.

The amount of IPv6 Internet traffic hitting Akamai’s global content distribution network grew sharply in the third quarter of 2013, and the United States and Europe appear to dominate in adoption of the next-generation Internet Protocols.

Only one Asian nation, Japan, was included among the top 10 countries generating IPv6 traffic, with 1.9 percent of its traffic using IPv6. The United States was in fifth place with 4.2 percent. 

“IPv6 uptake in Asia was not as high as we expected it to be,” said David Belson, Akamai’s senior director of industry and data intelligence and lead author of the report. “That was surprising, given the shortage of IPv4 addresses” in that region.

A limited number of IP addresses are available in Version 4 of the Internet Protocols, and those are beginning to run out. Increasingly, large allocations of addresses are being made from the much larger pool of IPv6 addresses. Because the two versions are not compatible and Internet connected systems have to be readied for the new protocols, many vendors, carriers and infrastructure operators are tracking their adoption closely. 
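As a small illustration of what being readied can involve, the sketch below checks whether a named host publishes an IPv6 address and whether a TCP connection to it succeeds over IPv6. It is a hedged example, not a procedure drawn from the Akamai report or from federal guidance, and the hostname is a placeholder.

```python
# Minimal sketch: does this host resolve to an IPv6 (AAAA) address, and
# can we actually reach it over IPv6? The hostname below is a placeholder.
import socket

def reachable_over_ipv6(hostname: str, port: int = 443) -> bool:
    """Return True if an IPv6 connection to hostname:port succeeds."""
    try:
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                hostname, port, socket.AF_INET6, socket.SOCK_STREAM):
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(5)
                s.connect(addr)
                return True
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print(reachable_over_ipv6("www.example.gov"))  # placeholder hostname
```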

In the United States, federal agencies are required to accept IPv6 traffic on all public-facing systems. Agencies must upgrade applications that communicate with public Internet servers to use native IPv6 by the end of the 2014 fiscal year. 

The reason for the higher rate of adoption in the Western countries appears to be leadership from mobile carriers as well as government. “It was good for them to put out a deadline” for enabling IPv6 in government systems, Belson said of the U.S. government. But the largest driver is adoption of the protocols by large mobile carriers. Because of quick market growth and a high turnover rate for devices, mobile users are in the forefront of IPv6 adoption, whether they know it or not.

Still, adoption of the new protocols in this country remains spotty. Comcast, the nation’s largest Internet service provider, reports that 25 percent of its customers are provisioned with dual-stack broadband connections supporting IPv6. But consumer hardware such as routers and cable modems tends to stay in place longer than mobile devices, reducing the rate of adoption of the new protocols.

One interesting pattern Akamai found in IPv6 traffic is that volumes drop each Saturday, suggesting that IPv6 adoption is running higher on enterprise networks than on consumer ISPs.

Although Internet growth is expected to be in the IPv6 address space, IPv4 is not yet dead. Akamai identified almost 761 million unique IPv4 addresses hitting its network in the third quarter, a growth of 1.1 percent over the previous quarter and a surprising 11 percent increase over the past year.

The United States, which has the largest allocations of IPv4 addresses, saw the number of IPv4 addresses grow by 9.3 percent over the past year.

This growth and the slow, spotty uptake of IPv6 mask the fact that the pool of available IPv4 addresses continues to shrink. ARIN, the American Registry for Internet Numbers, is down to its last two /8 blocks of IPv4 addresses, each holding about 16.7 million (2^24) addresses, making large pools of addresses difficult to obtain. Inevitably, growth will have to shift to IPv6.

Posted by William Jackson on Jan 31, 2014 at 11:14 AM