A European Union study of the evolving cyber threat landscape identified a handful of emerging areas that are likely to be high-profile targets in the immediate future, with mobile computing topping the list.
Hardly a shocking conclusion, and the rest of the list contains few surprises: social technology (usually referred to as social networking in this country), critical infrastructure, cloud computing and big data.
But there was one area flagged in the report that doesn’t get much attention here as a separate IT segment: Trust infrastructure. This is defined as "any information system that provides strong authentication and aims at establishing a trusted, secure connection between two end points."
In the United States we usually lump this function in with applications or networks as identity management. But the EU study takes a broader view, one that reflects a stronger emphasis on privacy and the idea that identity resides with the individual, not with the resources being accessed.
Maybe the concept of a trust infrastructure will gain traction here under the National Strategy for Trusted Identities in Cyberspace, a multi-pronged, public/private effort headed by the National Institute of Standards and Technology.
Among the programs under way, the administration is launching an initiative to use commercial cloud services to authenticate third-party credentials for accessing government sites, called the Federal Cloud Credential Exchange. The U.S. Postal Service will be operating an FCCX pilot.
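The article doesn't spell out what protocol FCCX will use, but the general pattern of brokered, third-party authentication looks something like the sketch below: a government site (the relying party) trusts a signed identity assertion issued by a commercial credential provider rather than managing the user's password itself. The issuer, audience and key names here are hypothetical placeholders, not anything published for FCCX.

    # Sketch of a relying party validating a third-party identity assertion.
    # Issuer, audience and key values are hypothetical; FCCX's actual
    # protocol and endpoints are not described in the article.
    import jwt  # PyJWT

    TRUSTED_ISSUER = "https://idp.example-credential-provider.com"  # hypothetical
    EXPECTED_AUDIENCE = "https://services.example.gov"              # hypothetical

    def validate_assertion(token, provider_public_key):
        """Verify signature, issuer, audience and expiry of an assertion."""
        claims = jwt.decode(
            token,
            provider_public_key,
            algorithms=["RS256"],
            audience=EXPECTED_AUDIENCE,
            issuer=TRUSTED_ISSUER,
        )
        # The broker model means the agency never sees the user's password;
        # it only trusts the provider's signed statement about who the user is.
        return claims

The point of the pattern is that the agency verifies a credential it did not issue, which is exactly the citizen-to-government bridge the pilot is meant to test.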
A successful citizen-to-government identity bridge could help replace the outmoded password paradigm with strong, manageable credentials so the United States could have its own trust infrastructure. Considering it’s apparently an emerging target for cyber criminals, it seems a bridge worth crossing.
The EU study was conducted by the European Network and Information Security Agency, which analyzed more than 140 reports from the security industry and other organizations.
The study broke down the top threats by six areas: mobile computing, social technology, critical infrastructure, trust infrastructure, cloud computing and big data, and listed whether those types of threats were increasing, remaining stable or decreasing in each area.
Mobile computing, for example, faces increasing threats from drive-by attacks, worms and Trojans, exploit kits, botnets and phishing, among others. The current threats to the trust infrastructure include denial-of-service attacks, compromised confidential information, targeted attacks, physical theft, loss or damage of equipment, and identity theft.
Mobile users do get one small piece of good news from the report. Among 16 threats across six computing areas, only one threat is decreasing: spam in mobile computing.
Posted on Jan 10, 2013 at 2:07 PM
National security increasingly depends on the ability of agencies at the federal, state and local level to cooperate across organizational and jurisdictional lines.
“This cooperation, in turn, demands the timely and effective sharing of intelligence and information about threats to our nation with those who need it, from the president to the police officer on the street,” says the president’s National Strategy for Information Sharing and Safeguarding, released in December.
This requirement has been complicated by turf wars, siloed databases and a lack of interoperable technology and policies. The strategy calls for a shift to interoperable shared services (read: cloud) and integrated policies that will require system upgrades in what the strategy calls an “extremely austere budget environment.” In other words, don’t look for the needed improvements in the nation’s information sharing infrastructure to happen any time soon.
But the real headache in meeting the strategy’s goals will be establishing the common identity and access management schemes that are needed to enable secure sharing.
Today’s systems, policies and procedures were developed to lock information down, not to share it securely. The security posture has been defensive and outward-facing.
“The focus of information safeguarding efforts in the past was primarily bound to systems and networks at specific classification levels,” the strategy says. This focus will have to shift to the data itself, regardless of where or what it is, with common standards for tagging data with metadata to enable discovery across multiple databases and using common platforms for identity and access management.
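As a rough illustration of what data-level tagging and discovery could look like, here is a minimal sketch in which each record carries its own classification, originator and handling metadata, and a discovery query filters on what the requester is cleared to see. The field names are illustrative only, not an actual federal tagging standard.

    # Minimal sketch of tagging records with metadata so they can be
    # discovered and filtered across repositories. Field names are
    # illustrative, not a real government tagging scheme.
    from dataclasses import dataclass, field

    @dataclass
    class TaggedRecord:
        content: str
        classification: str                        # e.g. "UNCLASSIFIED", "SECRET"
        originator: str                            # agency that produced the record
        handling: set = field(default_factory=set) # caveats the requester must hold
        topics: set = field(default_factory=set)   # keywords used for discovery

    def discover(records, topic, clearance, caveats):
        """Return records on a topic that the requester is allowed to see."""
        levels = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
        return [
            r for r in records
            if topic in r.topics
            and levels.index(r.classification) <= levels.index(clearance)
            and r.handling <= caveats   # requester holds all required caveats
        ]

Because the access decision travels with the data rather than the network, the same record can be discovered from any repository that understands the tags.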
It might be relatively easy to move federal, state and local intelligence to interoperable cloud platforms in standardized formats, although it will take money that is unlikely to be available any time soon. But common identity and access management schemes will be much harder to establish.
It has been more than eight years since Homeland Security Presidential Directive 12 mandated an interoperable electronic ID card that could be used across all executive branch agencies and their contractors for both physical and logical access. Standards and specifications were developed by the National Institute of Standards and Technology in record time, and millions of Personal Identity Verification (PIV) cards have been issued to government employees and contractors. But despite this progress, most agencies still lack platforms for using the cards for unified access control to both physical facilities and IT systems, and departments rarely, if ever, accept each other's cards.
The problem is both a lack of mutual trust between agencies and the legacy systems underlying access control that have not been updated to accommodate a common set of electronic credentials.
Specifications have been developed for PIV-Interoperable (PIV-I) cards, which could be used by citizens and state and local governments and accepted by federal agencies, and they could serve as a model for the kind of interoperable environment envisioned in the president's strategy. But PIV-I cards are not being widely adopted, and the lack of cross-agency acceptance of PIV cards at the federal level shows how difficult it is to establish a broad-based, secure scheme for identity and access management.
Getting thousands of law enforcement, security and intelligence agencies on the same page and willing to share their most valuable assets over a single system supported by interoperable technology is likely to prove a serious hurdle to achieving the president’s vision.
Posted on Jan 07, 2013 at 1:49 PM
Many threat forecasters focus on what is changing in IT to define the coming threat landscape, but researchers crunching the numbers from eight years of Verizon Data Breach Investigations Reports say the past is a better indicator, and they see little change in the future.
“I feel pretty confident that 2013 will be very similar to 2012,” said Wade Baker, managing principal of Verizon’s Research Intelligence Solutions Knowledge (RISK) Team and a principal author of the annual report.
The threat landscape is defined not by emerging technology, but by tried and true techniques that persist from year to year.
That means that the top issues that agencies will continue to see in 2013 will be authentication exploits involving the theft or improper reuse of passwords and other credentials, and Web application exploits, which have been a favorite tool of the hacktivists who target government sites.
Not on the list of likely trouble spots are cloud computing and mobile devices, which are getting a lot of attention in 2013 forecasts. “They are significant changes,” Baker acknowledged. “But I’m not convinced that these infrastructure and device-level changes have demonstrably changed the threat environment yet.”
The Data Breach Investigations Reports are annual statistical analyses of information breaches that have been investigated in a number of countries. The most recent report includes information from 855 incidents gathered from the U.S. Secret Service as well as from authorities in Australia, the Netherlands, Ireland and the United Kingdom.
Which raises the question: Can historical information be used to predict the future? Baker thinks it can, at least for the near-term. The top threats have changed little over the past five years, he said. They might shift in the rankings some, but the mix remains largely the same. “Every year we look at this and we say things are going to change, and they don’t,” he said.
That is not to say there have not been changes over the past eight years. There has been an increase in the use of stolen credentials to exploit authentication systems and a corresponding decrease in the exploitation of vulnerabilities in code. Cybercriminals have become more professional and the exploit tools they use have become commodities, putting more powerful weapons in the hands of non-technical hackers and crooks.
But governments face a different kind of attacker, Baker said. Those who target governments are driven less by profit motive and more by activism and national interests. This means that the most significant threats might not be the most numerous ones. Agencies, along with large corporations with valuable intellectual property, also will continue to be targeted by state-sponsored espionage, which is harder to detect and might not make it onto the radar screen as often.
And all of this raises another question: Which is the more significant threat, the numerous ones we know about or the emerging ones we don't? That won't be clear until we know what the impact of the emerging threats is. Hindsight is always better than forecasts.
Posted on Jan 02, 2013 at 12:02 PM
Malware writers have come up with a gift for us this Christmas season: Code that monitors its environment before executing. If nothing is stirring, not even a mouse, it remains quiet, hiding itself.
“Until the left mouse button is released, the code will remain dormant making it immune from automated analysis by a sandbox,” FireEye researchers wrote in a recent blog post about a new Trojan they call Upclicker.
This is important because with the huge number of malware variants out there — Symantec estimates the number of new variants at more than a million a day — signature-based detection tools cannot keep up with the onslaught, and users increasingly rely on sandboxing and automated analysis to detect the bad actors on their computers. These tools look at what a piece of code actually does to decide whether it should be allowed to run.
Malware writers know this and look for ways to hide. Symantec, back in October, issued an alert that some malware has begun monitoring its surroundings to determine whether it is in a virtual environment (i.e., a sandbox), where it can be tricked into revealing itself. One effective technique is for the malware to watch for mouse activity, a reliable indicator of human involvement. If the malware does not receive its prompts from a mouse click, it assumes it is in a sandbox and remains quiet, hoping to be released into the machine, where it can do its job.
FireEye researchers analyzed the new Trojan Upclicker, which uses this technique to hide. Only when executed with a left-click from a mouse does it inject malicious code into the browser, which opens a communications channel with a command server.
Neither Symantec nor FireEye as yet offer any specific suggestions for thwarting this behavior, although FireEye warned that “we expect to see more such samples that can use a specific aspect like pressing specific keys, specific mouse buttons, or movement of the mouse a certain distance to evade the automated analysis.”
In the ongoing cat-and-mouse game of cybersecurity it is likely that defensive techniques will be developed to address these threats. Signatures could be developed to look for the “hook” commands in the malware that monitor mouse or other activity, or the analysis tools might be able to detect this monitoring activity and flag it as suspicious.
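As a rough illustration of that idea, and not anything FireEye or Symantec has published, a signature-style check might start as simply as scanning a sample for the Windows hooking and input-polling APIs this kind of malware relies on. A real tool would parse the import table and weigh many other indicators; this sketch only flags the strings.

    # Rough sketch of the kind of static check described above: flag files
    # that reference Windows APIs commonly used to watch for mouse activity.
    # A production signature would be far more precise; this is illustrative.
    import sys

    SUSPICIOUS_NAMES = [b"SetWindowsHookEx", b"GetCursorPos", b"GetAsyncKeyState"]

    def flag_hook_apis(path):
        """Return any suspicious API names found in the file's raw bytes."""
        with open(path, "rb") as f:
            data = f.read()
        return [name.decode() for name in SUSPICIOUS_NAMES if name in data]

    if __name__ == "__main__":
        hits = flag_hook_apis(sys.argv[1])
        if hits:
            print("Possible environment-monitoring APIs:", ", ".join(hits))

Legitimate software sets hooks too, of course, so a hit here is a reason for closer inspection, not a verdict.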
Whatever solution we end up with, it is all but certain that the bad guys will come up with a new way around it. Just more things for us to worry about.
Posted on Dec 14, 2012 at 11:31 AM