Many threat forecasters focus on what is changing in IT to define the coming threat landscape, but researchers crunching the numbers from eight years of Verizon Data Breach Investigations Reports
say the past is a better indicator, and see little change in the future.
“I feel pretty confident that 2013 will be very similar to 2012,” said Wade Baker, managing principal of Verizon’s Research, Investigations, Solutions and Knowledge (RISK) Team and a principal author of the annual report.
The threat landscape is defined not by emerging technology, but by tried and true techniques that persist from year to year.
That means that the top issues that agencies will continue to see in 2013 will be authentication exploits involving the theft or improper reuse of passwords and other credentials, and Web application exploits, which have been a favorite tool of the hacktivists who target government sites.
Not on the list of likely trouble spots are cloud computing and mobile devices, which are getting a lot of attention in 2013 forecasts. “They are significant changes,” Baker acknowledged. “But I’m not convinced that these infrastructure and device-level changes have demonstrably changed the threat environment yet.”
The Data Breach Investigations Reports are annual statistical analyses of information breaches that have been investigated in a number of countries. The most recent report covers 855 incidents, with data gathered from the U.S. Secret Service as well as from authorities in Australia, the Netherlands, Ireland and the United Kingdom.
Which raises the question: Can historical information be used to predict the future? Baker thinks it can, at least for the near-term. The top threats have changed little over the past five years, he said. They might shift in the rankings some, but the mix remains largely the same. “Every year we look at this and we say things are going to change, and they don’t,” he said.
That is not to say there have not been changes over the past eight years. There has been an increase in the use of stolen credentials to exploit authentication systems and a corresponding decrease in the exploitation of vulnerabilities in code. Cybercriminals have become more professional and the exploit tools they use have become commodities, putting more powerful weapons in the hands of non-technical hackers and crooks.
But governments face a different kind of attacker, Baker said. Those who target governments are driven less by profit motive and more by activism and national interests. This means that the most significant threats might not be the most numerous ones. Agencies, along with large corporations with valuable intellectual property, also will continue to be targeted by state-sponsored espionage, which is harder to detect and might not make it onto the radar screen as often.
And all of this raises another question: Which are the more significant threats: the numerous ones we know about, or the emerging ones we don’t? That won’t be clear until we know what the impact of those emerging threats actually is. Hindsight is always better than forecasts.
Posted by William Jackson on Jan 02, 2013 at 9:39 AM
Malware writers have come up with a gift for us this Christmas season: Code that monitors its environment before executing. If nothing is stirring, not even a mouse, it remains quiet, hiding itself.
“Until the left mouse button is released, the code will remain dormant making it immune from automated analysis by a sandbox,” FireEye researchers wrote in a recent blog post about a new Trojan they call Upclicker.
This is important because with the huge number of malware variants out there — Symantec estimates the number of new variants at more than a million a day — signature-based detection tools cannot keep up with the onslaught, and users increasingly rely on sandboxing and automated analysis to detect the bad actors on their computers. These tools look at what a piece of code actually does to decide whether it should be allowed to run.
Malware writers know this and look for ways to hide. Symantec, back in October, issued an alert that some malware has begun monitoring its surroundings to determine whether it is in a virtual environment (i.e., a sandbox), where it can be tricked into revealing itself. One effective technique is for the malware to watch for mouse activity, a reliable indicator of human involvement. If the malware does not receive its prompts from a mouse click, it assumes it is in a sandbox and remains quiet, hoping to be released into the machine, where it can do its job.
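To make the idea concrete, here is a minimal sketch of the kind of environment check Symantec describes: poll the cursor position, wait, and proceed only if the mouse has moved. This is an illustration of the general technique, not code from Upclicker or any real sample, and it assumes a Windows host because it calls the Win32 GetCursorPos API through Python's ctypes.

```python
# Illustrative sketch only -- a rough approximation of the environment check
# described above, NOT code from any actual malware. Assumes a Windows host.
import ctypes
import time

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

def cursor_position():
    """Ask Windows where the mouse pointer currently is."""
    pt = POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(pt))
    return pt.x, pt.y

def human_seems_present(wait_seconds=30):
    """Return True only if the cursor moved during the wait window --
    the kind of signal an automated sandbox typically never produces."""
    start = cursor_position()
    time.sleep(wait_seconds)
    return cursor_position() != start

if __name__ == "__main__":
    if human_seems_present():
        print("Mouse activity detected: a person is probably at the keyboard.")
    else:
        print("No mouse activity: this could be an automated analysis environment.")
```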
FireEye researchers analyzed the new Trojan Upclicker, which uses this technique to hide. Only when executed with a left-click from a mouse does it inject malicious code into the browser, which opens a communications channel with a command server.
Neither Symantec nor FireEye has yet offered any specific suggestions for thwarting this behavior, although FireEye warned that “we expect to see more such samples that can use a specific aspect like pressing specific keys, specific mouse buttons, or movement of the mouse a certain distance to evade the automated analysis.”
In the ongoing cat-and-mouse game of cybersecurity it is likely that defensive techniques will be developed to address these threats. Signatures could be developed to look for the “hook” commands in the malware that monitor mouse or other activity, or the analysis tools might be able to detect this monitoring activity and flag it as suspicious.
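As a rough illustration of that signature idea, the Python sketch below takes a file path on the command line (the path is hypothetical) and searches a Windows executable’s raw bytes for the user-input hook APIs this class of malware leans on. A real detector would parse the PE import table rather than grep the bytes, and a hit is only a hint: plenty of legitimate software hooks mouse and keyboard input too.

```python
# Crude signature-style check: flag executables whose bytes reference
# Windows APIs commonly used to monitor mouse and keyboard activity.
# A match is a hint for further analysis, not proof of malice.
import sys

SUSPECT_APIS = [b"SetWindowsHookExA", b"SetWindowsHookExW",
                b"GetCursorPos", b"GetAsyncKeyState"]

def suspicious_hook_apis(path):
    """Return the suspect API names found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [api.decode() for api in SUSPECT_APIS if api in data]

if __name__ == "__main__":
    hits = suspicious_hook_apis(sys.argv[1])
    if hits:
        print("Possible user-activity monitoring; references found:", ", ".join(hits))
    else:
        print("No mouse or keyboard hook APIs found in the raw bytes.")
```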
Whatever solution we end up with, it is all but certain that the bad guys will come up with a new way around it. Just more things for us to worry about.
Posted by William Jackson on Dec 14, 2012 at 9:39 AM
“It’s difficult to make predictions, especially about the future.”
That quote has been attributed to everyone from Mark Twain to physicist Niels Bohr and baseball great Yogi Berra, but its uncertain origin doesn’t make it any less true. So I took the easy way out in last year’s predictions of coming trends in cybersecurity, saying that 2012 would be a lot like 2011.
“Popular technologies that came to the fore in 2011 will continue to be the targets for choice in the coming year,” I wrote. “It is a classic case of, ‘If you build it, they will come.’”
Despite giving myself this softball pitch, I still managed to bat only about .500 for the year. Which really isn’t too bad, either for baseball or prognostication.
To be honest, however, I should give the credit — where any is due — to my sources. Here is how things played out in spotting the pain points for 2012:
Bring your own device
I’ll give myself a hit on this one. The migration of increasingly powerful mobile devices into the workplace was a major concern for administrators, who had to find ways to manage and secure the devices and control their access to sensitive resources. Malware for the devices continued to grow, especially on Android, and even legitimate applications have proved to be leaky, buggy and grabby.
It should be noted, however, that mobile devices still have not become the platform of choice for delivering attacks to the enterprise or stealing sensitive information in bulk. Like network administrators, the bad guys still are figuring out how to effectively manage and make the most of these devices. Still, the risk has to be taken seriously.
Social networking
Another hit. Social networking has proved to be a double-edged sword, becoming an important medium for business communication and at the same time providing a rich source of data for social engineering and misinformation.
It is no surprise that increasingly popular sites have become tools for phishing attacks and launching malicious code. The risks do not seem to have outweighed the perceived advantages yet, as organizations constantly look for ways to use social channels, focusing their concerns on making them more effective rather than more secure. Getting more attention than the malicious use of the sites are the privacy policies of the companies running them.
Cloud computing
This one was neither a hit nor a miss -- more of a foul ball. Over the past year, cloud services have proved no more and no less secure than other platforms. Cloud computing is a hot business opportunity in government, but both providers and customers seem to be cautious enough about the security of the services that it has not become a major issue.
But with several high-profile service outages by major cloud service providers in the last two years, reliability has emerged as more of an issue than security. Google suffered a brief outage in October, but Amazon was the worst hit (or the biggest offender) with three outages of its Web Services in 2011 and 2012. Most recently, its Northern Virginia data center in Ashburn was knocked out by severe weather in June and then again because of an equipment failure in October.
Planning for outages and backing up data are as important as security when moving critical operations or services to the cloud.
IPv6
This was a miss. Not that the exhaustion of new IPv4 address space and the switch to the next generation of Internet Protocols wasn’t a big story in 2012. But the volume of IPv6 traffic has remained so small, even as federal agencies and major online organizations enable it, that it still has not emerged as a security problem.
The risks remain, of course. It is difficult to say whether security tools for IPv6 are operating at parity with IPv4 tools, and as the volume of IPv6 traffic inevitably grows, this will become an issue. There is also the chance that largely unmanaged IPv6 traffic could be used as a channel for slipping past traditional defenses. But so far these issues have not created large problems.
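One quick way to see whether that unmanaged channel even exists on a given machine is simply to test for working IPv6 connectivity. The Python sketch below is one hedged way to do that; the host name is only an example, and a real audit would look at routing, firewall and monitoring coverage rather than a single connection test.

```python
# Quick check: does this machine have a working IPv6 path to a given site?
# Useful for spotting IPv6 connectivity that IPv4-only monitoring may miss.
import socket

def ipv6_reachable(host="www.google.com", port=443, timeout=5):
    try:
        # Ask only for IPv6 (AAAA) results.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no IPv6 resolution for this host
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True  # an IPv6 path exists and works
        except OSError:
            continue
    return False

if __name__ == "__main__":
    print("IPv6 path available:", ipv6_reachable())
```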
High-profile events
This one seems to be a miss as well. The threat was that 2012's high-profile events -- such as the London Olympics and the U.S. presidential election -- would be used to ensnare victims with phishing attacks and search-engine poisoning. Some of this did happen, but it didn’t seem to be any worse than in any other year.
All in all, a so-so set of predictions for 2012. What will 2013 bring? No one can say. But that won’t stop us. Stay tuned.
Posted by William Jackson on Dec 12, 2012 at 9:39 AM
Desktop Web browsers use a number of indicators to help users safely navigate the risky seas of the World Wide Web, but as more users go online with mobile devices they are not seeing critical security information, according to a study from Georgia Tech.
Mobile browsers can use security features such as Secure Sockets Layer and Transport Layer Security. But you might not know whether these features are in operation, because the lock icon or HTTPS indicator you see in the address bar of a desktop browser might not be there on your smartphone.
“The drastic reduction in screen size and the accompanying reorganization of screen real estate significantly changes the use and consistency of the security indicators and certificate information that alert users of site identity and the presence of strong cryptographic algorithms,” according to a Georgia Tech-led team of researchers.
As a result, even security experts using 10 of the most popular mobile browsers were unable to determine, from information presented on the screen, whether they were visiting a malicious site.
User demand is bringing mobile devices into the enterprise, and this lack of browser assurance could add one more risk for government workers using mobile devices to access resources.
It isn’t that the browsers do not provide the information at all, said Chaitrali Amrutkar, a doctoral student at Georgia Tech’s School of Computer Science and principal author of the paper. “The manufacturers have included a subset of indicators” in their browsers, she said. But their use is inconsistent, limited and difficult to see. “They are very different when it comes to the information provided,” she said.
Five of the browsers tested, for example, do not have an interface to let users look at digital certificate information from a website being visited. “I won’t be able to tell who signed the cert,” she said.
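For a sense of what that certificate interface would show, here is a small Python sketch that fetches a site’s certificate over TLS and prints who issued it and when it expires; this is the kind of detail a desktop browser can surface and, per the study, many mobile browsers cannot. The host name is only a placeholder, and the sketch uses Python’s standard ssl module rather than any browser code.

```python
# Fetch a site's TLS certificate and print the basics a browser's
# certificate viewer would show: subject, issuer and expiration.
import socket
import ssl

def describe_certificate(host="www.example.com", port=443):
    context = ssl.create_default_context()  # verifies against system CA store
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    subject = dict(pair[0] for pair in cert["subject"])
    issuer = dict(pair[0] for pair in cert["issuer"])
    print("Site:      ", subject.get("commonName", host))
    print("Signed by: ", issuer.get("organizationName", issuer.get("commonName")))
    print("Expires:   ", cert["notAfter"])

if __name__ == "__main__":
    describe_certificate()
```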
If experts can’t tell whether the visited site is secure, what chance does the average user have? As it turns out, average mobile users might not be any worse off without the information, because they tend to ignore it when it is available anyway. That is probably one reason studies have found mobile users are more likely to be phishing victims.
The Georgia Tech paper doesn’t address the issue of how to make users do what they should do. “This is just a first step,” providing an assessment of the shortcomings of the browsers, Amrutkar said. What to do about those shortcomings is a separate issue that needs to be addressed by vendors and the security community.
Then they can worry about making the users pay attention.
Posted by William Jackson on Dec 10, 2012 at 9:39 AM