Debates over the state of antivirus technology and tools have resurfaced after the executive in charge of Symantec’s information security business was quoted in the Wall Street Journal a month ago as saying that antivirus is dead.
Now, that should be a big deal, since Symantec has made its reputation and fortune off the back of the antivirus business, which still makes up some 40 percent of its revenue. According to Symantec’s Brian Dye, the company no longer thinks of antivirus as any kind of money maker. Antivirus catches less than half of the cyber attacks that now occur, he said.
However, this is only the latest in a series of announced deaths of the venerable technology, which has for so long been a keystone of enterprise security. In 2012, the Flame malware was discovered to have infected systems around the world and to have been resident on those systems for up to two years without having been detected by antivirus software. It was seen as a huge failure for antivirus, and the potential death knell for the technology.
None of this is news to most security professionals, who have been preaching the vulnerability of “traditional” security for some time and the need for layered, in-depth defense. Symantec now certainly believes that, since it has a new philosophy (and new products and solutions to sell) which emphasizes this approach.
But, is antivirus now really useless? That would be bad news for many government organizations, which still rely to a great extent on legacy systems such as antivirus for the core of their security. Lastline Labs, which looks at these kinds of issues, is one outfit that isn’t ready to toll the bell for antivirus yet, though it does say it’s staggering badly.
The main problem, it believes, is that antivirus takes too long to catch up with malware. In tests run over a full year, from May 2013 to May 2014, it found that on any given day at least half of the AV scanners it tested failed to detect new malware. Even after two months, a third of the scanners were still not detecting it.
Eventually, AV scanners do start to catch up. Two weeks was the common lag time. But, even after a year, according to Lastline, there were malware samples that still evaded 10 percent of the scanners tested.
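Lastline’s figures boil down to simple arithmetic over first-detection dates. As a rough sketch of that calculation (the sample names and dates below are invented for illustration; Lastline has not published its raw results):

```python
from datetime import date

# Hypothetical scan results: for each malware sample, the date on which
# each scanner first flagged it (None = never detected during the test).
first_detected = {
    "sample_a": [date(2013, 5, 3), date(2013, 5, 17), None],
    "sample_b": [date(2013, 5, 1), date(2013, 5, 1), date(2013, 6, 30)],
}

def detection_rate(sample, on_day):
    """Fraction of scanners that had detected `sample` by `on_day`."""
    results = first_detected[sample]
    hits = sum(1 for d in results if d is not None and d <= on_day)
    return hits / len(results)
```

Tracking that fraction day by day across thousands of samples yields the kind of lag curve Lastline describes: low rates at first, climbing over weeks, with a stubborn residue of never-detected samples.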
Source: Lastline.
As the graph shows, there’s a major problem with the 1 percent of malware that consistently evades capture by antivirus systems. That likely represents advanced malware that more sophisticated criminals use to persistently target and infiltrate organizations, Lastline said. Unfortunately, unlike more opportunistic cyber events, attacks that use such malware are the ones that usually cause the most serious security breaches.
Traditional antivirus is not dead, Lastline believes, but it does need to be complemented with other approaches, such as those based on dynamic analysis of samples and network anomaly detection. The National Security Telecommunications Advisory Committee came to a similar conclusion in a report to the president last year, and it’s the basis of many of the next generation of security tools that are now being unveiled.
Meanwhile, until budget-constrained agencies can catch up with this flow, many will have to persist with the AV systems they already have while being aware of their limitations.
Which brings up another point.
In February of this year, a Senate report on the federal government’s cybersecurity track record found that agencies that had recently suffered major breaches had consistently failed to patch security software, including antivirus, with some as many as two years behind on their updates.
Even the admittedly limited effectiveness of traditional antivirus systems won’t survive that.
Posted by Brian Robinson on Jun 06, 2014 at 9:00 AM
The influx of consumer IT into the workplace — often unmanaged and unseen by administrators — is speeding up, and it isn’t just the fault of irresponsible employees.
“People need to get their work done, and they’ll do anything to get it done,” said Oscar Fuster, director of federal sales at Acronis, a data protection company. When tools that can help workers appear in the marketplace, and in their own homes, they chafe when administrators do not let them use those tools. The result is often an unmanaged shadow infrastructure of products and services, such as mobile devices and cloud-based file sharing, that might be helpful for the worker but effectively bypasses the enterprise’s secure perimeter.
It is not all the fault of the administrators. They have policy, regulation and legislation to comply with. But if someone doesn’t do something quickly, agencies will soon find that their sensitive data is outside of their control.
What is needed is a more agile approach to acquiring and managing technology that doesn’t leave the government two years behind the consumer curve in acquiring tools. Departments must be willing to decentralize authority so that agencies can adapt quickly to their technology needs, and more freely interpret legislative mandates.
“It’s easier said than done,” Fuster said. But most IT legislation is technology neutral, and policies can be fashioned to accommodate new technology more quickly than is happening now, he added. “The second you fall behind, people will start cutting corners.”
Shadow IT is not a new problem. In the early days of the home PC, workers could use removable hard drives to work at home, and floppy disks could move files easily from one office to another. The difference was that 40 years ago it took more tech savvy and a little more investment to get outside the perimeter. When the world went wireless 15 or so years ago, there was an exponential jump in the ability to think and work outside the box.
Things have shifted again with handheld mobile devices and nearly ubiquitous network access. Consumer cloud services can put an entire suite of productivity tools in your hand, but they also take data outside the administrator’s control.
The solution is two-fold. Because the enterprise itself is becoming more fluid, more attention must be paid to the security of the data itself. Encryption and controls that monitor its movement, coupled with better-defined access control, can help protect data and show who is using it and where. This addresses not just the shadow IT challenge, but the insider threat and the growing use of stealthy exploits that can sit quietly in the system and slowly export data.
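As a minimal sketch of what data-centric access control with an audit trail looks like, assuming a toy classification scheme and an in-memory log (every name and policy entry here is invented):

```python
# Hypothetical policy: which classification levels each user may read.
ACCESS_POLICY = {"alice": {"public", "internal"}, "bob": {"public"}}

# In a real system this would be a tamper-evident store, not a list.
audit_log = []

def read_document(user, doc_name, classification):
    """Check the policy, record the access attempt, and deny out-of-policy reads."""
    allowed = classification in ACCESS_POLICY.get(user, set())
    audit_log.append((user, doc_name, classification, allowed))
    if not allowed:
        raise PermissionError(f"{user} may not read {classification} data")
    return f"contents of {doc_name}"
```

The point of the pattern is that every access, allowed or denied, leaves a record, which is what lets administrators see who is using data and where, even as the perimeter dissolves.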
At the same time, be open to accommodating workers so that they are less tempted to work around you. One powerful tool is the ability to manage mobile devices within your legacy infrastructure. Windows Phone has only a small share of the mobile market, but the latest Windows 8.1 update allows administrators to use a common set of management tools from the server through the desktop to the handheld device. Even if your workers prefer an Android or iPhone, this can be a good compromise for making your workplace more flexible.
Posted by William Jackson on May 30, 2014 at 8:03 AM
While most of the emphasis in cybersecurity seems to be on external threats and the damage suffered when network and data defenses are breached, threats from insiders are getting more attention in the aftermath of the Snowden and Wikileaks revelations. What to do about those is another question, since the tools currently used by organizations to track incursions don’t seem up to the task.
It’s not a new phenomenon. The FBI long ago began voicing its concern about threats from privileged users of data, both in government and industry. The issue has its very own website at the FBI, and the concern within government was bolstered by a White House memo published at the end of 2012 aimed at the heads of agencies.
Now comes a survey by the Ponemon Institute, sponsored by Raytheon, that shows where the recognition/mitigation gap lies.
Across all of the government and industry sources surveyed, for example, 88 percent said they recognized that the insider threat is a cause for alarm and that abuse will increase. At the same time, however, they said they have difficulty identifying what a specific threatening action looks like.
Source: Insider Threat Ponemon Survey Report
“Respondents said they just don’t have enough contextual information from their existing tools, which also throw up too many false positives,” said Michael Crouse, Raytheon’s director of insider threat strategies. “There’s a real need for a different way to attack the problem.”
Unlike external threats, where malicious intent is assumed, the situation with insiders is more nuanced. Of those who access sensitive or confidential information that isn’t necessary for their jobs, for example, survey respondents said as many as two-thirds are simply driven by curiosity.
In government, you can probably add the frustration of people under increasing pressure to get the job done and who don’t want to spend the time working through the red tape necessary to access information they think they need. Who hasn’t asked a buddy in the office to help with that kind of thing?
Other recent studies have also made the point that insider threats come from relatively innocent actions as much as, or even more than, from malicious events. Verizon’s 2014 Data Breach Investigation Report, for example, showed that misuse by insiders could come from something as simple as sending an email to the wrong person or attaching files that shouldn’t be attached.
One simple move toward an answer would be for organizations to properly configure the tools they do have, something Crouse said is “the easiest and most cost-effective” thing they can do. Beyond that, agencies need complementary tools, such as end-point monitoring that shows how users behave when they access data through an end-point, detailing IM traffic, contextual emails and whether they are cutting and pasting information in ways they haven’t previously.
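A toy version of that kind of behavioral baselining, assuming nothing more than a history of a user’s daily data-access counts, might flag any day that strays several standard deviations from the norm (a real product would weigh far richer context than this):

```python
import statistics

def unusual_activity(history, today, threshold=3.0):
    """Flag today's access count if it exceeds the historical mean
    by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return today > mean + threshold * stdev
```

A user who normally touches a dozen files a day and suddenly pulls hundreds would trip this check; the hard part, as the survey respondents note, is telling curiosity from malice once the alert fires.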
That’s all well and good, of course, but there’s a big catch. While nearly 90 percent of those surveyed in the Ponemon report said they understood the need for enhanced security, only 40 percent had any kind of dedicated budget to spend on tools specifically aimed at insider threats. That’s why most organizations — and certainly government agencies — have to limp along by trying to jury-rig existing, and unsuitable, cybersecurity tools to do the job.
One of the reasons for that budget shortfall, Crouse gamely admitted, is that companies like his have not done a good job explaining the ROI from money spent on these tools. What organizations don’t understand, he said, is that while the number of actual breaches from insiders is low compared to those from external threats, the impact from those breaches is substantially higher.
“I don’t think they truly understand either the monetary or mission impact from these insider breaches,” he said. “They’re just now trying to get their heads around that.”
Posted by Brian Robinson on May 23, 2014 at 9:30 AM
The government says it did not know about the Heartbleed vulnerability in OpenSSL before it was publicly disclosed. But White House Cybersecurity Coordinator Michael Daniel says that if it had known, it might not have told us.
“In the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest,” Daniel wrote in a recent White House blog post. But not always. “Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation’s intellectual property or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.”
Daniel goes on to explain some of the criteria used in deciding when and when not to disclose a serious vulnerability.
Over the years, the security community has come to a consensus on how to handle disclosure of security vulnerabilities in software. The discoverer first informs the product’s vendor, giving the company time to develop a patch or workaround before reporting it publicly. This protocol is not mandatory, however. Researchers can use the threat of disclosure to pressure vendors to respond to vulnerabilities, and some companies offer a bounty for new vulnerabilities to encourage researchers to cooperate. But the value of a new vulnerability can be much greater than a bounty.
In the end, how a vulnerability is handled depends on the motives and morals of the discoverer. For criminals, a good zero-day vulnerability—one for which no fix yet exists—is money in the bank. For governments, it can be an espionage tool or a weapon. The Stuxnet worm, an offensive weapon widely believed to have been developed by the United States and Israel, exploited several zero-day vulnerabilities.
Daniel said there are practical limits on hoarding bugs. “Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest,” he wrote. “But that is not the same as arguing that we should completely forgo this tool as a way to conduct intelligence collection and better protect our country in the long-run.”
Daniel said there are no hard and fast rules for determining when to disclose, but the administration has a “disciplined, rigorous” process for deciding. The criteria include:
- How widely used and important is the vulnerable product?
- How serious is the vulnerability? Can it be patched, and how much harm could it do if it falls into the wrong hands?
- Would we know if someone else was using it?
- What is the value of the intelligence we could gather with it, and are there other ways to gather it?
- Is someone else likely to discover it?
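For illustration only, the criteria above can be imagined as a crude scoring exercise. Every weight below is invented; the administration’s actual process is deliberative, not a formula:

```python
def disclosure_score(widely_used, severity, detectable_misuse,
                     intel_value, likely_rediscovered):
    """Toy score: higher means a stronger case for disclosing
    the vulnerability rather than stockpiling it."""
    score = 0
    score += 2 if widely_used else 0        # ubiquitous software raises the stakes
    score += severity                       # e.g. 0-3
    score -= intel_value                    # e.g. 0-3; intelligence value argues for secrecy
    score += 1 if likely_rediscovered else 0
    score -= 1 if detectable_misuse else 0  # misuse we could spot is less urgent to disclose
    return score
```

By any such reckoning, a flaw in software as widespread and severe as Heartbleed would score heavily toward disclosure, which is Daniel’s implicit point.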
Heartbleed potentially leaked sensitive information protected by OpenSSL, which is very widely used to protect online commerce and other transactions. The vulnerability was critical, and although a fixed version of the software was released, replacing it will take some time.
Would we know if someone was using it? Maybe. Gathering useful information requires a high number of connections to a vulnerable server, which could be detected in activity logs. Shortly after the disclosure, Canadian police arrested a young man for allegedly using Heartbleed to steal tax data.
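That kind of log check is, at bottom, a frequency count. A crude sketch, assuming a simple whitespace-delimited access log with the client IP in the second field (the log lines and addresses below are made up):

```python
from collections import Counter

# Hypothetical access-log lines: "timestamp client_ip request"
LOG = [
    "2014-04-08T10:00:01 203.0.113.9 heartbeat",
    "2014-04-08T10:00:02 203.0.113.9 heartbeat",
    "2014-04-08T10:00:02 198.51.100.7 GET /index.html",
    "2014-04-08T10:00:03 203.0.113.9 heartbeat",
    "2014-04-08T10:00:04 203.0.113.9 heartbeat",
]

def noisy_clients(log_lines, threshold):
    """Return the client IPs that appear at least `threshold` times."""
    counts = Counter(line.split()[1] for line in log_lines)
    return {ip for ip, n in counts.items() if n >= threshold}
```

Since exploiting Heartbleed for anything useful meant hammering a server with repeated heartbeat requests, a single client dominating the log this way is exactly the signal a defender would hunt for.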
As for the value of intelligence to be gained, who can say? Is someone else likely to discover it? Yes, given that it was in open source software available to anyone. And did someone else discover it? Yep. Researchers at Codenomicon and Google Security.
So, did the National Security Agency discover Heartbleed first, and if they did would they have told us? According to White House criteria, it would be a good candidate for disclosure. But we’ll probably never know.
Posted by William Jackson on May 16, 2014 at 9:03 AM