Next-gen cybersecurity means anticipating threats

The recent announcement of a forward-looking cyberthreat tool from the Georgia Tech Research Institute (GTRI) is an example of a developing trend in security: using broad-based data that the bad guys themselves put out in order to get ahead of threats. It’s also a tacit admission that security based solely on reacting to threats does not, and will not, work.

The GTRI tool, called BlackForest, collects information from the public Internet, such as hacker forums and other places where those same bad guys gather to swap information and details about the malware they write and sell. It then relates that information to past activities and uses all of that collated intelligence to warn organizations of potential threats against them and, once attacks have happened, to show them how to improve their security.
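GTRI has not published BlackForest’s internals, but the general pattern described here (collect open-source postings, match them against an organization’s assets, flag the correlations for analysts) can be illustrated with a small sketch. Everything in the example below, including the JSON-lines feed, the field names and the watchlist terms, is hypothetical and is not how BlackForest itself works.

```python
# Hypothetical sketch of open-source threat-intelligence matching.
# The feed format and the watchlist below are invented for illustration;
# this is not how BlackForest itself is built.
import json

# Terms tied to the organization being defended (all hypothetical).
WATCHLIST = {"example-agency.gov", "AcmePayroll", "Vendor X VPN appliance"}

def load_posts(path):
    """Load scraped forum posts from a JSON-lines file, one post per line."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)  # expects keys: "source", "text", "timestamp"

def flag_mentions(posts, watchlist):
    """Return posts that mention any watched asset, newest first."""
    hits = []
    for post in posts:
        matched = [term for term in watchlist if term.lower() in post["text"].lower()]
        if matched:
            hits.append({"source": post["source"],
                         "terms": matched,
                         "seen": post["timestamp"]})
    return sorted(hits, key=lambda h: h["seen"], reverse=True)

if __name__ == "__main__":
    alerts = flag_mentions(load_posts("forum_posts.jsonl"), WATCHLIST)
    for alert in alerts:
        print(f"{alert['seen']}  {alert['source']}  mentions: {', '.join(alert['terms'])}")
```

A real system would add the historical correlation GTRI describes; the point of the sketch is only that the raw material is public and the matching itself is straightforward.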

Ryan Spanier, the head of GTRI’s Threat Intelligence Branch, said the intention is to give organizations some predictive ability, so that if they see certain things happening, they’ll know they may need to take action to protect their networks.

These and similar tools are badly needed. The CyberEdge Group, in its 2014 Cyberthreat Defense Report, found that more than a quarter of the organizations it surveyed had no effective foundation for threat defense. Overall, investment in those next-generation tools that could be most effective against advanced threats is still “fairly low.”

In addition, it said, because of the speed at which threats are deployed these days, the relative security and confidence of today can be gone tomorrow, and IT security teams can only make educated guesses at what attackers will try next, and where they will try it. The bottom line, it said, is that maintaining effective cyberthreat defenses not only requires constant vigilance, “but also an eye on the road ahead.”

It’s something both government and industry organizations are starting to push with more urgency. Greg Garcia, the former head of cybersecurity and communications at the Department of Homeland Security, recently said he expects to see more investment in tools that will help banks and financial institutions anticipate emerging risks. As the new executive director at the Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland Security, he knows how important that will be for an industry that is a primary target for cyberattacks.

The National Institute of Standards and Technology is also trying to push government agencies in that direction. In the first iteration of a cybersecurity framework it published in February this year, NIST listed four levels at which the framework could be implemented and which would “provide context on how an organization views cybersecurity risk and the processes in place to manage that risk.”

The highest level, Tier 4, is labeled Adaptive and describes an organization that “actively adapts to a changing cybersecurity landscape and responds to evolving and sophisticated threats in a timely manner” and has “continuous awareness of activities on their systems and networks.” Though NIST takes pains to say that the tiers don’t represent actual maturity of cybersecurity defenses, it also says agencies should be “encouraged” to move to higher levels.

The methodology GTRI uses for BlackForest is not that new to the security field, at least in broad terms. Security companies have for years trawled global networks to identify threats and develop defenses against them, and that’s the basis for the regular antivirus signature updates they send to their customers. As CyberEye recently pointed out, however, those techniques are becoming less effective and are all but useless against the most sophisticated, and most damaging, kinds of malware.

Success for organizations in the future will not be based on how many attackers they can keep out of their networks and systems, but on how fast and how effectively they can detect and respond to attacks that are already on the inside. That understanding is behind the rush to big data analytics, which organizations are betting will enable that kind of timely response. Gartner believes that, by 2016, fully 25 percent of large companies around the world will have adopted big data analytics for that purpose.

Whether BlackForest and similar tools provide the level of security their developers promise remains to be seen. After all, the attackers have proven they are just as intelligent and creative as the defenders. But these tools at least indicate the direction security needs to go, because the regular way of doing things just ain’t working.

Posted by Brian Robinson on Aug 01, 2014 at 10:55 AM


Security automation: Are humans still relevant?

Cybersecurity is being pushed in two directions. On the one hand, the growing complexity of information systems and the onslaught of threats facing them are putting a premium on speed. Automation is the future of security, said Matt Dean, vice president of product strategy at FireMon. Decisions made about who and what gains access to resources need to be smarter and faster.

“We’ve got to get humans out of the equation,” Dean said. “They can’t react fast enough.”

The trend toward automation is evident in the government’s growing emphasis on continuous monitoring of systems and networks. Automation is the only practical way to achieve the situational awareness that continuous monitoring promises. Agencies are supposed to be using SCAP-compliant security tools, and the “A” in SCAP stands for Automation: the Security Content Automation Protocol.
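As a rough illustration of what that automation looks like in practice, the sketch below tallies the pass/fail results from an SCAP-style scan report so they can feed a dashboard or ticketing system without manual review. It assumes an XCCDF 1.2 results file named results.xml; the element names follow the XCCDF schema, but treat the details as an approximation rather than a reference implementation.

```python
# Rough sketch: summarize rule results from an XCCDF 1.2 scan report so the
# numbers can feed a dashboard without manual review. The file name and the
# structure (rule-result elements with a result child) follow the XCCDF 1.2
# schema, but verify against your scanner's actual output.
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.2"}

def summarize(path):
    """Count results by status and collect the IDs of failed rules."""
    counts = Counter()
    failures = []
    root = ET.parse(path).getroot()
    for rule in root.iter(f"{{{NS['xccdf']}}}rule-result"):
        status = rule.find("xccdf:result", NS).text
        counts[status] += 1
        if status == "fail":
            failures.append(rule.get("idref"))
    return counts, failures

if __name__ == "__main__":
    counts, failures = summarize("results.xml")
    print(dict(counts))
    for rule_id in failures:
        print("FAILED:", rule_id)
```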

On the other hand, Randy Hayes, who leads Booz Allen’s global predictive intelligence business, said more humans are needed in the loop.

“You do need fully automated solutions,” Hayes said. But machines can’t do it all. Agencies need security operations centers (SOCs) staffed with highly trained analysts to monitor alerts and connect the dots, using human intelligence to anticipate attacks in a way that even the fastest machines can’t do. “We need to bring more intelligence tradecraft to bear.”

Hayes advocates an approach called resiliency, an operational strategy that treats cybersecurity like warfare. Protecting yourself from an attack with static defenses provides a false sense of security, he said. Attacks must be anticipated through knowledge of the enemy and blocked before they occur.

The two views of security are not mutually exclusive. As Hayes acknowledged, automated solutions are necessary, if not sufficient, for cybersecurity. And proponents of automation recognize that a primary benefit is to free analysts from routine chores so that they can concentrate on the threats that require human attention.

The conflict comes down to two questions: How many humans are needed in the cybersecurity loop, and how many can we afford?

How many are needed will vary with the size, complexity and criticality of the enterprise being protected, of course. The more effective the automated tools being used, the more attention humans can give to serious issues. But with increasingly tight budgets and an employment market in which government is competing with the private sector for scarce talent, agencies are likely to be perennially short of experienced cybersecurity professionals.

Hayes is convinced that the money to provide adequate human intelligence for cybersecurity across government already is there, if budgets are just prioritized properly at the highest levels of management. Many agencies already are operating their own SOCs or have access to shared facilities, Hayes pointed out.

But human staffing remains a problem for cybersecurity analysis, according to a report from the Homeland Security Department’s inspector general. Evaluating DHS efforts to coordinate federal cyber operations centers, the IG found that the National Cybersecurity and Communications Integration Center’s (NCCIC) incident response capability could be hindered by the inability of the Office of Intelligence and Analysis and the Industrial Control Systems CERT to provide around-the-clock staffing. Cyberattacks can happen at any time, but the Office of Intelligence and Analysis provides coverage only 14 hours a day, five days a week, which works out to 70 of the week’s 168 hours, less than half. NCCIC told the IG it does not have funding to hire more analysts.

Doubtless, more effective use could be made of existing budget and staff, but it is unlikely that personnel for effective 24/7 analyst staffing in government SOCs will be available soon. To fill this gap, there will have to be greater reliance on automation rather than humans for the time being.

Posted by William Jackson on Jul 25, 2014 at 8:28 AM


NIST's future without the NSA

Will the National Institute of Standards and Technology break its close relationship with the National Security Agency in developing cryptographic and cybersecurity standards? That seems very likely following a recent report by an outside panel of experts, and it will have implications for federal agencies.

The report by the Visiting Committee on Advanced Technology (VCAT), which was released July 14, came after last year’s revelation, as part of the Edward Snowden leaks, that the NSA had inserted a “backdoor” into a NIST encryption standard used to generate random numbers. NIST Special Publication 800-90A, the latest version of which was published in 2012, describes methods for generating random bits using deterministic random bit generators (DRBGs). That’s an important step in many of the cryptographic processes used to secure computer systems and protect data.
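To see why a weakened random bit generator matters, consider how routinely those bits become keys. The sketch below is not an SP 800-90A DRBG implementation; it simply uses Python’s standard CSPRNG interface to show where random bit generation sits in the stack. If the generator’s output were predictable, everything derived from it would be too.

```python
# Illustration only: where random bits feed into cryptographic keys.
# Python's secrets module draws from the operating system's CSPRNG;
# it stands in here for a DRBG and is not an SP 800-90A implementation.
import secrets

def new_aes_key() -> bytes:
    """Generate a 256-bit symmetric key from the system's random source."""
    return secrets.token_bytes(32)

def new_nonce() -> bytes:
    """Generate a 96-bit nonce, e.g., for an AES-GCM session."""
    return secrets.token_bytes(12)

if __name__ == "__main__":
    key = new_aes_key()
    nonce = new_nonce()
    print("key bits:", len(key) * 8, "nonce bits:", len(nonce) * 8)
    # If an attacker could predict these bytes, any encryption built on them
    # would be transparent to that attacker. That is the crux of the concern
    # about a backdoored generator.
```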

The backdoor allowed the NSA to basically circumvent the security of any system it wanted to get data from, and that could be a substantial number of systems. The compromised DRBG was used as the default in RSA’s FIPS 140-2 validated BSAFE cryptographic library, for example, until RSA advised developers to stop using it in 2013. Up until then, BSAFE had been widely used by both industry and government to secure data.

The main damage done by these revelations is not in whatever data the NSA managed to extract because of this, but in the confidence organizations will have in what NIST does in cybersecurity going forward. And for government agencies that’s critical, since they are required by law to adhere to the standards NIST puts out.

NIST removed the offending algorithm, Dual_EC_DRBG, from 800-90A in April and reissued the standard. It advised federal agencies to ask vendors whose products they use whether their cryptographic modules rely on that algorithm and, if so, to ask them to reconfigure those products to use alternative algorithms.

But the damage has been done. Not only do other NIST standards developed in coordination with the NSA now need critical review, according to VCAT committee member Ron Rivest, a professor at MIT, but the process for developing future standards needs reassessment and reformulation.

As Edward Felten, a professor of computer science and public affairs at Princeton University and another of the VCAT members, wrote in the committee’s report, if government has to conform to NIST standards, but everyone else uses something different, it “would be worse for everybody and would prevent government agencies from using commercial off-the-shelf technologies and frustrate interoperation between government and non-government systems.”

Simply put, that kind of split is not workable. Government is no longer in a position to develop systems solely for its own use and depends absolutely on commercial products. So the scramble to shore up NIST’s reputation is on.

NIST says it has already instituted processes to strengthen oversight of its standards making, and it may make more changes along the lines of the recommendations in the VCAT report. Congress got in on the act a few months ago with an amendment to the FIRST Act, a bill to support science and research, that strips the requirement in law that NIST consult with the NSA when developing information security standards.

However, the amendment still allows NIST to voluntarily consult with the NSA, something the VCAT report also goes to some lengths to recommend. That’s a tacit admission that NIST, and the government overall, can’t do away with NSA input on security. There have been suggestions that the NSA’s role in information assurance should be given over to the Department of Homeland Security or the Defense Department, but that seems unlikely.

The fact is that the NSA probably has the greatest depth of expertise in cryptography and security in the entire government, and both DHS and DOD rely on it as much as NIST does. How to reconcile that while urgently repairing the trust that government and industry need to have in NIST and its standards will be one of the more fascinating things to watch over the next few years.

Posted by Brian Robinson on Jul 18, 2014 at 10:22 AM


Windows Server 2003: The end is nearer than you think

With a year left before Microsoft finally ends support for Windows Server 2003, migrating to a new OS might not seem like a pressing issue. But Microsoft technical evangelist Pierre Roman warns that it really is just around the corner.

“We estimate that a full server migration can take up to 200 days to perform,” he wrote in a recent TechNet blog post. “If you add applications testing and migration as well, your migration time can increase by an additional 300 days.”

So if you did not get ahead of the game, you already are late.

Do you really need to transition to a new OS? “In a lot of cases, when things are working fine people feel it’s best not to tamper with it,” said Juan Asenjo, senior product marketing manager for Thales e-Security. This is especially so for servers running mission-critical applications, for which uptime and availability are key performance metrics.

This means that there is a large installed base of Windows Server 2003 in government enterprises. The Energy Department’s Lawrence Berkeley National Laboratory called Windows Server 2003 “the most secure out-of-the-box operating system that Microsoft has made.” But it also noted that it was not perfect and that “a large number of vulnerabilities have surfaced since this OS was first released.” The end of Microsoft support means that every vulnerability discovered in the software after July 2015 will be a zero-day vulnerability and will remain so, putting many mission-critical applications at risk.

Server 2003 was the first Windows server to include functionality for PKI cryptography, used to secure many applications. “It was a good incentive for the adoption of PKI technology,” said Asenjo. But the security offered by the 11-year-old server often is not adequate for current needs, which increases the risk of leaving it in place.

Mainstream support for Windows Server 2003 ended in 2010, after it had been superseded by Server 2008. Server 2012 has since been introduced. Microsoft’s lifecycle support policy gives a five-year grace period of extended support, however, which includes security updates and continued access to product information. That period ends July 14, 2015, unless organizations can qualify for and afford the costly custom support.

Information Assurance Guidance from the NSA warns not only that the unsupported server will be exposed to newly discovered vulnerabilities, which creates a “high level of risk,” but also that newer applications eventually will not run on it. The agency “strongly recommends that system owners plan to upgrade all servers to a supported operating system well before this date in order to avoid operational and security issues.”

Roman recommends the same basic four-step process used in any migration for transitioning to a newer server OS (a rough sketch of the first two steps follows the list):

  1. Discover: Catalog software and workloads.
  2. Assess: Categorize applications and workloads.
  3. Target: Identify the end goal.
  4. Migrate: Make the move.
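As a minimal sketch of what the discover and assess steps might look like in practice, the script below reads a hypothetical inventory export, a CSV with hostname, os and workload columns invented for this illustration, and flags every server still on Windows Server 2003 along with the days remaining until the July 14, 2015 cutoff.

```python
# Hypothetical discover/assess helper: flag Windows Server 2003 hosts in an
# inventory export. The CSV layout (hostname, os, workload columns) is an
# assumption for this sketch, not a standard format.
import csv
from datetime import date

END_OF_SUPPORT = date(2015, 7, 14)  # end of extended support for Server 2003

def flag_2003_hosts(path):
    """Yield (hostname, workload) for every host still running Server 2003."""
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if "2003" in row["os"]:
                yield row["hostname"], row["workload"]

if __name__ == "__main__":
    remaining = (END_OF_SUPPORT - date.today()).days
    print(f"Days until end of extended support: {remaining}")
    for host, workload in flag_2003_hosts("server_inventory.csv"):
        print(f"{host}: plan migration ({workload})")
```

The real work, of course, is in the target and migrate steps that follow, which is where plans diverge by workload.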

The process is not necessarily simple or fast, however. “There is no single migration plan that suits all workloads,” said Joe Schoenbaechler, vice president of infrastructure consulting services for Dell Services.

Fortunately, Dell and a number of other companies are offering migration assistance, including help in developing and executing plans. If you don’t already have a plan, or are not well into executing one, you might consider asking for some help.

Posted by William Jackson on Jul 11, 2014 at 9:29 AM