NIST's future without the NSA

Will the National Institute of Standards and Technology break its close relationship with the National Security Agency in developing cryptographic and cybersecurity standards? That seems very likely following a recent report by an outside panel of experts, and it will have implications for federal agencies.

The report by the Visiting Committee on Advanced Technology (VCAT), which was released July 14, came after last year’s revelation, as part of the Edward Snowden leaks, that the NSA had inserted a “backdoor” into a NIST encryption standard used to generate random numbers. NIST Special Publication 800-90A, the latest version of which was published in 2012, describes methods for generating random bits with a deterministic random bit generator (DRBG), an essential building block for many of the cryptographic processes used to secure computer systems and protect data.

The backdoor allowed the NSA to effectively circumvent the security of any system it wanted to get data from, and that could be a substantial number of systems. The algorithm in question, Dual_EC_DRBG, was the default in RSA’s FIPS 140-2 validated BSAFE cryptographic library, for example, until RSA advised customers to stop using it in 2013. Up until then, BSAFE had been widely used by both industry and government to secure data.

The main damage done by these revelations is not whatever data the NSA managed to extract as a result, but the loss of confidence organizations will have in NIST’s cybersecurity work going forward. And for government agencies that’s critical, since they are required by law to adhere to the standards NIST puts out.

NIST removed the offending DRBG algorithm from 800-90A in April and reissued the standard. It advised federal agencies to ask vendors whose products they use whether their cryptographic modules rely on the removed algorithm and, if so, to reconfigure those products to use one of the alternative algorithms.
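For readers wondering what those alternatives look like, the sketch below is a simplified, illustrative Python rendering of HMAC_DRBG, one of the three DRBG mechanisms (along with Hash_DRBG and CTR_DRBG) that remain in SP 800-90A after the removal. It is a minimal sketch of the construction only; it omits the standard’s reseeding and health-test requirements and is no substitute for a FIPS 140-2 validated module.

# Simplified, illustrative HMAC_DRBG (SHA-256) in the spirit of NIST SP 800-90A.
# For understanding the construction only; production systems should use a
# FIPS 140-2 validated cryptographic module, not hand-rolled code like this.
import hashlib
import hmac
import os

class HmacDrbgSketch:
    OUTLEN = 32  # SHA-256 output length in bytes

    def __init__(self, entropy: bytes, personalization: bytes = b""):
        self.K = b"\x00" * self.OUTLEN
        self.V = b"\x01" * self.OUTLEN
        self._update(entropy + personalization)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, provided_data: bytes) -> None:
        # The HMAC_DRBG update step: mix new material into the internal state.
        self.K = self._hmac(self.K, self.V + b"\x00" + provided_data)
        self.V = self._hmac(self.K, self.V)
        if provided_data:
            self.K = self._hmac(self.K, self.V + b"\x01" + provided_data)
            self.V = self._hmac(self.K, self.V)

    def generate(self, num_bytes: int) -> bytes:
        # Output bits are produced by iterating HMAC over the internal state.
        temp = b""
        while len(temp) < num_bytes:
            self.V = self._hmac(self.K, self.V)
            temp += self.V
        self._update(b"")  # advance the state after every generate call
        return temp[:num_bytes]

# Seed from the operating system's entropy source and draw 32 random bytes.
drbg = HmacDrbgSketch(entropy=os.urandom(48))
print(drbg.generate(32).hex())

The point of the construction is that its output depends only on well-studied hash primitives and the entropy fed in at instantiation, with none of the unexplained constants that made the removed algorithm suspect.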

But the damage has been done. Not only do other NIST standards developed in coordination with the NSA now need critical review, according to VCAT committee member Ron Rivest, a professor at MIT, but the process for developing future standards needs reassessment and reformulation.

As Edward Felten, a professor of computer science and public affairs at Princeton University and another of the VCAT members, wrote in the committee’s report, if government has to conform to NIST standards, but everyone else uses something different, it “would be worse for everybody and would prevent government agencies from using commercial off-the-shelf technologies and frustrate interoperation between government and non-government systems.”

Simply put, that kind of divergence is not an option. Government is no longer in a position to develop systems solely for its own use and depends absolutely on commercial products. So the scramble to shore up NIST’s reputation is on.

NIST says it has already instituted processes to strengthen oversight of its standards making and could adopt further changes along the lines of the recommendations in the VCAT report. Congress got in on the act a few months ago with an amendment to the FIRST Act, a bill to support science and research, that strips the legal requirement that NIST consult with the NSA when developing information security standards.

However, the amendment still allows NIST to voluntarily consult with the NSA, something the VCAT report also goes to some lengths to recommend. That’s a tacit admission that NIST and the government overall can’t do away with NSA input on security. There have been suggestions that the NSA’s role in information assurance should be handed over to the Department of Homeland Security or the Defense Department, but that seems unlikely.

The fact is that the NSA probably has the greatest depth of expertise in cryptography and security in the entire government, and both DHS and DOD rely on it as much as NIST does. How to reconcile that dependence while urgently repairing the trust government and industry need to have in NIST and its standards will be one of the more fascinating things to watch over the next few years.

Posted by Brian Robinson on Jul 18, 2014 at 10:22 AM


Windows Server 2003: The end is nearer than you think

With a year left before Microsoft finally ends support for Windows Server 2003, migrating to a new OS might not seem like a pressing issue. But Microsoft technical evangelist Pierre Roman warns that it really is just around the corner.

“We estimate that a full server migration can take up to 200 days to perform,” he wrote in a recent TechNet blog post. “If you add applications testing and migration as well, your migration time can increase by an additional 300 days.”

So if you did not get ahead of the game, you already are late.

Do you really need to transition to a new OS? “In a lot of cases, when things are working fine people feel it’s best not to tamper with it,” said Juan Asenjo, senior product marketing manager for Thales e-Security. That is especially true for servers running mission-critical applications, for which uptime and availability are key performance metrics.

This means that there is a large installed base of Windows Server 2003 in government enterprises. The Energy Department’s Lawrence Berkeley National Laboratory called Windows Server 2003 “the most secure out-of-the-box operating system that Microsoft has made.” But it also noted that it was not perfect and that “a large number of vulnerabilities have surfaced since this OS was first released.” The end of Microsoft support means that every vulnerability discovered in the software after July 2015 will be a zero-day vulnerability and will remain so, putting many mission-critical applications at risk.

Server 2003 was among the first Windows Server releases to include built-in support for PKI cryptography, which is used to secure many applications. “It was a good incentive for the adoption of PKI technology,” said Asenjo. But the security offered by the 11-year-old server often is not adequate for current needs, which increases the risk of leaving it in place.

Mainstream support for Windows Server 2003 ended in 2010, after it had been superseded by Server 2008. Server 2012 has since been introduced. Microsoft’s lifecycle support policy gives a five-year grace period of extended support, however, which includes security updates and continued access to product information. That period ends July 14, 2015, unless organizations can qualify for and afford the costly custom support.

Information assurance guidance from the NSA warns not only that the unsupported server will be exposed to newly discovered vulnerabilities, which creates a “high level of risk,” but also that newer applications eventually will not run on it. The agency “strongly recommends that system owners plan to upgrade all servers to a supported operating system well before this date in order to avoid operational and security issues.”

Roman recommends the same basic four-step process for transitioning to a newer server OS that applies to any migration (a rough scripting sketch of the first two steps follows the list):

  1. Discover: Catalog software and workloads.
  2. Assess: Categorize applications and workloads.
  3. Target: Identify the end goal.
  4. Migrate: Make the move.
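As a rough illustration of the discover and assess steps, here is a hypothetical sketch that reads a server inventory export and groups the machines still on Windows Server 2003 by the application they host. The file name and column headings are assumptions made for the example, not the output of any particular discovery tool.

# Hypothetical sketch of the "discover" and "assess" steps: read an inventory
# export and group Windows Server 2003 machines by the workload they host.
# The file name and column names (server, os_version, application) are
# illustrative assumptions, not tied to any specific discovery product.
import csv
from collections import defaultdict

def assess_inventory(path: str) -> dict:
    workloads = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if "2003" in row["os_version"]:
                workloads[row["application"]].append(row["server"])
    return workloads

if __name__ == "__main__":
    at_risk = assess_inventory("server_inventory.csv")
    for app, servers in sorted(at_risk.items()):
        print(f"{app}: {len(servers)} Windows Server 2003 host(s) to migrate")

Even a crude grouping like this makes the target step easier, since workloads with many 2003 hosts can be prioritized and scheduled well ahead of the July 2015 cutoff.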

The process is not necessarily simple or fast, however. “There is no single migration plan that suits all workloads,” said Joe Schoenbaechler, vice president of infrastructure consulting services for Dell Services.

Fortunately, Dell and a number of other companies offer migration assistance, including help in developing and executing plans. If you don’t already have a plan, or are not well into one, you might consider asking for some help.

Posted by William Jackson on Jul 11, 2014 at 9:29 AM


Stakes rising as malware matures

With the constant drumbeat of cybersecurity worries that government has to deal with, it’s easy to lose sight of the trees when it comes to threats, and to consider them all as part of the same dark forest. But as two recently discovered exploits show, malware writing is as much a creative industry as any legitimate software business, and organizations need to be aware of the details to successfully defend their data and systems.

One of the newest pieces of malware is actually a throwback of sorts. MiniDuke was first identified in February 2013 by Kaspersky Lab, which described it as a “highly customized malicious program” that used a backdoor to attack government entities and institutions in the United States and around the world.

Eugene Kaspersky, the head of the lab, said then that it reminded him of the older style of malware of the late 1990s and early 2000s: written in assembly language and very small, just 20 kilobytes or so. He considered the combination of “experienced, old school writers using newly discovered exploits and clever social engineering” against high-profile targets to be “extremely dangerous.”

In particular, according to the lab’s analysis, MiniDuke was programmed to avoid analysis by a hard-coded set of tools in certain environments, such as VMware, showing that its writers “know exactly what antivirus and IT security professionals are doing in order to analyze and identify malware.”

Following that first discovery, MiniDuke attacks decreased and eventually seemed to disappear. Apparently, however, it was only going underground, and it reappeared in an even more sophisticated form earlier this year. Among others, the targets apparently include organizations involved with government, energy, telecom and military contracting.

The new backdoor, also known as TinyBaron or CosmicDuke, spoofs a number of popular applications that run in the background on a system, can launch via the Windows Task Scheduler and can steal information from files matching a broad range of extensions and file name keywords. Kaspersky Lab says it assigns a unique ID to each victim, which lets the attackers push updates tailored to a specific machine. It also uses a custom obfuscator to keep anti-malware tools from detecting it.
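Because the Task Scheduler is one of the persistence mechanisms Kaspersky describes, one modest defensive check is to audit scheduled tasks for binaries launched from user-writable locations. The sketch below assumes a Windows host and parses the verbose CSV output of the built-in schtasks command; the list of suspicious path fragments is an illustrative assumption, not a signature for CosmicDuke or any other specific piece of malware.

# Minimal sketch: flag Windows scheduled tasks whose commands run from
# user-writable locations, a common trait of Task Scheduler persistence.
# Assumes a Windows host; the path list below is illustrative only and is
# not a detection signature for any particular malware family.
import csv
import io
import subprocess

SUSPICIOUS_PATHS = ("\\appdata\\", "\\temp\\", "\\users\\public\\")

def audit_scheduled_tasks() -> list:
    # schtasks /query /fo CSV /v dumps every task, including its command line.
    output = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True
    ).stdout
    findings = []
    for row in csv.DictReader(io.StringIO(output)):
        command = (row.get("Task To Run") or "").lower()
        if any(fragment in command for fragment in SUSPICIOUS_PATHS):
            findings.append((row.get("TaskName"), command))
    return findings

if __name__ == "__main__":
    for name, command in audit_scheduled_tasks():
        print(f"Review task {name}: {command}")

A check like this will not catch a backdoor hiding behind a legitimate-looking application path, but it is the kind of routine audit that makes persistence mechanisms harder to use quietly.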

Remote access attacks

At the end of June, US-CERT issued an advisory about malware apparently aimed at industrial control systems, which some analysts claimed could cause Stuxnet-level damage to power plants and other sites through denial of service attacks. According to security firm Symantec, the attackers, known as Dragonfly, could potentially cause much greater chaos than Stuxnet, with victims already compromised in the United States, Spain, France, Italy, Germany, Turkey and Poland. 

The attackers use two main pieces of malware, both remote access tools, Symantec said. Backdoor.Oldrea, also known as Havex, acts as a backdoor to a victim’s system and, once installed, can extract system information. The other main tool is Trojan.Karagany, openly available on the underground malware market, which can upload stolen data, download new files and run executable files on infected computers.

The Dragonfly group, also known as Energetic Bear and possibly Eastern European and state sponsored, is “technically adept and able to think strategically,” Symantec said. “Given the size of some of its targets, the group found a ‘soft underbelly’ by compromising [the targets’] suppliers, which are invariably smaller, less protected companies.”

Defenses against the Dragonfly attacks include both antivirus and intrusion prevention signatures, but, given that the attacks had been ongoing and undetected for a while, a large number of systems probably remain infected.

In addition to such targeted attacks, general phishing campaigns by cybercriminals aimed at stealing personal and financial information are also on the rise. While government sites account for less than 2 percent of the targets of these attacks, according to the Anti-Phishing Working Group, the United States hosts by far the largest number of phishing websites. Globally, the group said, the proportion of infected machines has risen to nearly 33 percent.

The question is how government can best position itself against these attacks, which seem to be increasing both in number and sophistication. Keeping them out entirely no longer seems a plausible strategy, and the consensus is moving more towards limiting the damage they can cause.

Posted by Brian Robinson on Jul 07, 2014 at 12:53 PM


Can telework improve cybersecurity?

Federal officials gave high marks to the administration’s digital government strategy and telework initiatives in a recent survey, and the Mobile Work Exchange concluded that the future is bright for continued investment in technology to enable these efforts.

Yet in the same survey, 88 percent of human resource managers said they had an employee leave because of a lack of telework opportunities, and more than half said they had trouble landing the best candidates for a job because of teleworking restrictions. This makes federal agencies less competitive in the workforce marketplace at a time when another recent study by the RAND Corporation concluded that the shortage of cybersecurity professionals is a threat to national security.

“A shortage exists, it is worst for the federal government and it potentially undermines the nation’s cybersecurity,” said the RAND examination of the cybersecurity labor market. 

Although the report concluded that the cybersecurity workforce shortage is “a crisis that requires an urgent remedy,” it also noted that work is underway to correct the situation and advised that “fears be tempered” on the subject.

Short-term shortages are likely to persist for some years, but the labor market eventually will correct itself through higher wages and more education and training programs.

In the meantime, however, employers will have to compete for a scarce resource. “Government agencies face a more difficult challenge, since their pay scales are constrained; they may therefore focus on hiring entry-level employees and training them,” the RAND report said.

Top-tier cybersecurity professionals can earn up to $250,000 a year in the private sector, according to the report, but federal salaries top out at a little more than $150,000, and most agencies have little flexibility in offering more money.

Government clearly will have to compete for these professionals in areas other than pay. Flexible working conditions, including the opportunity to telework and to be mobile on the job, are one area where agencies can improve hiring and retention, the Mobile Work Exchange study suggested. Just about every study on the subject has shown that teleworking improves employee satisfaction.

Overall, the survey of 154 federal executives gave the government a B- for pursuing digital government initiatives and a B+ for telework, and nearly 70 percent of respondents reported a positive return on their telework investments. But culture and resistance from managers remain bigger roadblocks to taking full advantage of telework than security, technology or funding concerns.

The study estimated that the government has spent about $373 per worker – or $1.6 billion total – providing technology to enable a more mobile workforce, primarily by supplying laptops, smartphones and management software. But if frontline managers do not embrace the idea of a mobile workforce, agencies are likely to continue having trouble in hiring the best young people for entry level positions and have even more trouble hanging on to them after they have been trained and have become experienced professionals.

Posted by William Jackson on Jun 27, 2014 at 12:46 PM