If a demonstration is needed that security is a process, not a product, and that it depends on management, not technology, the Veterans Affairs Department provides it.
The Government Accountability Office recently recited to a House panel a litany of weaknesses in the sprawling department’s struggling IT security program. The VA inspector general has identified development of an information security program as a “major management challenge,” and auditors have flagged inadequate security controls in financial systems as a material weakness for 12 years. GAO warnings date back to 1998, and the office has reported consistent weaknesses in VA’s security controls since 2007.
“The persistence of similar weaknesses over 16 years later indicates the need for stronger, more focused management attention and action to ensure that VA fully implements a robust security program,” Gregory Wilshusen, GAO’s director of information security issues, told a House VA oversight subcommittee on March 25.
In an effort to refocus management attention, Rep. Jackie Walorski (R-Ind.) on April 2 introduced a bill, H.R. 4370, to “improve the transparency and the governance of the information security program of the department.” The text of the bill is not yet available, but Walorski said in a statement that it would provide “a clear roadmap for immediately securing its system.”
The department’s security shortcomings have been so consistent for so long that they merit attention. The size of the department and the scope of its mission make it one of the greatest IT security challenges in government. VA operates the nation’s largest healthcare system, serving about 6 million veterans; administers financial benefits for millions more; and maintains veterans’ graves across the country.
In June last year, the House VA Oversight and Investigations Subcommittee recommended designating the VA network a “compromised environment” and said VA should establish controls to reclaim it “from nation state sponsored organizations.”
In a November 2013 letter to subcommittee Chairman Rep. Mike Coffman, department CIO Stephen W. Warren responded that “VA has in place a strong, multi-layered defense to combat evolving cybersecurity threats, including monitoring by external partners and active scanning of Web applications and source code.”
But from January 2010 through October 2013, more than 29,000 possible data breaches were reported by VA. In his letter, Warren noted that “virtually all of VA’s data breaches are paper-based, equipment loss or unencrypted e-mailing of sensitive information.”
VA is addressing the equipment loss issue by encrypting laptops and desktops, which began last year in conjunction with the department’s upgrade to the Windows 7 OS. Warren reported that as of Oct. 29, 87 percent of the computers, more than 330,000 systems, were running Windows 7 and most of the rest were expected to be upgraded by the end of January 2014. He noted, however, that some pockets were likely to remain due to what he called “blocker” applications, “applications that are not compatible with Windows 7 and have not yet been replaced.”
Whether Congress will be able to significantly improve VA’s cybersecurity with new legislation remains an open question. Wilshusen, in last month’s testimony to the subcommittee, said that “many of the actions and activities specified in the bill are sound information security practices and consistent with federal guidelines. If implemented on a risk-based basis, they could prompt VA to refocus its efforts on steps needed to improve the security of its systems and information.”
But he cautioned that security should be risk-based and not based on technology requirements that could quickly become outdated.
Posted by William Jackson on Apr 04, 2014 at 9:26 AM
Cybersecurity more and more resembles nothing so much as old-fashioned warfare, with both sides confident in their weaponry and in their ability to either penetrate or defend borders. As the threat of cyberconflict ratchets up, the two modes of warfare seem at times to be getting chillingly similar.
The latest expression of confidence came from Defense Secretary Chuck Hagel, who on March 28 spoke to an audience at the National Security Agency headquarters to mark the retirement of Gen. Keith Alexander, the head of both the NSA and the U.S. Cyber Command.
The Pentagon is well on its way to building a modern cyberforce, he said, which will be 6,000 strong by 2016.
The force will improve the U.S. ability to “deter aggression in cyberspace, deny adversaries their objectives,” and defend the country from cyberattacks. At the same time, however, he pointed out the “proliferation of destructive malware” that is being used to constantly, and aggressively, probe and disrupt networks.
More confidence shone through in a recent report that surveyed IT and security professionals in both the military and civilian agencies. Nearly all of them, some 94 percent, rated their own agency’s cybersecurity readiness as either good or excellent, saying they feel they have the right tools, processes and policies in place.
(Well, OK, the survey also found that 9 percent of respondents were unsure whether there even were cyberthreats affecting their agency.)
Perhaps of most interest, though, was what kinds of threats they considered the most serious. Insider threats, which until relatively recently were seen as the greatest danger, have fallen behind those from “external hacking,” even in the age of WikiLeaks and Edward Snowden.
In fact, of the six top threats, insiders come in fifth, behind external hacking, malware, social engineering and spam, and just ahead of distributed denial of service.
Where do the bad guys come out in all of this? It’s no secret they’ve become much more sophisticated in their ability to get inside networks, but a report from the RAND Corp., “Markets for Cybercrime Tools and Stolen Data,” also shows just how professionalized and extensive their capabilities have become.
The black and gray markets for hacking tools and services, and for the ill-gotten gains they produce, are expanding and growing in complexity, the RAND report said. What was once a varied landscape of discrete, ad hoc networks of individuals motivated by little more than ego and notoriety, it said, “has emerged as a playground of financially driven, highly organized, and sophisticated groups.”
Adding to the complexity for government defenders are the rapidly emerging and highly secretive markets for zero-day vulnerabilities, RAND said, which are available in both licit and illicit markets.
The potential impact of these market-driven tools was seen in the 2013 attack on Target stores, which was confirmed earlier this year. The malware used was a tailored version of “BlackPOS,” which, according to writer Brian Krebs, was available on the black market for the low, low price of $1,800 to $2,300.
Of course, Target seems to have screwed up in so many ways in its own security. A report from the Senate Committee on Commerce, Science and Transportation lays it out in excruciating detail.
Nevertheless, it all makes a point. The business of creating malware and other tools to attack U.S. networks and infrastructure now really is a business, with all of the profit-based energy and innovation that brings with it. Add the even more focused abilities of nation states, and the threat industry is vibrant.
Hagel and others are confident that government has the ability to withstand it. Are they right?
Posted by Brian Robinson on Mar 31, 2014 at 12:12 PM
In January 2002, Microsoft’s Bill Gates, then chairman, sent out his Trustworthy Computing memo, spurred by a growing wave of dissatisfaction with the security failures of the company’s operating systems and applications. In responding to those failures, Microsoft has helped change the way we think about software development.
The late 1990s and early 2000s were difficult times for Microsoft security. A major vulnerability in the Universal Plug and Play feature of Windows XP was found just months after the release of the OS in 2001. In January 2002 the Electronic Privacy Information Center in Washington sent a letter to state attorneys general complaining of the lack of privacy controls in Microsoft’s Passport, Wallet and .Net services.
“I remember at one point our local telephone network struggled to keep up with the volume of calls we were getting,” Matt Thomlinson, vice president of security for Microsoft, said of the impact of the XP bug in an online history of Microsoft’s security initiative. “We actually had to bus in engineers, many of whom were working on the next version of Windows, from their offices around campus to the call center. We needed every person available to talk to customers and walk them through how to get their systems cleaned.”
On Feb. 1, 2002, Richard Purcell, head of Microsoft’s corporate privacy office, announced in Washington a month-long moratorium on new coding.
Gates, Purcell told the audience at a privacy and data security conference, “is really annoyed by the incredible pain we put everyone through in computing.” As a result, “we are not coding new code as of today for the next month,” he said. The company instead would spend the time going over old code as a first step in cleaning out bugs. “It’s time to get the garage cleaned out.”
Twelve years later, the Trustworthy Computing initiative is not finished, and probably never will be. David Aucsmith, senior director of Microsoft’s Institute for Advanced Technology for Governments, said recently in Washington, “I do not believe you can create a secure computer system.”
The problem is, “we build systems far more complex than our ability to understand them,” Aucsmith said. Because we don’t know what we don’t know, built-in security inevitably will be incomplete, and software and hardware will always have to adapt to newly discovered threats and exploits. “Nothing static remains secure.”
But the Security Development Lifecycle (SDL) that grew out of the Microsoft initiative has helped to change the way developers think about software security. The SDL process now shows up as a requirement in government procurements, and the National Security Agency says it has made an impact on OS security.
“A fundamental goal of the SDL process is to reduce the attack surface,” NSA said in an evaluation of Windows 7 security for the Defense Department and the intelligence community. “Since adoption of the SDL process, the number of Common Vulnerabilities and Exposures on Microsoft products in the National Vulnerability Database has declined.”
“A preliminary System and Network Analysis Center analysis has determined that the new Windows 7 security features, coupled with the use of the SDL process throughout the development cycle, has assisted in the delivery of a more secure product,” the assessment concluded.
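The decline NSA describes is the kind of trend anyone can check against the National Vulnerability Database, whose downloadable data feeds list entries by CVE identifier. A minimal sketch of tallying entries by year, assuming you have pulled a list of CVE IDs from such a feed (the sample IDs below are purely illustrative, and note that the year in a CVE ID reflects assignment, not necessarily publication):

```python
from collections import Counter

def cve_counts_by_year(cve_ids):
    """Tally CVE identifiers (CVE-YYYY-NNNN) by the year in the ID."""
    counts = Counter()
    for cve_id in cve_ids:
        parts = cve_id.split("-")
        if len(parts) == 3 and parts[0] == "CVE":
            counts[int(parts[1])] += 1
    return dict(counts)

# Hypothetical sample of IDs, as they might appear in an NVD data feed.
sample = ["CVE-2002-0001", "CVE-2002-0002", "CVE-2007-1204", "CVE-2009-2510"]
print(cve_counts_by_year(sample))  # {2002: 2, 2007: 1, 2009: 1}
```

Comparing counts for products released before and after SDL adoption is roughly the comparison NSA's statement implies, though a serious analysis would also filter by vendor and normalize for reporting volume.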
We still are a long way from being as secure as we want to be or can be. But there has been progress.
Posted by William Jackson on Mar 21, 2014 at 6:32 AM
The use of iris recognition to ensure security is a familiar concept, and is already used by some federal agencies. Pressured by Congress, the National Institute of Standards and Technology has been developing the necessary standards to enable it to be deployed throughout government.
But there’s a snag. Fingerprints have been used in identity and forensic investigations for decades and are well understood; iris recognition is not. Even though the uniqueness of the iris was noted in the late 1800s, around the same time as that of the fingerprint, the technology to exploit it has been developed only recently. Researchers are still grappling with some of the fundamental definitions.
One of the questions is how long the iris templates used in biometric databases remain valid, because (so some insist) the iris changes as people age. That’s not a minor problem. If it’s true, a significant number of inaccurate templates could exist at any one time, potentially throwing out false red flags that could cause security chaos.
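The template-aging question is easier to picture with a sketch of how iris matching commonly works: two binary iris codes are compared by fractional Hamming distance, and a match is declared when the distance falls below a tuned threshold (values around 0.32 appear in John Daugman's published work). If the iris really does change with age, the distance between an old enrollment template and a fresh scan would drift upward until it crosses that threshold. The bit strings and threshold below are illustrative, not taken from any deployed system:

```python
def fractional_hamming(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

MATCH_THRESHOLD = 0.32  # illustrative; real systems tune this empirically

enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical enrollment template
fresh    = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # fresh scan, one bit flipped

distance = fractional_hamming(enrolled, fresh)
print(distance, distance < MATCH_THRESHOLD)  # 0.1 True
```

In this framing, "template aging" is simply the claim that the expected distance for a genuine user grows over time, pushing true matches over the threshold and producing exactly the false red flags described above.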
That particular debate seems to be coming to a head. University and NIST researchers have recently been playing ping pong in an academic argument over this aging effect. Researchers at the University of Notre Dame, for example, produced a study questioning the value of current iris templates. NIST, which runs the Iris Exchange (IREX) as a support for iris-based applications, countered with its own study that downplayed those results. The Notre Dame researchers then came back with their own counter, basically saying NIST had screwed up the methodology it used.
This isn’t the only potential problem with iris recognition. Security researchers have also identified ways that bad guys could essentially copy the digital code for iris scans and reproduce them at will, essentially eliminating that biometric from the identity profile of any affected individual.
It’s not clear whether any of this will affect the rollout of iris scanning systems, or the claim that iris recognition will be one of the basic biometric supports of future security systems, along with fingerprint, voice and face recognition. Based on the assumption that iris recognition is a rock-solid science, agencies have already planned for its extensive use.
The Defense Department has been using iris scans for over a decade in Iraq, Afghanistan and other places to detect terrorists, and it plans to use the technology for physical access to facilities in combination with Common Access Cards. The FBI wants to use iris recognition in its Next Generation Identification System, the eventual replacement for its famed Integrated Automated Fingerprint Identification System. And Congress has been pushing NIST to come up with the necessary standards for other government uses of iris recognition, chiding officials in committee hearings about not living up to earlier promises.
Other governments around the world aren’t waiting. India has already enrolled hundreds of millions in a national identity system that includes iris recognition. Mexico began using iris scans on ID cards several years ago, and Argentina is also using it in its national identity system.
There are other incentives brewing, not least the use of iris recognition in mobile systems. Apple is reportedly looking at adding iris scans in future systems to the fingerprint identification it already uses, while Samsung on the Android side of things is rumored to also be interested. Since more and more government IT seems to be driven by consumer innovations, that could also accelerate the use of iris recognition in government apps.
However, if there are problems with iris recognition, what would that mean for security? No security technology is foolproof but, based on that “rock-solid” assumption, iris recognition is perceived to be as close to it as you can come. If there really are major flaws that can be exploited, then agencies will be building security systems with unexpected holes in them.
Posted by Brian Robinson on Mar 14, 2014 at 9:43 AM