Here we go again: A new exploit for Java and a new call to disable it in your browser, at least until a fix is issued.
The U.S. Computer Emergency Readiness Team (US-CERT) released the advisory Jan. 10. Oracle released a fix three days later, but the issue is not dead. The CERT Coordination Center at Carnegie Mellon’s Software Engineering Institute advises that “unless it is absolutely necessary to run Java in Web browsers, disable it as described below, even after updating.”
But the advisory notes that because of a potential bug in the Java installer, the necessary control panel could be missing on some Windows systems. Also, “we have encountered situations where Java will crash if it has been disabled in the Web browser as described above and then subsequently re-enabled,” the institute’s advisory says. “Reinstalling Java appears to correct this situation.”
So you have to ask yourself: Is Java absolutely necessary to my mission? And you have to weigh the pros and cons of disabling it in your enterprise. It might not be a simple decision.
Java is a widely used programming language for client-server Web applications, and has been a common target since 2010. Exploits are significant concerns because Java runs on so many computers whether or not users are aware of it. If users aren’t aware, it might not be updated regularly.
Oracle issued an out-of-cycle patch in August for a serious vulnerability that resulted in calls to disable Java. The most recent vulnerability, found in Java 7 Update 10, could allow unauthenticated remote attackers to execute code. Update 11, released Jan. 13, sets the default Java security level to “high” so that users are prompted before running unsigned or self-signed applets.
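For administrators who manage Java centrally, both options discussed here — disabling the browser plug-in outright or pinning the security slider — can be set in Java’s deployment.properties file rather than clicked through the control panel. A sketch (file locations and available keys vary by Java version and platform; the deployment.webjava.enabled switch first appeared in Update 10):

```properties
# Per-user file, e.g.:
#   Windows: %USERPROFILE%\AppData\LocalLow\Sun\Java\Deployment\deployment.properties
#   Linux:   ~/.java/deployment/deployment.properties

# Disable Java in all browsers entirely (the control-panel checkbox)
deployment.webjava.enabled=false

# Or, if Java must stay enabled in browsers, keep the security level
# at "high" (the new default in Update 11) or stricter
deployment.security.level=HIGH
```

A system-level deployment.properties can also be pointed to from a deployment.config file to enforce these settings across all users on a machine.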
“The fix, from our testing, works, so it’s not an issue,” said Gavin O’Gorman, senior threat intelligence analyst at Symantec Security Response. But O’Gorman agrees that disabling Java and all other browser plug-ins is a good policy, except on trusted sites. “You’re opening yourself up to exploits with any plug-in you enable on your browser,” he said.
What do you lose by disabling Java? “Personally, I don’t see much of a difference,” he said.
But Java is useful. “It is deeply embedded in enterprise applications,” said A.N. Ananth, CEO of Prism Microsystems.
The government has established a Federal Desktop Core Configuration baseline for a variety of operating systems that originally called for disabling Java in all zones. But when needed Java-based applications failed under that setting, the baseline was amended to allow Java at the “high security” setting (the new default) for intranet and trusted-site zones.
“I hesitate to say that government can afford” to turn Java off, although it might be easier for an agency than for a business, said Ananth.
“I’m not for whacking Java completely,” he said. Getting rid of it might eliminate Java-specific vulnerabilities, but new vulnerabilities will come along in whatever replaces it. “The emperor has no clothes,” he said. “Everything you turn on proves to be vulnerable at some point.”
So turn off Java if you don’t need it, but first decide whether you need it. And while you’re at it, evaluate all the other tools that could introduce vulnerabilities into your enterprise, because nothing is invulnerable.
Posted by William Jackson on Jan 14, 2013 at 9:39 AM
A European Union study of the evolving cyber threat landscape identified a handful of emerging areas that are likely to be high-profile targets in the immediate future, with mobile computing topping the list.
Hardly a shocking conclusion, and the rest of the list also contains few surprises: social technology (usually referred to as social networking in this country), critical infrastructure, cloud computing and big data.
But there was one area flagged in the report that doesn’t get much attention here as a separate IT segment: Trust infrastructure. This is defined as "any information system that provides strong authentication and aims at establishing a trusted, secure connection between two end points."
In the United States we usually lump this function in with applications or networks as identity management. But the EU study takes a broader view, which reflects a stronger emphasis on privacy and the idea that identity resides with the individual, not with the resources being accessed.
Maybe the concept of a trust infrastructure will gain traction here under the National Strategy for Trusted Identities in Cyberspace, a multi-pronged, public/private effort headed by the National Institute of Standards and Technology.
Among the programs under way, the administration is launching an initiative to use commercial cloud services to authenticate third-party credentials for accessing government sites, called the Federal Cloud Credential Exchange. The U.S. Postal Service will be operating an FCCX pilot.
A successful citizen-to-government identity bridge could help replace the outmoded password paradigm with strong, manageable credentials so the United States could have its own trust infrastructure. Considering it’s apparently an emerging target for cyber criminals, it seems a bridge worth crossing.
The EU study was conducted by the European Network and Information Security Agency, which analyzed more than 140 reports from the security industry and other organizations.
The study broke down the top threats across six areas — mobile computing, social technology, critical infrastructure, trust infrastructure, cloud computing and big data — and listed whether each type of threat was increasing, remaining stable or decreasing in each area.
Mobile computing, for example, faces increasing threats from drive-by attacks, worms and Trojans, exploit kits, botnets and phishing, among others. The current threats to the trust infrastructure include denial-of-service attacks, compromised confidential information, targeted attacks, physical theft, loss or damage of equipment, and identity theft.
Mobile users do get one small piece of good news from the report. Among 16 threats across six computing areas, only one threat is decreasing: spam in mobile computing.
Posted by William Jackson on Jan 10, 2013 at 9:39 AM
National security increasingly depends on the ability of agencies at the federal, state and local level to cooperate across organizational and jurisdictional lines.
“This cooperation, in turn, demands the timely and effective sharing of intelligence and information about threats to our nation with those who need it, from the president to the police officer on the street,” says the president’s National Strategy for Information Sharing and Safeguarding, released in December.
This requirement has been complicated by turf wars, siloed databases and a lack of interoperable technology and policies. The strategy calls for a shift to interoperable shared services (read: cloud) and integrated policies that will require system upgrades in what the strategy calls an “extremely austere budget environment.” In other words, don’t look for the needed improvements in the nation’s information sharing infrastructure to happen any time soon.
But the real headache in meeting the strategy’s goals will be establishing the common identity and access management schemes that are needed to enable secure sharing.
Today’s systems, policies and procedures were developed to lock information down, not to share it securely. The security posture has been defensive and outward-facing.
“The focus of information safeguarding efforts in the past was primarily bound to systems and networks at specific classification levels,” the strategy says. This focus will have to shift to the data itself, regardless of where or what it is, with common standards for tagging data with metadata to enable discovery across multiple databases and using common platforms for identity and access management.
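The data-centric model the strategy describes amounts to attribute-based access control: each record carries its own metadata tags, and access decisions compare those tags against a user’s attributes instead of trusting whichever network or silo holds the record. A minimal illustrative sketch in Python — the tag names, levels and rules here are invented for illustration, not drawn from any federal standard:

```python
# Illustrative sketch: records carry classification and releasability tags,
# and both discovery and access checks read the tags, not the hosting system.

RECORDS = [
    {"id": "rpt-001", "classification": "SECRET",
     "agencies": {"FBI", "DHS"}, "body": "..."},
    {"id": "rpt-002", "classification": "UNCLASSIFIED",
     "agencies": {"FBI"}, "body": "..."},
]

LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]

def can_access(user, record):
    """Grant access only if the user's clearance covers the record's
    classification AND the user's agency is on the record's release list."""
    cleared = LEVELS.index(user["clearance"]) >= LEVELS.index(record["classification"])
    releasable = user["agency"] in record["agencies"]
    return cleared and releasable

def discover(user, records):
    """Cross-database discovery: filter by metadata tags, wherever the data lives."""
    return [r["id"] for r in records if can_access(user, r)]

analyst = {"clearance": "SECRET", "agency": "DHS"}
officer = {"clearance": "UNCLASSIFIED", "agency": "FBI"}

print(discover(analyst, RECORDS))  # ['rpt-001']
print(discover(officer, RECORDS))  # ['rpt-002']
```

The point of the sketch is that the same check works no matter which database returns the record — which is exactly why the strategy calls for common metadata standards and common identity platforms.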
It might be relatively easy to move federal, state and local intelligence to interoperable cloud platforms in standardized formats, although it will take money that is unlikely to be available any time soon. But common identity and access management schemes will be tough to establish.
It has been more than eight years since Homeland Security Presidential Directive 12 mandated an interoperable electronic ID card that could be used across all executive branch agencies and their contractors for both physical and logical access. Standards and specifications were developed by the National Institute of Standards and Technology in record time, and millions of Personal Identity Verification cards have been issued to government employees and contractors. But despite this progress, most agencies still lack platforms for using the cards for unified access control to both physical facilities and IT systems, and there is little if any interoperability between departments in accepting each other’s cards.
The problem is both a lack of mutual trust between agencies and the legacy systems underlying access control that have not been updated to accommodate a common set of electronic credentials.
Specifications have been developed for PIV-Interoperable cards, which could be used by citizens and state and local governments and accepted by federal agencies, and this could serve as a model for the kind of interoperable environment envisioned in the president’s strategy. But PIV-I cards are not being adopted and the lack of cross-agency acceptance of PIV cards at the federal level shows the difficulty of establishing a broad-based, secure scheme for identity and access management.
Getting thousands of law enforcement, security and intelligence agencies on the same page and willing to share their most valuable assets over a single system supported by interoperable technology is likely to prove a serious hurdle to achieving the president’s vision.
Posted by William Jackson on Jan 07, 2013 at 9:39 AM
Many threat forecasters focus on what is changing in IT to define the coming threat landscape, but researchers crunching the numbers from eight years of Verizon Data Breach Investigations Reports say the past is a better indicator, and they see little change in the future.
“I feel pretty confident that 2013 will be very similar to 2012,” said Wade Baker, managing principal of Verizon’s Research Intelligence Solutions Knowledge (RISK) Team and a principal author of the annual report.
The threat landscape is defined not by emerging technology, but by tried and true techniques that persist from year to year.
That means that the top issues that agencies will continue to see in 2013 will be authentication exploits involving the theft or improper reuse of passwords and other credentials, and Web application exploits, which have been a favorite tool of the hacktivists who target government sites.
Not on the list of likely trouble spots are cloud computing and mobile devices, which are getting a lot of attention in 2013 forecasts. “They are significant changes,” Baker acknowledged. “But I’m not convinced that these infrastructure and device-level changes have demonstrably changed the threat environment yet.”
The Data Breach Investigations Reports are annual statistical analyses of information breaches that have been investigated in a number of countries. The most recent report includes information from 855 incidents gathered from the U.S. Secret Service as well as from authorities in Australia, the Netherlands, Ireland and the United Kingdom.
Which raises the question: Can historical information be used to predict the future? Baker thinks it can, at least for the near-term. The top threats have changed little over the past five years, he said. They might shift in the rankings some, but the mix remains largely the same. “Every year we look at this and we say things are going to change, and they don’t,” he said.
That is not to say there have not been changes over the past eight years. There has been an increase in the use of stolen credentials to exploit authentication systems and a corresponding decrease in the exploitation of vulnerabilities in code. Cybercriminals have become more professional and the exploit tools they use have become commodities, putting more powerful weapons in the hands of non-technical hackers and crooks.
But governments face a different kind of attacker, Baker said. Those who target governments are driven less by profit motive and more by activism and national interests. This means that the most significant threats might not be the most numerous ones. Agencies, along with large corporations with valuable intellectual property, also will continue to be targeted by state-sponsored espionage, which is harder to detect and might not make it onto the radar screen as often.
And all of this raises another question: Which is the more significant threat, the numerous ones we know about or the emerging ones we don’t? That won’t be clear until we know what the impact of emerging threats is. Hindsight is always better than forecasts.
Posted by William Jackson on Jan 02, 2013 at 9:39 AM