Government credentials show up on paste sites

While much attention has been paid to the very public attacks on government agencies, particularly the breach at the Office of Personnel Management, far less has been said about where the exfiltrated data ends up. So how easy is it for John Doe to get his hands on the information let loose in these attacks? Extremely easy, it seems, according to one recent report.

Threat intelligence firm Recorded Future, using open source intelligence gathered from 17 paste sites over a one-year period ending in November 2014, discovered the possible exposure of credentials for 47 U.S. government agencies across 89 unique domains. The Energy Department alone had email/password combinations posted for nine different domains.

A paste site gives users -- usually programmers and coders -- a place to store and share short items in plain text. Pastebin is the best known of these, though there are dozens of others. Anyone on the web can access them, and large companies such as Facebook have started to mine them for information to make their own sites more secure.
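
That kind of mining is straightforward to approximate. The sketch below is a hypothetical illustration, not Facebook's or anyone else's actual pipeline: it assumes you already have raw paste text in hand (however you collect it) and simply scans each paste for email addresses on domains you care about. The domain list and sample data are placeholders.

```python
import re

# Watched domains and a simple .gov email pattern -- placeholders chosen for
# illustration, not a complete credential-detection rule set.
WATCHED_DOMAINS = {"energy.gov", "example-agency.gov"}
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@([A-Za-z0-9.-]+\.gov)", re.IGNORECASE)

def find_exposed_accounts(paste_text):
    """Return watched-domain email addresses that appear in a paste."""
    hits = []
    for match in EMAIL_RE.finditer(paste_text):
        if match.group(1).lower() in WATCHED_DOMAINS:
            hits.append(match.group(0))
    return hits

if __name__ == "__main__":
    # A hard-coded string stands in for a paste body pulled from a feed,
    # scraper or vendor API.
    paste = "dump 2014-11-01\njdoe@energy.gov:hunter2\nother@example.com:pw"
    print(find_exposed_accounts(paste))  # -> ['jdoe@energy.gov']
```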

Credentials that grant access to agency networks have become a major target for black hats because they offer a far easier path to an organization’s data. In fact, most of the sophisticated attacks on government agencies were enabled by attackers who had obtained privileged account credentials.

Hackers in search of credentials often target the sites of agency contractors or business partners, since those organizations' employees are granted agency access privileges for certain uses. Recorded Future, in fact, found that most of the exposures on the paste sites came from these kinds of third-party websites, along with government employees using their government email accounts to register for web-based services -- a growing security concern in itself.

The Recorded Future study can’t specify the actual damage from all of this posted information, but it’s easy to infer the possibilities.

Much of the potential damage could be significantly lessened with the use of fairly simple security steps such as requiring two-factor authentication for network access. However, as the Recorded Future report pointed out, OMB has found that many major agencies don’t employ this safeguard for privileged access. The OPM breach was directly tied to this lack of two-factor authentication.
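
For a sense of what that second factor adds, here is a minimal sketch of time-based one-time password (TOTP) verification, the mechanism behind most authenticator apps. It assumes the third-party pyotp library and keeps the secret in a local variable purely for illustration; it is not a hardened implementation of any agency's login flow.

```python
import pyotp  # third-party library: pip install pyotp

# Enrollment: generate a per-user secret and provision it in the user's
# authenticator app (normally via a QR code). Keeping it in a variable here
# is a simplification for the sketch.
secret = pyotp.random_base32()
print("Provision this secret in an authenticator app:", secret)

# Login: a stolen email/password combination from a paste site is no longer
# enough -- the attacker would also need the current 6-digit code.
totp = pyotp.TOTP(secret)
submitted_code = input("Enter the 6-digit code: ")
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted.")
else:
    print("Second factor rejected.")
```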

Recorded Future shared the results of its analysis with the government and the affected agencies last year, well before it made them public. It also offered a list of suggestions to help agencies protect themselves against the effects of the paste site exposures:

  • Enable multi-factor authentication and/or VPNs.
  • Require government employees to use stronger passwords that change with greater regularity.
  • Gauge and define use of government email addresses on third-party sites.
  • Maintain awareness of third-party breaches and regularly assess exposure.
  • Ensure the Robot Exclusion Standard (robots.txt) is set for government login pages to keep webmail and web services out of search engine listings (a minimal example follows this list).
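
On that last point, the robots.txt change is a small one. Below is a minimal, hypothetical example for an agency webmail or login host; the paths are placeholders, and it is worth remembering that robots.txt only asks well-behaved crawlers not to index a page -- it is not an access control.

```
# Hypothetical robots.txt for an agency webmail/login host (paths are
# placeholders, not any agency's real layout). Keeps login pages out of
# search engine listings; it does not restrict access to them.
User-agent: *
Disallow: /login/
Disallow: /owa/
Disallow: /webmail/
```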

All good suggestions. How many would you guess will be standard operating procedure at agencies a year from now?

Mudge to the rescue!

One of the other problems that plague government, along with industry at large, is gauging the quality and reliability of the software it acquires. As last year’s OpenSSL Heartbleed affair showed, even well-established software can be vulnerable.

Peiter Zatko, known affectionately in security circles by his hacker handle Mudge, is leaving his job at Google to help the government create a CyberUL, a cyber version of the famous Underwriters Laboratories, whose mark is considered a stamp of approval for the worthiness of many products. He first made his announcement on Twitter.

Zatko went to Google via the Defense Advanced Research Projects Agency, where he was developing technical skills and techniques for use in cyber combat. Before that he was with BBN Technologies and other security research companies.

Not much is yet known about what Zatko will be doing for the government, but he was reportedly a member of the 1990s-era L0pht hacker collective, which published a paper describing a possible model for a CyberUL.

Posted by Brian Robinson on Jul 06, 2015 at 11:06 AM

What’s worse: Living with legacy systems or replacing them?

The recent revelation of a breach at the Office of Personnel Management, which could have resulted in the theft of personal information of millions of government employees, also points up the broader problem government has with legacy systems -- whether it’s worth spending the money to secure them.

Not that securing the OPM’s systems would have done much good in this case -- according to the Department of Homeland Security Assistant Secretary for Cybersecurity Andy Ozment, the systems were not directly penetrated. Instead, attackers obtained OPM users’ network credentials and got to the systems and data from the inside.

Donna Seymour, the OPM’s CIO, told a recent House Committee on Oversight and Government Reform hearing that the agency was implementing database encryption, but that some of its legacy systems were not capable of accepting encryption.

Some of the OPM’s systems are more than 20 years old and written in COBOL, she said, and would require a full rewrite to add encryption and other security measures such as multi-factor authentication.

This is a government-wide problem. Many of the financial and administrative systems central to agencies’ daily operations are written in the nearly 60-year-old COBOL. Most agency CIOs have targeted those systems for replacement, but it’s not a simple rip-and-replace job -- any mistake could have a severe impact on an agency’s ability to fulfill its mission.

For that reason, many agencies have chosen to maintain those systems for now, but that’s not cheap, either. The OPM itself said last year that maintaining its legacy systems could cost 10-15 percent more a year as people with the right kind of expertise retire. And throughout government, legacy systems account for over two-thirds of the annual IT spend.

That expertise is unlikely to be replaced. Colleges aren’t turning out COBOL-trained coders anymore, and, with COBOL way down the list of popular languages, that won’t change. Agencies could bring in consultants to rewrite the code. But, again, not cheap.

And COBOL is unlikely to disappear anytime soon. Because of its ubiquity and utility, many organizations will continue to use COBOL until it’s pried out of their cold, dead hands. Meanwhile, old mainframe companies that have recently refocused on the cloud continue to update their COBOL tools to keep pace with current IT trends.

It’s not as if problems with legacy systems were the only reason for the breaches at OPM. Lawmakers also berated agency officials for their lack of attention to security governance issues that had been brought up years ago and were highlighted yet again last year in an OPM Inspector General report.

But the legacy issues are real and, according to some reports, extend even to “legacy” security systems such as signature-based firewalls, intrusion prevention systems and other widely installed devices that are just not capable of stopping modern, fast, sophisticated and chameleon-like threats.

However, at least the situation with the federal government is probably not as bad as that of a public school district in Grand Rapids, Mich., which is still running the air conditioning and heating systems for 19 schools using a Commodore Amiga -- as in the 1980s-era personal computer that was popular for home use -- because a replacement system reportedly will cost up to $2 million.

At least, we hope not.

Posted by Brian Robinson on Jun 19, 2015 at 10:55 AM

IT security’s blind spot

Good network and data security is made up of several parts. Technology and culture are certainly important elements, but so is perception. If you don’t know what’s going on in your IT infrastructure, how can you be sure you are protected as well as you think? Hubris has been a big reason behind many of the most serious breaches of the past few years.

That seems to be true for emerging infrastructures that include cloud services as well. Many organizations and government agencies have not made the move to the cloud yet, or have done so only hesitantly. And perhaps they are fooling themselves into thinking they have this transition under control and will be able to manage the security implications.

Skyhigh Networks took a look at this, using anonymized, real-world usage data collected from public sector organizations in both the United States and Canada. In its Cloud Adoption and Risk in Government report for the first quarter of this year, Skyhigh discovered, among other things, that government organizations on average underestimate their employees’ use of cloud services by more than a factor of 10.

“That’s startling because we tend to think the government sector is very locked down,” said Kamal Shah, vice president of product and marketing at Skyhigh. “In reality, employees are finding and using cloud services to help them get their jobs done, regardless of what the official policies are.”

When Skyhigh asked government IT officials how many cloud services they thought employees were using, the answers typically came in somewhere between 60 and 80, he said. The Skyhigh study found the average public sector organization uses 742 separate and unique cloud services.

If that sounds like a lot, consider that Skyhigh already tracks some 12,000 unique services in its database and is adding around 500 new services each month. There’s a lot of room for that average to climb still higher in the future.

This is all part of the unregulated, shadow IT mess that government already faces. That threatens to become much worse over the next few years with the rise of the Internet of Things -- dubbed the Internet of Threats by some -- and that’s spooking many organizations around the world into trying to figure out answers. If agencies thought they had a problem with BYOD, they haven’t seen anything yet.

The kind of overconfidence surfaced by Skyhigh is showing up in other reports as well. Cybersecurity certification company RedSeal recently produced its own survey of 350 C-level executives, of whom a solid 60 percent said they could “truthfully assure the board beyond a reasonable doubt” that their organization was secure. As RedSeal pointed out, those assertions were made at the same time that multiple reports were putting the incidence of network breaches at up to 97 percent of all companies.

What seems clear from the RedSeal survey is that most executives have no clue about what’s really happening in their networks. In what’s a clear repudiation of that 60 percent “beyond a reasonable doubt” figure, 86 percent of respondents acknowledged they had gaps in their network visibility, and almost as many admitted those gaps make it impossible to effectively secure their networks.

Even outside of the security implications, this lack of knowledge by executives about how IT is being used in their organizations causes problems. As part of its study of usage data, for example, Skyhigh found that 120 of the 742 services used on average by agencies were for collaboration purposes. That puts a lot of overhead on IT to deliver all of those unique services and injects confusion into what should be a well-organized affair. Fewer services would actually aid collaboration, since more people would be working from the same page.

As far as security is concerned, all this shadow IT greatly increases the chance that networks will be breached and, in particular, that users will have their network identification and authentication information stolen. The average public sector employee’s online movements are monitored by 2.7 advertising and web analytics tracking services, Skyhigh pointed out, and those trackers are increasingly being used by cybercriminals as a base for so-called watering hole attacks.

This is important, Shah said, because the cloud is increasingly being used by attackers to move data out of organizations. In one agency, the company noted more than 86,000 tweets coming from just one IP address in a single day. When the machine at that address was isolated, the agency found a bot that was exfiltrating data 140 characters at a time.

That’s an example, Shah said, of the fact that if you can’t analyze the data that would reveal anomalous behavior, you won’t find it. “That’s a blind spot for most organizations,” he said.
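
Catching that kind of exfiltration is less about exotic tooling than about doing any analysis at all. The sketch below is a hypothetical illustration, not a description of Skyhigh’s product: it counts outbound events per internal source address in a day’s worth of proxy-style log records and flags any address whose volume dwarfs the typical host. The record format, thresholds and addresses are all assumptions.

```python
from collections import Counter
from statistics import median

# Hypothetical outbound-proxy log records: (source_ip, destination_service)
# pairs for a single day. The format and thresholds are illustrative only.
def flag_chatty_hosts(records, service="twitter.com", min_events=1000, factor=50):
    """Flag internal hosts whose outbound volume to one service dwarfs the norm."""
    counts = Counter(src for src, dst in records if dst == service)
    if not counts:
        return []
    typical = median(counts.values()) or 1
    # A host is suspicious if it is both far above the typical host for this
    # service and above an absolute floor, so quiet fleets don't false-alarm.
    return [(ip, n) for ip, n in counts.items()
            if n >= min_events and n >= factor * typical]

if __name__ == "__main__":
    # A bot-like host posting tens of thousands of times a day stands out
    # against ordinary users who post a handful of times.
    sample = ([("10.1.1.5", "twitter.com")] * 86000
              + [("10.1.1.9", "twitter.com")] * 12
              + [("10.1.1.7", "twitter.com")] * 25)
    print(flag_chatty_hosts(sample))  # -> [('10.1.1.5', 86000)]
```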

All of this is fueling the emergence of new security services, focused on visibility into cloud data traffic, called cloud access security brokers (CASBs). They emerged in 2012, Gartner said, and are set to become an essential component of software-as-a-service deployments by 2017.

Hype aside -- and not forgetting Skyhigh, RedSeal and others have a vested interest in selling these services -- these reports and surveys at a minimum indicate that overconfidence by those charged with providing security for IT is at least as big a problem as the technological challenges they face. It might also be the toughest to fix.

Posted by Brian Robinson on Jun 05, 2015 at 12:36 PM

More bad news: The bad guys are getting better

If there’s one lesson to be gained from all the security breaches and revelations of major bugs in security protocols in 2014, it’s that attackers are upping their game and finding more opportunities. That’s only reinforced by several new studies.

German security company G Data, for example, reported a huge increase in the number of new malware strains in the second half of the year -- on average, a new type was discovered every 3.75 seconds! For the year as a whole, just under six million new malware strains were seen in the wild, some 77 percent more than 2013's total.

Not all kinds of malware saw an increase. Those using backdoor vulnerabilities in software fell, for example, and worms and spyware remained relatively flat. But rootkits, while still a very small percentage of the overall number of malware, jumped more than ten-fold in the second half of the year.

Rootkits are components bundled with malware that embed the malicious part of the package deep in a system and ensure the persistence of follow-on attacks by helping the malware evade the scanners and monitors now used to detect it.

Not surprisingly, malware developers are mainly targeting the ubiquitous Microsoft platforms, with malware programmed as .NET applications continuing to rise. Overall, new variants for Windows platforms made up 99.9 percent of the new malware variants.

More problems could arise with Microsoft’s withdrawal of support for Windows XP in April last year, G Data said, because systems still using this operating system are “unprotected against attacks on existing or newly discovered security holes going forward.”

Akamai Technologies' most recent State of the Internet survey similarly reported more than double the number of distributed denial of service attacks in the first quarter of 2015 compared with the first quarter of 2014, and more than 35 percent above the number in the final quarter of 2014.

DDoS attacks may not be such a big deal for the public sector, which gets only around two percent of the total. But Akamai noted a potentially dangerous trend in the 2015 attacks, with attacks peaking at 100 Gbps or more making up a significantly bigger share of the total. That suggests attackers have been developing better ways to maximize the impact of their work.

At the rate attacks are progressing, Akamai said, security researchers are concerned about what attackers may be able to accomplish by this time next year. Add to that the fact that employing current attack techniques “has not required much skill,” and even relatively inexperienced attackers could be capable of causing major damage as more potent tools enter the picture and attack bandwidth increases.

And what, then, to make of the recent news that the Defense Department is going to take a “no holds barred” approach with users who threaten security with sloppy cyber habits? Bad cyber hygiene “is just eating our shorts,” according to David Cotton, deputy CIO for Information Enterprise at the Pentagon.

Users will be given a very short time to comply with DOD password-security policies or to change behavior that invites phishing attacks while using third-party social media accounts. The Pentagon is also pushing vendors to come up with more timely patches for security vulnerabilities, though recent research also points to the need to make sure patches are applied to all hosts at the same time.

The DOD, along with the intelligence agencies, is considered to be better at security than most other parts of the government, so it’s a little startling to read that the Pentagon’s crackdown is aimed at giving department leadership “a consolidated view of basic network vulnerabilities.”

Isn’t this supposed to be the very first thing organizations do when assessing security needs? And if the DOD doesn’t even have this bit of the puzzle sorted out, how is it ever going to successfully defend against the threats indicated by the G Data and Akamai reports?

Perhaps it’s finally time for government organizations to give up on security that is user focused. The Cloud Security Alliance’s “Dark Cloud” project could be one way of doing that.

Posted by Brian Robinson on May 22, 2015 at 8:37 AM