Look for more attacks coming from privileged accounts

Abuse of privileged accounts has long been understood to be a major security concern, since it opens up broad access to an organization’s data and IT resources. Until now, however, the focus has mainly been on how this applies to the so-called insider threat.

Perhaps that has to change. A new report from security solutions vendor CyberArk, which surveyed many of the world’s top security forensics experts, makes the worrisome claim that most, if not all, of the more sophisticated, targeted attacks from the outside are due to exploitation of privileged accounts. And it’s something that many agencies are unaware of.

What’s more, attackers have become adept at using an organization’s business and trading partners to gain access to systems. Even if the organization itself has built its security to make direct attacks on its privileged accounts hard, small and medium-sized partners will probably not have the same experience and expertise. However, they may well have been given privileged access to the organization’s systems just because it’s the easier way to do business.

The oft-cited Target breach, for example, which resulted in the theft of millions of customers’ credit card account data, was due to attackers first getting Target network credentials through its air conditioning vendor.

In August this year, Department of Homeland Security contractor USIS, which does background checks on the agency’s employees, revealed a breach that had “all the markings of a state-sponsored attack” and exposed details on up to 25,000 DHS workers.

Even more startling is the report’s assertion that, on average, organizations have as many as three to four times the number of privileged accounts as they do employees. That makes for a very broad field from which attackers can gain access to those accounts.

This is often overlooked in organizations, according to Udi Mokady, CyberArk’s chief executive, since privileged accounts are rightly seen as the key to the IT kingdom and therefore are naturally assumed to be limited.

“In fact, there are many of them,” he said. “They are built into every piece of information technology so, at a minimum, there is one for every desktop system in the organization. But they are also built into every server and every application, operating system, network device, database and so on.”

Those all come with built-in accounts for a system administrator to monitor and control them. But in the modern environment of virtualized IT, every time a new system is spun up, yet another operating system and set of applications is created, along with a virtualization layer with its own administrative functions. So the number of privileged accounts tends “to grow exponentially,” Mokady said.

If all of this isn’t enough to give security professionals nightmares, the report also points out that the Internet of Things (IoT) is coming – and quickly. That speaks to all the embedded systems organizations will have to account for outside of the regular IT infrastructure, “the pieces of technology with a brain,” as Mokady put it.

Critical infrastructure organizations are just one example of what this will entail: energy companies, for instance, also have privileged accounts to manage the industrial control systems that run power plants and electricity grids.

“These are computers, though not the typical idea of a computer, but they are often targeted by attackers even more than the regular pieces of the IT infrastructure,” Mokady said.

They are also probably more vulnerable than regular IT systems. The report points out that embedded devices require regular firmware updates and typically have more complex quality assurance cycles, which may in turn cause them to lag behind other products as far as security is concerned.

The basic problem for an organization’s security defense is that when attackers gain access to privileged accounts, they can penetrate systems undetected, without throwing up alarms and red flags. Intrusion detection tools are of little use at that point. Agencies have to assume that breaches have occurred and that attackers are already inside their networks, and hunt for them.

Managing this situation will require a turnaround in traditional thinking about security, though the idea of first stopping attacks at the network perimeter is already losing ground to the idea that the focus has to be on the inside, which will help. Still, to guard against the privileged account threat, organizations have to cast a net wide enough to actively monitor and manage all of those accounts.

No question, that’s a major headache. But some current tools can be reworked to help, Mokady said, such as using firewalls to segment user access within an organization and using encryption to more closely guard data. There are also automated tools now on the market, like those from CyberArk itself, that can detect and count the privileged accounts within an organization, as well as automatically rotate credentials and encrypt and segregate them in secure vaults.
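To make the discovery step concrete, here is a minimal sketch in Python of the kind of inventory such tools automate, limited to a single Linux host. The group names are assumptions about a typical configuration; commercial products extend the same idea across Windows systems, databases, network devices and applications.

```python
# Minimal sketch of a privileged-account inventory on one Linux host.
# Assumes standard /etc/passwd and /etc/group layouts and common
# administrative group names; real tools cover far more account types.
import pwd
import grp

ADMIN_GROUPS = {"root", "sudo", "wheel"}  # assumed admin group names

def find_privileged_accounts():
    privileged = set()

    # UID 0 accounts have unrestricted root access.
    for entry in pwd.getpwall():
        if entry.pw_uid == 0:
            privileged.add(entry.pw_name)

    # Members of administrative groups can typically escalate to root.
    for group in grp.getgrall():
        if group.gr_name in ADMIN_GROUPS:
            privileged.update(group.gr_mem)

    return sorted(privileged)

if __name__ == "__main__":
    accounts = find_privileged_accounts()
    print(f"{len(accounts)} privileged accounts found:")
    for name in accounts:
        print(f"  {name}")
```

Even this toy version makes the report’s point: the count comes from the platform itself, not from a list anyone maintains, which is why the totals surprise so many organizations.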

The report lays out generic guidelines that, simply by tightening the regular security practices organizations already follow, will greatly improve their ability to defend against attacks:

  • Inventory privileged accounts.
  • Make it harder to get privileged access.
  • Proactively monitor privileged accounts.
  • Perform regular, recurrent housekeeping.
  • Monitor and limit the privilege of service accounts.
  • Apply patches as quickly as possible.
  • Practice classic defense in depth.

Posted by Brian Robinson on Nov 21, 2014


NIST marks top security requirements for government cloud

Cloud computing offers both unique advantages and challenges to government users. The advantages are well-advertised: Greater efficiency, economy and flexibility that can help agencies meet rapidly changing computing needs quickly and cheaply while being environmentally friendly.

Among the challenges, security is the most commonly cited concern in moving mission-critical services or sensitive information to the cloud.

To address this, a recently released roadmap from the National Institute of Standards and Technology recommends a plan to ensure cloud offerings meet government security needs while being flexible enough to adapt to the policies and requirements of multiple tenants, including foreign governments. The plan involves periodic assessments of security controls and development of international profiles and standards.

The recommendations are brief, making up only a small part of the 140-page document NIST released in October, but they are categorized as “high priority.”

The final version of the U.S. Government Cloud Computing Technology Roadmap has been several years in the making and reflects more than 200 comments on the initial draft, released in 2011.

Security is the first of three high-priority requirements addressed in volume one. Interoperability and portability – the ability of data to be moved from one cloud facility to another – are the others.

The government already has established the Federal Risk and Authorization Management Program (FedRAMP), which became operational in 2012 to ensure that cloud service providers meet a baseline set of federal security requirements, easing the task of certifying and authorizing the systems for government operations. But the NIST roadmap addresses security requirements that extend beyond federal users.

Security in the cloud is complicated by a number of factors. First, it upsets the traditional IT security model that relies on logical and physical system boundaries. “The inherent characteristics of cloud computing make these boundaries more complex and render traditional security mechanisms less effective,” the roadmap says.

Second, a cloud system has to meet not only U.S. government security needs, but also those of other customers sharing the environment, and so security policy must be de-coupled from U.S. government-specific policies. “Mechanisms must be developed to allow differing policies to co-exist and be implemented with a high degree of confidence, irrespective of geographical location and sovereignty.”

Moreover, a comprehensive set of security requirements has not yet been fully established, the roadmap says. “Security controls need to be reexamined in the context of cloud architecture, scale, reliance on networking, outsourcing and shared resources,” the authors write. “For example, multi-tenancy is an inherent cloud characteristic that intuitively raises concern that one consumer may impact the operations or access data of other tenants running on the same cloud.”

NIST’s recommended priority action plans for cloud security are to:

  • Continue to identify cloud consumer priority security requirements, on at least a quarterly basis.
  • Periodically identify and assess the extent to which risk can be mitigated through existing and emerging security controls and guidance. Identify gaps and modify existing controls and monitoring capabilities.
  • Develop neutral cloud security profiles, technical security attributes and test criteria.
  • Define an international standards-based conformity assessment system approach.

Posted by William Jackson on Nov 14, 2014


Attacks on open source call for better software design

Another day, another major vulnerability for government systems, it seems. This time it affects Drupal, a popular, open source content management system that’s been used for an increasing number of agency websites, including the White House’s.

An announcement from the organization that oversees Drupal warned several weeks ago of a vulnerability that would allow an attacker to mount an SQL injection attack, in which malicious database commands are slipped into a system through improperly handled input – in this case, via an error in Drupal’s database code. Depending on the content of the attacker’s request, it said, the attack could lead to privilege escalation, arbitrary PHP execution or other scenarios that put data at risk.
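For readers unfamiliar with the attack class, the short Python sketch below shows, in generic terms, how SQL injection works and how parameterized queries defend against it. It is only an illustration under assumed conditions – the table and inputs are made up, and it is not the Drupal code in question.

```python
# Generic illustration of SQL injection and the parameterized-query defense.
# This is not Drupal code; it only demonstrates the class of flaw involved.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is pasted directly into the SQL statement,
# so the OR clause rewrites the query and returns every row.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", vulnerable)

# Safer: a parameterized query treats the input purely as data,
# so the crafted string matches nothing instead of altering the query.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)
```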

However, the real danger of this vulnerability was revealed several weeks later, when the Drupal organization put out another announcement warning that, even if the patch issued at the time of the original announcement was applied, timing was critical. If sites weren’t patched “within hours” of the vulnerability announcement, the damage may have already been done.

Automated attacks began compromising sites shortly after the vulnerability was revealed, and those who delayed patching their systems should assume their sites were compromised.

Even if the system appears to be patched, the Drupal organization warned, attackers may have “fixed” it themselves after they injected their malware, in order to keep other attackers out and to try to fool IT administrators into thinking it was safe. Attackers may also have created backdoors to get into affected systems later.

For sites that weren’t patched in time, the Drupal security team outlined a lengthy process required to restore a website to health:

  • Take the website offline by replacing it with a static HTML page.
  • Notify the server’s administrator emphasizing that other sites or applications hosted on the same server might have been compromised via a backdoor installed by the initial attack.
  • Consider obtaining a new server, or otherwise remove all the website’s files and database from the server. (Keep a copy safe for later analysis.)
  • Restore the website (Drupal files, uploaded files and database) from backups from before 15 October 2014.
  • Update or patch the restored Drupal core code.
  • Put the restored and patched/updated website back online.
  • Manually redo any desired changes made to the website since the date of the restored backup.
  • Audit anything merged from the compromised website, such as custom code, configuration, files or other artifacts, to confirm they are correct and have not been tampered with.

This year has been an annus horribilis for open source software used in government. The Heartbleed OpenSSL bug revealed in April was considered “one of the scariest ever” in terms of its potential for attackers to get access to data. A steady stream of scares followed, and by October, when the Shellshock bug in the Bash shell used by Linux and Unix operating systems was announced, people seemed to be suffering from bug fatigue, even though it was deemed as potentially damaging as Heartbleed.

Consequently, warning bells started ringing, again, about the inherent security of open source software. As the theory goes, open source is, by nature, open to the widest range of bad guys who could compromise it. Various industry types have tried to downplay that, however, putting it down to human mistakes that could happen anywhere.

Others point out that most of the compromised software has one thing in common: it was built on pre-fabricated modules. That’s generally considered a benefit. Because developers don’t have to repeat what’s gone before, they can use a more Lego-like approach and only write code where it’s needed.

That leads to a much speedier time to market, but it also means that whatever errors are included in those modules get passed along. Some security vendors estimate that as much as 90 percent of the code used for in-house development is based on these components.

We need more and better tools that scan these components for potential vulnerabilities before they are tied into actual products. That’s something the National Institute of Standards and Technology, for example, has recognized with its recent effort to develop better guidelines for systems and software design.

On a related note, Google recently came out with its nogotofail tool, which can be used to test networks for weak Transport Layer Security (TLS) and Secure Sockets Layer (SSL) connections. That won’t address every bug out there – it doesn’t address the Drupal bug, for example – but it will go some way toward uncovering the kinds of vulnerabilities that Heartbleed and similar bugs introduce.
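nogotofail itself is a network man-in-the-middle test harness, so the Python snippet below is only a minimal sketch of one check in the same family: confirming that a client connection actually validates the server’s certificate and hostname, and reporting which protocol version was negotiated. The target hostname is an arbitrary example, not part of any tool’s documentation.

```python
# Minimal sketch of a TLS sanity check: connect to a host with full
# certificate and hostname verification enabled, then report the
# negotiated protocol and cipher. Tools like nogotofail look for
# clients and networks where checks of this kind silently fail.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # cert + hostname checks on
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}: negotiated {tls.version()} "
                  f"with cipher {tls.cipher()[0]}")

if __name__ == "__main__":
    check_tls("www.example.com")  # example target; substitute your own host
```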

Posted by Brian Robinson on Nov 07, 2014


Critics await ‘The Return of Open Enrollment’

Last year’s rollout of online health insurance exchanges under the Affordable Care Act was – to put it mildly – disappointing. It turned out that providing insurance to millions of people, many of whom had not been covered before, was a lot more complex than expected. What’s more, neither the technology nor the processes were up to the job.

The situation wasn’t helped by all of the states that refused to establish their own online portals, putting additional pressure on the central federal site at HealthCare.gov. Still, when the dust had settled, more than 7 million people had enrolled to buy insurance.

When the exchanges failed to perform as expected during the first ACA open enrollment period, call centers provided a vital backup. Maximus, the Reston, Va., company that supports many government health and human services programs, provided call center services, fielding 4.8 million calls for five state exchanges and the District of Columbia as well as the federal site.

With the second open enrollment period set to open Nov. 15, what does the company expect this time around? “The system is much more mature now,” said Jim Miller, senior vice president for strategic solutions at Maximus. “There has been an awful lot going on in the last year.”

But that doesn’t mean things will be easy. While Maximus and the exchange operators have the experience from OE1 to draw on, OE2 is expected to present a new set of challenges.

The federal and state sites that failed last year have undergone major overhauls, and they should be better able to perform. But the upcoming enrollment period will be shorter, from Nov. 15 through Feb. 15. New sets of questions and problems are expected as many of those already insured come back to renew their coverage. And after the low-hanging fruit was addressed last time, a new harder-to-reach population is being pursued this time around.

All of which means the call centers are gearing up for another busy season. But anticipating failure is what call centers are all about, Miller said. “Our responsibility is the alternative to success. We have to be ready for any contingency. We have to ask ourselves, what are the likely problems?”  

Maximus uses interactive voice response to direct calls and access the proper resources, customer relationship management software to gather information on calls and callers, and knowledge management systems to generate scripts addressing common problems.

The company anticipated significant problems last year in providing a complex product such as health insurance to first-time buyers. But it didn’t expect the almost complete failure of technology on many sites that kept customers from connecting or finishing their enrollment. Because of that, the number of agents in place to handle calls had to be scaled up from an initial 2,500 to 4,000. “We were able to flex to meet that demand,” Miller said.

But it wasn’t just a failure of technology that drove people from their browsers to their phones. Although online exchanges are the preferred method for enrolling and selecting policies for both the states and the federal exchanges, many users are not comfortable with self-service websites and want to talk to a real person.

One of the problems with the initial rollout of ACA enrollment, in addition to underestimating the complexity of the process being automated, was overestimating what technology was capable of achieving. Technology alone does not serve all citizen needs, even when it works.

Still, with upgrades to the exchanges and an increased focus on the needs of citizens, “we think it will go better this year,” Miller said.

Posted by William Jackson on Oct 31, 2014