
Crowdsourcing cyber threat defense

Paul Revere’s ride on April 18, 1775, to warn colonial troops at Concord, Mass., after seeing two lanterns in Boston’s Old North Church signaling the approaching British, was arguably, said Mark Jaster, “the first successful evidence of an intelligence network operating in the United States.”

So when he founded his cybersecurity company -- 418 Intelligence Corp. -- Jaster selected a name that referred to that early intelligence network, with “418” representing April 18.

Far from focusing on old intelligence technologies, however, Herndon, Va.-based 418 Intelligence has just received a $350,000 grant from the Department of Homeland Security to develop a unique game-based forecasting platform for responding to cyber threats.

The idea behind 418 Intelligence’s platform is that when an organization detects a cyber attack or threat, it will submit information about the event to the platform, where it can be assessed by a recruited crowd of cybersecurity specialists.

“We are designing an online game experience that I call ‘fantasy football for cyber,’” Jaster said. “We are asking defenders to come to the table with their playbooks. The whole point is to ask the crowd -- who are journeymen cybersecurity analysts -- under this condition and within these parameters, what is going to happen?”

Given the sensitivity companies and government agencies have about revealing vulnerabilities, Jaster said a critical part of the platform design is developing a utility to anonymize the submissions of cyber attack details. Integrating a commercial technology that will provide rules-based encryption and digital-rights access for safely sharing the data is also vital, he said.
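What might that anonymization utility look like? As a minimal sketch -- assuming a simple report format, since 418 Intelligence hasn’t published its design -- a submission pipeline could pseudonymize the victim organization with a salted hash and mask internal IP addresses before anything reaches the crowd:

```python
import hashlib
import ipaddress
import re

# Salt for pseudonymization; in a real system this would be a managed
# secret, rotated per submission batch.
SALT = b"rotate-this-per-batch"

IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def pseudonymize(value: str) -> str:
    """Replace an identifying value with a salted, truncated hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def mask_ip(match: re.Match) -> str:
    """Mask private addresses; leave public indicators intact for analysis."""
    try:
        ip = ipaddress.ip_address(match.group())
    except ValueError:  # not a valid address after all
        return match.group()
    return "x.x.x.x" if ip.is_private else match.group()

def anonymize_report(report: dict) -> dict:
    """Strip a threat report of identifying details before sharing it.

    Keeps the tactical fields analysts need (technique, description)
    while hiding who the victim is and what its network looks like.
    """
    clean = dict(report)
    clean["org"] = pseudonymize(report["org"])
    clean["description"] = IP_RE.sub(mask_ip, report["description"])
    return clean

print(anonymize_report({
    "org": "Example Federal Agency",
    "technique": "spearphishing attachment",
    "description": "Beacon traffic from 10.2.3.4 to known C2 infrastructure",
}))
```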

Once the analysts -- recruited primarily from government agencies, though Jaster said he also plans to reach out to the private sector for participation -- have the threat data in hand, they submit what they believe would be the most effective steps for neutralizing the threat. “Then we ask an observer to bet on who is going to be most effective, and just how effective a specific control tactic will be against a specific attack tactic,” Jaster said. The goal, he said, is to get “a calibrated estimate” of a defense’s effectiveness from the recruited crowd “that we then use as the prompt for real observers on the outside to validate whether those estimates are accurate with anonymized data.”
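Jaster didn’t detail the math behind those calibrated estimates, but one plausible mechanism -- purely an illustration, not 418 Intelligence’s actual algorithm -- is to weight each analyst’s probability forecast by his or her historical accuracy, using the Brier score familiar from forecasting tournaments:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes;
    lower means better calibrated."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def crowd_estimate(current, history):
    """Aggregate the crowd's probabilities, weighting each analyst by
    past accuracy.

    current: {analyst: P(this control neutralizes this attack)}
    history: {analyst: ([past forecasts], [actual 0/1 outcomes])}
    """
    weights = {
        a: 1.0 / (brier_score(*history[a]) + 0.01)  # +0.01 avoids divide-by-zero
        for a in current
    }
    total = sum(weights.values())
    return sum(current[a] * weights[a] for a in current) / total

history = {
    "alice": ([0.9, 0.2, 0.8], [1, 0, 1]),  # historically well calibrated
    "bob":   ([0.9, 0.9, 0.1], [0, 0, 1]),  # historically poorly calibrated
}
forecasts = {"alice": 0.75, "bob": 0.30}  # alice's view dominates the estimate
print(f"Calibrated crowd estimate: {crowd_estimate(forecasts, history):.2f}")
```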

So what’s in it for the crowd of analysts? “There are a couple of reasons why they will want to be involved,” Jaster said. “One, just like the open source communities reward people for being experts in software development, they get reputation. They will gain stature in the community that they can use in their professional work to advance themselves and to gain influence and to sharpen their skills,” he said. “That turns out to usually be the most lasting motivator.”

Second, Jaster said, the company also plans to commercialize the crowdsourcing of cyber analysis. Participating analysts would benefit both from access to real, anonymized data and from the opportunity to earn income.

“What we are proposing,” Jaster said, “is to be the first on-demand incident response service out there.”

Posted by Patrick Marshall on Feb 08, 2018 at 2:36 PM


New breach, same lessons

The story of the recent breaches at the credit-rating agency Equifax, which may have involved the personal details of nearly 150 million people, has probably just begun, given the confusion that still surrounds the events. But it has brought the security of open source software to the fore yet again and highlighted the ongoing struggle organizations have with cybersecurity.

So far there’s no indication of how many U.S. government organizations may have been affected by the software bug that apparently led to the Equifax mess. However, it’s already been compared to some of the most damaging breaches of recent years, such as those at Sony and Target, and the 2015 breach at the Office of Personnel Management, which could have leaked sensitive details of more than 20 million U.S. government employees.

It also revives elements of the debate that followed the 2014 Heartbleed bug, a vulnerability in the OpenSSL software library. That discovery launched a back-and-forth argument about the inherent security of open source software and how much responsibility organizations should bear for the security of applications built on it.

The Equifax breach was blamed on a vulnerability in the Apache Software Foundation’s Struts version 2, an open source framework for building Java web applications. There have been a number of announcements of Struts vulnerabilities over the past few months, the most recent issued by the Center for Internet Security on Sept. 15.

Depending on the privileges associated with the application, according to CIS, an attacker could “install programs; view, change, or delete data; or create new accounts with full user rights.” It put the risk for medium and large government enterprises at high.

It was an earlier vulnerability, publicly announced Sept. 4, that is thought to be the one attackers exploited. However, the Apache Foundation itself said that, given that the security breach at Equifax was detected as early as July 5, the culprit was more likely either an older, previously announced vulnerability on an unpatched Equifax server or a zero-day exploit of a then-unknown vulnerability.

The timelines here are confusing. There were detailed reports of attacks using a Struts 2 vulnerability as far back as March 2017, with attackers carrying out a series of probes and injecting malware into systems.

Back in the Heartbleed days, detractors claimed that open source software was inherently insecure because its developers just didn’t keep as close an eye on security issues, and weren’t as systematic in finding potential holes in code, as proprietary developers were. Proponents, on the other hand, said that open source was in fact inherently at least as secure as other software and that it was safe for government agencies to use.

It’s not an academic issue. Sonatype, for example, claims that some 80 to 90 percent of modern applications contain open source components, and a recent report from the company said an “insatiable appetite for innovation” was fueling both the supply of and demand for open source components.

The Apache Foundation made the case that its developers put a huge effort into both hardening its products and fixing problems as they become known. However, it said, since vulnerability detection and exploitation is now a professional business, “it is and always will be likely that attacks will occur even before we fully disclose the attack vectors.”

In other words, it’s up to organizations that use Struts -- or any other open source product, for that matter -- to treat its security just as they would a proprietary product’s: assume there are flaws in the software, put security layers in place and watch for any unusual access to public-facing web pages. Above all, organizations must look for security patch releases and software updates and act on them immediately.
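As a concrete sketch of that last point -- illustrative only, with fix-version thresholds that should be double-checked against Apache’s current security bulletins -- a short script can walk a deployment tree and flag Struts 2 core jars that predate the advisory fixes:

```python
import re
from pathlib import Path

# Fixed versions per the public advisories for CVE-2017-5638 (fixed in
# 2.3.32 / 2.5.10.1) and CVE-2017-9805 (fixed in 2.3.34 / 2.5.13). The
# thresholds below use the later fixes, which subsume the earlier ones --
# verify against Apache's current bulletins before relying on them.
FIXED = {(2, 3): (2, 3, 34), (2, 5): (2, 5, 13)}

JAR_RE = re.compile(r"struts2-core-(\d+)\.(\d+)\.(\d+)")

def vulnerable_struts_jars(root: str):
    """Walk a deployment tree and flag Struts 2 core jars older than the
    advisory fix versions."""
    findings = []
    for jar in Path(root).rglob("*.jar"):
        m = JAR_RE.search(jar.name)
        if not m:
            continue
        version = tuple(int(x) for x in m.groups())
        fixed = FIXED.get(version[:2])
        if fixed and version < fixed:
            findings.append((str(jar), ".".join(map(str, version))))
    return findings

for path, ver in vulnerable_struts_jars("/opt/webapps"):
    print(f"UPGRADE NEEDED: {path} (struts2-core {ver})")
```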

Sound advice for everyone. If only everyone followed it.

Posted by Brian Robinson on Sep 26, 2017 at 12:34 PM


NIST's how-to for prioritizing risk

Some of the hardest parts of a security professional’s job are identifying which elements in an enterprise infrastructure pose the greatest risk and keeping that infrastructure secure going forward. The underlying constraint in these considerations is how to do this with a less-than-infinite budget.

In many organizations, and certainly for most of government, that comes down to keeping systems up and running when at least some part of that infrastructure depends on legacy systems. Agencies can’t replace all of the aging machines and applications, so where should they invest scarce dollars to boost security, while at the same time making sure they don’t introduce problems that prevent that infrastructure from functioning properly?

That’s what the National Institute of Standards and Technology’s most recent guidance on risk assessment aims to address. Unlike other cybersecurity guidance NIST has published, however, this document includes a step-by-step process that agencies can use to identify the most critical parts of an infrastructure so they can better choose what to upgrade and where to spend their (usually scarce) dollars.

NIST itself said the new guidance builds on previous publications, such as SP 800-53 Rev. 4, SP 800-160 and SP 800-161, all of which also emphasized picking out critical parts of an infrastructure, but didn’t say how to do that.

A related publication, the NIST Cybersecurity Framework -- the answer to President Barack Obama’s 2013 Executive Order 13636 on “Improving Critical Infrastructure Cybersecurity” -- includes a detailed mechanism that organizations can use to better understand how to manage security risks.

The framework has become a standard document for both public- and private-sector organizations in establishing their approach to cybersecurity. In May, the Trump White House issued an executive order on strengthening federal cybersecurity that effectively made use of the NIST framework government policy.

The new NIST guide describes what it calls a “high-level criticality analysis process model,” which steps users through the various components needed to reach the end point: a detailed analysis of the criticality levels of all the programs, systems, subsystems, components and subcomponents in a particular enterprise.
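NIST’s actual process model is considerably richer, but a toy example (the scoring formula here is invented for illustration) conveys the basic idea of boiling an inventory down to ranked criticality levels by scoring each component on mission impact, exposure and how many other systems depend on it:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    mission_impact: int  # 1 (nuisance) .. 5 (mission fails if compromised)
    exposure: int        # 1 (isolated)  .. 5 (internet-facing)
    dependents: int      # how many other systems rely on this one

def criticality(c: Component) -> float:
    """Toy score: impact and exposure multiply, dependents amplify."""
    return c.mission_impact * c.exposure * (1 + 0.1 * c.dependents)

inventory = [
    Component("legacy payroll mainframe", mission_impact=5, exposure=2, dependents=7),
    Component("public web portal",        mission_impact=3, exposure=5, dependents=2),
    Component("internal wiki",            mission_impact=1, exposure=2, dependents=0),
]

# Highest scores first: these are the components to upgrade or wrap in
# extra security layers before spending anywhere else.
for c in sorted(inventory, key=criticality, reverse=True):
    print(f"{criticality(c):6.1f}  {c.name}")
```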

This kind of approach will give agencies more certainty in what they buy, and it won’t upset the business logic that supports an agency and its mission. After all, even though cybersecurity has certainly risen in the list of agency priorities, the main question most IT managers ask security product vendors is how any new tool will affect the normal running of current networks and systems.

The authors of NIST's new guidance believe their approach could eliminate the debate over the return on investment of security solutions versus the long-term resilience of systems. That’s something to be hoped for, but it may be a while before agency bosses shunt aside well-established ROI measures for something still as nebulous -- for now, anyway -- as resilience.

The new NIST publication does hint at the need for more actionable outcomes from all of the guidance -- from NIST and others -- that’s been published over the last few years. The House, for example, recently tried to attach metrics to the NIST framework through the NIST Cybersecurity Framework, Assessment and Auditing Act of 2017, which was introduced in February.

It would be a real advance if that effort produced actual metrics that could be used because it’s been notoriously hard to do that with any kind of specific security guidance. Each organization has very different needs when it comes to the application of security, so getting a general set of metrics to measure effectiveness may not be possible.

Still, the current draft of the NIST criticality guidance, which is open for comment until Aug. 18, gets halfway there. It at least promises to give users a better idea of what they have and how best to insert new security solutions and systems. That should make for a more certain and more effective acquisition process. And, who knows, it might eventually take its place alongside the NIST Cybersecurity Framework as a solid basis for government cybersecurity efforts.

Posted by Brian Robinson on Jul 24, 2017 at 10:33 AM


WannaCry: A preview of coming attacks?

The astonishing spread of the WannaCry ransomware that exploded onto the global scene on May 12 is not the work of genius malware developers. Rather, it is a clear example of the confluence of two trends: one that should have been strangled a long time ago, the other an inevitable result of technological progress.

Most people, if they’ve been paying attention, have noticed the recent growth in ransomware. In its 2017 Data Breach Investigations Report, Verizon said ransomware is now the fifth most common form of malware, up from 22nd most common in 2014.

Part of the reason for that jump is the increasingly sophisticated techniques used to create the malware and share the code. The WannaCry malware apparently uses code first developed by the Lazarus Group, a shady outfit that’s been linked to some of the biggest and most effective raids on bank and finance systems around the world. The rise of ransomware-as-a-service is apparently making sophisticated malware available to even the most technically deficient criminal.

WannaCry also took advantage of a Windows exploit called EternalBlue, developed by the National Security Agency, that attacks weaknesses in Microsoft’s SMBv1 (Server Message Block 1.0) protocol using a backdoor tool also created by the NSA. Unpatched Windows machines running older versions of the operating system -- Windows XP up through Windows 7 -- were vulnerable to WannaCry.

It’s not clear just how aware security professionals, in both the public and private sectors, are of the increasingly industrial nature of malware development and exploits. Malware creators are every bit as capable as their white-hat counterparts, and the infrastructure that makes malware easily obtainable by criminals is starting to mirror that of the legitimate software industry.

The other side of this picture is users’ continued foot-dragging on baseline, no-brainer security practices such as regularly patching their systems. Microsoft, for example, issued a security update for the SMBv1 vulnerability in March, but thousands of systems were still thought to be unpatched when the WannaCry ransomware was launched.

Microsoft took the unusual step of sending out an emergency custom patch for Windows XP, Windows 8 and Windows Server 2003 machines on the first day of the attack. It also suggested that users make other changes, such as blocking legacy protocols on their networks, to counter similar attacks in the future.
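One concrete, if partial, way to act on that advice is to audit which hosts on a network still accept connections on TCP port 445, the port WannaCry’s SMB exploit targeted. A minimal sketch -- an open port doesn’t prove SMBv1 is enabled, but it flags machines worth a closer look:

```python
import ipaddress
import socket

def hosts_listening_on_445(cidr: str, timeout: float = 0.5):
    """Flag hosts accepting TCP connections on 445, the SMB port WannaCry
    used to spread. An open port doesn't prove SMBv1 is enabled; it marks
    hosts that need a closer look. Only scan networks you administer."""
    exposed = []
    for host in ipaddress.ip_network(cidr).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(host), 445)) == 0:
                exposed.append(str(host))
    return exposed

if __name__ == "__main__":
    for host in hosts_listening_on_445("192.168.1.0/28"):
        print(f"SMB port open -- audit this host: {host}")
```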

One thing that’s still unclear is the attack’s potential impact on the government’s own agencies -- in this case, the NSA. The agency developed EternalBlue as a weapon in the fight against groups hostile to the U.S., but when it was stolen last year along with a stash of other NSA cyber weapons and the code eventually published, questions arose about whether the NSA was itself secure enough to be holding such potent hacking tools.

NSA officials also apparently worried about that. In a blog post, Brad Smith, Microsoft’s president and chief legal officer, said the WannaCry incident is yet another example of why the stockpiling of such things as EternalBlue, which wasn’t revealed to industry or anyone else, is such a problem.

“This is an emerging pattern in 2017,” he wrote. “We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world.”

All governments should treat this attack as a wake-up call, he said, and they must take a different approach and apply the same rules to cyber weapons as they do to weapons in the physical world.

That’s probably good advice. Up to now, cyberattacks have been non-lethal, but WannaCry showed just what real-world damage can be caused by ransomware and other types of malware. The UK’s National Health Service was one of the first and worst hit by WannaCry, and many hospitals there had to put off essential surgeries and other procedures.

With the pace of malware innovation seemingly outpacing the efforts of both public and private entities to protect against it, we must find a new way to deal with the issues malware poses. Microsoft, for example, wants a Digital Geneva Convention to govern global cybersecurity, which would include a requirement for governments to report vulnerabilities to software vendors rather than stockpile them.

Right now, that kind of collective response is a reach, but WannaCry has certainly shown just why it’s needed.

Posted by Brian Robinson on May 17, 2017 at 12:53 PM