
Making encryption easier with blockchain

Moving public key infrastructure to a distributed ledger could offer a more secure, less expensive way to provide online authentication.

Public key infrastructure is an effective way of ensuring the security of encrypted data, but not many people use it. That’s because PKI requires users to acquire a public key certificate from a centralized certificate authority that issues and manages the critical keys -- at a cost greater than many individuals and small businesses are prepared to pay.

One company, however, wants to move the key storage from a centralized authority to a distributed ledger.

Respect Network Corp., a Seattle-based network technology company since acquired by Evernym, is using a $750,000 award from the Department of Homeland Security to develop a blockchain-based solution for decentralized creation and management of key certificates for encryption and identity management. The Decentralized Key Management System employs a three-layer architecture that includes a distributed-ledger layer, a cloud-based agent layer and an edge layer of apps or wallets that individuals use to access keys and data.

“In DKMS the public keys needed to verify any user are stored on a blockchain, which as you know provides an extremely tamperproof, decentralized solution to immutable storage,” Evernym Chief Trust Officer Drummond Reed said. “That's the primary innovation that makes DKMS possible.”
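As a minimal sketch of that pattern -- an illustration, not Evernym's implementation -- the key pair below is generated on an edge device, only the public verification key is published to the ledger (stubbed here as a dictionary), and any party can resolve that key to verify the user's signature:

```python
# Sketch only: the "ledger" is a dictionary standing in for the distributed-ledger layer.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

LEDGER = {}  # hypothetical stand-in: identifier -> public verification key bytes

def verify_user(identifier: str, message: bytes, signature: bytes) -> bool:
    """Resolve the user's public key from the ledger and check the signature."""
    key = Ed25519PublicKey.from_public_bytes(LEDGER[identifier])
    try:
        key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

# The edge device generates its key pair locally; the private key never leaves the device.
edge_key = Ed25519PrivateKey.generate()
LEDGER["did:example:alice"] = edge_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw)

signature = edge_key.sign(b"login challenge")
assert verify_user("did:example:alice", b"login challenge", signature)
```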

Private keys are also more secure, Reed said, because they reside on each user’s devices (the edge layer) instead of with a centralized authority.  “With DKMS, there is no giant stash of private keys or other secrets anywhere,” he said.  Without the private-key “honeypots” to target, attackers “would have to try to break into the secure elements on edge devices -- mobile phones, tablets, laptops -- for each and every user they want to try to attack.” 

The edge agents being developed for DKMS, which interact with DKMS cloud agents, also make it possible to provide backup and recovery options that weren’t feasible before. “They will make it easy enough for any average internet [user] to start using a digital wallet and easily recover it if they lose all their devices,” Reed said.
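Reed doesn't spell out the recovery mechanism here, but one simplified illustration (not the DKMS protocol) of how a wallet backup can avoid recreating a central honeypot is to split the recovery key into shares, so that the share a cloud agent holds reveals nothing on its own:

```python
# Illustration only -- not the DKMS recovery design. A recovery key is split into
# two shares; neither share alone reveals the key, and recombining both restores it.
import secrets

def split_recovery_key(key: bytes) -> tuple:
    trustee_share = secrets.token_bytes(len(key))                    # held by a trustee or second device
    agent_share = bytes(a ^ b for a, b in zip(key, trustee_share))   # held by the cloud agent
    return trustee_share, agent_share

def recover_key(share_a: bytes, share_b: bytes) -> bytes:
    """Reconstruct the recovery key after the user loses all edge devices."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

wallet_recovery_key = secrets.token_bytes(32)
trustee_share, agent_share = split_recovery_key(wallet_recovery_key)
assert recover_key(trustee_share, agent_share) == wallet_recovery_key
```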

DKMS has three other major advantages, according to Reed.  First, since there is no central authority, there’s no single point of failure that can impact a large number of users.  Second, DKMS is not dependent on proprietary software the way traditional service providers are.  Third, DKMS has all the resiliency of distributed-ledger technology.

The company is developing prototypes of edge and cloud agents in the open-source Hyperledger Indy project, a distributed ledger and utility library purpose-built for decentralized identity. It expects the system to be available for proof-of-concept deployments in the first half of 2018.

Posted by Patrick Marshall on Apr 27, 2018 at 2:16 PM


The problem with PIV cards

Personal identity verification cards -- smartcards that contain an employee’s photo, biometrics, encryption keys and credentials -- are a great idea. They offer not only secure authentication but also the ability to centrally manage an individual’s access to federal resources. Unfortunately, PIV cards have never quite lived up to their promise to control access to federal networks and physical locations.

Some of the problems have been technical.  There were glitches using PIV card readers with Windows 7, for example.  And there is always the challenge of keeping up with new technologies.  It was only in 2015, for example, that the National Institute of Standards and Technology updated its specifications for the cards to allow them to work with smartphones.

But the biggest problems with PIV cards have been administrative.  A 2017 Government Accountability Office report, for example, noted that agencies often fail to retrieve PIV cards from separated employees and contractors.  And a February 2018 report by the Department of Homeland Security inspector general found that the department lacks effective protocols to prevent no-longer-authorized contractors from using unretrieved PIV cards to access facilities and networks.

According to Dan Conrad, federal CTO for One Identity, a California-based vendor of identity access management solutions, PIV cards face another major challenge -- working with legacy programs.

“Anytime I authenticate with my PIV card, the validity of the certificate on the card is checked, so in theory [an agency] can revoke the certificate centrally,” Conrad said.  The problem, he said, is that many applications still in use aren’t PIV compatible and, in some cases, the vendor may no longer be in business.  “What the organizations are looking for is someone to go back and rewrite the authentication modules of this application so it can be rolled under the PIV module,” said Conrad.  “That is almost impossible in a lot of situations, or extremely expensive.”
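The certificate check Conrad describes is, in practice, a chain validation plus a revocation lookup. A rough sketch of the revocation step, assuming Python's cryptography package, PEM-encoded certificates and a hypothetical OCSP responder URL, might look like this:

```python
# Hedged sketch of an OCSP revocation check -- inputs and responder URL are hypothetical.
import urllib.request
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

def certificate_is_good(cert_pem: bytes, issuer_pem: bytes, responder_url: str) -> bool:
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)
    request = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer, hashes.SHA1())
               .build()
               .public_bytes(serialization.Encoding.DER))
    http_req = urllib.request.Request(
        responder_url, data=request,
        headers={"Content-Type": "application/ocsp-request"})
    response = ocsp.load_der_ocsp_response(urllib.request.urlopen(http_req).read())
    # A revoked PIV certificate fails here, which is what lets an agency cut off access centrally.
    return (response.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL
            and response.certificate_status == ocsp.OCSPCertStatus.GOOD)
```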

As a result, agencies using such applications create “exceptions lists” that allow individuals to access those programs without being under the PIV umbrella.  Other options include abandoning incompatible legacy applications or acquiring a PIV-compatible single sign-on solution. Which option is best will, of course, depend on how critical the legacy software is to the agency’s mission.

One Identity and several other vendors offer workarounds. 

“Our solution for that is a bridge solution that will take applications that require usernames and passwords, and we encrypt and walletize those usernames and passwords and inject them after authenticating with a PIV,” said Conrad. “Upon successful entrance of your PIN and certificate validation, we decrypt the password from the wallet and then inject the credentials. The user doesn’t even know what they are."
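A rough sketch of that walletize-and-inject pattern -- an illustration, not One Identity's product -- stores the legacy credentials only in encrypted form and releases them only after the PIV step (PIN entry and certificate validation) has succeeded:

```python
# Illustration of the credential-wallet bridge; piv_authenticated stands in for the
# real PIN-plus-certificate check, and the wallet key would be protected by the
# bridge in practice, not created inline as it is here.
from cryptography.fernet import Fernet

wallet = Fernet(Fernet.generate_key())

def walletize(username: str, password: str) -> bytes:
    """Encrypt a legacy username/password pair for storage."""
    return wallet.encrypt(f"{username}:{password}".encode())

def inject_credentials(token: bytes, piv_authenticated: bool) -> tuple:
    """Release decrypted credentials to the legacy app only after PIV auth succeeds."""
    if not piv_authenticated:
        raise PermissionError("PIV authentication required")
    username, password = wallet.decrypt(token).decode().split(":", 1)
    return username, password  # handed to the legacy application; the user never sees them
```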

“A small organization may have only one or two applications that only a few users use,” noted Conrad.  “They may be able to just go out and get a new [PIV-compatible] application that does the same thing.”

Another shortcoming of the current generation of PIV cards, Conrad said, is that many of the apps people use on smartphones also fall outside the PIV umbrella because they don’t accommodate derived credentials.

Posted by Patrick Marshall on Apr 10, 2018 at 12:25 PM



Crowdsourcing cyber threat defense

Paul Revere’s ride on April 18, 1775, to warn colonial troops at Concord, Mass., after seeing two lanterns in the Old North Church in Boston signaling the approaching British was arguably, said Mark Jaster, “the first successful evidence of an intelligence network operating in the United States.”

So when he founded his cybersecurity company -- 418 Intelligence Corp. -- Jaster selected a name that referred to that early intelligence network, with “418” representing April 18.

Far from focusing on old intelligence technologies, however, Herndon, Va.-based 418 Intelligence has just received a $350,000 grant from the Department of Homeland Security to develop a unique game-based forecasting platform for responding to cyber threats.

The idea behind 418 Intelligence’s platform is that when an organization detects a cyber attack or threat, it will submit information about the event to the platform, where it can be assessed by a recruited crowd of cybersecurity specialists.

“We are designing an online game experience that I call ‘fantasy football for cyber,’" Jaster said. “We are asking defenders to come to the table with their playbooks. The whole point is to ask the crowd -- who are journeymen cybersecurity analysts -- under this condition and within these parameters, what is going to happen?”

Given the sensitivity companies and government agencies have about revealing vulnerabilities, Jaster said a critical part of the platform design is developing a utility to anonymize the submissions of cyber attack details. Integrating a commercial technology that will provide rules-based encryption and digital-rights access for safely sharing the data is also vital, he said.
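As an illustration of that anonymization step -- not 418 Intelligence's actual utility -- a submission might replace the reporting organization with a salted pseudonym and pass along only the fields the crowd needs:

```python
# Illustrative anonymization pass; the field names and salt handling are assumptions.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # per-deployment salt so pseudonyms can't be reversed by guessing names

def anonymize_incident(incident: dict) -> dict:
    pseudonym = hashlib.sha256((SALT + incident["organization"]).encode()).hexdigest()[:12]
    return {
        "org": pseudonym,                      # stable pseudonym instead of the real name
        "attack_tactic": incident["attack_tactic"],
        "indicators": incident["indicators"],  # assumes indicators were already scrubbed of internal hosts
    }
```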

Once the analysts -- recruited primarily from government agencies, though Jaster said he also plans to reach out to the private sector for participation -- have the threat data in hand, they submit what they believe would be the most effective steps for neutralizing the threat. “Then we ask an observer to bet on who is going to be most effective, and just how effective a specific control tactic will be against a specific attack tactic,” Jaster said. The goal, he said, is to get “a calibrated estimate” of a defense’s effectiveness from the recruited crowd “that we then use as the prompt for real observers on the outside to validate whether those estimates are accurate with anonymized data.”

So what’s in it for the crowd of analysts?  “There are a couple of reasons why they will want to be involved,” Jaster said.  “One, just like the open source communities reward people for being experts in software development, they get reputation. They will gain stature in the community that they can use in their professional work to advance themselves and to gain influence and to sharpen their skills," he said. "That turns out to usually be the most lasting motivator.”

Second, Jaster said, the company also plans to commercialize crowdsourced cyber analysis. Participating analysts would benefit both from access to real, anonymized data and from the income they could earn from that work.

“What we are proposing,” Jaster said, “is to be the first on-demand incident response service out there.”

Posted by Patrick Marshall on Feb 08, 2018 at 2:36 PM



New breach, same lessons

The story of recent breaches at the credit-rating agency Equifax, which may have involved the personal details of nearly 150 million people, has probably just begun, given the confusion that still surrounds the events. But it’s brought the security of open source software to the fore yet again, and highlighted the ongoing struggle organizations still have with cybersecurity.

So far there’s no indication of how many U.S. government organizations may have been affected by the software bug that apparently led to the Equifax mess. However, it’s already been compared to some of the most damaging breaches in recent years, such as those at Sony and Target and the 2015 breach at the Office of Personnel Management, which could have leaked sensitive details of more than 20 million U.S. government employees.

It also brings up elements of the debate that followed the 2014 Heartbleed bug, a vulnerability in the OpenSSL software library. That discovery launched a back-and-forth argument about the inherent security of open source software and how much responsibility organizations should bear for the security of applications that use it.

The Equifax breach was blamed on a vulnerability in the Apache Software Foundation’s Struts version 2, an open source framework for building Java web applications. There have been a number of announcements of Struts vulnerabilities over the past few months, the most recent issued by the Center for Internet Security on Sept. 15.

Depending on the privileges associated with the application, according to CIS, an attacker could “install programs; view, change, or delete data; or create new accounts with full user rights.” It put the risk for medium and large government enterprises at high.

It was an earlier vulnerability, publicly announced Sept. 4, that’s thought to have been the one attackers exploited. However, the Apache Foundation itself said that, since the security breach at Equifax was already detected on July 5, it’s more likely that an older, previously announced vulnerability on an unpatched Equifax server, or a zero-day exploit of a then-unknown vulnerability, was the culprit.

The timelines on this are confusing. There were detailed stories of attacks using a Struts 2 vulnerability as far back as March 2017, with attackers carrying out a series of probing acts and injecting malware into systems.

Back in the Heartbleed days, detractors claimed that open source software was inherently insecure because developers just didn’t keep as close an eye on security issues and weren’t as systematic in finding potential holes in code as proprietary developers were. Proponents, on the other hand, said that open source was, in fact, inherently at least as secure as other software and that it was safe for government agencies to use.

It’s not an academic issue. Sonatype, for example, claims that some 80 to 90 percent of modern applications contain open source components, and a recent company report said an “insatiable appetite for innovation” was fueling both the supply of and demand for open source components.

The Apache Foundation made the case that its developers put a huge effort into both hardening its products and fixing problems as they become known. However, it said, since vulnerability detection and exploitation is now a professional business, “it is and always will be likely that attacks will occur even before we fully disclose the attack vectors.”

In other words, it’s up to organizations that use Struts, or any other open source product for that matter, to treat security just as they would for a proprietary product: assume there are flaws in the software, put security layers in place and look for any unusual access to public-facing web pages. It’s critical that organizations look for security patch releases and software updates and act on them immediately.
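As a minimal, hypothetical sketch of that last point, a build check could flag any Maven project whose declared struts2-core version is older than a patched release; the version threshold would come from the relevant Apache advisory:

```python
# Hypothetical dependency check against a Maven pom.xml; the fixed-version
# threshold is a placeholder to be taken from the vendor's security advisory.
import xml.etree.ElementTree as ET

NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def _numeric(version: str) -> tuple:
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def struts_needs_patch(pom_path: str, min_fixed: str) -> bool:
    root = ET.parse(pom_path).getroot()
    for dep in root.findall(".//m:dependency", NS):
        artifact = dep.findtext("m:artifactId", default="", namespaces=NS)
        version = dep.findtext("m:version", default="", namespaces=NS)
        if artifact == "struts2-core" and version:
            return _numeric(version) < _numeric(min_fixed)
    return False  # struts2-core not declared here (its version may be managed elsewhere)

# Example: struts_needs_patch("pom.xml", "2.3.32")
```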

Sound advice for everyone. If only everyone followed it.

Posted by Brian Robinson on Sep 26, 2017 at 12:34 PM