According to experts, there is a growing deficit of students and graduates with the skills needed to maintain and protect the nation’s IT systems. Jobs are waiting to be filled, but schools — particularly K-12 — are not providing the education needed to prepare students for them.
“We have a shortage of talent,” said Cisco’s chief security officer John Stewart. According to the company’s 2014 annual security report, there currently is a global shortage of 1 million security professionals at a time when the number and complexity of attacks against IT systems are growing. “Every enterprise is receiving more security alerts about services, software and hardware” that have to be evaluated, Stewart said, but there are not enough people to respond.
And government is not immune. In the past few months the Federal Election Commission was hacked, information on more than 1,500 persons was mistakenly mailed out by the VA Medical Center at Walla Walla, Wash., the Colorado governor’s office lost information on 18,800 state employees and Loudoun County Public Schools in Ashburn, Va., exposed student and staff data online.
The situation is only expected to get worse. At a recent hearing before the House Science, Space and Technology Subcommittee on Research and Technology, figures from the Bureau of Labor Statistics were cited predicting 1.4 million new computing jobs would be created in the next 10 years. Over the same time, however, the National Science Foundation predicted only 400,000 new computer science graduates would be available to fill them.
Most of these jobs are not in tech companies, said Hadi Partovi, founder of Code.org, a nonprofit that promotes computer science education. An understanding of software and computers is required knowledge in the 21st century and needs to be taught in primary and secondary schools, he told lawmakers.
It is common knowledge that youngsters are tech savvy. But there is a difference in being able to use a device and understanding how it works. Real computer literacy, which involves some knowledge of programming and what is going on behind the interface, is something that must be taught.
The need for more trained professionals has been recognized for some years now, and colleges and universities are stepping up to improve computer science education, including cybersecurity, in their graduate and undergraduate programs. But students are not graduating from high school with the skills needed to take advantage of these programs.
The federal government spends about $3 billion a year to promote science, technology, engineering and math education, but only about 2 percent of that investment goes to computer science, said Partovi, and an alarming 90 percent of U.S. high schools have no formal computer science classes.
There is no question that kids like computers. Teaching them to understand devices beyond the touch screen should not be that much of a challenge. Doing so would benefit not only the students but the rest of our society as well.
Posted by William Jackson on Jan 17, 2014 at 8:42 AM
When it comes to IT security, data is the crown jewel. This is not to say that networks and other systems are not important. A compromise anywhere could expose resources in your enterprise to manipulation or theft. But it is the data your systems store and use that is the most valuable target.
This is why mobile computing and BYOD are problematic. How do you protect your data when it is being accessed by and used on devices outside your control? The immediate reaction to this challenge is to forbid access, but that can be counterproductive, warns Alexander Watson, director of security research at Websense.
“People will find a way around things that stop them from getting their jobs done,” says Watson.
And employees today expect to use mobile devices to get their jobs done, no matter where they are. If blocked, they will work around restrictions and create an insider threat — unintentional, perhaps, but a threat just the same. The solution is to make data security an enabler for mobile working rather than a roadblock.
The underlying problem in mobile computing is not new. Security generally has been an afterthought in computing, and security operations were set up separately from the IT shop. As a result, security became the bad guy who tells you that you can’t do something. It didn’t take long for this to be recognized as a problem. Consequently, the trend has been to move security out of its silo and integrate it more tightly with IT and business operations. That way it can help with missions rather than interfere.
But patterns tend to repeat in IT, and the same mistake often follows each new technology: belated attempts at security inhibit the use of new tools until they are forced on the enterprise. However, security in mobile devices, particularly in increasingly powerful and useful smartphones and tablets, is evolving to enable meaningful authentication, access control and data security.
Biometric authentication is emerging for phones with Apple’s introduction of a fingerprint scanner in its iPhone 5s. It’s imperfect, but a step forward in security and convenience. Card readers for devices can enable use of government CAC and PIV cards, and software credentials derived from these cards can be used for authentication and access.
Software agents can also apply data-loss-prevention policies on mobile devices. And there are software-hardware solutions such as the Trusted Execution Environment, which is a secure area on a phone’s main processor to provide security against software attacks. Independent processor chips can also be included in handsets to enable a secure work environment and secure communications channels.
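As a toy illustration of the kind of content-inspection rule a mobile DLP agent might enforce before letting data leave a device (the pattern and policy here are illustrative assumptions, not any vendor’s product):

```python
import re

# Illustrative detector for U.S. Social Security numbers. Real DLP
# products use far richer detection: contextual keywords, checksums,
# document fingerprints and exact-data matching.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def violates_dlp_policy(outbound_text: str) -> bool:
    """Return True if the outbound message appears to contain an SSN."""
    return SSN_PATTERN.search(outbound_text) is not None

# On a policy hit, an agent would typically block, quarantine or
# encrypt the message rather than allow it to be sent in the clear.
```

The point of the sketch is only that the check runs on the device, at the moment data tries to leave it, which is what lets policy follow the data rather than the network perimeter.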
None of these solutions are fully mature and no security is perfect. But if users and organizations demand these features in products out of the box, personal devices — which already are finding their way into government and private sector work environments — can become not only safe to use, but productive. “Security becomes an enabler,” Watson said.
Posted by William Jackson on Jan 10, 2014 at 8:28 AM
Predicting is easy. When it’s made, one prediction is as good as another. Only in hindsight can you pick the winners from the losers. Let’s look back at my 2013 predictions for cybersecurity and see how good they were.
I hedged my bets pretty well last year. The predictions for the most part covered areas that were so basic that they would be important security concerns regardless of what happened. But did they deserve to be singled out for 2013?
Cloud security

It turns out that reliability, not security, was the big issue in clouds.
An inspector general’s report found that NASA, a pioneer in cloud computing, suffered from a lack of proper security. “We found that weaknesses in NASA’s IT governance and risk management practices have impeded the agency from fully realizing the benefits of cloud computing and potentially put NASA systems and data stored in the cloud at risk.” But the report did not cite any serious breaches, and according to data from the Privacy Rights Clearinghouse most data losses still are occurring the old-fashioned way: through lost, stolen or discarded devices and documents and from in-house breaches. Not from cloud breaches.
What caused problems in the cloud were a string of outages plaguing Amazon Web Services, Dropbox, Microsoft Office 365, Windows Azure cloud storage and CloudFlare. Data wasn’t lost, but it was unavailable. For the end user, an outage is as good as a denial-of-service attack.
Collateral damage and unintended consequences of cyberwar and espionage
This one was spot-on, especially for the NSA, which suffered from multiple self-inflicted foot wounds in 2013.
From June on, the nation’s eavesdropper in chief, Gen. Keith Alexander, found himself defending once-secret electronic surveillance programs in the wake of a never-ending stream of revelations stemming from Edward Snowden’s leaks of classified documents. Repeated lies, half-truths and evasions were exposed with each new release about wholesale collection of digital communications data at home and abroad, the tapping of international fiber-optic cables, cryptographic back doors and abuse of data.
NSA staffers, portrayed by Alexander as heroes, became the bad guys in many eyes. In December, the first of what will likely be multiple court decisions about the programs found wholesale collection of cellphone metadata likely to be unconstitutional.
Supply-chain security
This issue failed to rise to the level of a crisis in 2013.
Although lengthy and far-flung supply chains have possible weak links all over the world, China has been the primary concern for the U.S. government. There are appropriations laws in place prohibiting some agencies from dealing with Chinese contractors, and there have been anecdotal reports of NASA contractors with suspect Chinese ties.
In November, the Defense Department amended its acquisition rules to allow the DOD “to consider the impact of supply chain risk in specified types of procurements related to national security systems.”
But 2013 did not produce any serious cybersecurity incidents resulting from weaknesses or backdoors in IT products that were inserted in the supply chain (if you don’t count reports of NSA dabbling in commercial crypto systems). Of course, the beauty of supply-chain tampering is that if it is done right, no one will see it. We might not know for years if we’ve already been had.
Windows 8 security

With the popular Windows XP approaching end-of-life in April 2014, the security of Windows 8 is a concern. But there has not been much bad news here. The latest Windows OS generally is seen as the most secure version to date.
Windows 8 includes its own antivirus features with Windows Defender, which starts early in the boot-up process to help protect against rootkits. Downloaded files are scanned for executables and applications are sandboxed. Version 8.1 includes data classification for remote wiping, improved fingerprint biometrics and better encryption. Overall, this one was a miss.
Posted by William Jackson on Dec 20, 2013 at 8:27 AM
Cloud computing is useful because it offers a new approach to IT, leveraging shared resources to maximize productivity and cut overhead. But with new approaches come new threats. How can agencies minimize risk in this environment?
The Cloud Security Alliance and the Software Assurance Forum for Excellence in Code (SAFECode) have collaborated to identify a set of best practices for developing applications that meet the unique security requirements of cloud computing. The resulting paper, Practices for Secure Development of Cloud Applications, applies established methods of producing secure code to the architectural requirements of the cloud.
“For cloud computing to reach its true potential, all parties involved – both consumers and providers – will need new ways of thinking about security needs and related standards,” the paper says.
Eric Baize, senior director of the product security office at EMC Corp., who participated in the study for SAFECode, says the new guidelines are an addendum to the existing security practices identified in SAFECode’s Fundamental Practices for Secure Software Development.
About 70 percent of cloud development work is common to other application environments, Baize said. The difference in the remaining 30 percent lies primarily in the fact that the cloud is a multitenant environment, which requires trust boundaries because software running for one tenant can be used by another.
The CSA and SAFECode working group spent about six months reviewing existing development practices to identify gaps that should be filled for the cloud environment. Representatives from member companies shared their experiences and lessons learned to identify a consistent set of practices that address issues in the cloud. The working group focused on the platform-as-a-service model and identified a basic set of threat areas that needed to be addressed differently in the cloud:
- Data breaches: Compromises in the virtual infrastructure can pose threats to co-tenants in the cloud, and techniques such as SQL injection threaten more serious consequences with multiple applications sharing an underlying database system. A flaw in one application could expose all.
- Data leakage and data loss: When data is kept in the cloud, the system needs to be designed, implemented and deployed so that it can withstand attacks at various levels in the multitier architecture. Changes to data should be detectable and traceable, and it should be possible to restore the data. If encryption is used to protect data, at what layer is it performed and how are keys managed?
- Insecure interfaces and APIs: Improperly designed application programming interfaces can create vulnerabilities when used by third parties.
- Denial of service: This can occur at several layers, expanding the attack surface in a cloud environment.
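The SQL-injection risk called out above is sharper in a multitenant setting, where one application’s flaw can expose another tenant’s rows in a shared database. A minimal sketch (the table, column names and tenant data are hypothetical) contrasting the vulnerable pattern with a parameterized query:

```python
import sqlite3

# Toy shared database holding rows for two tenants.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT, secret TEXT)")
conn.execute("INSERT INTO records VALUES ('tenant_a', 'a-data')")
conn.execute("INSERT INTO records VALUES ('tenant_b', 'b-data')")

def fetch_unsafe(tenant_id: str) -> list:
    # VULNERABLE: string interpolation lets crafted input escape the
    # tenant filter and read every tenant's rows.
    query = "SELECT secret FROM records WHERE tenant_id = '%s'" % tenant_id
    return [row[0] for row in conn.execute(query)]

def fetch_safe(tenant_id: str) -> list:
    # Parameterized query: the input is bound as data, never parsed
    # as SQL, so the tenant boundary holds.
    return [row[0] for row in
            conn.execute("SELECT secret FROM records WHERE tenant_id = ?",
                         (tenant_id,))]

# Classic injection payload: turns the WHERE clause into a tautology.
payload = "x' OR '1'='1"
```

With that payload, `fetch_unsafe` returns both tenants’ secrets while `fetch_safe` returns nothing, which is exactly the “a flaw in one application could expose all” scenario the working group flags.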
The paper describes the security practices in the context of the unique requirements of the cloud. Recommendations are mapped to specific threats to provide detailed illustrations of the security issues they resolve, with specific action items for development and security teams.
Like many best practices, those identified for secure development of cloud applications are often common sense. “For us, it’s not a surprise,” Baize said of the recommendations. “I don’t expect this to be a surprise to anybody.”
Posted by William Jackson on Dec 13, 2013 at 8:14 AM