The cost and technical challenge of adding security to complex systems after the fact are prohibitive. Here are some steps developers and managers can take to build security into new software applications.
IT security has recently gotten a lot of attention in the mainstream press for all the wrong reasons – like the Target hack that compromised millions of credit card numbers or the Heartbleed bug in OpenSSL that had everyone scrambling. Government IT systems are not immune, and it isn't hard to see why. With agency budgets already stretched to their limits, bolting security measures onto complex architectures composed of applications of myriad ages, technologies and levels of quality is virtually impossible.
One lesson from these failures is that the cost and technical challenge of adding security after the fact are prohibitive. As a software engineer, I believe one thing we can do to begin to address this crisis is to ensure that we build security into all new software applications. Let’s consider some steps developers and managers can take.
1. Keep it simple. This might seem obvious, but never underestimate the ability of a developer to devise a complicated solution to a problem – or a perceived problem. I once worked with a developer on a Defense Department project who rejected Java’s built-in toLowerCase method in favor of his own custom solution using complicated bit shifting to achieve supposedly better performance.
Complexity is the death of all software and architectures, and complexity compounds itself with amazing speed. Developers must remain vigilant at every decision point to take the simplest path possible to achieve the desired functionality. Simpler and leaner code is easier to maintain and evaluate for possible vulnerabilities.
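The anecdote above can be made concrete with a minimal sketch. The hand-rolled lowercasing below (hypothetical, but in the spirit of the bit-shifting "optimization" described) works only for unaccented ASCII and silently misses everything else – exactly the kind of subtle gap that clever custom code introduces and the built-in method avoids:

```java
import java.util.Locale;

public class SimplicityDemo {
    // A hand-rolled "optimization" of the kind described above: setting
    // bit 5 lowercases ASCII letters, but ignores anything outside A-Z.
    static String riskyLower(String s) {
        char[] out = s.toCharArray();
        for (int i = 0; i < out.length; i++) {
            if (out[i] >= 'A' && out[i] <= 'Z') {
                out[i] |= 0x20; // correct only for unaccented ASCII
            }
        }
        return new String(out);
    }

    public static void main(String[] args) {
        String input = "ÜBER Data";
        // The built-in method handles non-ASCII characters correctly.
        System.out.println(input.toLowerCase(Locale.ROOT)); // über data
        // The custom version misses the accented character entirely.
        System.out.println(riskyLower(input));              // Über data
    }
}
```

The simpler path – just calling `toLowerCase` – is also the correct one, and there is far less of it to audit.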
2. Limit resource access. Virtually every application needs to connect to a database or to files on disk, and this access must be restricted. In a Windows architecture, for example, developers can integrate IIS with Windows Authentication if users are on the same domain as the server. For anonymous database access, create a single user representing the application and limit its permissions substantially.
Less intuitive is cleaning up after accessing resources. For databases, be sure to close connections or return them to the pool; for files, close file I/O abstractions. Use the built-in facilities of the programming language to help – in Java, that means try-with-resources and AutoCloseable. Functional languages enable the loan pattern, which compels developers to clean up after themselves.
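A minimal try-with-resources sketch (reading from an in-memory string here, but the same shape applies to files and JDBC connections): any `AutoCloseable` declared in the parentheses is closed automatically, even if an exception is thrown mid-read.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResourceDemo {
    // try-with-resources closes anything implementing AutoCloseable,
    // whether the body returns normally or throws.
    static String firstLine(String content) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(content))) {
            return reader.readLine();
        } // reader.close() is called automatically here
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("line one\nline two")); // line one
    }
}
```

The same construct works for `java.sql.Connection` and `PreparedStatement`, both of which implement `AutoCloseable`, so a forgotten `close()` in an error path stops being possible.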
Consider also using a tool comparable to Jasypt to encrypt values in configuration files governing access to sensitive resources.
And by now, do I really need to even discuss SQL injection?
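Apparently yes, so here is the short version as a sketch (the `users` table and `name` column are hypothetical). Concatenating user input into SQL text lets an attacker rewrite the query; a `PreparedStatement` sends the value separately from the SQL, so the payload stays inert data:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class InjectionDemo {
    // Vulnerable: the attacker's input becomes part of the SQL text itself.
    static String naiveQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    // Safe: the placeholder keeps the value out of the SQL text entirely.
    static ResultSet safeQuery(Connection conn, String username) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }

    public static void main(String[] args) {
        // A classic payload turns the naive query into "return every row."
        System.out.println(naiveQuery("x' OR '1'='1"));
    }
}
```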
3. Be vigilant with dependencies. Most projects depend heavily on third-party libraries – open source and otherwise. If the developers of those libraries haven't read this column or lack diligence, you could introduce vulnerabilities simply by using their code.
I absolutely do not advocate eschewing libraries and writing everything yourself. Rather, ensure libraries are actively maintained (particularly with open source) and address any concerns via forums or paid support.
4. Handle errors gracefully. Have you ever used a Web application where an error occurred and the gory details were displayed in the browser?
Aside from creating a poor user experience, revealing an application's technical details is a gift to attackers. Never swallow errors; log them immediately. A fault barrier also provides a clean mechanism for handling errors gracefully while relieving individual developers of the burden. (Note: The link refers to Java, but the concept transcends language.)
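A minimal fault-barrier sketch (the operation and messages are hypothetical): one top-level catch logs the full detail for operators and returns only a generic message to the user, so stack traces never reach the browser.

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

public class FaultBarrier {
    private static final Logger LOG = Logger.getLogger(FaultBarrier.class.getName());

    // A single barrier at the top of the call stack: log the gory details
    // server-side, show the user something bland and safe.
    static String handle(Supplier<String> operation) {
        try {
            return operation.get();
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "Unhandled fault", e); // full detail stays in the log
            return "An internal error occurred. Please try again later.";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> "page rendered"));
        System.out.println(handle(() -> { throw new IllegalStateException("db down"); }));
    }
}
```

Code below the barrier can then let exceptions propagate freely instead of sprinkling try/catch blocks everywhere.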
5. Prefer immutability. Mutability is the ability to change values in code after they've been created. In object-oriented (OO) languages, mutability is the default: fields can be reassigned and objects modified long after creation. Inheritance, perhaps the most powerful feature of OO, compounds the problem by endowing new code with the properties – including the mutable state – of existing code. Mutability makes code more complex while introducing the potential for hacks.
Immutability makes code simpler, thread-safe and less vulnerable. Functional languages either forbid or make it difficult to write mutable code, but OO languages demand discipline from developers to write immutable code.
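In Java, that discipline looks like the following sketch (the `Account` class is hypothetical): final fields, no setters, and a defensive copy so a caller can't quietly add a privilege after the object is built.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// An immutable value class: final fields, no setters, defensive copies.
public final class Account {
    private final String owner;
    private final List<String> roles;

    public Account(String owner, List<String> roles) {
        this.owner = owner;
        // Copy on the way in so callers can't mutate our state afterward.
        this.roles = Collections.unmodifiableList(new ArrayList<>(roles));
    }

    public String owner() { return owner; }

    public List<String> roles() { return roles; } // read-only view

    public static void main(String[] args) {
        List<String> input = new ArrayList<>();
        input.add("reader");
        Account a = new Account("alice", input);
        input.add("admin"); // attempt to sneak in a privilege after creation
        System.out.println(a.roles()); // still [reader]
    }
}
```

An instance like this is also safe to share across threads with no locking, since there is nothing to race on.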
6. Automate security testing. In my first GCN column, I advocated automated testing to build quality into applications. The same approach can expose flaws in code that can be exploited.
I am unaware of viable open-source security testing tools, so you will likely need to invest in the tool that works best for your situation. Look for tools that work with IDEs like Eclipse and that can be assimilated into your continuous integration infrastructure. If the tool itself can't be automated, consider an automatable script that runs it nightly. If all else fails, make sure someone runs scans at least weekly.
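Even without a dedicated scanner, some security checks can live in the ordinary test suite. A sketch (the `escapeHtml` helper is hypothetical): a plain assertion that output encoding neutralizes injected markup, the kind of regression check a nightly CI job can run unattended.

```java
public class SecurityChecks {
    // Hypothetical output-encoding helper an application might use
    // before writing user-supplied text into an HTML page.
    static String escapeHtml(String s) {
        return s.replace("&", "&amp;")   // must run first to avoid double-escaping
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        // The kind of assertion a nightly CI job can run automatically:
        // injected markup must come out inert.
        String payload = "<script>alert('x')</script>";
        if (escapeHtml(payload).contains("<script>")) {
            throw new AssertionError("XSS escaping regressed");
        }
        System.out.println("security checks passed");
    }
}
```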