Follow malware's tracks to thwart cyber attacks
- By Jason Brvenik
- Jul 09, 2014
To defeat malware, government agencies need to determine how threats entered a network and where they went once inside. But this seemingly simple task is complicated by attackers’ ability to cover their tracks, often over extended periods of time.
The latest improvements in threat detection include executing files in a sandbox for analysis, using virtual emulation layers to insulate malware from users and operating systems, whitelisting applications and, more recently, simulating and analyzing the attack chain.
But these “point-in-time” detection technologies all bank on inspecting code for malicious attributes in a single pass, at first entry – and so they will never be 100 percent effective at screening. Compounding the challenge, these traditional approaches cannot identify the follow-on activities of the malware that inevitably slips past them.
To defeat attackers, IT managers have to find the trail of “breadcrumbs” left by advanced malware. The good news is that with the right model, security teams can collect these artifacts, or potential indicators of compromise, over time and then weave them together to identify and isolate malicious activity.
Agency IT managers understand that malware is dynamic and three-dimensional. It exists as an interconnected ecosystem in constant motion to evade detection. To be effective, malware defense needs to be as dynamic as the attacker and should incorporate an additional dimension to detection – relationship analysis.
Adding relationship analysis requires a security model that combines a big data architecture with a continuous analytics approach so that security teams have protection and visibility along the full attack continuum – from point of entry, through propagation and post-infection remediation. In this model, data is analyzed continuously over time from the network and endpoint.
This capability, called retrospection, offers significant advantages over event-driven monitoring because it records activity continuously, much like a video surveillance system, so attacks can be observed both as they happen and after the fact. This inspection covers all file activity on the endpoint, all communication to and from the endpoint and all processes or relationships of file creation and file execution on the endpoint.
There are three types of retrospection:
- File retrospection – After initial analysis, file activity is inspected over time, allowing for examination beyond the point-in-time it was first seen.
- Communication retrospection – Communication to and from an endpoint is monitored, along with the application and the process that initiated or received it.
- Process retrospection – System process input-output is continuously inspected and analyzed over time.
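The three streams above can be sketched as a simple event store that retains every observation so it can be re-examined when threat intelligence changes. This is a minimal illustration only; the event fields and the `reevaluate` helper are hypothetical, not a description of any vendor's telemetry schema.

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float
    endpoint: str

@dataclass
class FileEvent(Event):
    sha256: str
    action: str    # e.g. "created" or "executed"

@dataclass
class CommEvent(Event):
    remote: str    # peer address
    process: str   # process that initiated or received the traffic

@dataclass
class ProcessEvent(Event):
    process: str
    parent: str    # process that spawned it

class EventStore:
    """Retains every observation so it can be re-inspected later,
    rather than being judged once on first entry and discarded."""
    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)

    def reevaluate(self, bad_hashes):
        # File retrospection: flag files that looked clean when first
        # seen but whose hashes now appear in updated intelligence.
        return [e for e in self.events
                if isinstance(e, FileEvent) and e.sha256 in bad_hashes]
```

The key design point is that nothing is discarded after the initial verdict: a file judged clean today can still be flagged tomorrow, because its record survives.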
This inspection of file, communication and process data is then woven together into a chain of activity for analysis in real time. Data is analyzed and reanalyzed against sophisticated algorithms to look for patterns of activity across detection events, for static indicators of compromise left behind by malware and exploits, and for evidence of more advanced behaviors that unfold over time and indicate potential compromise.
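Weaving the streams together can be pictured as a small correlation routine: starting from one flagged file hash, link the processes that executed or descended from it and the communications those processes made, in time order. The record layout and sample data below are invented for illustration.

```python
# Each observation is a simple record; field names are illustrative,
# not an actual telemetry schema.
events = [
    {"type": "file", "endpoint": "host1", "process": "dropper.exe",
     "sha256": "abc123", "action": "executed", "t": 1},
    {"type": "process", "endpoint": "host1", "process": "payload.exe",
     "parent": "dropper.exe", "t": 2},
    {"type": "comm", "endpoint": "host1", "process": "payload.exe",
     "remote": "203.0.113.7", "t": 3},
]

def chain_from_hash(events, suspect_hash):
    """Link a flagged file to related processes and communications,
    producing one time-ordered chain of activity."""
    chain, tainted = [], set()
    for e in sorted(events, key=lambda e: e["t"]):
        if e["type"] == "file" and e.get("sha256") == suspect_hash:
            tainted.add(e["process"])   # process that touched the file
            chain.append(e)
        elif e["type"] == "process" and e.get("parent") in tainted:
            tainted.add(e["process"])   # child of a tainted process
            chain.append(e)
        elif e["type"] == "comm" and e["process"] in tainted:
            chain.append(e)             # traffic from a tainted process
    return chain
```

Running `chain_from_hash(events, "abc123")` links the dropped file, the child process it spawned and that child's outbound connection into a single chain – the relationship dimension the article describes.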
The confluence of the file, process and communication retrospection streams captures the relationship dimension required for effective malware control. With this insight, agencies’ IT teams can quickly pivot from detection to a full understanding of the scope of an outbreak and head off wider compromises by breaking the attack chain.
The fundamental difference between continuous and point-in-time response is that a continuous approach provides robust outbreak control, including surgical containment, whereas a point-in-time response provides only enumerated lists of facts and evidence. Agencies’ IT teams can use these lists, but they are tedious to translate into containment actions.
With the power of a big data architecture and a continuous analysis approach, agencies’ IT teams hold the key to defeating malware and mitigating risk. A big data architecture handles the ever-expanding volume of data that is essential to effective malware detection and analytics, while a continuous approach uses that data to provide context and, most important, prioritization of events.
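Prioritization can be as simple as scoring each detection by the context the continuous data provides – how many indicators matched, how far it spread, whether anything actually executed. The weights and field names here are invented purely to illustrate the idea.

```python
def prioritize(detections):
    """Order detections so the riskiest are handled first.
    The scoring weights below are invented for illustration only."""
    def score(d):
        return (3 * d.get("ioc_matches", 0)        # indicators matched
                + 2 * len(d.get("endpoints", []))  # spread of the outbreak
                + (5 if d.get("executed") else 0)) # payload actually ran
    return sorted(detections, key=score, reverse=True)

detections = [
    {"name": "stale-adware", "ioc_matches": 1,
     "endpoints": ["host1"], "executed": False},
    {"name": "active-implant", "ioc_matches": 2,
     "endpoints": ["host1", "host2"], "executed": True},
]
```

Here `prioritize(detections)` ranks the executing, multi-host implant ahead of the dormant adware, which is exactly the triage behavior the article argues context makes possible.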
With these technologies in place, agencies’ IT teams can piece together the attack chain and focus their efforts on the threats with the greatest potential for damage, thwarting wider compromises and rapidly returning systems to a trusted state.
Jason Brvenik is a principal engineer in the Office of the Chief Security Architect at Cisco.