Swimming with Piranha: Testing Oak Ridge's text analysis tool
- By John Breeden II
- Dec 07, 2012
GCN’s Rutrell Yasin recently profiled the Piranha text analytics software developed at Oak Ridge National Laboratory, in a two-part series about how the software was created and what it does. In this third installment, we take the evaluation version of the software for a spin.
Piranha was created to help humans quickly find connections in documents that might not be immediately obvious, either because the links between them are tenuous or because the data sets are so large that it’s impossible to get a complete picture. As such, Piranha was built to run on everything from a standalone PC to an entire cloud-based network housing millions of documents. ORNL officials have said Piranha could be a valuable tool for law enforcement, military or health care analysts, as well as any agency dealing with large data sets.
For our testing, we used a single PC and the free-to-try version of Piranha, which behaves just like the full version except that it is limited to 128 documents. The software is available for download and experimentation from the Oak Ridge website.
For our purposes, we played the role of an investigator who was searching documents on a confiscated PC for a common connection or theme. We used a set of old and new articles written for GCN and other publications by a single author.
Loading the documents into Piranha was relatively quick. The program was recently updated to work with Microsoft Word files, XML files and most word processors and office document formats. Of course, it can also work with plain text files, because the original program was designed to handle them.
Right now, Piranha can analyze only the text that appears on the screen -- the information a user would see upon opening a file. However, Oak Ridge is preparing a forensics version of the software that could look at metadata contained within files, according to Dr. Robert M. Patton of the Computational Data Analytics team at Oak Ridge National Laboratory. This metadata -- such as who authored the documents, who edited them, and when they were modified or created -- could prove invaluable for a researcher or investigator.
Once we had our cache of documents loaded into Piranha, we let the client software get to work. The current version of Piranha requires that a user enter search terms into the program before an analysis takes place. That means users who don’t know what they're looking for could miss important evidence. Piranha is a work in progress, and the researchers at Oak Ridge have already begun to tackle that aspect of Piranha with a new program called Raptor.
Already working in a test environment, Raptor will allow an investigator to ask, “What do all these documents have in common?” Raptor will then return answers in the form of suggested search terms and subsets of documents within the larger group. It might report, for example, that several of the documents seem to talk about an attack, while several more contain factual information about Grand Central Station.
At that point the investigator would be given suggested search terms to bring back to Piranha, and would also be directed to specific documents within the larger set that could be read for additional context. A connection between plans for a general attack and a particular location could thus be drawn out of its hiding spot within a huge dataset.
So instead of trying to sift through thousands or millions of documents, an impossible task for one person, an investigator might be directed to read 100 files with a common theme. This would allow a further refining of keywords that are pertinent to the investigation.
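Oak Ridge hasn't published Raptor's algorithms, but the basic idea -- surfacing candidate search terms by finding words that recur across many documents -- can be sketched in a few lines of Python. Everything below is illustrative: the stopword list, the length cutoff and the sample documents are our own assumptions, not anything from the actual software.

```python
from collections import Counter

# Minimal stopword list for the sketch; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "at"}

def suggest_terms(documents, top_n=5):
    """Count how many documents mention each word; words shared across
    many documents make reasonable starting search terms."""
    doc_freq = Counter()
    for text in documents:
        # One vote per document per word, ignoring case and edge punctuation.
        words = {w.strip(".,").lower() for w in text.split()}
        doc_freq.update(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [term for term, _ in doc_freq.most_common(top_n)]

# Hypothetical documents standing in for a seized cache.
docs = [
    "Plans describe an attack on a major transit hub.",
    "Grand Central Station sees heavy commuter traffic daily.",
    "The attack timetable references Grand Central directly.",
]
print(suggest_terms(docs, top_n=3))
```

Against this toy cache, the three terms appearing in more than one document -- "attack," "grand" and "central" -- float to the top, which is exactly the kind of starting point an investigator could feed back into Piranha.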
Right now, Raptor is a separate program from Piranha. But Patton said that will soon change. Plans call for making Raptor into a module within Piranha, a move that would vastly improve the original program for use with large datasets.
Once a search returns results, Piranha can display the data in graphics and charts that quickly show the relationships among the documents. It just requires a little extra work. As powerful as Piranha is, the current version is no magic bullet on its own. Successfully whittling searches down to narrower and more useful results is a skill that needs to be honed, at least until Raptor officially comes along to make the process more intuitive.
Therefore, for our smaller set of documents, we had to come up with our own search terms. But since we were familiar with the documents in question, it wasn’t hard to find keywords that the software could sink its teeth into. And we were surprised by some of the results.
Piranha was able to pick up patterns within the documents that would not be immediately obvious. For example, it was interesting to note that our mysterious tech writer penned a growing number of stories about tablet computers each month, until they were the most popular topic across all the documents. But then in October, this writer suddenly stopped writing about tablets altogether. From January to September, tablet computers came up in 30 articles. Then from October through December, the term appeared only once. An investigator could conclude that something major changed in October.
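We don't know how Piranha computes these trends internally, but the underlying tally -- how many articles per month mention a term -- is simple to reproduce. The sketch below uses made-up dates and headlines standing in for our test archive:

```python
from collections import Counter
from datetime import date

# Hypothetical (publication_date, text) pairs standing in for the archive.
articles = [
    (date(2012, 1, 15), "New tablet computers hit the market."),
    (date(2012, 3, 2),  "Agencies weigh tablet deployments."),
    (date(2012, 5, 20), "Tablet security remains a concern."),
    (date(2012, 11, 8), "Cloud services expand."),
]

def mentions_by_month(articles, term):
    """Tally how many articles per (year, month) mention the term,
    matching case-insensitively."""
    counts = Counter()
    for when, text in articles:
        if term.lower() in text.lower():
            counts[(when.year, when.month)] += 1
    return counts

print(mentions_by_month(articles, "tablet"))
```

Plot those monthly counts and the October drop-off in our real test set stands out at a glance, which is the sort of chart Piranha generates for you.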
Imagine a scenario where the word "tablets" really means "explosives," and it becomes clear how helpful that information can be for the investigator. In a set of 1,000 documents, such a pattern would be difficult to discover without Piranha. With the software, it’s as obvious as if it had been painted in bright red letters.
Where Piranha could really shine is on huge datasets residing on multiple servers or even on millions of potential documents. In that case, it would simply be impossible for a human to find connections without help. At the desktop level, a set of skilled queries using the software can save time. In the cloud, it would make the impossible actually possible. For that, overworked government agents and law enforcement agencies probably can’t wait to start swimming with Piranha.