Piranha: Decoding the genomic tree in millions of documents

The Piranha text analytics software developed at Oak Ridge National Laboratory, which can scale from a single PC to a supercomputer, speeds up the search for relevant information by grouping and comparing documents.


Piranha represents documents as mathematical vectors, which lets users of the software perform similarity comparisons, explained Thomas Potok, senior scientist and leader of the Computational Data Analytics Group at the Energy Department's lab. Users can compare two document vectors and determine how closely they resemble each other.

“What we are able to do in a highly parallel fashion is to create and compare these vectors, so in essence we are comparing every word in every document to every word in every other document,” Potok said. From that point, researchers can say how similar the documents are.
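
The comparison Potok describes can be sketched in a few lines. This is a minimal illustration, not Piranha's implementation: each document becomes a word-count vector over a shared vocabulary, and cosine similarity measures how closely two vectors point in the same direction.

```python
# Minimal sketch (not Piranha's actual code): documents as word-count
# vectors, compared with cosine similarity.
from collections import Counter
import math

def vectorize(doc, vocab):
    """Turn a document into a count vector over a fixed vocabulary."""
    counts = Counter(doc.lower().split())
    return [counts[w] for w in vocab]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

doc1 = "the horse grazed in the field"
doc2 = "the cow grazed in the field"
vocab = sorted(set(doc1.split()) | set(doc2.split()))
score = cosine_similarity(vectorize(doc1, vocab), vectorize(doc2, vocab))
print(score)  # close to 1.0: the two sentences share most of their words
```

Running every such comparison across millions of documents is what Piranha parallelizes.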

Piranha's clustering algorithm groups documents much as animals are grouped in a genomic tree, where horses and cows sit close to each other but far from a squid or an amoeba, Potok explained. "It kind of gives you a sense of how you look at similarities in documents, what things are similar or how things are dissimilar."
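
The genomic-tree idea corresponds to agglomerative (bottom-up) clustering: repeatedly merge the two most similar groups until one tree remains. The sketch below is illustrative only, with a hypothetical hand-built similarity table standing in for real document comparisons; it is not Piranha's algorithm.

```python
# Illustrative agglomerative clustering: build a tree by repeatedly
# merging the two most similar clusters (average-link similarity).
def build_tree(items, similarity):
    # each cluster is (list of leaf names, nested-tuple tree)
    clusters = [([name], name) for name in items]

    def link(a, b):
        # average pairwise similarity between two clusters' leaves
        return sum(similarity(x, y) for x in a for y in b) / (len(a) * len(b))

    while len(clusters) > 1:
        # find the most similar pair of clusters and merge them
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: link(clusters[p[0]][0], clusters[p[1]][0]))
        (la, ta), (lb, tb) = clusters[i], clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((la + lb, (ta, tb)))
    return clusters[0][1]

# Hypothetical similarity scores mirroring the genomic-tree analogy.
pairs = {frozenset({"horse", "cow"}): 0.9,
         frozenset({"horse", "squid"}): 0.1,
         frozenset({"cow", "squid"}): 0.1}
sim = lambda a, b: pairs[frozenset({a, b})]
print(build_tree(["horse", "cow", "squid"], sim))
# ('squid', ('horse', 'cow')) -- horse and cow pair off first
```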

Piranha's software agents work differently from more traditional software agents, Potok said. An agent is not a piece of software roaming a network; instead, it can be assigned to a specific computer or group of computers. For example, if an agency used the software with 10 computers, the agents would move among those 10 computers to process information faster. A law enforcement officer might install Piranha on his computer alone, while an agency with a huge volume of data might put Piranha on a cluster of computers, a very large cluster or even a supercomputer if it needed that level of computing power.
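
Piranha's agent framework is not detailed here, but the general fan-out pattern the article describes can be sketched with Python's standard multiprocessing module: split the pairwise comparisons across workers so that more cores (or machines) mean faster processing. The word-overlap score is a stand-in for a real vector comparison.

```python
# Generic fan-out sketch (not Piranha's agent framework): distribute
# pairwise document comparisons across worker processes.
from multiprocessing import Pool
from itertools import combinations

def compare(pair):
    """Score one pair of documents; here, simple word-overlap (Jaccard)."""
    a, b = pair
    wa, wb = set(a.split()), set(b.split())
    return (a, b, len(wa & wb) / len(wa | wb))

if __name__ == "__main__":
    docs = ["the horse grazed", "the cow grazed", "squid in the sea"]
    with Pool(4) as pool:
        # each worker handles a share of the document pairs
        for a, b, score in pool.map(compare, combinations(docs, 2)):
            print(f"{score:.2f}  {a!r} vs {b!r}")
```

On a single machine this spreads work across cores; the same divide-the-pairs idea extends to clusters and supercomputers.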

ORNL's Computational Data Analytics Group is working to extend Piranha's capabilities. "We are looking to analyze a trillion documents," Potok said, noting that the team is using the lab's supercomputer to tackle the task. But challenges remain in how text analysis tools tag items, he said. An analyst might want to pull out and highlight a name in a document, for instance, but on seeing the name "Washington," the tool cannot tell whether it refers to a person, a city or a street. So information tagging still needs to be addressed, he added.
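
The "Washington" problem is a classic entity-disambiguation case. The toy sketch below (hypothetical rules, not any real tagger) shows why context is needed: the token alone carries no type information.

```python
# Toy illustration of the tagging ambiguity described above, using
# hypothetical context cues -- not a real entity tagger.
def guess_entity_type(token, prev_token=None, next_token=None):
    """Guess whether an ambiguous name is a person, city or street."""
    if next_token in {"Street", "Avenue", "Boulevard"}:
        return "street"
    if prev_token in {"President", "Gen.", "Mr."}:
        return "person"
    if prev_token == "in":
        return "city"
    return "unknown"

print(guess_entity_type("Washington", next_token="Street"))      # street
print(guess_entity_type("Washington", prev_token="President"))   # person
print(guess_entity_type("Washington"))                           # unknown
```

Without such context, a tagger cannot highlight the right entities, which is why Potok calls tagging an open problem at scale.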

ORNL's computational data analytics team is also focused on viewing text information over time. With Twitter, for instance, people tweet at certain times and from certain locations. Law enforcement or intelligence analysts might have significant information before an event but only realize its value after the event has occurred. So the team is working on "how do you deal with time and location," Potok said.

Finally, “perhaps one of the biggest challenges is: How do I deal with the volume of data and the ugliness of data?” Potok said.  “You have all of these documents and raw information. Well, what is the value, what is useful and what is noise of no value at all?”

ORNL is offering a prototype version of Piranha that can be downloaded from the ORNL site.


Reader Comments

Mon, Dec 3, 2012

Wasn't ORNL supposed to work on nuclear projects? What are they doing creating software? Another agency of the Federal government that is out of control.

Mon, Dec 3, 2012

First, the user manual doesn’t appear to have been updated since October 14, 2005. Is this project that old, yet we’re only hearing about it now? Next, the kind of documents the program is able to parse is somewhat telling. Here’s what’s explained in the user guide: “The first step is to parse documents from their original form into a form suitable for clustering. To be suitable for Piranha, the document should be stripped of repetitive header/footer data, should contain no markup or special characters, and should be plain text.” Really? If this can’t even parse MS Word documents in their native form, is it useful as anything other than a proof-of-concept?
