Wire data analytics uses packet data to monitor activity across the network stack and may be the final step in the development of a single monitoring and management architecture for enterprise IT operations.
There was a time when monitoring networks and IT infrastructure was relatively simple, but that was before cloud, virtualization, software-defined networking and other complex ways of moving and storing data arrived on the scene.
Enter wire data analytics.
Wire data has been around since the invention of TCP/IP networks. It’s the data contained in the headers and payloads of packets that move from one node to another on the network, along with information that describes the bidirectional flow of those packets through the network.
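To make the idea concrete, here is a minimal sketch of pulling a few wire-data fields out of a raw IPv4 header with Python's standard library. The sample header bytes are hand-built for illustration; a real capture would come from a network tap or a tool such as tcpdump.

```python
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Extract a few wire-data fields from the first 20 bytes of an IPv4 header."""
    version_ihl, _, total_length = struct.unpack("!BBH", raw[:4])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,   # IHL is in 32-bit words
        "total_length": total_length,
        "protocol": raw[9],                       # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(raw[12:16]),
        "dst": socket.inet_ntoa(raw[16:20]),
    }

# A hand-built sample header: IPv4, 20-byte header, TCP, 10.0.0.1 -> 10.0.0.2
sample = bytes([
    0x45, 0x00, 0x00, 0x3C,  # version/IHL, DSCP/ECN, total length (60)
    0x1C, 0x46, 0x40, 0x00,  # identification, flags/fragment offset
    0x40, 0x06, 0x00, 0x00,  # TTL, protocol (6 = TCP), checksum (zeroed here)
    10, 0, 0, 1,             # source address
    10, 0, 0, 2,             # destination address
])

fields = parse_ipv4_header(sample)
```

Fields like these, combined with payload contents and flow records, are the raw material that wire data analytics works from.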
It’s also one of several kinds of data generated throughout the IT environment that are now used to analyze how well systems are operating. Machine-generated data, for example, is information that various servers and routers produce as a routine part of their operation. This data is recorded in logs that operators can look at to see how well those systems are working and where problems are popping up.
However, while other sources of data are useful, wire data is the only kind that includes information on how well data traverses the breadth of the IT environment, from Layer 2 through Layer 7 of the OSI (Open Systems Interconnection) model of network communications. And it’s increasingly seen as the key to optimizing the performance of modern environments.
“It can be pretty important, especially in private and hybrid cloud deployments,” said Jim Rapoza, a senior research analyst with the Aberdeen Group. “Virtual systems are often difficult to monitor and measure, especially in legacy performance management platforms, and wire data can provide a real-time view into what’s happening with these systems.”
For some time now, wire data has been accessible using real-time taps and various monitoring solutions to track activity on the network, he said. In contrast, wire data analytics is about taking that data and applying cutting-edge big data analysis to produce real-time, comprehensive insight into network activities.
It all starts with the simple principle that communication protocols such as HTTP and SMTP enable the exchange of data between systems, but such data is encapsulated in the format most appropriate for that protocol, said Erik Geisa, vice president of marketing at ExtraHop Networks.
Wire data analytics provides a means to take the raw feed of that traffic, which is usually unordered and fragmented, and process it into ordered, unfragmented segments, which can then be decoded.
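The core of that step is reordering. The toy sketch below reorders out-of-order segments by their sequence offsets and concatenates them into a single decodable byte stream; real reassembly engines also handle gaps, retransmissions and sequence-number wraparound.

```python
def reassemble(segments: list[tuple[int, bytes]]) -> bytes:
    """Reorder (offset, payload) segments captured off the wire and
    join them into one contiguous byte stream.

    Simplified sketch: assumes contiguous, non-overlapping segments."""
    stream = b""
    for _offset, payload in sorted(segments, key=lambda s: s[0]):
        stream += payload
    return stream

# Segments arriving out of order, as they often do on a busy network
captured = [
    (8, b"tus HTTP"),
    (16, b"/1.1"),
    (0, b"GET /sta"),
]

request = reassemble(captured)  # a decodable HTTP request line
```

Only once the stream is reconstructed can a protocol decoder interpret it as, say, an HTTP request or an SMTP exchange.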
In that way, network administrators can map a transaction all the way through from one end to the other, measuring the time it takes for a transaction to traverse different network elements and where and why any delays occur.
Geisa cited an example from the oil and gas industry, where pumping stations send out JSON posts over HTTP every minute. In their payloads, these posts contain the rate, capacity and volume at which the stations are operating, as well as any errors at particular pumping stations. That information can be extracted and displayed on a dashboard, with various trends and alerts configured to show what’s working well and what isn’t.
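A sketch of what extracting that payload might look like, assuming the stream has already been reassembled and decoded as HTTP. The field names and units here are invented for illustration; the article does not describe a real schema.

```python
import json

# Hypothetical body of one station's minute-by-minute HTTP POST;
# field names and units are illustrative, not a real industry schema.
payload = json.dumps({
    "station_id": "PS-17",
    "rate_bph": 4200,        # barrels per hour (assumed unit)
    "capacity_bph": 5000,
    "volume_b": 101500,
    "errors": ["valve_pressure_low"],
})

def check_station(raw: str, min_utilization: float = 0.5):
    """Decode one post and collect dashboard-style alerts."""
    post = json.loads(raw)
    alerts = list(post.get("errors", []))
    utilization = post["rate_bph"] / post["capacity_bph"]
    if utilization < min_utilization:
        alerts.append(f"low_utilization:{utilization:.0%}")
    return post["station_id"], alerts

station, alerts = check_station(payload)
```

In a production pipeline, rows like `(station, alerts)` would feed the dashboard's trend charts and alerting rules rather than a single function call.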
Corvil, an Irish firm, is another company that applies wire data analytics to networks in the financial and investment industries. It provides a system for real-time risk and compliance monitoring, for example, that is independent of the trading system and can track all of a company’s trades and flag alerts when there is any anomalous activity.
Cybersecurity is another promising area for wire data analytics, which could give security professionals another level of insight into activity on networks beyond that provided by firewall and intrusion detection logs.
Wire data analytics could be seen as the final step in constructing a single monitoring and management architecture for enterprise IT operations, one effective regardless of where a workload runs and whether it is virtualized or a full application stack running on bare metal or in the cloud.
Including wire data is vital because, according to Geisa, it’s “the richest source of data, about 1,000 times more than machine data.”
Geisa’s ExtraHop sees wire data as the fourth leg of a stool that will support the overall IT operations analytics (ITOA) framework. As well as machine data, the elements include agent data, which is used to identify errors in software code, and synthetic data, which is generated by tests that IT teams regularly run to determine where there are weak spots in the network and where failure can occur.
Will Cappelli, vice president of research for Gartner, believes ITOA platforms will increasingly be the center of gravity for the overall IT operations management architecture, becoming “the next version of the monitoring manager, the single pane of glass, or organizing principle for IT operations management in general.”