Intelligence video moves to a Netflix model
Recorded full-motion video is stored in a central location but accessed on demand via laptops and other devices
- By Paul Richfield
- Apr 06, 2011
Video imagery used in military intelligence is moving to an on-demand model.
The National Geospatial-Intelligence Agency's new video-archiving capability is designed to allow recorded full-motion video to be stored in a central location but accessed virtually through laptop computers and other electronic devices by users at all levels, with additional data features tailored to user preferences.
Known as the National System for Geo-Intelligence Video Services (NVS), it could increase the quantity and quality of the imagery intelligence available to U.S. and coalition military customers.
NVS comes at a critical time. U.S. forces and their allies have started to transition from the narrow views provided by first-generation unmanned aircraft-mounted cameras to a new array of wide-area sensor systems deemed critical to finding and tracking insurgents in urban environments. Whereas the earlier video sources offered views measured in tens or hundreds of feet, viewing areas as large as 50 to 100 square kilometers are expected once the newest sensor networks are active. According to NGA, which recently moved to its new headquarters at Fort Belvoir, Va., its new video-archiving system will be able to manage both that increased data flow and the wider variety of data formats.
“The unique aspect of NVS is its ability to get the archived video stream to the users when and how they want it, regardless of how much bandwidth they can handle,” said Navy Commander Robert Kraft, military deputy at NGA's Sensor Assimilation Division and video services program manager. “Now stored videos are chopped up and stored as flat files — we get rid of all that. The new system is like Netflix on demand. [NVS] instantly knows who you are and what you want and what sort of device you’re using to view the video. By using transcoding, we can get a lot more people using the service. A signal reaches out and analyzes the recipient and only gives them what they can manage.”
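The device-aware delivery Kraft describes can be sketched in a few lines: the service profiles the recipient, then serves the richest transcode profile the recipient's bandwidth and screen can handle. This is an illustrative sketch only; the profile names and thresholds are hypothetical, not drawn from NVS.

```python
from dataclasses import dataclass

@dataclass
class Client:
    bandwidth_kbps: int   # measured link capacity
    max_height: int       # device display height, in pixels

# (bitrate_kbps, frame_height) transcode profiles, richest first
PROFILES = [(4000, 1080), (1800, 720), (800, 480), (300, 240)]

def pick_profile(client: Client) -> tuple[int, int]:
    """Return the best (bitrate, height) profile the client can manage."""
    for bitrate, height in PROFILES:
        if bitrate <= client.bandwidth_kbps and height <= client.max_height:
            return bitrate, height
    return PROFILES[-1]  # fall back to the thinnest stream

# A 2 Mbps laptop link gets the 720p stream rather than the full 1080p feed.
print(pick_profile(Client(bandwidth_kbps=2000, max_height=1080)))  # (1800, 720)
```

The point of the pattern is that the archive stores one master copy and adapts the stream per recipient, rather than shipping every user the same flat file.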
For an operational example, a typical NVS user might open Google Earth and zoom in to a specific geographic area, such as the site of an improvised explosive device attack. The user might search for all the relevant video of that location recorded in the past day, week or month and then filter by the keywords other users have applied to the video. The user might also read the comments of other users who saw the video. A freeze frame of a particular instant in time can be extracted and shipped up or down the chain of command. The user also can ask the system to send an alert each time someone creates a new video of the area or when a video is tagged with specific keywords.
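The workflow above amounts to a geospatial and temporal query over a clip index, plus a keyword subscription. A minimal sketch, assuming a simple in-memory index with hypothetical field names (nothing here reflects the actual NVS data model):

```python
from datetime import datetime

# Toy clip index: location, timestamp and analyst-applied keyword tags
clips = [
    {"id": "c1", "lat": 34.52, "lon": 69.18,
     "time": datetime(2011, 4, 5), "tags": {"ied", "checkpoint"}},
    {"id": "c2", "lat": 31.61, "lon": 65.71,
     "time": datetime(2011, 3, 1), "tags": {"convoy"}},
]

def search(clips, lat, lon, radius_deg, since):
    """All clips recorded near (lat, lon) after `since`."""
    return [c for c in clips
            if abs(c["lat"] - lat) <= radius_deg
            and abs(c["lon"] - lon) <= radius_deg
            and c["time"] >= since]

def matches_alert(clip, watched_tags):
    """Would a subscription on `watched_tags` fire for this clip?"""
    return bool(clip["tags"] & watched_tags)

# Video of the attack site from the past week
recent = search(clips, 34.5, 69.2, 0.1, since=datetime(2011, 4, 1))
print([c["id"] for c in recent])            # ['c1']
print(matches_alert(clips[0], {"ied"}))     # True
```

A production system would back this with a spatial index and a publish/subscribe pipeline, but the query shape is the same.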
“When you see the video, you don’t just watch it — there’s the same level of information you’d see if you were watching a financial broadcast on MSNBC or a football game on ESPN,” Kraft said. “You see the score, the downs, the yardage markers, the biographical information on the players. We’ll have that same capability. The network screen can be accessed with a Web browser; this is a unique capability. This will be tailorable by the user, and we’ll be providing a rich menu of enhanced databases. By storing the video in one spot, we’re able to federate the analysis instead of giving the raw footage to a bunch of analysts. Every user can see what the previous user saw and what they said about it. The technical challenge is the integration of all the databases — it’s also a policy challenge.”
Valiant Angel redux
NVS was previously known as Valiant Angel, a Joint Forces Command (JFCOM) program that the Defense Department moved to NGA late last year. Initially valued at $29 million, the contract was awarded to a Lockheed Martin-led team that includes broadcast expert Harris and computer storage specialist NetApp. A separate contract involves Pixia, a Virginia-based company that provides software to rapidly index imagery from wide-area sensors. Valiant Angel combined Lockheed Martin’s Audacity video analysis system with Harris’ Full-Motion Video Asset Management Engine. Audacity is the primary user interface, and the Harris contribution is mainly a suite of blade servers, computer storage, video-processing gear and a mix of commercial software.
Jon Armstrong, senior manager of full-motion video solutions at Lockheed Martin Information Systems and Global Solutions, said NVS could be contained in a pair of transit cases for a stand-alone node, or it could reside in several racks of equipment in an operations center. NVS uses standard, in-service servers, encoders and decoders.
“The metadata is the key, the actual metadata associated with the video itself,” Armstrong said. “With it, you can mark each frame with a geospatial reference and time, along with myriad other elements depending upon the requirement and security classification. Users have access to actual comments — chat — about what’s on the video, and they can contribute their own. If the user specified an audio stream, it can be tied to exactly what’s being shown on the screen, along with all the other metadata. The emerging need now is for new tradecraft, or how you’re going to consume the data. We’ve learned that we can’t just stare at the pixels any more. You have to do activity-based intelligence. Now, they’re staring at the video hoping to see something, but that’s not going to work with a wide-area background.”
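The per-frame tagging Armstrong describes can be pictured as a small record attached to every frame: a geospatial reference, a timestamp, a classification marking and analyst chat. The structure below is purely illustrative — motion-imagery systems typically carry this sort of information as embedded KLV metadata, and the field names here are assumptions, not the NVS format.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FrameMeta:
    frame_no: int
    lat: float                      # sensor footprint center, degrees
    lon: float
    timestamp: datetime
    classification: str = "UNCLASSIFIED"
    chat: list = field(default_factory=list)  # analyst comments on this frame

# Tag a single frame, then attach an analyst comment to it
frame = FrameMeta(frame_no=1042, lat=34.518, lon=69.183,
                  timestamp=datetime(2011, 4, 6, 14, 30, 2))
frame.chat.append("vehicle stops at intersection")
```

Because every frame carries its own reference and discussion, a later analyst lands on exactly what the previous analyst saw and said — the federated-analysis point Kraft makes above.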
NVS has been tested in simulated combat conditions. While still under JFCOM control in August 2010, a pair of Valiant Angel nodes participated in Empire Challenge 10, a multinational intelligence, surveillance and reconnaissance demonstration of new technical capabilities. Coalition interoperability was identified as a primary objective, and during the exercise, full-motion video was successfully conveyed to forward operating bases manned by British and Dutch participants. In addition, the Valiant Angel full-motion video product was sent to the Distributed Common Ground System Integrated Backbone, through which users could access it along with all associated metadata.
A similar version of NVS is slated for initial deployment to Afghanistan. And change is in the works, Kraft said. “Two things are on the cusp of being actual. One is the ability to integrate the wide-area large format into the video. Wide-area format looks at kilometers in diameter, more things are coming that have even greater diameters of look, such as Argus or Gorgon Stare — things that let you see an entire city. Second, we’re virtualizing the entire capability. Now it’s a hardware-specific build, moving toward a server-type system. When you virtualize, you get much better power, space and cooling efficiencies. And less trouble if the hardware fails — you just move over to another blade [server]. Virtualization is also critical when you’re operating in countries that don’t have great infrastructure.”