Lights, camera, capture
- By Joab Jackson
- Jan 19, 2005
With agencies' use of digital video on the rise, new standards make for better data management
NASA, which is upgrading cameras at Kennedy Space Center to better monitor shuttles, is one of many agencies in need of new video standards to manage digital content.
Many agencies include video in their arsenal of digital content, but few rely on the medium more than NASA. In fact, the agency is undertaking a $40 million upgrade to the video equipment surrounding the launchpad at Kennedy Space Center in Florida.
When the space shuttle Columbia lifted off in January 2003, 30 video cameras captured the ascent. But even that coverage proved insufficient once NASA needed to pinpoint how the space shuttle met its tragic end 16 days later. One camera captured images of the debris falling from the shuttle's main fuel tank and hitting Columbia's wing, inflicting structural damage that would tear apart the craft during re-entry.
Still, said Armando Oliu, lead of the Ice/Debris and Final Inspection Team for NASA's space shuttle program, investigators were frustrated. The footage was too fuzzy to show what had happened in sufficient detail. Oliu's team analyzes all space shuttle ascent imagery.
In response to the Columbia tragedy, Oliu said, NASA is doubling the number of video cameras surrounding the launchpad and replacing some older analog video cameras with digital models that capture more frames per second.
'If we see something on one film, we'd like to identify it on another film, so we can pinpoint it in 3-D space,' Oliu said. The improved frame rates would allow observers to see events more clearly. But the digital data will be complex, massive and a challenge to manage.
NASA is not alone in recognizing that the digital resources it requires go beyond text documents and databases. The Defense Department has unmanned aerial vehicles that beam back video from hostile areas, and videoconferencing systems that keep combatant commanders in touch with theaters of conflict. Other agencies run IP-based video surveillance systems that keep sentry over office buildings, or archive footage of historically notable events.
As government agencies use more digital video, however, they must look closer at the standards and technologies they employ. Agreeing on standards would allow better sharing of video data across agencies and, depending on the standards and technologies adopted, help store and transmit higher-quality imagery using current infrastructure. Fortunately, both agencies and private industry are addressing government's concerns.
In 2001 the National Science Foundation funded a study at the University of Illinois at Urbana-Champaign to gauge how much the government uses motion imagery and what researchers could do to improve the technology.
The organizers found pockets of motion imagery use throughout government, particularly at the National Geospatial-Intelligence Agency, other intelligence agencies, and state and local law enforcement offices.
The study concluded that the video equipment industry's shift to digital video, along with the abundance of ever-cheaper networking tools and computational devices, would lead to 'entirely new systems with extraordinary capability to support government, military and industrial applications.'
Yet the study also found that if the government did not keep a close eye on its new video capabilities, chaos would ensue.
'There was no organizing principle that the government people subscribed to. Everyone viewed video as the next frontier, but there was no real agreement of how to approach it,' said Thomas Prudhomme, principal investigator for the study. With no agreements in place, agencies risked building stovepiped video systems, or systems that would become obsolete.
One agency trying to facilitate agreement for motion imagery is the National Geospatial-Intelligence Agency. The agency's Motion Imagery Standards Board, a working group to promote visual imagery system interoperability for the Defense Department, has published a set of guidelines, the Motion Imagery Standards Profile, laying out the video standards that defense agencies should use.
The technical guidelines encourage agencies to move toward digital systems, increase video resolution whenever possible, and plan for interoperability.
Although the group crafted its recommendations specifically for the Defense Department, civilian agencies have also looked to them for guidance, Prudhomme said.
The new MPEG standard
The challenge comes when agencies must manage their video imagery. How do they store the large files? How can they efficiently move or stream the video from one location to another? The crux of the challenge is often in the format that agencies use to capture their video.
New video formats are emerging that might help agencies better manage the increasing amount of video they're transmitting and storing. The designers of technologies such as AVC/H.264 and VC1 tout them as replacements for the venerable MPEG-2 standard. For many, they represent a new wave in video compressor/decompressor algorithms, or codecs.
Created by the Moving Picture Experts Group, MPEG-2 has long been the most widely used codec for video streaming and motion imagery files. Codecs compress images, videos and audio, making them smaller than raw feeds, so that they take up less storage space or bandwidth.
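A back-of-the-envelope calculation shows why codecs matter so much. The resolution, frame rate and stream rate below are illustrative assumptions, not figures from any agency system:

```python
# Compare the bit rate of an uncompressed standard-definition feed
# with a typical MPEG-2 stream. All figures here are illustrative.

WIDTH, HEIGHT = 720, 480        # standard-definition frame
BITS_PER_PIXEL = 24             # 8 bits each for red, green, blue
FRAMES_PER_SECOND = 30

raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FRAMES_PER_SECOND
mpeg2_bps = 6_000_000           # a common broadcast-quality MPEG-2 rate

print(f"Raw feed:    {raw_bps / 1e6:.0f} Mbps")
print(f"MPEG-2 feed: {mpeg2_bps / 1e6:.0f} Mbps")
print(f"Compression ratio: roughly {raw_bps / mpeg2_bps:.0f} to 1")
```

At these assumed figures, compression shrinks a roughly 249-Mbps raw feed by about 40 to 1, which is the difference between a feed that fits on an agency network and one that does not.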
As researchers dig deeper into compression, they are finding more efficient ways to shrink files. And with NGA pushing agencies toward higher-quality video, these newer formats will help keep demands on storage systems and networks in check.
One of the most highly touted emerging codecs is AVC/H.264, developed by MPEG along with the International Telecommunication Union. AVC stands for Advanced Video Coding, and H.264 is the ITU's designation for the standard.
AVC/H.264 is one part of MPEG's emerging MPEG-4 standard, which includes elements such as digital rights management. (As if it doesn't already have enough names, AVC/H.264 also goes by the designation MPEG-4 Part 10.)
Charles Fenimore, a researcher from the National Institute of Standards and Technology, has found that this new video compression standard can cut file storage requirements by half compared with MPEG-2.
'You can cut the bit rate in half and keep the quality the same relative to MPEG-2, according to our tests,' Fenimore said. 'It provides a 50 percent gain in compression efficiency.'
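Fenimore's 50 percent figure translates directly into storage savings. A rough sketch, assuming an illustrative 6-Mbps MPEG-2 stream rather than any measured agency workload:

```python
# What halving the bit rate means for an hour of stored video.
# The 6-Mbps MPEG-2 rate is an illustrative assumption.

SECONDS_PER_HOUR = 3600
mpeg2_bps = 6_000_000
avc_bps = mpeg2_bps // 2        # same quality at half the bit rate

def gigabytes_per_hour(bps):
    """Storage needed for one hour of video at the given bit rate."""
    return bps * SECONDS_PER_HOUR / 8 / 1e9   # bits -> bytes -> GB

print(f"MPEG-2:    {gigabytes_per_hour(mpeg2_bps):.1f} GB per hour")
print(f"AVC/H.264: {gigabytes_per_hour(avc_bps):.1f} GB per hour")
```

For an agency archiving thousands of hours of footage, cutting 2.7 GB per hour to about 1.4 GB per hour halves both the disk bill and the bandwidth needed to move the files.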
NGA's Motion Imagery Standards Board advises agencies to evaluate AVC/H.264-based equipment as products start to enter the market.
Richard Mavrogeanes, chief technical officer of IP-based video equipment maker VBrick Systems Inc. of Wallingford, Conn., said that AVC/H.264 represents a change in the direction of codecs. Whereas earlier codecs were designed with the assumption that the user would have only a limited amount of processing power for image crunching and uncrunching, AVC/H.264 was written with more powerful digital signal processors in mind.
'AVC depends upon Moore's Law to operate. It uses all the processing capacity [one has],' Mavrogeanes said, referring to the IT folk wisdom that processor power doubles every 18 months. For now, it's impossible to run full AVC/H.264 processing on a single processor, at least not with high-definition video, though that should become possible with the next generation of processors, he said.
Not willing to wait for Moore's Law, some vendors have begun offering products that use AVC/H.264, at least for some lower bit-rate applications. Videoconference hardware provider Polycom Inc. recently incorporated the standard into its line of desktop computer videoconference systems, said Maggie Smith, director of product marketing for the Pleasanton, Calif., company. Polycom's previous desktop computer videoconferencing units required 768 kbps; the new AVC/H.264-based equipment offers the same resolution at 384 kbps.
Proprietary video formats
In addition to AVC/H.264, other manufacturers are releasing their own video and image formats, positioning them as MPEG killers. Microsoft Corp.'s VC1, which is embedded in Windows Media Player, promises to cut the size of the files considerably, Fenimore said. Like AVC/H.264, VC1 was written with high-definition video in mind.
Although much codec research seems geared toward high-definition video, at least one company is moving in the other direction. KT-Tech Inc., a Bowie, Md., company with a long history of NASA research and development work, is marketing its own codec. Called KT-Tech, this codec is being pitched for real-time video transmissions over conduits of limited bandwidth, such as mobile video conferencing.
'Everyone else is going broadband. We're going to focus on the narrowband market,' said KT-Tech founder Bao-Ting Lerner. The company is now pitching its technology to agencies and federal contractors.
Compression schemes like MPEG break images into very small boxes, but KT-Tech's approach scales images down so they get fuzzier, not boxier, as they are condensed, Lerner said. In limited bandwidth settings, fuzzy pictures are easier to decipher than pixelated ones, he said. Another advantage of KT-Tech's codec is that the processing power needed to decode a video stream is the same as that needed to encode a stream, Lerner said. This symmetric approach allows users to transmit webcam images using only handheld computers.
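KT-Tech has not published its algorithm, but the boxy-versus-fuzzy tradeoff Lerner describes can be illustrated with a toy example: averaging fixed blocks produces hard steps (pixelation), while shrinking and blending values back up produces gradual ones (blur). This is a simplified sketch on a tiny grayscale gradient, not the company's actual codec:

```python
# Toy illustration of pixelation versus blur at heavy compression.
# Assumes image dimensions are divisible by the block size.

def block_average(img, block):
    """Replace each block-by-block square with its average (pixelation)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [img[y][x] for y in range(by, by + block)
                              for x in range(bx, bx + block)]
            avg = sum(vals) / len(vals)
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out[y][x] = avg
    return out

def scale_and_blend(img, factor):
    """Average blocks, then blend neighboring values horizontally --
    a stand-in for shrink-then-interpolate, which reads as blur."""
    small = block_average(img, factor)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            left = small[y][max(x - factor // 2, 0)]
            right = small[y][min(x + factor // 2, w - 1)]
            out[y][x] = (left + right) / 2
    return out

# A smooth horizontal gradient: block coding turns it into flat steps
# with a hard edge, while scale-and-blend keeps the transition gradual.
gradient = [[float(x) for x in range(8)] for _ in range(8)]
boxy = block_average(gradient, 4)
fuzzy = scale_and_blend(gradient, 4)
print("boxy row: ", boxy[0])
print("fuzzy row:", fuzzy[0])
```

On the gradient, the block-coded row jumps abruptly from 1.5 to 5.5, while the blended row steps through an intermediate value, the gentler degradation Lerner argues is easier for the eye to decipher.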
In a demonstration for GCN, the company ran a video stream from a desktop computer webcam to a handheld device running on an AT&T 2.5G wireless network. The live video feed was established at 15 frames per second, Lerner said. A two-way interactive video transmission could work with as little throughput as 18 kbps, KT-Tech officials said.
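Those figures imply a very tight budget per frame. A quick calculation, assuming the full 18-kbps link carries video payload with no overhead:

```python
# Per-frame bit budget implied by KT-Tech's demonstration figures:
# 15 frames per second over an 18-kbps link. Assumes kbps means
# 1,000 bits per second and no protocol overhead.

link_bps = 18_000
frames_per_second = 15

bits_per_frame = link_bps / frames_per_second
print(f"{bits_per_frame:.0f} bits per frame "
      f"(about {bits_per_frame / 8:.0f} bytes)")
```

Roughly 150 bytes per frame is far too little for conventional broadcast-oriented codecs, which is why narrowband applications call for such aggressive compression.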
Certainly, low-bandwidth video applications will have a place in government operations. But with high-quality, high-bandwidth video becoming more commonplace, agencies will need to review all their options and settle on standards that will make their jobs easier.