[Image: recognizing objects in traffic (Texas Advanced Computing Center/Center for Transportation Research)]

Traffic analysis at the intersection of AI and supercomputing

To help traffic planners and city managers pull insights from the scores of traffic cameras mounted in cities, researchers are developing tools that use deep learning and data mining to conduct sophisticated, searchable traffic analyses.

Researchers from the Texas Advanced Computing Center (TACC), the University of Texas Center for Transportation Research and the City of Austin have developed an algorithm that uncovers relationships among the items in traffic camera video by automatically recognizing and labeling all potential objects from the raw footage, tracking those objects by comparing them with previously recognized ones and comparing the output from each frame.
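The tracking step described above -- comparing newly detected objects against those recognized in earlier frames -- is commonly done by matching bounding boxes on overlap. The sketch below is an illustrative, simplified version of that idea, not the team's actual implementation; the box format, IoU threshold, and ID scheme are all assumptions.

```python
# Minimal tracking-by-comparison sketch: detections from each frame are
# matched to objects from the previous frame by bounding-box overlap (IoU).
# Boxes are (x1, y1, x2, y2); threshold and ID scheme are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def track(frames, threshold=0.3):
    """Assign a stable ID to each detection by matching it against the
    best-overlapping box from the previous frame; unmatched detections
    start a new track. Returns a per-frame list of (id, box) pairs."""
    next_id = 0
    previous = []   # (id, box) pairs from the last frame
    tracks = []
    for boxes in frames:
        current = []
        for box in boxes:
            best = max(previous, key=lambda p: iou(p[1], box), default=None)
            if best and iou(best[1], box) >= threshold:
                current.append((best[0], box))   # same object, carried over
            else:
                current.append((next_id, box))   # new object enters the scene
                next_id += 1
        previous = current
        tracks.append(current)
    return tracks
```

A production tracker would also handle occlusion, one-to-one assignment, and objects leaving the scene, but the frame-to-frame comparison above is the core of the approach the article describes.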

The team used the YOLO open-source object-detection library and neural network developed by University of Washington and Facebook researchers for real-time object detection. For the data analysis and query part of the system, they incorporated HiveQL, a query language maintained by the Apache Software Foundation that lets users search and compare data in the system, according to officials from TACC.

Being able to label, track and analyze traffic lets researchers count how many moving vehicles traveled down a road, identify places where vehicles and pedestrians are too close or see how many cars drive the wrong way down a one-way street. It can also analyze flow and congestion and recognize trends in traffic patterns.
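Once objects carry stable IDs, analyses like vehicle counts and wrong-way detection reduce to simple queries over trajectories. The following sketch shows that reduction under stated assumptions (trajectories as ordered x-coordinates, a one-way street flowing in the +x direction, and a hypothetical `min_travel` jitter filter); it is not the project's code.

```python
# Illustrative sketch: counting vehicles and flagging wrong-way travel
# from tracked trajectories. Each trajectory is an ordered list of
# x-coordinates for one object ID; the street is assumed one-way in +x.

def analyze(trajectories, min_travel=5):
    """Count distinct moving vehicles and flag those whose net motion
    runs against the one-way flow."""
    counted, wrong_way = 0, []
    for vid, xs in trajectories.items():
        displacement = xs[-1] - xs[0]
        if abs(displacement) < min_travel:
            continue                  # parked or jittering; not a trip
        counted += 1
        if displacement < 0:          # net motion against the +x flow
            wrong_way.append(vid)
    return counted, wrong_way
```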

"Current practice often relies on the use of expensive sensors for continuous data collection or on traffic studies that sample traffic volumes for a few days during selected time periods," said Natalia Ruiz Juri, a research associate and director of the Network Modeling Center at UT's Center for Transportation Research.

This system uses "artificial intelligence to automatically generate traffic volumes from existing cameras," creating valuable datasets researchers can use "to understand the impact of traffic management and operation decisions," she said. It demonstrates how AI can greatly reduce the effort involved in analyzing video data and provide actionable information for decision-makers.

Preliminary results showed the tool was 95 percent accurate.

The system, for example, was able to identify places where cars and pedestrians were in close proximity, helping city officials "pinpoint potentially dangerous locations," said Jen Duthie, a consulting engineer for the City of Austin and a collaborator on the project. With that information, she said, "we can direct our resources toward fixing problem locations before an injury or fatality occurs."

The researchers plan to explore how the system can be used for other safety-related analyses, such as identifying locations where people cross busy streets outside of crosswalks, understanding how drivers react to different signs signaling the presence of pedestrians and quantifying how far people are willing to walk in order to use a walkway.

In addition to developing and testing the model, the researchers evaluated the performance and scalability of the object identification system on different high-end processors, including Intel Xeon Phi, Intel Skylake and several types of NVIDIA GPUs.

"Outside of computer science and the tech industry, deep learning adoption has lagged, partly due to its very high computational requirements in both the training and prediction stages," said Weijia Xu, a research scientist who leads the Data Mining & Statistics Group at TACC. As a result, he said, using "high-end hardware and high-performance computing resources is crucial for providing practical solutions for complex real-world problems, especially those involving large-scale data."

"We don't want to build a turn-key solution for a single, specific problem," Xu said. "We want to explore means that may be helpful for a number of analytical needs, even those that may pop up in the future."

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan @sjaymiller.
