
Researchers tap LiDAR-enabled indoor mapping for responders

Two projects are testing ways to use light detection and ranging (LiDAR) remote sensing to map the insides of buildings, with the goal of increasing first responders' safety and efficiency during emergencies.



Both efforts are part of the Global City Teams Challenge (GCTC), led by the National Institute of Standards and Technology in partnership with other agencies such as the Homeland Security Department’s Science and Technology Directorate. They’re also part of a larger program in which three teams are researching LiDAR-enabled indoor mapping, which has traditionally been used to map outdoor terrain.

“The government is getting out in front and trying to create standards early and ensure the relevance" and interoperability of the technology, said Joel Lawhead, CIO at Nvision Solutions, a geospatial solutions provider. Rather than ending up with a "VHS vs. Betamax issue where the end users don’t know what to get, we’re hoping that developing standards gives all the innovators a target so things are compatible and everybody knows what the end game looks like," he said. "Hopefully it will help technology develop faster.”

Lawhead leads a GCTC project in Hancock, Miss., where his team scanned the interiors of 10 public schools and is now working to label objects in those scans -- such as fire extinguishers, stairs and emergency exits -- that first responders could use to support and expedite rescue efforts.

“That’s like taking a Sharpie marker and walking around labeling everything in your office or house and then doing that in 10 or 15 buildings,” Lawhead said. “It’s very tedious and time consuming. We can do it manually with just looking at things, drawing a box around it and typing in what it is, but we are also working on a way to automate that using image recognition.”
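The manual workflow Lawhead describes -- drawing a box around an object and typing in what it is -- amounts to attaching labeled 3D bounding boxes to regions of the point cloud. A minimal sketch of that idea (the structure and names here are illustrative assumptions, not the team's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One manually drawn label: an axis-aligned 3D box around an
    object in the point cloud, plus the text the annotator typed in.
    (Illustrative only; not the Hancock team's real data model.)"""
    label: str         # e.g. "fire extinguisher", "emergency exit"
    min_corner: tuple  # (x, y, z) of one corner of the box
    max_corner: tuple  # (x, y, z) of the opposite corner

def points_inside(points, box):
    """Return the cloud points that fall inside an annotation's box."""
    (x0, y0, z0), (x1, y1, z1) = box.min_corner, box.max_corner
    return [(x, y, z) for x, y, z in points
            if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1]

# Toy usage: one labeled exit sign, a two-point cloud
exit_sign = Annotation("emergency exit", (0, 2, 0), (1, 3, 1))
cloud = [(0.5, 2.5, 0.5), (5.0, 0.0, 0.0)]
inside = points_inside(cloud, exit_sign)
```

An automated pipeline would replace the hand-drawn box with one proposed by an image-recognition model, but the resulting annotation record could look much the same.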

To collect the data for the maps, the team used a handheld LiDAR scanner connected to a hard drive in a backpack that stored the data as it was collected.

“It’s a laser that spins around on a gun-looking device in your hand that also has a camera system on it,” Lawhead said. “It’s like a laser range-finder, but it’s shooting 43,000 points a second.” The camera takes video while the laser provides points in space with X, Y and Z locations – horizontal, vertical and depth. The result is a point cloud.

“It looks like just a blizzard of dots, but they’re in the shape of the area you walked around,” he said. Because the cloud and video are linked, the dots can be colorized to create a 3D picture of everything that was scanned. That can then be converted into other types of data such as a 3D model. “It’s a starting point for doing indoor mapping,” Lawhead said.
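The colorization step Lawhead describes pairs each laser return with the video frame recorded at the same moment, so every (x, y, z) dot picks up an RGB color. A toy sketch of that linkage, assuming a simple nearest-frame match (the names and the per-frame color sampling are assumptions, not the actual system):

```python
from dataclasses import dataclass

@dataclass
class Point:
    # One LiDAR return: position in space plus an assigned color
    x: float  # horizontal
    y: float  # vertical
    z: float  # depth (forward/backward)
    r: int = 0
    g: int = 0
    b: int = 0

def colorize(timed_points, frame_colors, frame_rate=30.0):
    """Give each point the color the camera saw when that laser
    return was captured. `timed_points` is a list of (timestamp,
    Point); `frame_colors` holds one sampled RGB triple per video
    frame (a real system samples the pixel the point projects onto)."""
    for t, p in timed_points:
        idx = min(int(t * frame_rate), len(frame_colors) - 1)
        p.r, p.g, p.b = frame_colors[idx]
    return [p for _, p in timed_points]

# Toy usage: two returns colorized from two consecutive video frames
cloud = [(0.00, Point(1.0, 2.0, 0.5)),
         (0.04, Point(1.1, 2.0, 0.6))]
frames = [(200, 30, 30), (30, 200, 30)]
colored = colorize(cloud, frames)
```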

LiDAR-enabled indoor mapping 

It took about an hour to map each school, each of which was at least 50,000 square feet. “From that, you can create a floor plan, and that might take another hour,” he said. “To create a 3D model would take a lot longer -- probably a couple days.” Without the camera, it would take weeks to survey a building’s interior with a tape measure, he added.

A year into the project, Lawhead said it will take at least 14 more years to produce “a completely automated process that’s efficient and cheap and will be practical to map most, if not all, buildings.”

As the Hancock team develops that foundational automation, it is also working to determine how responders can best access the maps once they’re ready. The current debate is over the relative effectiveness of virtual vs. augmented reality.

“Typically, when they get a call, they’re just going to rush to the building. Usually they don’t have any information about what the building looks like inside,” said Lan Wang, principal investigator for the Point Cloud MAP901 project in Memphis, Tenn. “They may have a floor map, but that may not be enough for them to navigate.”

Her team used a mobile mapping system from Green Valley International called LiBackpack with Velodyne PUCK LiDAR, a small computer and storage for the data.

“We also added a 360-degree camera called Insta360 [Pro 2] to the backpack so that we can collect 360-degree video at the same time as we collect the point cloud data from the LiDAR,” added Wang, who also chairs the computer science department at the University of Memphis. “This way, we can actually get color information in addition to the points, which represents the 3D space in the buildings. We can color the 3D point cloud so it’s much more realistic.”

The Memphis plan is to map seven buildings of varying size, including a community center and the Liberty Bowl Stadium. The device can scan for about 10 to 15 minutes before it begins to accumulate errors, so the team stitches all the scans together at the end. They’re also working to train algorithms for object detection and labeling, similar to Lawhead’s team. Wang said she expects to have the models labeled in about two months.
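Stitching the short scans together means placing each scan in a shared coordinate frame and merging the clouds. A deliberately simplified sketch assuming each scan's position offset is already known and translation-only (real registration also estimates rotation, typically with an algorithm such as iterative closest point; the names here are illustrative):

```python
def stitch(scans):
    """Merge several short scans into one point cloud.
    `scans` is a list of (offset, points) pairs: `offset` is the
    (dx, dy, dz) translation placing that scan in the shared frame,
    and `points` holds (x, y, z) tuples in the scan's local frame."""
    merged = []
    for (dx, dy, dz), points in scans:
        # Shift every point by the scan's offset, then accumulate
        merged.extend((x + dx, y + dy, z + dz) for x, y, z in points)
    return merged

# Toy usage: two scans of adjoining rooms, the second 10 m along x
scan_a = ((0.0, 0.0, 0.0), [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
scan_b = ((10.0, 0.0, 0.0), [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
model = stitch([scan_a, scan_b])
```

Breaking a large building into many short scans, as the Memphis team does, keeps each scan's accumulated drift small before the merge.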

The project began in October 2018 and is expected to run for two years. Wang said data will be shared through the city’s open data portal, which will feature forms the public can use to request access.

In Mississippi, Lawhead said he and his teams will deliver their maps to the state, which will host them long-term in a geospatial data clearinghouse. Researchers will be able to download the data for use in subsequent experiments.

About the Author

Stephanie Kanowitz is a freelance writer based in northern Virginia.
