Demo of Hybrid 4 Dimensional Augmented Reality at Virginia Tech

Take a picture, and a 4D app gives you the details

Second of four parts

Imagine taking a picture of an unfamiliar building or a piece of machinery and then having the picture tell you what’s inside the building or how that machine works. That’s part of the idea behind Hybrid 4-Dimensional Augmented Reality (HD4AR), a project of the MAGNUM Group at Virginia Tech.

The project, led by Jules White, a professor of electrical and computer engineering at the university, can use a smart phone’s sensors, geotagging features, camera, and audio and video recorders to augment situational awareness for first responders, construction crews or the public.

When it receives an image, HD4AR draws on a database of information to deliver annotations that are overlaid on the photo on the phone. A user might, for example, take a photo of a piece of equipment. HD4AR would locate a similar image in its database and then deliver the attached data — such as labels for the dials and levers on the equipment and perhaps a link to a user manual — to be superimposed on the image on the user’s cell phone. Or send HD4AR a photo of a downtown street, and it would return the image with buildings and stores identified.
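
To make that workflow concrete, here is a rough, simplified sketch in Python of how such an image-matching step could work, using OpenCV feature matching. The database, file names and annotation format below are illustrative assumptions only and are not drawn from HD4AR itself.

    # Minimal sketch of the match-then-annotate idea, assuming an OpenCV-style
    # feature matcher. The "database," file names and annotation format are
    # made up for illustration; HD4AR's real pipeline is not shown here.
    import cv2

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Hypothetical database: each entry holds a reference photo plus the notes
    # to superimpose, e.g. labels for dials and levers and a manual link.
    database = [
        {"image": "boiler_panel.jpg",
         "annotations": [{"label": "Pressure dial", "xy": (120, 80)},
                         {"label": "User manual", "url": "http://example.com/manual.pdf"}]},
    ]

    def find_best_match(query_path):
        """Return the database entry whose reference photo best matches the query."""
        query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
        if query is None:
            return None
        _, q_desc = orb.detectAndCompute(query, None)
        best_entry, best_score = None, 0
        for entry in database:
            ref = cv2.imread(entry["image"], cv2.IMREAD_GRAYSCALE)
            if ref is None or q_desc is None:
                continue
            _, r_desc = orb.detectAndCompute(ref, None)
            if r_desc is None:
                continue
            matches = matcher.match(q_desc, r_desc)
            # Count close descriptor matches as a crude similarity score.
            score = sum(1 for m in matches if m.distance < 40)
            if score > best_score:
                best_entry, best_score = entry, score
        return best_entry  # the caller overlays best_entry["annotations"] on the photo

The caller would then draw each annotation at its stored position on the user’s photo, which is the behavior the examples above describe.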

“The idea behind this project is to create a framework where, when there was a disaster, people who were trapped in different areas could be using their smart phones to essentially provide situational awareness data to first responders or other citizen scientists in the area,” White said. “It can be image data, taking pictures of things, capturing audio, video, accelerometer data, these types of things. From our perspective it was more about capturing that data, geo-tagging it all, and having it centralized in a location that first responders could look through.”

HD4AR also is designed as a tool for construction sites, taking the place of all those design drawings, but the framework also holds value for consumers. Suppose you go out in the morning and find your car battery needs a jump-start — and, as luck would have it, jump-starting a car is not something you know how to do. “So you take a photo of your engine and then on your photographs we will figure out where the positive and negative terminals are on the battery and we will annotate your photograph,” White said.

The information flow goes both ways, too. “Anybody can add to the database using their phone,” he said. “From those photos we will build a crude 3-D model. So when the user goes into the photo and begins annotating it, drawing in information, we then figure out where on the 3-D model those notes go.

“When that information is saved in the database and when a new photo is taken — with a completely different angle and orientation — we can figure out which of those annotations that the first person created should be visible in the second person's photograph and then render them into that place in the photograph.” 
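
The re-projection step White describes, figuring out where a saved note should appear in a second photo taken from a different angle, amounts to projecting a 3-D point through the new camera’s pose. The short Python sketch below illustrates that geometry with a plain pinhole camera model; the poses, intrinsics and function name are illustrative assumptions rather than HD4AR’s actual code.

    # Sketch of re-projecting a saved 3-D annotation into a new photo taken
    # from a different angle, using a plain pinhole camera model. The camera
    # pose would come from the image-matching step; the numbers here are made up.
    import numpy as np

    def project_annotation(point_3d, R, t, K):
        """Map a 3-D annotation point into pixel coordinates for a new camera pose.

        point_3d : (3,) coordinates of the note on the crude 3-D model
        R, t     : rotation matrix and translation of the new camera
        K        : 3x3 camera intrinsics
        Returns (u, v) pixel coordinates, or None if the point is behind the camera.
        """
        cam = R @ np.asarray(point_3d, dtype=float) + t
        if cam[2] <= 0:          # annotation not visible from this viewpoint
            return None
        uv = K @ (cam / cam[2])  # perspective divide, then apply intrinsics
        return float(uv[0]), float(uv[1])

    # Example with illustrative numbers: identity pose, 800-pixel focal length.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    pixel = project_annotation([0.1, -0.2, 2.0], np.eye(3), np.zeros(3), K)

A note that projects behind the camera, or outside the image bounds, simply would not be rendered in that photo, which matches the idea of deciding which annotations “should be visible” in the second person’s photograph.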

The biggest challenge was to accommodate the processing and matching of photos taken from different angles, at different times of day and with physical changes over time. “We designed all the algorithms to be able to handle change and ambiguity in the images,” White said, citing as examples obstructions such as people walking in front of the camera or walls that change over time.

“We can tolerate a large amount of change before things start giving us trouble,” he said. “A wall may double in height and we can still often recognize that wall based on the original imagery that we have of it.”

White said the technology, which received an Innovation Award at January’s Consumer Electronics Show in Las Vegas, has been licensed to a startup company, PAR Works Inc.

The Virginia Tech group has been working on other innovative ways to manage mobile devices, including a modified version of the Android operating system that lets administrators set rules for when users can access data or run certain apps, based on factors such as their location or the time of day.
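
As a rough illustration of the kind of context-based rule such a system might enforce, the Python sketch below checks an app against a time window and a location. The rule format, app name and coordinates are assumptions for illustration, not the group’s actual Android implementation.

    # Illustrative sketch of a context-based access rule; the rule format,
    # app name and field names are assumptions, not the group's actual API.
    import math
    from datetime import time

    rules = [
        # Hypothetical rule: allow the "field-notes" app only near one site
        # and only during working hours.
        {"app": "field-notes",
         "allowed_hours": (time(7, 0), time(18, 0)),
         "site": {"lat": 37.2296, "lon": -80.4139, "radius_km": 1.0}},
    ]

    def is_allowed(app, now, lat, lon):
        """Decide whether an app may run, given the current time and location."""
        for rule in rules:
            if rule["app"] != app:
                continue
            start, end = rule["allowed_hours"]
            in_hours = start <= now <= end
            site = rule["site"]
            # Rough distance in km (about 111 km per degree; fine for a sketch).
            dist_km = 111.0 * math.hypot(lat - site["lat"], lon - site["lon"])
            return in_hours and dist_km <= site["radius_km"]
        return False  # no matching rule: deny by default

    # Example: checking the hypothetical app at 2:30 p.m. near the listed site.
    print(is_allowed("field-notes", time(14, 30), 37.2296, -80.4139))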

PREVIOUS: Smart phones as sensors: locating snipers, or parking spots
NEXT: When smart-phone technology hits the wall

Reader Comments

Wed, Feb 6, 2013 Chad Rommerdale

LOL! Seriously? "This application will be perfect for determine who you are....violation of our human rights" I'd rather be alive with rights violations than dead, rights intact. HaHaHa! While they're at it (devs), go ahead and program it to take pics of all the dumb people in the world as they gaze blindly into their phones commenting on something they clearly misunderstood. Then the rest of the world may be forewarned of their presence.

Wed, Feb 6, 2013

This application will be perfect for determining who you are... Nowadays, personal information is pretty much out there due to apps such as Facebook, etc. It's a positive thing as well as a possible violation of our human rights.

Wed, Feb 6, 2013

This app would be great for giving background on historical sites.
