Augmented reality (AR) is a hot topic. Every day we see new applications of the technology. At the moment, AR is used mostly in marketing, tourism, and wayfinding, but research and industrial groups are progressively turning to other, more demanding applications such as medicine and engineering. In those fields, accuracy matters: decisions taken by engineers often have a major impact on people's lives and safety, so these professionals must be able to rely on accurate data.
Augmented reality promises a brilliant future in the infrastructure engineering world, but at the moment it has not really moved past the prototype stage. The problem is that achieving high-quality (I mean "engineering-quality") augmentation is very hard: we must be able to track the position of the user's tablet or smartphone with millimeter precision, outdoors, and in real time. That is extremely difficult to achieve. In my last post, I described our solution to the problem: instead of augmenting reality, we augment panoramic images. Doing so increases our chances of obtaining accurate augmentation. See that post for a detailed explanation and videos.
Now, augmentation accuracy is not the only challenge of augmented reality. Last summer, we pursued our exploration of panoramic image augmentation by studying another difficult problem: spatial perception. Augmented scenes often look unnatural: adding artificial objects to a real scene is unusual, and sometimes the brain simply refuses to make sense of what it sees. This is illustrated in the figure below, where a 3D pipe model is used to augment a street scene. The image is meant to show underground pipes through the ground, and the augmentation is achieved by drawing the pipes on top of the photo. If you had X-ray vision, this is possibly how you would see the pipes. The problem is that underground pipes sit below the road surface, so you are not supposed to be able to see them. Displaying the pipes this way creates a confusing image that is hard for the brain to understand. Such an image does not convey good spatial perception - it is too confusing to be useful.
The problem of spatial perception in augmented reality often arises when the model that is used for augmentation is supposed to be hidden, like those pipes. The question is: how can we make hidden objects become visible in a way that is visually pleasing and understandable?
In their work, Avery et al. (2009) proposed an interesting method. In one of their example applications, they show the augmentation model through a brick wall (see the first 40 seconds of their video below). During augmentation, the wall is not made totally invisible: the augmented content is shown behind a faded brick texture. That is very clever, and it helps the brain understand that the augmented image is actually behind the wall (and not covering it, as in the pipe example above). It probably works because our brain is used to such representations. A good everyday example is looking through a screen door: you see the outdoor landscape, but you also see the screen up close, reminding you that something stands between you and the landscape and helping you understand that the landscape is actually farther away. Such real-world analogies help us make sense of otherwise unusual scenes.
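The "keep a faded copy of the occluder on top" idea can be approximated with simple per-pixel blending: wherever the occluder hides the augmentation, show the hidden-object render, then blend a fraction of the occluder's own texture back over it. A minimal NumPy sketch (the function name and fixed blending weight are my own illustration, not Avery et al.'s implementation):

```python
import numpy as np

def xray_composite(scene, hidden, occluder_mask, texture_alpha=0.35):
    """Composite a render of hidden geometry into a scene so that it
    reads as being *behind* the occluding surface, not painted on top.

    scene         -- HxWx3 float array in [0, 1], the real image
    hidden        -- HxWx3 float array, render of the hidden objects
    occluder_mask -- HxW bool array, True where the occluder covers hidden geometry
    texture_alpha -- how strongly the occluder's own texture is kept on
                     top (the 'screen door' cue; 0 = fully transparent wall)
    """
    out = scene.copy()
    m = occluder_mask
    # Inside the mask: mostly the hidden render, with a faded layer of
    # the occluder texture blended back over it as a depth cue.
    out[m] = (1.0 - texture_alpha) * hidden[m] + texture_alpha * scene[m]
    return out
```

In a real system the retained layer would typically be an edge map or brick texture extracted from the wall rather than the full image, and the weight could vary per pixel; a constant blend is just the simplest version of the cue.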
Subsurface pipes present a similar problem. Pipes are underground, so they should not be visible. If we want to augment a scene with hidden pipes, we have to find a way to make it clear in the augmentation that the pipes really are underground. For that, we need an analogy with the real world. How do we normally see subsurface utilities? We only see them during installation or after excavation - in both situations, inside a hole. We are used to seeing that image; the brain recognizes it and understands it. So let's do the same and display the pipe models inside a virtual excavation! That is what our team did last summer:
As you can see, it works pretty well. Thanks to the virtual excavation, the brain can more easily understand the scene: we feel as if the pipe model were really underground. Note that the idea is not ours - a team at Graz University of Technology came up with it first, in their project Vidente. We adapted the concept to panoramic images and made it dynamic.
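At its core, the virtual excavation is a masking operation: outside the excavation footprint, keep the untouched street image; inside it, draw the pipe render over a render of the excavation walls and floor. A rough 2D-compositing sketch, assuming pre-rendered layers (all names are hypothetical; the actual prototypes render the trench geometry in 3D rather than blending flat layers):

```python
import numpy as np

def excavation_composite(scene, pipes, walls, hole_mask):
    """Clip an underground-pipe render to a virtual excavation.

    scene     -- HxWx3 float array, the real street image
    pipes     -- HxWx4 float array, pipe render with alpha in channel 3
                 (alpha = 1 where a pipe is visible, 0 elsewhere)
    walls     -- HxWx3 float array, render of the trench walls and floor
    hole_mask -- HxW bool array, True inside the excavation footprint
    """
    out = scene.copy()
    m = hole_mask
    a = pipes[..., 3:4]  # per-pixel pipe alpha, kept as HxWx1 for broadcasting
    # Inside the hole: pipes where they exist, trench walls elsewhere.
    # Outside the hole: the street image is left untouched.
    out[m] = (a * pipes[..., :3] + (1.0 - a) * walls)[m]
    return out
```

The familiar "open trench" look comes from the walls layer: without it, the pipes would again float over the pavement exactly like the confusing overlay in the first figure.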
Near the end of the video, you can see a 2D GPR scan. GPR stands for "ground-penetrating radar", a device used to detect underground pipes through the ground. A GPR scan is meaningless unless you know exactly where it was captured. Displaying it in the context of reality, the way we have done here, makes it much easier to interpret: we can see whether the scan detected the pipes that the model places at that location, verify whether the model is properly geolocated, and better anticipate the obstacles that may be met during excavation in that area. That sort of interpretation is made possible because the three data sets are displayed together: the subsurface utility model, the GPR scan, and the panoramic image of reality.
Spatial perception is very important for good augmentation. The virtual excavation appears to be a very good solution to the subsurface utilities visualization problem, probably because it displays an image that is familiar to us.
Interested in seeing more? Stay tuned! We will have other exciting results to show you this winter.
Benjamin Avery, Christian Sandor, and Bruce H. Thomas, "Improving Spatial Perception for Augmented Reality X-Ray Vision," IEEE VR 2009.
I am very interested in your sub-infrastructure tech. I live in Montana, at the heart of a very critical watershed. I am working with a friend in water and sewer here to begin mapping sections of the utility system for use in AR pipe detection. I am using the Autodesk Infrastructure suite for mapping and imaging. How soon do you see a Bentley setup for municipalities interested in applying your tech to city functions? Do you have any pointers on mapping techniques for future use in an AR visualization?
Viewing geological survey data or detailed geological model data is especially interesting in combination with large subsurface engineering objects such as subway stations and tubes.
In the Netherlands there is a project by the Netherlands Architecture Institute (NAI) called 'UAR ondergrond', where they create AR city walks showing subsurface objects. At the presentation I saw models of subway stations, but never in their real subsurface setting. I wonder how they are going to do that. The problem is that in most city areas there is no visual room for large excavations.
Another idea I myself am working on is visualizing mine shafts in an AR-like application. These mining areas can lie beneath built-up areas but also beneath rugged landscape. The mining activity has stopped, but AR applications can tell the story of the techniques and their historic importance for the region. It is meant to tell a visual story, not to be accurate to the millimeter.
I think applications like these give you some extra challenges to work out in future prototypes.
Geological Survey of the Netherlands
Interesting comment - thanks! We did not design our prototype with that in mind, but it is certainly an interesting problem to look at. The way it is designed now, the method is limited to shallow infrastructure, since our virtual excavation has a limited depth. We could easily modify the prototype to let us see deeper, in which case the displayed ground cross-section would also have to be placed farther away from the user to facilitate visualization of such deep structures. So, as you mentioned, the size of the excavation would depend on the depth.
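The depth/distance trade-off follows from simple line-of-sight geometry: with the eye at height h above flat ground, a point at depth d and horizontal distance x is only visible through an opening that begins no farther than x·h/(h+d) from the viewer. A back-of-the-envelope helper (my own illustration, assuming a pinhole viewer and flat ground):

```python
def min_opening_start(eye_height, depth, target_dist):
    """Horizontal distance from the viewer at which the ground opening
    must begin for a point `depth` meters below grade and `target_dist`
    meters away (horizontally) to be visible along a straight sight line.

    Derivation: the eye sits at (0, eye_height) and the target at
    (target_dist, -depth); their connecting line crosses the ground
    plane y = 0 at x = target_dist * eye_height / (eye_height + depth).
    """
    return target_dist * eye_height / (eye_height + depth)

# For a target 100 m deep and 50 m away, seen from eye height 1.6 m,
# the opening must begin within 50 * 1.6 / 101.6, i.e. under a meter
# from the viewer: deep structures need either an enormous hole or a
# cross-section placed far from the user.
```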
I am curious: could you describe specific situations or workflows where it would be useful to view geological survey data in the context of reality (e.g. in the field)? I have only very basic knowledge in that domain and would be interested to know more, so that we can design our future prototypes with those sorts of applications in mind.
Very nice solution for visualizing this in a way the brain can accept!
Are there any examples or experiments on visualizing deeper objects, let's say down to 100 meters? Within the geological survey we usually work with objects that lie deeper than just a few meters.
What is the optimal relation between the size and depth of the object and the size of the excavation?
So simple a solution once someone comes up with it and presents it to you.
We have struggled with the challenge of depicting 3D visualizations of subsurface utilities in 2D illustrations for marketing purposes, and this article explains very well why it is such a challenge.
The augmented reality / virtual excavation view certainly helps the brain reconcile the subsurface 3D infrastructure it knows it should not see with the unexcavated surface above it.