Augmented reality for underground infrastructure: the problem of spatial perception

Augmented reality (AR) is a hot topic, and new applications of the technology appear every day.  At the moment, AR is used mostly in marketing, tourism, and wayfinding, but research and industrial groups are progressively turning to more demanding applications such as medicine and engineering.  In those areas, accuracy matters: decisions taken by engineers often have a major impact on people's lives or safety, so those professionals must be able to rely on accurate data.

Augmented reality holds great promise for the infrastructure engineering world, but so far it has not really moved past the prototype stage.  The problem is that achieving high-quality (I mean “engineering-quality”) augmentation is very hard: we must be able to track the position of the user’s tablet or smartphone with millimeter precision, outdoors, and in real time.  That is extremely difficult to achieve.  In my last post, I described our solution to the problem: instead of augmenting reality, we augment panoramic images.  Doing so increases our chances of obtaining accurate augmentation.  See my last post for a detailed explanation and videos.
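To give a concrete picture of what “augmenting a panoramic image” involves, here is a minimal sketch of projecting a 3D point of a pipe model onto an equirectangular panorama once the camera pose is known.  The function name, the coordinate convention, and the panorama layout are illustrative assumptions, not our actual pipeline:

```python
import numpy as np

def project_to_panorama(point_world, cam_position, cam_rotation,
                        pano_width, pano_height):
    """Project a 3D point (e.g. a pipe vertex) onto an equirectangular panorama.

    cam_rotation : 3x3 rotation matrix from world axes to camera axes
    Assumes a camera frame with x right, y up, z forward, and a panorama
    covering 360 degrees horizontally and 180 degrees vertically.
    Sketch only -- the real augmentation pipeline may use other conventions.
    """
    d = cam_rotation @ (np.asarray(point_world, float) - np.asarray(cam_position, float))
    d = d / np.linalg.norm(d)              # direction from camera to the point
    lon = np.arctan2(d[0], d[2])           # azimuth, 0 = straight ahead
    lat = np.arcsin(d[1])                  # elevation above the horizon
    u = (lon / (2 * np.pi) + 0.5) * pano_width
    v = (0.5 - lat / np.pi) * pano_height  # image rows grow downward
    return u, v
```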

Augmentation accuracy is not the only challenge of augmented reality, though.  Last summer, we pursued our exploration of panoramic image augmentation by studying another difficult problem: spatial perception.  Augmented scenes often look unnatural: adding artificial objects to a real scene is unusual, and it sometimes confuses the brain, which refuses to make sense of what it sees.  This is illustrated in the figure below, where a 3D pipe model is used to augment a street scene.  The image is meant to show underground pipes through the ground, and the augmentation is achieved by simply drawing the pipes on top of the photo.  If you had X-ray vision, that is possibly how you would see the pipes.  The problem is that underground pipes lie below the road surface, so you are not supposed to be able to see them at all.  Displaying them this way creates a confusing image that is hard for the brain to interpret.  Such an image does not convey good spatial perception; it is too confusing to be useful.
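For illustration, that naive overlay amounts to a straightforward alpha blend.  The sketch below (plain NumPy, with made-up array names) shows why every depth cue from the pavement disappears once the pipes are pasted on top:

```python
import numpy as np

def naive_overlay(photo, pipe_render, pipe_alpha):
    """Naively composite a rendered pipe layer over a street photo.

    photo       : H x W x 3 float array in [0, 1], the street/panoramic image
    pipe_render : H x W x 3 float array, the pipes rendered from the same viewpoint
    pipe_alpha  : H x W x 1 float array, 1 where a pipe covers a pixel, 0 elsewhere

    Wherever pipe_alpha is 1 the pavement is completely replaced, so all the
    cues telling the viewer that the pipes sit *below* the road are lost --
    this is the confusing "floating pipes" effect described above.
    """
    return pipe_alpha * pipe_render + (1.0 - pipe_alpha) * photo
```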

The problem of spatial perception in augmented reality often arises when the model used for augmentation is supposed to be hidden, like those pipes.  The question is: how can we make hidden objects visible in a way that is both visually pleasing and understandable?

Avery et al. (2009) [1] proposed an interesting method.  In one of their example applications, they show the augmentation model through a brick wall (see the first 40 seconds of their video below).  During augmentation, the wall is not made totally invisible: the augmented content is shown behind a faint brick texture.  That is very clever, and it helps the brain understand that the augmented content is actually behind the wall (and not covering it, as in the pipe example above).  It probably works because our brain is used to such representations.  A good example is looking through a screen door: you see the outdoor landscape, but you also see the screen up close, reminding you that there is something between you and the landscape and helping you understand that the landscape is farther away.  Such analogies help us make sense of otherwise unusual scenes.
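A rough way to express that “screen door” idea in code is to keep a fraction of the occluding surface on top of the see-through augmentation.  The sketch below only illustrates the principle; it is not the method of Avery et al., and the opacity value and array names are assumptions:

```python
import numpy as np

def xray_with_occluder_hint(photo, model_render, model_mask, occluder_opacity=0.35):
    """Show a hidden model through a surface while keeping a faint occluder cue.

    photo            : H x W x 3 float array, the original image (wall, road, ...)
    model_render     : H x W x 3 float array, the hidden model rendered from the
                       same viewpoint
    model_mask       : H x W x 1 float array, 1 where the model should appear
    occluder_opacity : fraction of the occluding surface kept on top of the
                       augmentation (illustrative value, not from the paper)

    Inside the masked region we re-blend part of the original surface over the
    model, so the viewer still perceives something between the camera and the
    hidden content instead of a hole punched through the wall.
    """
    see_through = (1.0 - occluder_opacity) * model_render + occluder_opacity * photo
    return model_mask * see_through + (1.0 - model_mask) * photo
```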

Subsurface pipes present a similar problem.  Pipes are underground, so they should not be visible.  If we want to augment a scene with hidden pipes, we have to find a way to make it clear in the augmentation that the pipes really are underground.  For that, we need an analogy with the real world.  How do we normally see subsurface utilities?  We can only see them during installation or after excavation, and in both situations we see them inside a hole.  We are used to seeing that image; the brain recognizes it and understands it.  So let’s do the same and display the pipe models inside a virtual excavation!  That is what our team did last summer:

As you can see, it works quite well.  By drawing a virtual excavation, the brain can more easily understand the scene: we feel as if the pipe model really were underground.  Note that the idea is not ours: a team at the University of Graz came up with it first, in their project Vidente.  We adapted the concept to panoramic images and made it dynamic.
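Conceptually, the virtual excavation amounts to clipping the augmentation to a trench region and replacing the pavement there with a rendering of the trench walls.  The 2D compositing sketch below captures the idea; the array names and masks are illustrative assumptions, and the real renderer works in 3D with proper depth testing rather than with image masks:

```python
import numpy as np

def virtual_excavation_composite(photo, pipe_render, pipe_alpha,
                                 excavation_mask, excavation_walls):
    """Show buried pipes only inside a virtual excavation.

    excavation_mask  : H x W x 1 float array, 1 inside the projected outline of
                       the virtual trench, 0 elsewhere
    excavation_walls : H x W x 3 float array, a rendering of the trench walls
                       and bottom from the same viewpoint

    Outside the trench the road surface stays untouched; inside it, we first
    "dig" by replacing the pavement with the trench rendering, then draw the
    pipes on top -- mimicking how utilities are actually seen on site.
    """
    dug = excavation_mask * excavation_walls + (1.0 - excavation_mask) * photo
    pipes_in_trench = pipe_alpha * excavation_mask   # clip the pipes to the trench
    return pipes_in_trench * pipe_render + (1.0 - pipes_in_trench) * dug
```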

Near the end of the video, you can see a 2D GPR scan.  GPR stands for ground-penetrating radar, a device used to “detect” underground pipes through the ground.  A GPR scan is meaningless unless you know exactly where it was captured, so displaying it in the context of reality, the way we have done here, is very helpful for interpreting it.  We can more easily see whether the scan detected the pipes that appear at that location in the model; this also lets us verify whether the model is properly geolocated, and gives us a better idea of the obstacles that may be encountered during excavation in that area.  That sort of interpretation is made possible because the three data sets are displayed together: the subsurface utility pipe models, the GPR scan, and the panoramic image of reality.
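To give an idea of how a 2D GPR scan can be placed in context, the sketch below builds a vertical quad hanging below the survey line, onto which the scan image could be texture-mapped in the same coordinate frame as the pipe model.  The coordinate convention and scan metadata are assumptions, not our actual data format:

```python
import numpy as np

def gpr_scan_quad(start_xyz, end_xyz, max_depth_m):
    """Build the four 3D corners of a vertical quad for a 2D GPR scan image.

    start_xyz, end_xyz : (x, y, z) ground coordinates of the scan's start and
                         end points, in the same frame as the pipe model
    max_depth_m        : depth covered by the scan, in metres

    Returns corners ordered top-left, top-right, bottom-right, bottom-left,
    assuming z is "up" in the model's coordinate frame.
    """
    start = np.asarray(start_xyz, float)
    end = np.asarray(end_xyz, float)
    down = np.array([0.0, 0.0, -max_depth_m])   # straight down from the survey line
    return np.array([start, end, end + down, start + down])
```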

Spatial perception is essential for good augmentation.  The virtual excavation appears to be a very good solution to the subsurface utility visualization problem, probably because it presents an image that is familiar to us.

Interested in seeing more?  Stay tuned!  We will have other exciting results to show you this winter.

References

[1] B. Avery, C. Sandor, and B. H. Thomas, “Improving Spatial Perception for Augmented Reality X-Ray Vision,” IEEE Virtual Reality Conference (VR), 2009.

Comment
Stéphane,

Viewing geological survey data or detailed geological model data is especially interesting in combination with large subsurface engineering objects such as subway stations and tubes.

In the Netherlands there is a project by the Netherlands Architecture Institute (NAI) called 'UAR ondergrond' in which they create AR city walks showing subsurface objects.  At the presentation I saw models of subway stations, but never in their real subsurface setting.  I wonder how they are going to do that.  The problem is that in most city areas there is no room for large excavations.

Another idea I am working on myself is visualizing mine shafts in an AR-like application.  These mining areas can lie beneath built-up areas as well as beneath rugged landscape.  The mining activity has stopped, but AR applications can tell the story of the techniques used and their historic importance for the region.  It is meant to tell a visual story, not to be accurate to the millimeter.

I think applications like these give you some extra challenges to work out in future prototypes.

Harry Middelburg

Geological Survey of the Netherlands
