Augmented reality for subsurface utilities: further improving perception

Augmented reality (AR) extends the user’s perception of the physical world with virtual data.  However, this extension improves perception only if the virtual data is displayed in a visually clear and meaningful way.  We studied that problem last year; our work was presented in a previous post on augmented reality for underground infrastructure.  In that post, I discussed the importance of providing good 3D perception in augmented reality applications.  Perception is particularly important for underground pipes, which are buried and hidden from view, so they must be displayed in a way that helps the mind understand they are actually underground.  A successful result was achieved using a “virtual hole” (see Figure 1).

Figure 1: Virtual hole showing a 3D pipe model.

That was our first, basic experiment.  Our results showed that the technique does indeed give the impression that the pipes are underground, and the tool turned out to be an intuitive, interactive way to explore a 3D underground pipe model.  But we felt the prototype could be improved.  First, while the pipes do look underground, the pipes themselves, displayed with a uniform color, do not look quite cylindrical (see Figure 1), which degrades 3D perception.  One way to alleviate that problem was to add texture to the pipes (see Figure 2); this makes their shape much clearer.  Another easy change was to attach a light source to the cursor: moving the cursor moves the light, which changes the reflections on the pipes and further improves 3D perception.

Figure 2: Underground pipe model with texture.
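The post does not include the rendering code, but the effect of the cursor-driven light is essentially diffuse (Lambertian) shading.  Here is a minimal numpy sketch of the idea; the pipe geometry, light position, and function names are made up for illustration:

import numpy as np

def shade_pipe(points, normals, light_pos, base_color):
    """Diffuse (Lambertian) shading of pipe surface samples.

    points, normals : (N, 3) surface positions and unit normals.
    light_pos       : (3,) light position, e.g. tied to the cursor.
    base_color      : (3,) RGB in [0, 1].
    """
    to_light = light_pos - points
    to_light /= np.linalg.norm(to_light, axis=1, keepdims=True)
    # Bright where the surface faces the light, dark elsewhere; moving the
    # light (cursor) shifts this gradient and reveals the cylinder's curvature.
    diffuse = np.clip(np.sum(normals * to_light, axis=1), 0.0, 1.0)
    ambient = 0.2  # small constant term so the dark side stays visible
    shade = ambient + (1.0 - ambient) * diffuse
    return shade[:, None] * base_color

# A horizontal pipe of radius 0.1 m along the x axis, buried 1 m deep.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
x = np.linspace(0.0, 5.0, 50)
T, X = np.meshgrid(theta, x)
pts = np.stack([X, 0.1 * np.cos(T), -1.0 + 0.1 * np.sin(T)], -1).reshape(-1, 3)
nrm = np.stack([np.zeros_like(T), np.cos(T), np.sin(T)], -1).reshape(-1, 3)

# Re-shading with a new light_pos each time the cursor moves is all the
# "light attached to the cursor" trick requires.
colors = shade_pipe(pts, nrm, np.array([2.5, 0.5, 0.5]), np.array([0.8, 0.1, 0.1]))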

The virtual excavation is great as an interactive visualization tool, but it is limited as far as measurements are concerned.  Municipal workers sometimes want to install new pipes among existing ones, so they may want to know whether there is enough vertical distance between two pipes to fit a new one.  The virtual excavation is not of much help there, as Figure 2 illustrates: the blue pipe does look like it runs underneath the red pipe, but it is hard to tell how far apart they are.  The blue one could be 1 m under the red one, or perhaps only 10 cm; they might even be touching.  From that specific viewpoint, it is simply hard to tell.  We need something better.  The problem is that we work in a (static) panorama context, so our viewpoint is fixed; we would need to move the viewing position to get a parallel view of the excavation, facing the primary cut plane.  As a solution, we proposed using vertical slicing to extract and display a 2D vertical section of the model (see Figure 3).  The section’s position is represented by a semi-transparent plane inside the virtual excavation, and the 2D section itself is displayed in a corner of the view, giving the user an unambiguous way to measure distances.

Figure 3: Vertical slicing tool.
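Once the 2D section is available, the clearance question from Figure 2 reduces to simple arithmetic.  A back-of-the-envelope sketch, assuming hypothetical depths and radii read off the section, with both pipes running roughly perpendicular to the cut plane:

def vertical_clearance(pipe_a, pipe_b):
    """Free vertical gap between two pipe cross-sections in a 2D section.

    Each pipe is (z, r): axis elevation and radius in metres, as read from
    the vertical slice.  Assumes the pipes are roughly vertically aligned;
    a negative result means their vertical extents overlap.
    """
    (za, ra), (zb, rb) = pipe_a, pipe_b
    return abs(za - zb) - (ra + rb)

# Hypothetical values for the Figure 2 scenario: 1 m apart, or only 10 cm?
red  = (-1.20, 0.15)   # axis 1.20 m deep, 15 cm radius
blue = (-1.55, 0.10)   # axis 1.55 m deep, 10 cm radius

gap = vertical_clearance(red, blue)
print(f"free vertical gap: {gap:.2f} m")           # 0.10 m
print("a 50 mm pipe fits between them:", gap >= 0.05)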

The virtual excavation tool can be useful if you have a 3D model of your underground pipes.  But what if you don’t?  What if you have no idea what is actually underneath the ground surface, but still want to view it?

Ground penetrating radar (GPR) comes to the rescue.  GPR is a geophysical method that uses radar pulses to image the subsurface.  It can detect changes in material, voids, cracks, and objects such as pipes.  GPR is not perfect, and the quality of GPR data depends on many factors, including soil type, density, and water content.  However, it can be a useful exploratory tool for detecting and understanding what lies underground prior to excavation.

GPR was first implemented as a two-dimensional scanning method, but 3D GPR is now available.  Basically, multiple lines of GPR data collected over an area can be combined into a three-dimensional, tomographic image: a 3D density volume scan of the underground.  Such scans can be used to determine the location and size of underground pipes, but they must first be interpreted properly.  Some software applications can display 3D GPR data and slice it to ease interpretation.  However, such a display happens in a purely virtual context, which means the data is shown outside of its physical-world context.  Consequently, even if one can detect a pipe in the scan, one still has no idea where that pipe is actually located until the scan is somehow mapped into the physical world.  Establishing any correspondence with objects from the physical world is hard at best.
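To make the “multiple lines to volume” step concrete, here is a small sketch with placeholder data standing in for real B-scans; the array sizes and names are illustrative only:

import numpy as np

# Parallel GPR profiles collected over the survey area.  Each B-scan is a
# 2D array: trace positions along the line x time (depth) samples.
n_lines, n_traces, n_samples = 40, 200, 256
b_scans = [np.random.rand(n_traces, n_samples) for _ in range(n_lines)]

# Stacking the profiles along the cross-line axis yields the 3D volume.
volume = np.stack(b_scans, axis=0)   # (cross-line, in-line, depth)

# The usual interpretation views are then just array slices:
depth_slice = volume[:, :, 128]      # horizontal "time slice" at one depth
inline_section = volume[10, :, :]    # one of the original profiles
crossline_sec = volume[:, 50, :]     # section across the survey lines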

A better way to visualize and interpret a 3D GPR scan is probably to display it directly in the physical world.  That is what we tried, as shown in Figure 4, where the walls of the virtual excavation have been replaced with data from the 3D scan.  The user can then move the box around to explore the underground structure and spot potential correspondences with objects in the physical world, such as drains, valve accesses, etc.

Figure 4: Virtual excavation showing 3D GPR data.  GPR data courtesy of GSSI.
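How the walls get their imagery is not detailed in the post; one plausible approach is to resample the georeferenced volume on each wall plane of the excavation box.  A rough sketch, where the volume origin, voxel spacing, wall corners, and function name are all hypothetical:

import numpy as np

def wall_texture(volume, origin, spacing, p0, p1, z_top, z_bottom, res=(256, 256)):
    """Sample a GPR volume on one vertical wall of the virtual excavation.

    volume  : (nx, ny, nz) scan amplitudes.
    origin  : (3,) world coordinates of voxel (0, 0, 0), in metres.
    spacing : (3,) voxel size along each axis, in metres.
    p0, p1  : (x, y) ground positions of the wall's two top corners.
    Returns a 2D image (nearest-neighbour lookup) to drape on that wall.
    """
    origin, spacing = np.asarray(origin), np.asarray(spacing)
    u = np.linspace(0.0, 1.0, res[0])              # along the wall
    z = np.linspace(z_top, z_bottom, res[1])       # down the wall
    xy = (1 - u)[:, None] * np.asarray(p0) + u[:, None] * np.asarray(p1)
    pts = np.concatenate([np.repeat(xy, res[1], axis=0),
                          np.tile(z, res[0])[:, None]], axis=1)
    idx = np.round((pts - origin) / spacing).astype(int)
    idx = np.clip(idx, 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]].reshape(res)

# e.g. drape one wall of a 2 m-deep excavation box:
# tex = wall_texture(volume, origin=(0, 0, -3), spacing=(0.05, 0.05, 0.012),
#                    p0=(2.0, 1.0), p1=(4.0, 1.0), z_top=0.0, z_bottom=-2.0)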

Want to see the result in action?  Check our video:

Model and scan data are very useful for the design, construction, and operation of infrastructure.  What we learned through these augmented reality experiments is that such data becomes even more useful when combined with the physical world, because it gains context.  Interestingly, by augmenting the physical world with model data, the physical world itself also becomes more informative and easier to understand.  The resulting augmentation is greater than the sum of its parts!  I am fascinated by the possibilities that augmented reality enables for the future…

Anonymous
  • Stephane, I have just come across your prototype and it looks excellent. I note this is now over twelve months old; has there been further work carried out? Have cost-benefit models been considered yet? How is it progressing in the UK?

    Les

  • That's an awesome prototype. Is there any progress here? Are you planning on integrating it into Navigator mobile?

    I would honestly be interested in using it.

  • Hi Ken,

    That feature is already available in MicroStation. You can import the image directly, or use the Photomatch feature to load the photo and manually orient the camera position to align your model with the photo.

    Stéphane

  • This has given me ideas. Can I import photos into MicroStation V8 XM? If not, what Bentley software should be used?

  • Thanks, Stephane, for showing me this live at the Be Together Conference in May. It's really exciting stuff, and your presentation, even to a non-engineer like me, was fascinating but easy to understand. You rock!