The 3D rendering of subsurface utility pipes inside a virtual excavation is cool. But if you don't have 3D data to support it (e.g. if you don't know the depth of the pipes), that sort of rendering can actually be misleading.
Users told us that even though their pipe maps contain little or no depth data, 2D pipe maps are very useful for excavation planning, and that 2D renderings of those maps registered in the physical world would be quite handy, even without depth info.
We prototyped that concept using the Microsoft HoloLens and Bentley ContextCapture. We simulated a workflow where a user wants to find a service pipe connection and a valve, and mark their location. Our results are shown in this video:
The video also shows that our method enables accurate augmentation of surface topography with significant altitude differences, such as staircases.
Interestingly, our concept could also be extended from visualization to editing. By projecting existing pipe map data onto the ground surface, our proposed system could let the user visually spot discrepancies between the augmentation and the actual location of surface assets, and propose modifications to the pipe database by "adjusting" virtual pipe locations directly on-site…
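To give a feel for the projection step, here is a minimal sketch of "draping" a 2D map point onto a triangulated ground mesh by intersecting a vertical line with each triangle. This is illustrative only, not the actual pipeline: the function name `drape_point`, the triangle-array layout, and the use of NumPy are all assumptions for the sake of the example.

```python
import numpy as np

def drape_point(triangles, x, y):
    """Project a 2D map point (x, y) vertically onto a triangulated ground mesh.

    triangles: array of shape (M, 3, 3) -- M triangles, 3 vertices each, xyz.
    Returns the highest z where the vertical line through (x, y) hits the
    mesh, or None if it misses every triangle.
    """
    best_z = None
    for a, b, c in triangles:
        # Barycentric coordinates of (x, y) in the triangle's XY footprint.
        v0, v1 = b[:2] - a[:2], c[:2] - a[:2]
        v2 = np.array([x, y]) - a[:2]
        den = v0[0] * v1[1] - v0[1] * v1[0]
        if abs(den) < 1e-12:
            continue                      # degenerate (vertical) triangle
        u = (v2[0] * v1[1] - v2[1] * v1[0]) / den
        v = (v0[0] * v2[1] - v0[1] * v2[0]) / den
        if u < 0 or v < 0 or u + v > 1:
            continue                      # (x, y) falls outside this triangle
        # Interpolate height on the triangle's plane.
        z = a[2] + u * (b[2] - a[2]) + v * (c[2] - a[2])
        if best_z is None or z > best_z:
            best_z = z                    # keep the topmost surface (e.g. a step)
    return best_z
```

Keeping the topmost intersection is what lets pipe lines follow stepped surfaces such as staircases instead of snapping to a hidden lower face.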
By making existing subsurface pipe maps easier to use on site, Augmented Reality will likely increase their value, and also simplify their maintenance over the years!
Really nice project. Did you use GPS to display the objects at specific points?
Of course a stable augmentation is not enough - we also have to make sure it is displayed in the right place. But we did not use GPS - it would not have been sufficiently accurate. The alignment phase is based on a pre-capture of the environment with a photo camera; we used those photos to create a 3D mesh with Bentley ContextCapture. The pipe map was manually aligned with that mesh (based on visible surface features such as manholes, valve access covers, and drains) and projected onto it - that step provided the georeference that GPS would normally have provided, but much more accurately. On site, the 3D mesh was then manually aligned with the physical world by selecting common control points (e.g. aligning doors, lights, etc.). Then the mesh was "turned off", leaving only the pipe augmentations. Would that sort of augmentation be useful to you?
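The on-site step of aligning a mesh to the physical world from a few common control points can be sketched as a least-squares rigid transform (the classic Kabsch/SVD solution). This is a hedged illustration, not the HoloLens implementation: the function name `rigid_align` and the use of NumPy are assumptions, and a real system would also handle scale and tracking drift.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping control points src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3 and not collinear.
    Returns rotation matrix R (3x3) and translation vector t (3,) such that
    R @ p + t best matches the corresponding dst point for each src point p.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, S, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With three or more well-spread correspondences (door corners, light fixtures, etc.), the recovered transform places the whole mesh - and hence every projected pipe - in the headset's coordinate frame at once.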