3D models for augmenting reality: the magic of laser scanning

We live in a physical world.  Our eyes give us information about what surrounds us, and we base many of our decisions on what we see.  But although our vision enables most of our actions, it has limits: when we are confronted with new, unfamiliar, complex, or even invisible objects, only imagination and past experience can help us.  Vision supplies very rich information, yet often what we see is simply not enough to form a complete understanding – particularly for things that are both complex and that demand our attention and action.

Augmented reality (AR) extends the user’s perception of the physical world with virtual data.  The data is typically registered in 3D space, and related to physical objects (see figure 1).  Using AR, one can therefore improve perception of the physical world, better understand it, and with that understanding, be equipped to take better actions and make better informed decisions.

Figure 1: An augmented view, where virtual data is registered with buildings.
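To make “registered in 3D space” a bit more concrete, here is a minimal sketch (my own illustration, not code from our prototype) of how a virtual label anchored at a 3D world position can be projected into a camera image, given the camera’s pose and intrinsics; all the numbers are made up:

```python
import numpy as np

def project_annotation(p_world, R, t, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates with a pinhole camera model.

    R, t           : camera pose (world-to-camera rotation and translation)
    fx, fy, cx, cy : intrinsics (focal lengths and principal point, in pixels)
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_cam = R @ p_world + t        # express the anchor in the camera frame
    if p_cam[2] <= 0:
        return None                # behind the camera: nothing to draw
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Illustrative values only: a label anchored on a building corner.
label_position = np.array([12.0, 3.5, 40.0])   # metres, in the world frame
R, t = np.eye(3), np.zeros(3)                  # camera at the origin, looking down +Z
print(project_annotation(label_position, R, t, 800.0, 800.0, 640.0, 360.0))
```

As long as the camera pose is tracked accurately, a label drawn at (u, v) stays attached to the same physical spot as the view changes – which is exactly what the tracking techniques mentioned below are for.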

Unfortunately, augmented reality is not easy to achieve.  To obtain good augmentation, several elements need to be in place.  They fall into two groups: techniques and data.  In previous posts, I discussed a few techniques required for augmentation.  Those include:

  • Tracking techniques, which ensure the augmentation is displayed at the right position (see blog post);
  • Visualization techniques, which ensure the augmented scene makes sense visually (see blog post).

Good augmentation techniques are very important, and that is what most augmented reality research focuses on.  However, we don't hear much about the data – it is often taken for granted.  Yet data is just as important as the augmentation techniques, since without data, no augmentation is possible.  Data can be grouped into two categories:

  • 3D data, to use for calibration and/or for augmentation
  • The physical world (or a good representation of it)

In our panorama based building augmentation prototype, the data we used included 3D metadata (for augmentation), panoramic images (as a representation of the physical world), and building models (to align the augmentation metadata with the panoramas).

The reason I highlight this today is that in many situations, some of that data does not exist.  Take, for instance, an old chemical plant that has been operating for, say, 60 years.  Some of its workers have been working in that plant for quite a while – up to 40 years, for instance.  They are used to the plant and know it like the back of their hand: they might have something to say about each and every pipe, valve or pump.  They know the maintenance history of the plant, the names of the companies that performed the maintenance, the accidents that have happened, etc.  Their knowledge is invaluable.  Unfortunately, when they retire, their knowledge will become unavailable to their co-workers.  For their benefit (and the company’s), it would be important and useful if that knowledge could somehow be “downloaded” and stored in a handy place – ideally on a portable device that one could aim at objects in the plant, a pipe for instance, click on them, and see information about that object: its maintenance history, its rated pressure, etc.  Such an augmented reality device would have a lot of value, e.g. for training new employees, or for maintenance work planning, assistance, and documentation.  That looks like an ideal case for augmented reality.

How easy would it be to implement such an AR application?  Let’s assume here that we would develop an AR application based on our current panorama display and tracking technique – so that would take care of the tracking.  The data we need (sketched as a simple data structure after the list) is:

  1. Some panoramic images of the inside of the plant, as a base for augmentation;
  2. Some augmentation metadata, geolocalized in 3D space, that will be displayed when an object is clicked on;
  3. A 3D model of the plant, to accurately align the augmentation data with the panoramic images.
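To make those three ingredients concrete, here is a rough sketch of how such a dataset could be organized.  This is only my own illustration – the field names and types are invented, not a format used by our prototype:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]  # a 3D position in the plant's coordinate system

@dataclass
class Panorama:
    image_path: str      # equirectangular image captured inside the plant
    position: Vec3       # where it was captured
    heading_deg: float   # orientation, so clicks can be related back to 3D directions

@dataclass
class AnnotationRecord:
    anchor: Vec3                # 3D position the metadata is attached to
    attributes: Dict[str, str]  # e.g. {"name": "Pump P-101", "rated_pressure": "16 bar"}

@dataclass
class PlantDataset:
    panoramas: List[Panorama]            # requirement 1: base imagery for augmentation
    annotations: List[AnnotationRecord]  # requirement 2: geolocated metadata
    model_path: str                      # requirement 3: the 3D model used for alignment
```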

Panoramic images can be captured easily – so that fulfills the first requirement.  Augmentation metadata may be available somewhere, for instance in a database, and can be associated with 3D positions.  That is more difficult, but with time it can be achieved – so that fulfills the second requirement.  The missing part is the 3D model: to properly align the 3D augmentation data with the panorama, we need an accurate 3D model of the plant environment.  Unfortunately, such a model is, most of the time, not available.  We might have access to 2D drawings of the plant, dating from when it was designed, and therefore as old as the plant itself.  In addition to being only two-dimensional, those drawings are most likely out of date, as the plant has probably been modified several times during its operation: walls removed, new equipment installed, etc.  Consequently, an accurate 3D model is not available, and we need to produce one.

Creating a 3D model from scratch (measuring everything) would take a long time.  Some might say we could use a laser scanner – but even then, creating an accurate 3D model from the point cloud would be quite lengthy and expensive.  In fact, many users we talked to say that creating a model, even from point clouds, is too expensive.  But do we really need a 3D “CAD” (or “BIM”) model to achieve augmentation?  Why not use the point cloud directly, “as is”?  After all, a point cloud is 3D, and it can certainly be considered a “model” of the physical world.  Some laser scanners also capture each point’s color, producing a cloud that is pretty close to reality (see figure 2).  So the point cloud could actually be used to fulfill requirements 1 and 3 above.

Figure 2: A colored dense point cloud that looks like physical reality.
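As a rough illustration of why a colored point cloud can stand in for both the panorama (requirement 1) and the 3D model (requirement 3), the sketch below splats colored points into an equirectangular panorama seen from a chosen viewpoint.  This is my own simplification, not the tool chain described later – the file name, format and resolution are assumptions:

```python
import numpy as np

def panorama_from_points(points, colors, viewpoint, width=2048, height=1024):
    """Splat colored 3D points into an equirectangular panorama seen from `viewpoint`.

    points : (N, 3) XYZ coordinates; colors : (N, 3) RGB values in 0..255.
    Nearer points are drawn last so they overwrite farther ones (a crude depth buffer).
    """
    d = points - viewpoint
    r = np.linalg.norm(d, axis=1)
    azimuth = np.arctan2(d[:, 1], d[:, 0])                                 # [-pi, pi]
    elevation = np.arcsin(np.clip(d[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))

    u = ((azimuth + np.pi) / (2.0 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2.0 - elevation) / np.pi * (height - 1)).astype(int)

    pano = np.zeros((height, width, 3), dtype=np.uint8)
    order = np.argsort(-r)                                                 # far points first
    pano[v[order], u[order]] = colors[order]
    return pano

# Assumed input: an ASCII scan file with one "x y z r g b" line per point.
data = np.loadtxt("plant_scan.xyz")
pano = panorama_from_points(data[:, :3], data[:, 3:6].astype(np.uint8),
                            viewpoint=np.array([0.0, 0.0, 1.7]))  # roughly eye height
```

Because the points carry their scanned colors, the resulting image looks much like a photographic panorama taken from the same spot, while every pixel still corresponds to a known 3D point.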

Want to see the result?  Check this:

In this video, the user navigates a point cloud on an iPad.  When he clicks on parts of the cloud that represent an object, the system displays metadata related to that object.  Using such a system, a user walking in the plant would simply load the application, navigate to the corresponding portion of the point cloud, and query objects by clicking on them.  Since the point cloud is very dense, the objects on display are very similar to those visible in the physical world, so the illusion is convincing: it is nearly as if the user could “click” on physical-world objects.
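Conceptually, the interaction boils down to a pick-and-lookup: cast a ray from the viewpoint through the clicked pixel, find the cloud point nearest to that ray, and return the metadata of whatever element encloses it.  The sketch below is only my own illustration of that idea (with axis-aligned boxes standing in for the enclosing elements) – as explained next, the actual result in the video was obtained with existing software, without writing any code:

```python
import numpy as np

def pick_metadata(points, ray_origin, ray_dir, elements, max_dist=0.05):
    """Return the metadata of the element enclosing the cloud point nearest to a click ray.

    points   : (N, 3) point cloud coordinates
    elements : list of dicts {"box_min", "box_max", "metadata"} describing rough
               enclosing boxes created around groups of points
    max_dist : how close (in metres) a point must be to the ray to count as a hit
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    rel = points - ray_origin
    along = rel @ ray_dir                                   # signed distance along the ray
    perp = np.linalg.norm(rel - np.outer(along, ray_dir), axis=1)
    perp[along < 0] = np.inf                                # ignore points behind the viewer

    idx = int(np.argmin(perp))
    if perp[idx] > max_dist:
        return None                                         # clicked on empty space
    hit = points[idx]

    for el in elements:                                     # which rough element encloses the hit?
        if np.all(hit >= el["box_min"]) and np.all(hit <= el["box_max"]):
            return el["metadata"]
    return None

# Hypothetical element: a box drawn around the points of one pump.
elements = [{"box_min": np.array([2.0, 1.0, 0.0]),
             "box_max": np.array([3.2, 1.8, 1.5]),
             "metadata": {"name": "Pump P-101", "rated_pressure": "16 bar"}}]
```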

How did we do that?  Interestingly, all of this was achieved using existing Bentley software – we did not have to “develop” anything.  Here are the steps:

  1. The point cloud was first loaded into MicroStation.
  2. Then, we manually “augmented” the point cloud by creating basic elements around groups of points that represent a common object.  For instance, a cylinder was created around the points forming a pipe, a slab-shaped element around ladders, a rectangular prism around a pump, etc.  The elements don’t need to be accurate – the idea is simply to create an element that encloses most of the points representing the object (the code sketch after these steps illustrates the idea).
  3. Metadata was assigned to each of those “group” elements.  That step was done manually using the ECX attributes feature.
  4. We then exported panoramic views of the point cloud, from selected positions, into a .omim file using the i-Model Optimizer app.
  5. The .omim file was then loaded on the iPad and displayed using Bentley Navigator for iPad.
  6. We could then navigate the point cloud from hotspot to hotspot.  When we clicked on a portion of the point cloud for which an element was defined, its metadata was displayed.
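Steps 2 and 3 were done interactively in MicroStation, but the idea they implement is easy to sketch in code: take the points that belong to one object, compute a loose enclosing shape, and attach a metadata record to it.  Again, this is only an illustration of the concept under my own assumptions – it is not the MicroStation/ECX workflow itself, and the names and values are made up:

```python
import numpy as np

def make_element(object_points, metadata, padding=0.05):
    """Create a rough enclosing element (an axis-aligned box) around the points of one
    object and attach its metadata record – the essence of steps 2 and 3.

    The box only needs to contain most of the object's points; its accuracy does not
    matter, only that a click on those points resolves to the right metadata.
    """
    return {
        "box_min": object_points.min(axis=0) - padding,
        "box_max": object_points.max(axis=0) + padding,
        "metadata": metadata,
    }

# Illustrative use: points previously selected around one pipe run.
pipe_points = np.random.rand(500, 3) * [4.0, 0.2, 0.2] + [0.0, 3.0, 2.5]
pipe_element = make_element(pipe_points, {
    "name": "Cooling water line CW-12",
    "rated_pressure": "10 bar",
    "last_maintenance": "2011-06-14",
})
```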

In short, you could easily try it yourself on your own data!

In the future, we could even envision a workflow where the experienced user would, as he walks in the plant, view the point cloud, click on an element, and enter the metadata (e.g. his knowledge of the plant) himself, on the tablet.  That data would later be uploaded to the server for use by everyone else.  That way, he could “download” his knowledge to the device, and save his colleagues a lot of questions after he retires.
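One way such a field-capture step could look is sketched below, assuming a plain JSON record posted to some hypothetical plant-knowledge service – the endpoint and field names are invented for illustration, and a real deployment would of course also need authentication:

```python
import json
import urllib.request

def upload_annotation(position, notes, author,
                      url="https://example.com/api/annotations"):
    """Send one field-captured annotation (a 3D anchor plus free-text knowledge)
    to a server so it can be merged into the shared dataset.
    The URL is a placeholder, not a real service."""
    record = {
        "position": list(position),  # 3D anchor in the plant's coordinate system
        "notes": notes,              # the expert's knowledge, as typed on the tablet
        "author": author,
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: a note attached to a valve the expert clicked on in the point cloud.
# upload_annotation([12.4, 3.1, 1.6],
#                   "Valve sticks in cold weather; seals were replaced in 2009.",
#                   "plant expert")
```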

Now, that workflow is not perfect – you probably noticed that step 2 is manual and may consequently be time-consuming, even though the elements are only approximate.  Well, that leaves some room for another fascinating research project!

Stay tuned!  More augmented reality stuff to come...

Anonymous
  • Nice post, thanks.  Another important facet of the laser scanning transformation in heritage is the re-creation of rare or damaged artifacts using 3D models constructed from point clouds (objects that are often also 3D printed).  The heritage sector (museums, archaeology, etc.) is very much at the forefront of the application of 3D laser scanning and data acquisition technologies.

  • This is exactly, well almost, what my client is in need of.  How far away is actually walking through the Model in real space and seeing the change in POV as in a fly through?  Can that be done in AR or the Viewer as it is now?