3D stereo CAD modeling from point clouds

In a previous post (CAD Models in 3D (stereo)), I described some of our work on the use of 3D stereo for 3D CAD model visualization.  During that project, we developed a 3D stereo prototype running on MicroStation and tested it on several 3D models.  We concluded that 3D stereo does indeed improve depth perception for 3D CAD model visualization.  Near the end of our experiment, we tried the prototype on a model containing a point cloud.  It immediately became clear that 3D stereo also had a lot of potential for point cloud visualization, and probably for point cloud modeling as well.  So we started another project to look into that very question.

Point clouds are challenging to view on a 2D monitor.  Even though they represent points in 3D space, their 3D shape is hard to perceive, for several reasons:

  • The points are distant from each other, so:
    • If you zoom in too close, you lose the object's structure, and 3D perception vanishes.
    • You see through the cloud, so in the same area you may simultaneously see points that are close and others that are far behind them.
  • All the points are drawn at the same size, so you cannot use a point's size as a cue to its distance, as you can with 3D models or real images (see the sketch below).

This is illustrated in this image, where it is very hard to guess the depth of each point in the cloud.
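As an aside, if you render clouds in your own code, you can see exactly what that missing size cue would buy you by scaling each splat with depth.  Here is a minimal sketch (illustrative only, with made-up numbers; this is not code from our prototype):

```python
# Hedged sketch (not from the prototype): restoring the missing
# size-distance cue by scaling each point's splat size with depth.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(5000, 3))      # toy cloud in camera space
cloud[:, 2] += 4.0                                  # push it in front of the camera

depth = cloud[:, 2]
x = cloud[:, 0] / depth                             # simple pinhole projection
y = cloud[:, 1] / depth

# Constant-size points would all be the same few pixels regardless of depth.
# Depth-cued points: nearer points are drawn larger and brighter.
size = 40.0 / depth**2                              # splat area shrinks with distance
shade = 1.0 - (depth - depth.min()) / np.ptp(depth) # 1 = near, 0 = far

plt.scatter(x, y, s=size, c=shade, cmap="gray", vmin=0, vmax=1)
plt.gca().set_facecolor("black")
plt.show()
```

With constant sizes the projection is just a flat speckle pattern; with the depth-scaled sizes and shades, the near points pop forward even in a still image.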

In fact, the 3D structure of a cloud only becomes visible when you rotate it, like this:
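In code terms, that rotation is nothing more than motion parallax: spin the cloud a few degrees per frame, and near points sweep across the screen faster than far ones, which the eye reads as depth.  A minimal sketch (again illustrative; `draw` stands in for whatever renderer you use):

```python
# Hedged sketch: the "rotate to see depth" effect is just motion parallax.
import numpy as np

def turntable(points: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an (N, 3) cloud about the vertical (Y) axis."""
    t = np.radians(degrees)
    rot = np.array([[ np.cos(t), 0.0, np.sin(t)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(t), 0.0, np.cos(t)]])
    return points @ rot.T

# e.g. render one frame per degree:
#   for a in range(360):
#       draw(turntable(cloud, a))   # draw() is a hypothetical renderer
```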

The other problem is modeling: many users need to create CAD models using a point cloud as a basis.  But since they cannot judge the depth of the points, it is hard for them to know whether they have snapped to the right one.  They must either use two views, or rotate the cloud every time they snap to a point, to verify the selection.  That can be cumbersome.
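To see why a single view is ambiguous, consider what a screen-space snap actually returns.  In the sketch below (illustrative names and numbers, not our prototype's API), every point projecting within a few pixels of the cursor is a valid candidate, yet the candidates can span almost the full depth of the cloud:

```python
# Hedged sketch: why snapping in a single 2D view is ambiguous.
import numpy as np

def pick_candidates(points, cursor_xy, pixel_radius, focal=500.0):
    """Return indices of points whose projection falls near the cursor.

    points    -- (N, 3) cloud in camera space, +Z in front of the camera
    cursor_xy -- cursor position in pixels, origin at the image center
    """
    proj = focal * points[:, :2] / points[:, 2:3]   # pinhole projection
    dist = np.linalg.norm(proj - cursor_xy, axis=1)
    return np.nonzero(dist < pixel_radius)[0]

rng = np.random.default_rng(1)
cloud = rng.uniform([-1, -1, 2], [1, 1, 10], size=(50000, 3))
hits = pick_candidates(cloud, np.array([0.0, 0.0]), pixel_radius=5.0)
print(f"{hits.size} candidates under the cursor, depths from "
      f"{cloud[hits, 2].min():.2f} to {cloud[hits, 2].max():.2f}")
```

All of those candidates look identical on screen; only extra depth information (a second view, a rotation, or stereo) tells them apart.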

One solution is to view the cloud and edit the model on a 3D stereo monitor.  That is what we did last winter.  We developed a small prototype that enables 3D stereo visualization, head tracking, and 3D point selection using a 3D selection device (a pen).  See the result:

In the first part, you see the user navigating in the cloud using the 3D selection device (a pen).  The 3D location of the pen is tracked in real time, enabling us to provide a very intuitive navigation tool.  In the second part, you can see the user moving his head, and the model moving accordingly.  By tracking the user's head, we can display the model from his current point of view, letting him explore the cloud as if it were a real object.  Finally, you can see a new exploration tool that we call the "Flashlight".  By emulating the behavior of a real flashlight, it helps users understand the shape of the cloud, and therefore snap to the right point more easily.
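Since people often ask how the Flashlight works, here is the rough idea in code.  The prototype's actual implementation is not public; `pen_pos`, `pen_dir`, and the falloff constants below are illustrative.  But one plausible way to emulate a flashlight over a point cloud is a per-point spotlight term, recomputed each frame from the tracked pen pose:

```python
# Hedged sketch of a "Flashlight"-style cue; names and constants are
# illustrative, not the prototype's internals.  Points inside the pen's cone
# are brightened, with intensity falling off with distance, so the lit patch
# reads like a surface under a torch beam.
import numpy as np

def flashlight_intensity(points, pen_pos, pen_dir,
                         cone_cos=0.95, falloff=0.5):
    """Per-point brightness in [0, 1] for a spotlight at the pen tip.

    pen_dir is assumed to be a unit vector along the pen's axis.
    """
    to_pt = points - pen_pos
    dist = np.linalg.norm(to_pt, axis=1)
    cos_a = (to_pt @ pen_dir) / np.maximum(dist, 1e-9)
    inside = cos_a > cone_cos                       # within the beam's cone
    # Quadratic distance falloff, like a real torch.
    return np.where(inside, 1.0 / (1.0 + falloff * dist**2), 0.0)

# Usage: recompute the intensities each frame from the tracked pen pose and
# modulate each point's colour by them before drawing.
```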

Here is an older but longer and more complete video (with explanations):

Too bad this is a 2D blog!  Trust me, seeing the point clouds in full 3D is really convincing...

Questions?  Feel free to ask!
