360-degree photo with exact location, match with model

Hi all,

I have some 360-degree photos (such as Street View photos) and I know the exact coordinates of each photo's location. I'd like to load these photos into MicroStation in such a way that the photo lines up with an existing surface (for example, an existing drawing of the road). I've tried a few things with backgrounds and images, and with placing a camera at the photo's location, but I can't get it to work. Does anyone have experience with this?

  • I've wanted to try this for a while. Wouldn't it be as easy as using the 360° image as a light probe (or spherical projection) and enabling the background to be visible?
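    A minimal sketch of that spherical-projection idea, assuming an equirectangular 360° image: such an image is indexed by view direction alone, so sampling it as a background needs only the camera's orientation, never a scale. The function and image sizes below are illustrative, not MicroStation API.

    ```python
    # Map a unit view direction to a pixel in an equirectangular 360 image.
    # Illustrative maths only; not a MicroStation call.
    import math

    def direction_to_pixel(dx, dy, dz, width, height):
        """Longitude (azimuth) spans the width, latitude the height."""
        lon = math.atan2(dx, dz)                   # -pi..pi, 0 = looking along +Z
        lat = math.asin(max(-1.0, min(1.0, dy)))   # -pi/2..pi/2
        col = (lon / (2 * math.pi) + 0.5) * (width - 1)
        row = (0.5 - lat / math.pi) * (height - 1)
        return col, row

    # A direction 45 degrees right of +Z, level with the horizon, lands
    # 1/8 of the image width past the horizontal centre.
    print(direction_to_pixel(math.sin(math.radians(45)), 0.0,
                             math.cos(math.radians(45)), 4096, 2048))
    ```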

  • Sorry, that middle link was broken. Here is the link: http://youtu.be/kQPxPF-lf5I

    Let me get you signed up for an upcoming beta program. It will be great to get your feedback. Please email me at rob.Snyder@Bentley.com



  • That would certainly be a great way to do it. It would need some enhancement in the tools to let you store the rotation (around three axes) together with a saved view that records the corresponding camera position, so you can recall and reuse the pose when you need it.

    It would also need some work on the controls that let you see the background while you move the camera through space, seeking the camera eye location that matches the image. Lastly, once you have the right camera eye position, you would need controls that let you spin the image around the camera eye until it is correctly aligned with the model.
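    A minimal sketch of that saved pose, assuming a simple record type; SavedPhotoView and the degree conventions are hypothetical, not MicroStation's actual saved-view storage.

    ```python
    # Store the camera eye together with a full 3-axis rotation, so a
    # matched pose can be recalled, and spin the photo around the eye.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class SavedPhotoView:          # hypothetical record, not a Bentley type
        name: str
        eye: tuple                 # (x, y, z) camera eye in model coordinates
        rotation: tuple            # (roll, pitch, yaw) in degrees for the photo

    def spin_about_eye(view, delta_yaw_deg):
        """Spin the image around the camera eye: only the yaw changes."""
        roll, pitch, yaw = view.rotation
        return replace(view, rotation=(roll, pitch, (yaw + delta_yaw_deg) % 360.0))

    # Store the pose once the image is matched; recall and nudge it later.
    pose = SavedPhotoView("StreetView_0001", (15230.0, 8740.5, 2.4),
                          (0.0, -1.5, 212.0))
    print(spin_about_eye(pose, 5.0))   # yaw 212 -> 217, eye unchanged
    ```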



  • @John Allen, I think I have tried your suggestion, but my problem is that the image used as the background is presented at a different scale and rotation than the model. I can imagine how to deal with the rotation, but the scale is still a problem, even with the camera placed at the exact location (see the sketch after the screenshots below).

    I have attached the image as I see it in Explorer:

    And as I see it in MicroStation as a background:

    Thanks again.....
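    For what it's worth, the "scale" mismatch is usually a field-of-view mismatch: an equirectangular photo has no scale, so overlaying it on a perspective view means turning each screen pixel into a view ray and looking that ray up in the panorama. Below is a minimal sketch of that reprojection, with all sizes and the yaw convention illustrative rather than a MicroStation feature.

    ```python
    # Map a screen pixel of a perspective view to the panorama pixel seen
    # along the same ray. Matching the lens angle (hfov) replaces "scaling".
    import math

    def screen_to_panorama(px, py, screen_w, screen_h, hfov_deg, yaw_deg,
                           pano_w, pano_h):
        # Build the view ray for this pixel in camera space.
        half = math.tan(math.radians(hfov_deg) / 2.0)
        x = (2.0 * px / (screen_w - 1) - 1.0) * half
        y = (1.0 - 2.0 * py / (screen_h - 1)) * half * screen_h / screen_w
        z = 1.0
        n = math.sqrt(x * x + y * y + z * z)
        dx, dy, dz = x / n, y / n, z / n
        # Spin the ray by the camera yaw, then index the equirectangular image.
        s, c = math.sin(math.radians(yaw_deg)), math.cos(math.radians(yaw_deg))
        dx, dz = dx * c + dz * s, -dx * s + dz * c
        lon, lat = math.atan2(dx, dz), math.asin(dy)
        return ((lon / (2 * math.pi) + 0.5) * (pano_w - 1),
                (0.5 - lat / math.pi) * (pano_h - 1))

    # The centre pixel of a 60-degree view at yaw 0 hits the panorama centre.
    print(screen_to_panorama(640, 360, 1280, 720, 60.0, 0.0, 8192, 4096))
    ```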

  • Photos + Point Clouds is a good idea.

    Maybe Bentley should look at Calabi Yau's method of fusing images to meshes generated from point clouds.

    "Mesh rendering occurs automatically from the RGB and intensity information inherent within the scan data. Optionally, high resolution spherical images can be imported and fused to a decimated polygonal mesh. The combination of high resolution imagery with a decimated polygonal mesh relieves the system from the burden of scan resolution overkill. This option is very useful for many uses, including virtual survey. This feature provides the benefit of high visual acuity, with just enough mesh geometry to support highly accurate surveys."

    Pointools may be fast with point clouds, but a lot of the time that's not what is needed. Maybe the mesh tools and STM teams need to meet and brainstorm with the Pointools guys? There are probably some GPU texturing capabilities available already.
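    A minimal sketch of the fusion idea in that quote: project each mesh vertex toward the panorama's capture position and give it the equirectangular texture coordinate seen along that ray. The data layout is hypothetical, not Pointools, STM, or any Bentley API.

    ```python
    # Assign per-vertex (u, v) in [0, 1] for an equirectangular texture by
    # casting a ray from the capture position through each vertex.
    import math

    def fuse_uvs(vertices, capture_pos):
        cx, cy, cz = capture_pos
        uvs = []
        for (x, y, z) in vertices:
            dx, dy, dz = x - cx, y - cy, z - cz
            n = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
            dx, dy, dz = dx / n, dy / n, dz / n
            u = math.atan2(dx, dz) / (2 * math.pi) + 0.5   # azimuth -> u
            v = 0.5 - math.asin(dy) / math.pi              # elevation -> v
            uvs.append((u, v))
        return uvs

    # A vertex straight ahead (+Z) of the capture point maps to the centre.
    print(fuse_uvs([(0.0, 0.0, 5.0)], (0.0, 0.0, 0.0)))    # -> [(0.5, 0.5)]
    ```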