360-degree photo with exact location, match with model

Hi all,

I have some 360-degree photos (such as Street View photos), and I know the exact coordinates of each photo's location. I'd like to bring these photos into MicroStation in such a way that each photo lines up with an existing surface (for example, an existing drawing of the road). I've tried a few things with background images, and with placing a camera at the photo's location, but I haven't succeeded. Does anyone have experience with this?

  • You can make that happen in MicroStation manually, but it's not easy. The method is to map the panoramic image onto the 6 faces of a cube and put your camera at the precise center of that cube. Then, by tedious trial and error, move and rotate the image cube, always moving the camera eye so it stays at the center of the cube as the cube moves. Continue the move-and-rotate trial and error until the image cube is correctly aligned with your model. (A sketch of the cube-face mapping follows at the end of this reply.)

    I have done this many times myself. It is not easy, but when done it can be very effective, as you can see here

    communities.bentley.com/.../augmented-reality-for-building-construction-and-maintenance-augmenting-with-2d-drawings.aspx

    I show it in action, combined with other things, several times here http://youtu.be/kQPxPF-lf5I

    You can also see it here, used in another way http://youtu.be/XH2AGknyzW8

    Using photos as part of a data hybrid with models and point clouds is certainly a smart thing to do. The list of viable use cases is long and broad, not narrow.

    Would you be interested in a tool that makes it easy for you to move and spin your photos (standard or panoramic) into alignment with your models?
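
    For anyone who wants to script the first step, here is a minimal sketch of mapping an equirectangular panorama onto the six cube faces. It assumes Python with NumPy and Pillow; the face names, face size, and axis conventions are my own illustrative choices, not anything MicroStation prescribes. You would still place the six resulting images on a cube in MicroStation yourself.

        # Minimal sketch: slice an equirectangular panorama into six cube faces.
        # Assumes NumPy and Pillow; face naming and axis conventions are illustrative.
        import numpy as np
        from PIL import Image

        def cube_face(pano, face, size=1024):
            """Render one cube face from an equirectangular panorama (H x W x 3)."""
            h, w = pano.shape[0], pano.shape[1]
            a = np.linspace(-1, 1, size)
            x, y = np.meshgrid(a, -a)          # top row of the face is +y (up)
            one = np.ones_like(x)
            # Direction vector for each face pixel (right-handed, z up by choice)
            dx, dy, dz = {
                'front': ( x,  one,  y),
                'back':  (-x, -one,  y),
                'left':  (-one,  x,  y),
                'right': ( one, -x,  y),
                'up':    ( x,  -y,  one),
                'down':  ( x,   y, -one),
            }[face]
            lon = np.arctan2(dx, dy)                 # longitude in [-pi, pi]
            lat = np.arctan2(dz, np.hypot(dx, dy))   # latitude in [-pi/2, pi/2]
            # Map lon/lat to source pixels and sample (nearest neighbor)
            u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
            v = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
            return pano[v, u]

        pano = np.asarray(Image.open('pano.jpg'))
        for f in ('front', 'back', 'left', 'right', 'up', 'down'):
            Image.fromarray(cube_face(pano, f)).save(f'face_{f}.png')

    Because each face covers a 90-degree field of view, the seams between faces stay continuous as long as the camera eye is kept exactly at the cube center while you rotate it.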



  • Hi Rob,

    Thanks for your reply. Very interesting what you describe and show. Unfortunately, some of the YouTube videos don't work; is the hyperlink correct?

    Is it possible for you to post a video in which you explain the process you described above? And indeed, if that process takes too much time (I have many of these photos), then a tool is a good idea... ;-)

  • Mike, please email me in May about the beta at rob.snyder@Bentley.com

    It's great to hear how you use this already, and I look forward to your comments on our app. The idea you mention, about images of point clouds at certain nodes in virtual space, is shown at 19:16 in this video: http://youtu.be/kQPxPF-lf5I

    Clearly this kind of thing (photography, imagery) should become just another part of any spatial information environment.



  • I tried this out yesterday (spherical environment map) and had pretty good results. The trick seems to be finding the 'eye' height and the relative distances to some known objects, and then ONLY changing your view based on the 'eye' position. (A rough sketch of estimating the eye height from a known object follows at the end of this reply.)

    The objects closer to the camera looked more natural; the farther out you placed objects, the less they appeared to align with the background. I'm sure that has everything to do with the lens distortion and the stitching together of the images to create the 360° panorama.

    The model I created only had 2 cars and a ground plane (to create a shadow). The different camera angles and lens lengths worked surprisingly well. (have I mentioned lately that the Dosch Design cars are AWESOME?)
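
    To make the "eye height from known objects" step concrete, here is a back-of-the-envelope sketch, assuming an equirectangular panorama, a flat ground plane, and one object whose distance from the camera position is known. All the numbers are made up for illustration; this is one way to get a starting eye height, not necessarily John's exact procedure.

        # Rough sketch: recover camera eye height from a known object, assuming
        # an equirectangular panorama and flat ground. Numbers are illustrative.
        import math

        def pixel_row_to_latitude(v, image_height):
            """Latitude (radians) of a pixel row in an equirectangular image:
            row 0 is +90 deg (zenith), the bottom row is -90 deg (nadir)."""
            return (0.5 - v / (image_height - 1)) * math.pi

        # Suppose the base of a lamppost, surveyed at 12.0 m from the camera
        # position, appears on pixel row 1696 of a 3000-pixel-tall panorama.
        lat = pixel_row_to_latitude(1696, 3000)   # negative: below the horizon
        eye_height = 12.0 * math.tan(-lat)        # h = d * tan(depression angle)
        print(f'estimated eye height: {eye_height:.2f} m')   # roughly 2.5 m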


  • Wow, very nice. Funny to see a Prius in our survey... ;-)

    I've been busy trying out the method Rob Snyder explained. It's not really simple. Can you explain exactly how you got to your result?

  • Yes :) the method I describe is not easy. But it does give you control. To get a spherical image aligned with a model, you need control over the rotation of the image around 3 axes (pitch, yaw and roll), like this: techpubs.sgi.com/.../04.4.plane.rotation.gif (A small worked example of composing the three rotations follows at the end of this reply.)

    Without complete rotation control you might get some alignment in the foreground, but farther away it's wrong; if you look closely, and if you need good accuracy, then you need rotation control on all 3 axes.

    That's what we needed when we made this alignment of photo with model in a plant: http://youtu.be/XH2AGknyzW8. We needed good alignment all around, with tight accuracy. We first tried the environment-map method John used, but with only one rotation field for one axis we didn't have enough control. So we built the image box to get full control.

    One other thing we did: we put the image box in a different DGN and referenced it. This way we could set the display style of the model to semi-transparent while setting the image box to smooth (ignoring lighting, so the box casts no shadows). We set the image box's shading style in the References dialog.

    In the example here http://youtu.be/XH2AGknyzW8 we published the composite from MicroStation to iPad using the i-model optimizer (OMIM publisher) and viewed the result in the Bentley Pano Viewer app.

    Please keep in mind that none of what I say represents an optimal solution. These are just things you can try to do now. There are many things we need to do (Bentley) to make this much easier, and to have the result easily accessible and more widely useful.

    I really enjoy seeing your work. I hope to make it a lot easier.
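
    To make the three-axis control concrete, here is a minimal sketch of composing pitch, yaw and roll into one orientation, using Python with NumPy. The axis conventions and rotation order are my own assumptions; the point is just that a single yaw field cannot reproduce a general orientation, which is why the image box approach is needed.

        # Minimal sketch: full three-axis orientation (yaw, pitch, roll).
        # Axis conventions and rotation order are assumptions for illustration.
        import numpy as np

        def rot_x(a):  # roll: rotation about the x axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

        def rot_y(a):  # pitch: rotation about the y axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

        def rot_z(a):  # yaw: rotation about the z (vertical) axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        def orient(yaw, pitch, roll):
            """Full orientation as yaw @ pitch @ roll (order matters)."""
            return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

        # Rotating the image cube about its center: rotate the cube-local
        # corner offsets, then translate back to the camera eye position.
        eye = np.array([100.0, 50.0, 2.5])   # camera / cube center
        corner = np.array([1.0, 1.0, 1.0])   # one cube corner, cube-local
        R = orient(np.radians(30), np.radians(-2), np.radians(0.5))
        print(eye + R @ corner)

    Rotating the cube about its own center, while the eye stays pinned at that center, is what keeps the panorama valid during the trial-and-error alignment.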


