360-degree photo with exact location, match with model

Hi all,

I have some 360-degree photos (similar to Street View photos), and I know the exact location of each photo in coordinates. I'd like to bring these photos into MicroStation in such a way that the photo matches an existing surface (for example, an existing drawing of the road). I've tried a few things with backgrounds and images, and with placing a camera at the photo's location, but it doesn't succeed. Does anyone have experience with this?

  • You can make that happen in MicroStation manually, but it's not easy. The method is to map the panoramic image onto the 6 faces of a cube and put your camera at the precise center of that cube. Then, by tedious trial and error, move and spin (move and rotate) the image cube, always moving the camera eye so it stays at the center of the image cube as the cube moves. Continue with move-and-rotate trial and error until you find the image cube is correctly aligned with your model.
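
    To make the bookkeeping concrete, here is a minimal sketch of the one invariant that makes this work: whenever the image cube moves, the camera eye is reset to the cube's center. This is plain Python with NumPy, not MicroStation code; every name in it is illustrative.

    ```python
    import numpy as np

    class ImageCube:
        """Illustrative stand-in for the modeled cube: an orientation plus a center."""
        def __init__(self, center):
            self.center = np.asarray(center, dtype=float)
            self.rotation = np.eye(3)

    def rotate_cube(cube, rot):
        # Spinning the cube about its own center leaves the center (and eye) put.
        cube.rotation = rot @ cube.rotation

    def move_cube(cube, camera, offset):
        # Translate the cube AND the camera eye by the same offset, so the eye
        # stays at the cube center (the panorama distorts if it doesn't).
        cube.center += np.asarray(offset, dtype=float)
        camera["eye"] = cube.center.copy()

    # One trial-and-error step: a small spin, a small move, then re-check by eye.
    cube = ImageCube(center=[100.0, 50.0, 2.0])
    camera = {"eye": cube.center.copy()}
    theta = np.radians(2.0)
    spin_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    rotate_cube(cube, spin_z)
    move_cube(cube, camera, offset=[0.25, 0.0, 0.0])
    assert np.allclose(camera["eye"], cube.center)  # the invariant
    ```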

    I have done this many times myself. It is not easy, but when done it can be very effective, as you can see here

    communities.bentley.com/.../augmented-reality-for-building-construction-and-maintenance-augmenting-with-2d-drawings.aspx

    I show it in action, combined with other things, several times here http://youtu.be/kQPxPF-lf5I

    You can also see it here, used in another way http://youtu.be/XH2AGknyzW8

    Using photos as part of a data hybrid with models and point clouds is certainly a smart thing to do. The list of viable use cases is long, not narrow.

    Would you be interested in a tool that makes it easy for you to move and spin your photos (standard or panoramic) into alignment with your models?



  • Hi Rob,

    Thanks for your reply. Very interesting, what you describe and show. Unfortunately, some of the YouTube videos don't work; are the hyperlinks correct?

    Is it possible for you to post a video in which you explain the process you described above? And indeed, if that process takes too much time (I have many of these photos), then a tool is a good idea... ;-)

  • GeoNext, are you doing this with an environment map, and then calling that from a display style? If so, you'll see that the environment map settings provide only one rotation field, so you can control rotation around only 1 axis. That is not enough: you need to control pitch, yaw, and roll, so you need 3 rotation fields in the settings.
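
    For reference, those three rotations compose into one orientation like this (a minimal NumPy sketch; the axis conventions are my assumption, a z-up world where the environment map's single field corresponds to yaw about the vertical):

    ```python
    import numpy as np

    def pitch(a):  # rotation about the x axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def roll(a):   # rotation about the y axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def yaw(a):    # rotation about the vertical z axis -- the one field you DO get
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def orientation(p, y, r):
        # Full 3-axis control needs all three factors, not just yaw.
        return yaw(y) @ roll(r) @ pitch(p)
    ```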

    Therefore, if you want to do a test, I suggest you build a cube composed of 6 square sides, convert your image to an image cube (there are free utilities online for converting panoramic photos to image cubes), and map the images onto the faces of the cube as render materials.
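
    If you're curious what those free utilities are doing, the core of the panorama-to-cube-face conversion is just spherical sampling. A rough sketch (Python with NumPy and Pillow, my choice of tools, not any particular utility):

    ```python
    import numpy as np
    from PIL import Image

    def cube_face(equi_img, face_size, forward, right, up):
        """Sample one cube face out of an equirectangular panorama.
        forward/right/up are vectors defining the face's orientation."""
        src = np.asarray(equi_img)
        h, w = src.shape[0], src.shape[1]
        # A grid of directions through the face, from -1 to 1 across it.
        u, v = np.meshgrid(np.linspace(-1, 1, face_size),
                           np.linspace(-1, 1, face_size))
        d = (np.asarray(forward, float)
             + u[..., None] * np.asarray(right, float)
             + v[..., None] * np.asarray(up, float))
        d /= np.linalg.norm(d, axis=-1, keepdims=True)
        # Direction -> longitude/latitude -> pixel in the panorama.
        lon = np.arctan2(d[..., 1], d[..., 0])
        lat = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
        px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
        py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
        return Image.fromarray(src[py, px])

    # Hypothetical usage: the face looking down +x in a z-up world.
    # pano = Image.open("pano.jpg")
    # front = cube_face(pano, 1024, [1, 0, 0], [0, -1, 0], [0, 0, -1])
    ```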

    Once you get the 6 images mapped onto the 6 faces of the cube, first put a heavy-lineweight point at the center of the cube, then group the cube for convenience (Ctrl+G). Now you have control.

    Use Rotate and AccuDraw to rotate the cube around each axis as needed to align with the model. Do this with the camera eye at the center of the cube, so you can see the image cube rotate around you, around all 3 axes. Rotation is only one of the 2 things needed, though. You also have to MOVE: you have to move the image cube to the correct location relative to the model, and you have to move the camera eye at the same time so it is always at the center of the image cube as the cube moves, because the panoramic photo is distorted if the camera leaves the cube center.

    You have to do these moves and rotates incrementally, one step at a time, always returning the camera eye to the center of the cube, trial and error, until you see the image aligned with the model. This is not easy.

    Once you get it right, though, you will notice that camera angle (view width) affects the display of the photo image and the model equally, so camera angle doesn't matter; likewise, the size of the image cube (its scale) does not matter. As the cube gets bigger it gets farther away, but it is scaling around the camera eye at its center, so scaling has no effect.
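
    You can convince yourself of that with a few throwaway lines (not MicroStation code): the viewing direction from the eye to any point on the cube is unchanged when the cube is scaled about the eye.

    ```python
    import numpy as np

    eye = np.array([10.0, 20.0, 1.5])   # camera eye at the cube center
    p = np.array([13.0, 24.0, 2.5])     # some point on the cube surface

    def view_dir(point):
        d = point - eye
        return d / np.linalg.norm(d)

    for s in (0.5, 2.0, 100.0):         # scale the cube about the eye
        p_scaled = eye + s * (p - eye)
        assert np.allclose(view_dir(p_scaled), view_dir(p))
    ```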

    If you try to do this with the environment map instead, then in addition to lacking rotation control on 2 of the 3 rotation axes, you also find uncontrollable behavior regarding camera angle. That is, the model display is affected (as it should be) by camera angle, but the environment map keeps a constant width in the view window at all times, so the image and the model can't be locked together as necessary.

    You can get this to work, though, using an actual modeled image cube. I've done it many times.

    To make that easier, we're working on an app that helps you do the alignment. We'll be looking for some beta testers in a few months.



  • Hi Rob,

    That sounds great. We would like to be beta testers. We routinely create 360s to complement our as-built models.

    I'm sure this has been considered already, but it would make sense to build a cube map from the point cloud data itself, i.e., from a given viewpoint, render the point cloud onto the cube map. (A side thought: instead of color by elevation as a point cloud rendering option, we could have colour by distance from the camera. :0)

    Seems like a logical extension to Descartes?
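
    A rough sketch of that cube-map idea, splatting points onto one face with colour by distance (plain NumPy; names and parameters are illustrative, and a real version would go through the MicroStation/Descartes APIs):

    ```python
    import numpy as np

    def splat_face(points, eye, face_size=512, max_dist=50.0):
        """Project points onto the +x face of a cube map centred on `eye`,
        shading each pixel by distance from the camera (nearer = brighter)."""
        d = np.asarray(points, float) - np.asarray(eye, float)
        dist = np.linalg.norm(d, axis=1)
        # Keep points in front of the +x face (x dominates y and z).
        m = (d[:, 0] > 0) & (np.abs(d[:, 1]) <= d[:, 0]) & (np.abs(d[:, 2]) <= d[:, 0])
        u = d[m, 1] / d[m, 0]                      # -1..1 across the face
        v = d[m, 2] / d[m, 0]
        px = ((u + 1) / 2 * (face_size - 1)).astype(int)
        py = ((1 - (v + 1) / 2) * (face_size - 1)).astype(int)
        face = np.zeros((face_size, face_size))
        # No depth test here; a real renderer would keep the nearest point.
        face[py, px] = 1.0 - np.clip(dist[m] / max_dist, 0.0, 1.0)
        return face
    ```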

    I've considered trying the same through using a spherical projection in a render setup, then creating the cube map from that, but have not been able to justify the time involved.

    Mike

  • Mike, please email me in May about the beta at rob.snyder@Bentley.com

    It's great to hear how you use this already, and I look forward to your comments on our app. The idea you mention, images of point clouds at certain nodes in virtual space, is shown at 19:16 in this video: http://youtu.be/kQPxPF-lf5I

    Clearly this kind of thing (photography, imagery) should become just another part of any spatial information environment.



  • I tried this out yesterday (spherical environment) and had pretty good results. The trick seems to be finding the 'eye' height and relative distances to some known objects, and ONLY changing your view based on the 'eye' position.

    Objects closer to the camera looked more natural; the farther out you placed objects, the less they appeared to align with the background. I'm sure that has everything to do with the lens distortion and the stitching together of the images to create the 360-degree panorama.

    The model I created had only 2 cars and a ground plane (to create a shadow). The different camera angles and lens lengths worked surprisingly well. (Have I mentioned lately that the Dosch Design cars are AWESOME?)

  • Wow, very nice. Funny to see a Prius in our survey... ;-)

    I've been busy trying out the method Rob Snyder explained. It's not really simple. Can you explain exactly how you got to your result?

  • Yes :) the method I describe is not easy, but it does give you control. To get a spherical image aligned with a model, you need control over the rotation of the image around 3 axes (pitch, yaw, and roll), like this: techpubs.sgi.com/.../04.4.plane.rotation.gif

    Without complete rotation control, you might get some alignment in the foreground, but farther away it's wrong. If you look closely, and if you need good accuracy, you need rotation control on all 3 axes.

    That's what we needed when we made this alignment of photo with model here http://youtu.be/XH2AGknyzW8 in a plant. We needed good alignment all around, with tight accuracy. We tried the environment map method John used first, but with only 1 rotation field for one axis, we didn't have enough control. So we built the image box so we could have full control.

    One other thing we did: we put the image box in a different DGN and referenced it. This way we could set the display style of the model to semi-transparent while setting the image box to smooth (and ignore lighting, so no shadows are cast by the box). We set the image box shading style in the reference dialog.

    In the example here http://youtu.be/XH2AGknyzW8 we published the composite from MicroStation to iPad using the i-model optimizer (OMIM publisher) and viewed the result in the Bentley Pano Viewer app.

    Please keep in mind that none of what I describe is an optimal solution; these are just things you can try now. There are many things we (Bentley) need to do to make this much easier, and to make the result easily accessible and more widely useful.

    I really enjoy seeing your work. I hope to make it a lot easier.