360-degree photo with exact location, match with model

Hi all,

I have some 360-degree photos (such as Street View photos), and I know the exact location of each photo in coordinates. I'd like to load these photos into MicroStation in such a way that the photo lines up with an existing surface (for example, an existing drawing of the road). I've tried a few things with backgrounds and images, and with placing a camera at the location of the photo, but I can't get it to work. Does anyone have experience with this?

  • You can make that happen in MicroStation manually, but it's not easy. The method is to map the panoramic image onto the 6 faces of a cube and put your camera at the precise center of that cube. Then, by tedious trial and error, move and spin (move and rotate) the image cube, always moving the camera eye so that it stays at the center of the image cube as the cube moves (a sketch of that bookkeeping follows at the end of this post). Continue the move-and-rotate trial and error until the image cube is correctly aligned with your model.

    I have done this many times myself. It is not easy, but the result can be very effective, as you can see here:

    communities.bentley.com/.../augmented-reality-for-building-construction-and-maintenance-augmenting-with-2d-drawings.aspx

    I show it in action, combined with other things, several times here http://youtu.be/kQPxPF-lf5I

    You can also see it here, used in another way http://youtu.be/XH2AGknyzW8

    Using photos as part of a data hybrid with models and point clouds is certainly a smart thing to do, and the list of viable use cases is long and broad, not narrow.

    Would you be interested in a tool that makes it easy for you to move and spin your photos (standard or panoramic) into alignment with your models?
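
    To make the bookkeeping concrete, here is a minimal sketch in Python/NumPy. The names (ImageCubePose, move, spin) are just illustrative, not MicroStation APIs: every translation applied to the image cube is also applied to the camera eye, so the eye stays at the cube's center, while rotations pivot about that shared center and leave the eye in place.

        import numpy as np

        def rotation_xyz(pitch, yaw, roll):
            """Combined rotation matrix from pitch (X), yaw (Y), roll (Z), in radians."""
            cx, sx = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            cz, sz = np.cos(roll), np.sin(roll)
            rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            return rz @ ry @ rx

        class ImageCubePose:
            def __init__(self, center):
                self.center = np.asarray(center, dtype=float)  # cube center = camera eye
                self.rotation = np.eye(3)                      # cube orientation

            def move(self, delta):
                # Translate cube AND camera eye together: the eye must stay at
                # the center, or the panorama's projection distorts.
                self.center = self.center + np.asarray(delta, dtype=float)

            def spin(self, pitch, yaw, roll):
                # Rotate the cube about its own center; the eye sits at that
                # center, so it is a fixed point of the rotation.
                self.rotation = rotation_xyz(pitch, yaw, roll) @ self.rotation

        pose = ImageCubePose(center=[1000.0, 2000.0, 50.0])
        pose.move([0.5, 0.0, 0.0])            # nudge the cube; the eye follows
        pose.spin(0.0, np.radians(2.0), 0.0)  # trial yaw correction of 2 degrees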



  • Hi Rob,

    Thanks for your reply. What you describe and show is very interesting. Unfortunately, some of the YouTube videos don't work; are the hyperlinks correct?

    Is it possible for you to post a video in which you explain the process you described above? And indeed, if that process takes too much time (I have many of these photos), then a tool is a good idea... ;-)

  • That would certainly be a great way to do it. It would need some enhancement in the tools to allow you to store the rotation (around 3 axes) together with a saved view that stores the corresponding camera position, so you can recall and reuse the alignment whenever you need it (a sketch of what such a record might hold follows at the end of this post).

    It would also need some work on the controls that let you see the background as you move the camera through space, seeking the correct camera eye location to match the image. Lastly, once you have the right camera eye position, you would need controls that let you spin the image around the camera eye until the image is correctly aligned with the model.
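
    As a sketch of what such a saved record might store (a hypothetical structure, not an existing MicroStation feature): the camera eye position plus the three rotation angles, serialized so the alignment can be recalled later.

        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class AlignedViewRecord:
            name: str
            eye: tuple     # camera eye = image-cube center (x, y, z)
            pitch: float   # rotation about X, in degrees
            yaw: float     # rotation about Y, in degrees
            roll: float    # rotation about Z, in degrees

        record = AlignedViewRecord("pano_0123", (1000.0, 2000.0, 50.0), -1.5, 87.0, 0.2)
        with open("pano_0123_view.json", "w") as f:
            json.dump(asdict(record), f, indent=2)  # load later to restore the view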



  • @John Allen, I think I have tried your suggestion, but my problem is that the image used as a background is displayed at a different scale and rotation than the model. I can imagine how to deal with the rotation, but the scale is still a problem. I have placed the camera at the exact location.

    I have attached the image as I see it in Explorer:

    And as I see it in MicroStation as a background:

    Thanks again.....

  • Photos + Point Clouds is a good idea.

    Maybe Bentley should look at Calabi Yau's method of fusing images to meshes generated from point clouds (a sketch of one way such fusing can work follows at the end of this post).

    "Mesh rendering occurs automatically from the RGB and intensity information inherent within the scan data. Optionally, high resolution spherical images can be imported and fused to a decimated polygonal mesh. The combination of high resolution imagery with a decimated polygonal mesh relieves the system from the burden of scan resolution overkill. This option is very useful for many uses, including virtual survey. This feature provides the benefit of high visual acuity, with just enough mesh geometry to support highly accurate surveys."

    Pointools may be fast with point clouds, but a lot of the time that's not what is needed. Maybe the mesh tools and STM teams should meet and brainstorm with the Pointools guys? There are probably some GPU texturing capabilities available already.
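
    To illustrate (my reading of the quoted description, not Calabi Yau's actual method): for each vertex of the decimated mesh, take the direction from the scanner origin, convert it to equirectangular texture coordinates, and sample the spherical image there.

        import numpy as np

        def spherical_uv(vertices, scan_origin):
            """Equirectangular UVs in [0,1]^2 for vertices seen from scan_origin."""
            d = vertices - scan_origin
            d = d / np.linalg.norm(d, axis=1, keepdims=True)
            lon = np.arctan2(d[:, 1], d[:, 0])         # longitude, -pi..pi
            lat = np.arcsin(np.clip(d[:, 2], -1, 1))   # latitude, -pi/2..pi/2
            u = (lon + np.pi) / (2 * np.pi)
            v = 0.5 - lat / np.pi                      # image row 0 = zenith
            return np.stack([u, v], axis=1)

        def sample_panorama(image, uv):
            """Nearest-neighbor color lookup in an H x W x 3 equirectangular image."""
            h, w = image.shape[:2]
            px = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
            py = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
            return image[py, px]  # per-vertex colors to bake onto the mesh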

  • Unknown said:
    the scale is still a problem

    Does it help to adjust the view angle? (A quick calculation follows below.)
    Define Camera>View information...

    Mike
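
    To put numbers on that (a pinhole-camera sketch, not MicroStation's internal math): with the camera eye fixed, the fraction of the view width that a centered object fills depends only on its size, its distance, and the lens angle, so narrowing the angle enlarges the model relative to a background image of fixed size.

        import math

        def screen_fraction(width, distance, view_angle_deg):
            """Fraction of the view width filled by a centered, fronto-parallel
            object of `width` at `distance`, for a pinhole camera."""
            half = math.radians(view_angle_deg) / 2.0
            return (width / 2.0) / distance / math.tan(half)

        # A 10 m wide road section seen from 50 m away:
        for angle in (90.0, 60.0, 45.0):
            print(f"{angle:.0f} deg lens -> fills {screen_fraction(10.0, 50.0, angle):.3f} of the view")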

  • GeoNext, are you doing this with an environment map, and then calling that from a display style? If so, you'll see that the environment-map settings provide only one rotation field, so you can control rotation around only one axis. That is not enough: you need to control pitch, yaw, and roll, which would require three rotation fields in the settings.

    Therefore, if you want to do a test, I suggest you build a cube composed of 6 square sides, convert your image to an image cube (there are free utilities online for converting panoramic photos to image cubes; a sketch of the conversion follows at the end of this post), and map the images onto the faces of the cube as render materials.

    Once you have the 6 images mapped onto the 6 faces of the cube, put a heavy-lineweight point at the center of the cube, then group the cube for convenience (Ctrl+G). Now you have control.

    Use Rotate and AccuDraw to rotate the cube around each axis as needed to align it with the model. Do this with the camera eye at the center of the cube, so you can watch the image cube rotate around you, around all 3 axes. Rotation is only one of the 2 things needed, though. You also have to MOVE: move the image cube to the correct location relative to the model, and move the camera eye at the same time so that it is always at the center of the image cube as the cube moves, because the panoramic photo is distorted whenever the camera leaves the cube center.

    You have to do these moves and rotates incrementally, one step at a time, always keeping the camera eye at the center of the cube, by trial and error, until you see the image aligned with the model. This is not easy.

    Once you get it right, though, you will notice that the camera angle (view width) affects the display of the photo image and the model equally, so the camera angle doesn't matter; likewise, the size of the image cube (its scale) does not matter. As the cube gets bigger it gets farther away, but it is scaling around the camera eye at its center, so scaling has no effect.

    If you try to do this with the environment map instead, then in addition to the lack of rotation control around 2 of the 3 axes, you also find uncontrollable behavior regarding the camera angle. That is, the model display is affected (as it should be) by the camera angle, but the environment map keeps a constant width in the view window at all times, so the image and the model can't be locked together as required.

    You can get this to work though using an actual modeled image cube. I've done it many times.

    To make that easier, we're working on an app that helps you do the alignment. We'll be looking for some beta testers in a few months.
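
    And since the conversion step comes up above: here is a minimal sketch of what those free panorama-to-image-cube utilities do, using nearest-neighbor sampling in Python/NumPy. Face orientation conventions vary between tools, so treat the layout below as just one example.

        import numpy as np

        def cube_faces(pano, size=512):
            """Cut an H x W x 3 equirectangular panorama into six size x size faces."""
            t = np.linspace(-1.0, 1.0, size)
            a, b = np.meshgrid(t, t)   # a runs across each face, b runs down it
            one = np.ones_like(a)
            # Outward direction of each face pixel (one common convention).
            dirs = {
                "front": ( one,   a,  -b),
                "back":  (-one,  -a,  -b),
                "left":  ( -a,   one, -b),
                "right": (  a,  -one, -b),
                "up":    (  b,    a,  one),
                "down":  ( -b,    a, -one),
            }
            h, w = pano.shape[:2]
            faces = {}
            for name, (x, y, z) in dirs.items():
                n = np.sqrt(x * x + y * y + z * z)          # normalize directions
                lon = np.arctan2(y / n, x / n)              # longitude, -pi..pi
                lat = np.arcsin(np.clip(z / n, -1.0, 1.0))  # latitude, -pi/2..pi/2
                px = np.clip(((lon + np.pi) / (2 * np.pi) * (w - 1)).astype(int), 0, w - 1)
                py = np.clip(((0.5 - lat / np.pi) * (h - 1)).astype(int), 0, h - 1)
                faces[name] = pano[py, px]                  # nearest-neighbor lookup
            return faces

    Map each resulting face onto the matching side of the modeled cube as a render material, as described above.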


