Hi all,
I have some 360-degree photos (such as Street View photos), and I know the exact coordinates of each photo location. I'd like to bring these photos into MicroStation in such a way that a photo lines up with an existing surface (for example, an existing drawing of the road). I've tried a few things with backgrounds and raster images, and with placing a camera at the photo location, but I haven't succeeded. Does anyone have experience with this?
You can make that happen in MicroStation manually, but it's not easy. The method is to map the panoramic image onto the six faces of a cube and put your camera at the precise center of that cube. Then, by tedious trial and error, move and rotate the image cube, always moving the camera eye so that it stays at the center of the cube as the cube moves. Continue the move-and-rotate trial and error until the image cube is correctly aligned with your model.
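The first step of that method, slicing an equirectangular 360 panorama into the six cube-face images, can be scripted outside MicroStation. Here is a rough sketch in Python with NumPy; the function name, axis convention, and face labels are my own choices for illustration, not anything from a MicroStation API:

```python
import numpy as np

def equirect_to_cube_face(pano, face, size):
    """Sample one square face of a cube map from an equirectangular panorama.

    pano: H x W x 3 array covering 360 deg horizontally, 180 deg vertically.
    face: one of 'front', 'back', 'left', 'right', 'up', 'down'.
    size: edge length of the output face in pixels.
    """
    h, w = pano.shape[:2]
    # Pixel centers of the face, mapped to [-1, 1].
    u = (np.arange(size) + 0.5) / size * 2 - 1
    uu, vv = np.meshgrid(u, -u)  # vv increases upward
    ones = np.ones_like(uu)

    # 3D viewing direction for each face pixel (y up, z forward).
    dirs = {
        'front': ( uu,   vv,   ones),
        'back':  (-uu,   vv,  -ones),
        'right': ( ones, vv,  -uu),
        'left':  (-ones, vv,   uu),
        'up':    ( uu,   ones, -vv),
        'down':  ( uu,  -ones,  vv),
    }
    x, y, z = dirs[face]

    # Direction -> longitude/latitude -> panorama pixel (nearest neighbor).
    lon = np.arctan2(x, z)                              # -pi .. pi
    lat = np.arcsin(y / np.sqrt(x * x + y * y + z * z))  # -pi/2 .. pi/2
    px = ((lon / np.pi + 1) / 2 * (w - 1)).round().astype(int)
    py = ((1 - (lat / (np.pi / 2) + 1) / 2) * (h - 1)).round().astype(int)
    return pano[py, px]
```

Run it once per face to get the six images, then drape each one onto the matching face of the cube element in your model. Nearest-neighbor sampling keeps the sketch short; a production version would interpolate.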
I have done this many times myself. It is not easy, but when done it can be very effective, as you can see here
communities.bentley.com/.../augmented-reality-for-building-construction-and-maintenance-augmenting-with-2d-drawings.aspx
I show it in action, combined with other things, several times here http://youtu.be/kQPxPF-lf5I
You can also see it here, used in another way http://youtu.be/XH2AGknyzW8
Using photos as part of a data hybrid with models and point clouds is certainly a smart thing to do. The list of viable use cases is long, not narrow.
Would you be interested in a tool that makes it easy for you to move and spin your photos (standard or panoramic) into alignment with your models?
Sorry, that middle link was broken. Here is the link: http://youtu.be/kQPxPF-lf5I