Hybrid Inputs?

Interesting quote made at YII2016...

"Pointclouds from laser scanning can now be combined with photos, as “hybrid inputs,” for reconstruction into a reality mesh."

Any vids of how this would work from the user's standpoint?

1. Will this be provided in Descartes? Hopefully not only in ContextCapture / Pointools.

2. Say the user starts with a low-res 'melty' looking mesh with the photo info on the mesh. He then goes to site and takes more photos of a particular area of interest. Can the new photos be added to the existing mesh? Will the mesh be updated locally where the new photos provide additional detail?

3. Seems like laser scanners are starting to integrate image capture cameras. Can this information be dropped into the mix?

4. Presumably, ContextCapture can already use photo info from 'rover' cams. What about Google StreetView and Bing StreetSide? Maybe Bentley could provide an online service to landscape/building architects that will construct a reality mesh. 24hr service?

Maybe we could Uber it as well. Cabbies, or even members of the public with smartphones? Maybe there will be a new generation of roving photogrammetric surveyors waiting around London or NY, submitting bids for doing the survey like taxis do now. :-)

5. Point Clouds <> R-Mesh <> STM <> LumenRT/Vue + Hypermodeling. I wonder if the missing leg for capturing context is old-fashioned photogrammetry.

Mstn's old Photomatch tool was never very easy to use and was minimal feature-wise.

It would be great to be able to back-ref the camera positions in Descartes/Mstn. This would allow the 3d model to act as a spatial filing cabinet for photos taken on site. It would also allow Mstn to be used for Verified View Montage (VVM) work.
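To make the 'spatial filing cabinet' idea concrete, here is a minimal sketch (all names and numbers invented, nothing to do with any actual Descartes/Mstn API): file each photo under the camera position it was taken from, then pull back every shot taken near a 3d point of interest.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class SitePhoto:
    filename: str
    camera_xyz: tuple  # surveyed camera position (x, y, z) in site coordinates

def photos_near(photos, point_xyz, radius):
    """Return photos whose camera position lies within `radius` of the point."""
    return [p for p in photos if dist(p.camera_xyz, point_xyz) <= radius]

photos = [
    SitePhoto("north_facade_01.jpg", (10.0, 2.0, 1.6)),
    SitePhoto("roof_detail_07.jpg", (55.0, 40.0, 12.0)),
]
nearby = photos_near(photos, (12.0, 0.0, 1.5), radius=10.0)
print([p.filename for p in nearby])  # -> ['north_facade_01.jpg']
```

Obviously a real implementation would index the cameras spatially rather than scan a list, but the filing-cabinet idea is just this: photos keyed by where they were taken.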

  • 2. Is there a vid of this process? Covered in one of the SIGs on the LEARN server?

    3. OK, but I am not sure I totally agree with you. The S6 seems to take pretty good photos. It is not only the laser scanners which are now getting cameras built in, but also the setting-out theodolites, like Trimble's Total Stations.

    I can see a lot of contractors on building sites wanting a consolidated model that holds all the point data generated by their laser scanners, theodolites, rover cams and ad hoc photos, plus the laser/cam info from the setting-out theodolites... which now use the BIM model as input.

    The designers would ideally work in/from Mstn and reference in the required information... point clouds, surveyed points, r-meshes, terrain meshes, KMLs, photos, scanned drawings recording manual tape measurements, etc.

    Mstn has always had really good reference attachment capabilities. And with Hypermodeling, the user can also back-ref 2d drawn info into the 3d model. Sneak peek: I very much like the ability to access all the photos that have captured a selected point in 3d. This is a great way to give the user a fast route to all the relevant supporting information.
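    Under the hood, that photo lookup presumably boils down to projecting the selected 3d point through each photo's camera and keeping the shots where it lands in frame. A rough pure-Python sketch of the idea (poses and intrinsics made up, not any real product's API):

```python
def mat_vec(R, v):
    """Multiply a 3x3 rotation matrix (list of rows) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def sees_point(point_w, cam_pos, R_wc, focal_px, width, height):
    """True if the world point projects inside this camera's image."""
    rel = [point_w[i] - cam_pos[i] for i in range(3)]
    x, y, z = mat_vec(R_wc, rel)       # world -> camera coordinates
    if z <= 0:                         # point is behind the camera
        return False
    u = focal_px * x / z + width / 2   # pinhole projection to pixel coords
    v = focal_px * y / z + height / 2
    return 0 <= u < width and 0 <= v < height

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # camera looking down +Z
print(sees_point((0.5, 0.2, 10.0), (0, 0, 0), IDENTITY, 1000, 1920, 1080))  # -> True
```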

    The same spatial-coordinate-led searching could be used to 'back-ref' the drawing information planes generated using the Hypermodeling tools.

    Bentley's Promis-e can also provide back-referencing: the user selects an electrical component in the 3d model and gets a list of the drawings and schedules which 'show' the selected component.


    5. All of these advancements are great enabling functionality for Phidias or any photogrammetry tool sitting on top of Mstn. It would greatly reduce the amount of preparation time needed to assemble and compute the vector CAD elements that are needed for BIM etc. It would be great if Bentley could work closely with them and/or integrate them as a product.