Hybrid Inputs?

An interesting quote from YII2016...

"Pointclouds from laser scanning can now be combined with photos, as “hybrid inputs,” for reconstruction into a reality mesh."

Are there any videos showing how this would work from the user's standpoint?

1. Will this be provided in Descartes? Hopefully not only in ContextCapture / Pointools.

2. Say the user starts with a low-res 'melty' looking mesh with the photo info on the mesh. He then goes to site and takes more photos of a particular area of interest. Can the new photos be added to the existing mesh? Will the mesh be updated locally where the new photos provide additional detail?

3. It seems that laser scanners are starting to integrate image-capture cameras. Can this information be dropped into the mix?

4. Presumably, ContextCapture can already use photo info from 'rover' cams. What about Google Street View and Bing StreetSide? Maybe Bentley could provide an online service to landscape/building architects that constructs a reality mesh. A 24-hour service?

Maybe we could Uber it as well. Cabbies, or even members of the public with smartphones? Perhaps there will be a new generation of roving photogrammetric surveyors waiting around London or NY, submitting bids for surveys the way taxis do now. :-)

5. Point Clouds <> R-Mesh <> STM <> LumenRT/Vue + Hypermodeling. I wonder if the missing leg for capturing context is old-fashioned photogrammetry.

Mstn's old Photomatch tool was never very easy to use and was minimal feature-wise.

It would be useful to be able to back-reference the camera positions in Descartes/Mstn. This would allow the 3D model to act as a spatial filing cabinet for photos taken on site. It would also allow Mstn to be used for Verified View Montage (VVM) work.
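The "spatial filing cabinet" idea could in principle be prototyped outside Descartes, since most site photos already carry GPS geotags in their EXIF data. A minimal sketch, assuming standard EXIF GPS conventions (the function name is mine, and any real Descartes/Mstn integration would of course need Bentley's own APIs): EXIF stores latitude/longitude as degrees/minutes/seconds tuples plus a hemisphere reference, which must be converted to signed decimal degrees before a photo can be dropped at its capture position in a model.

```python
# EXIF GPS tags store position as (degrees, minutes, seconds) plus a
# hemisphere reference ('N'/'S'/'E'/'W'). Converting to signed decimal
# degrees is the first step in placing each site photo in a 3D model.
# (Reading the raw EXIF is a separate step, e.g. with a library such
# as Pillow via Image.open(path)._getexif().)

def dms_to_decimal(dms, ref):
    """Convert a (degrees, minutes, seconds) tuple to decimal degrees.

    ref is the hemisphere letter: 'N'/'E' give positive values,
    'S'/'W' give negative values.
    """
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Example: a photo geotagged in central London.
lat = dms_to_decimal((51, 30, 26.0), "N")   # ~51.5072
lon = dms_to_decimal((0, 7, 39.0), "W")     # ~-0.1275
```

From there, each photo becomes a point (plus a view direction, if the EXIF compass heading is present) that could be pinned into the model as a clickable reference.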