Hybrid Inputs?
https://communities.bentley.com/products/3d_imaging_and_point_cloud_software/f/contextcapture-descartes-pointools-forum/128284/hybrid-inputs

Interesting quote made at YII2016...
"Pointclouds from laser scanning can now be combined with photos, as 'hybrid inputs,' for reconstruction into a reality mesh."

Any vids of how this would work from the user's standpoint?
1. Will this be provided

RE: Hybrid Inputs? (dominic SEAH, Sun, 20 Nov 2016 11:14:18 GMT)
https://communities.bentley.com/thread/392389?ContentTypeID=1

2. Is there a vid of this process? Is it covered in one of the SIGs on the LEARN server?
3. OK, but I am not sure I totally agree with you. The S6 (https://youtu.be/Z-CjdL6lbKQ?t=85) seems to take pretty good photos (https://youtu.be/GZZATHXaecg?t=1942). And it is not only the laser scanners that are now getting built-in cameras, but also the setting-out theodolites, like Trimble's Total Stations.
I can see a lot of contractors on building sites wanting a consolidated model that holds all the point data generated by their laser scanners, theodolites (https://www.youtube.com/watch?v=BZi0owCSsso), rover cams and ad hoc photos, plus the laser/camera info from the setting-out theodolites (https://youtu.be/-daf0gyCgp4?t=111)... and which can now also use the BIM model (https://www.youtube.com/watch?v=e-BnQH5v7tA) as input (https://www.youtube.com/watch?v=RHmQr7gTV5I).
The designers would ideally work in/from Mstn and reference in the required information... point clouds, surveyed points, r-meshes, terrain meshes, KMLs, photos, scanned drawings recording manual tape measurements, etc.
Mstn has always had really good reference attachment capabilities, and with Hypermodeling the user can also back-reference 2D drawn info into the 3D model. Sneak peek (https://youtu.be/DDCQ4HpjqpE?t=74): I very much like the ability to access all the photos that have captured a selected point in 3D. This is a great way to give the user a fast way to find all the relevant supporting information.
The same spatial-coordinate-led searching could be used to "back-reference" the drawing information planes generated with the Hypermodeling tools.
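As an aside, the "find every photo that captured a selected 3D point" lookup boils down to a projection test over the oriented photos. Here is a minimal sketch of that idea, assuming a plain pinhole camera model with known poses; the data layout and function names are illustrative, not a ContextCapture or MicroStation API.

```python
import numpy as np

def photos_seeing_point(point_xyz, cameras):
    """Return IDs of photos whose image frame contains the selected 3D point.

    Each entry in `cameras` is a dict with illustrative fields:
      'id'   - photo identifier
      'R'    - 3x3 world-to-camera rotation matrix
      't'    - 3-vector translation (camera coords = R @ X + t)
      'K'    - 3x3 intrinsic (calibration) matrix
      'size' - (width, height) of the image in pixels
    """
    X = np.asarray(point_xyz, dtype=float)
    hits = []
    for cam in cameras:
        Xc = cam['R'] @ X + cam['t']            # point in camera coordinates
        if Xc[2] <= 0:                          # behind the camera: cannot be seen
            continue
        u, v, w = cam['K'] @ Xc                 # pinhole projection
        u, v = u / w, v / w                     # to pixel coordinates
        width, height = cam['size']
        if 0 <= u < width and 0 <= v < height:  # inside the image frame
            hits.append(cam['id'])
    return hits
```

A real implementation would also test occlusion against the reality mesh (a point behind a wall still projects into the frame but is not actually visible), but the projection test above is the core of the spatial lookup.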
Bentley's Promis-e can also provide back-referencing: the user selects an electrical component in the 3D model and gets a list of the drawings and schedules that "show" the selected component.
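For what it's worth, that kind of back-referencing is essentially a reverse index from a model element to the sheets that depict it. A minimal sketch of such an index, with made-up names rather than anything from Promis-e:

```python
from collections import defaultdict

class BackReferenceIndex:
    """Map each model element (e.g. a component ID) to the sheets that show it."""

    def __init__(self):
        self._sheets_by_element = defaultdict(set)

    def register(self, sheet_id, element_ids):
        """Record that a drawing or schedule depicts the given elements."""
        for element_id in element_ids:
            self._sheets_by_element[element_id].add(sheet_id)

    def sheets_showing(self, element_id):
        """Return every drawing/schedule that 'shows' the selected element."""
        return sorted(self._sheets_by_element.get(element_id, ()))

# Usage: selecting component 'MCC-01' in the 3D model lists its sheets.
index = BackReferenceIndex()
index.register("E-101 single-line diagram", ["MCC-01", "XFMR-2"])
index.register("Panel schedule P-3", ["MCC-01"])
print(index.sheets_showing("MCC-01"))
```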
5. All of these advancements are great enabling functionality for Phidias, or any photogrammetry tool sitting on top of Mstn. They would greatly reduce the preparation time needed to assemble and compute the vector CAD elements required for BIM etc. It would be great if Bentley could work closely with them and/or integrate them as a product.

RE: Hybrid Inputs? (Pascal Cloutier, Fri, 18 Nov 2016 18:31:40 GMT)
https://communities.bentley.com/thread/392320?ContentTypeID=1

Hi Dominic,
1) The hybrid workflow will be provided in ContextCapture.

2) Here is the process for doing partial updates of a reality mesh: https://communities.bentley.com/products/3d_imaging_and_point_cloud_software/w/wiki/30215.managing-partial-updates-on-large-contextcapture-projects

3) The main intent of laser scanner cameras is to capture images to colorize the scans. Photos taken with the intent of reconstructing a Reality Mesh in ContextCapture should have a 60% overlap, and I don't think a laser scanner is a good tool to achieve that.

4) If you need to model a long "corridor", pictures from rover cams can definitely do the job.

5) Have a look at this sneak peek: https://www.youtube.com/watch?v=DDCQ4HpjqpE

HTH,
Pascal
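To put a number on the 60% overlap guideline in point 3: for a simple pinhole camera shooting roughly square-on to the subject, one photo covers about distance x sensor size / focal length, and consecutive camera stations must then be no further apart than 40% of that footprint. A rough sketch of the arithmetic; the camera values are illustrative only, not a ContextCapture requirement beyond the 60% figure quoted above.

```python
def max_spacing_for_overlap(distance_m, focal_mm, sensor_mm, overlap=0.60):
    """Maximum spacing between consecutive photos for a given fractional overlap.

    distance_m : camera-to-subject distance (metres)
    focal_mm   : lens focal length (millimetres)
    sensor_mm  : sensor dimension along the direction of travel (millimetres)
    overlap    : required overlap between consecutive photos (0.60 = 60%)
    """
    footprint_m = distance_m * sensor_mm / focal_mm   # coverage of one photo
    return footprint_m * (1.0 - overlap)

# Example: a 36 mm (full-frame) sensor with a 24 mm lens, 10 m from a facade,
# covers about 15 m per photo, so stations should be at most 6 m apart.
print(max_spacing_for_overlap(10.0, 24.0, 36.0))  # -> 6.0
```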