My Experience with Point Cloud Rendering in uStation v8i SS3
All across Bentley.com I read about Pointools, Descartes and "visualization" of point clouds... but I don't really see HOW to do that ANYWHERE?
I mean.. I can load a point cloud in uStation (SS3 user here), but it's dumber than dumb when it comes to controlling!? (Or I am!)
So what's getting me all worked up:
I can NOT crop/mask the point cloud in real time using shapes? I can only mask it using something like a block!?
It does NOT respond to lighting? EDIT: It does produce shadows from sunlight, it seems. Just as you would expect any 2D geometry to do. It does NOT care about lineweight, though.
It does NOT use the full resolution? If my view is set to show 20% of the points, then the render is just as bad! But even at 100% in a VIEW, not all points are showing! Also, the resolution rendered depends on the size of the VIEW being rendered!! Rendering with AA does not up the point count; only increasing the view size seems to.
It does NOT render intelligently in any way? Points are rendered as tiny specks of dust only! I would expect at least an estimated closed surface made by connecting close points, or at least the tools to do it swiftly. Respecting linescale goes without saying! (It does not do that.) It does respect "depth of field", but produces artifacts: black dots with white "ghosts". It does respect "contour shading", but.. that's.. I mean.. dots with black outlines..
I can NOT reclassify points using shapes/solids? I can reclassify by drawing a shape only!? And then I have to export the entire point cloud, which seems to take A LOT of time.. a serious lot of time. 300 MB (4.5 million XYZs) takes 10+ minutes!! This is not rocket science and needs NO calculations of ANY kind; it's just a list of coordinates that needs to be either written or skipped (as all we can do is filter by classification!). This is really bad! I wonder what happens when we begin working on our multi-gigabyte sets?!!
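To illustrate how simple this filtering is in principle, here is a minimal Python sketch. The four-column ASCII "x y z class" layout and the function name are my own assumptions for the example (the real POD format is binary and needs Bentley's tools); it just streams lines and writes or skips each one by its classification code:

```python
# Stream-filter an ASCII point cloud by classification code.
# Assumed (hypothetical) line format: "x y z class", e.g. "612000.1 6045000.2 12.3 2"
def filter_by_classification(lines, keep_classes):
    """Yield only the lines whose classification code is in keep_classes."""
    for line in lines:
        parts = line.split()
        if len(parts) >= 4 and int(parts[3]) in keep_classes:
            yield line

points = [
    "612000.1 6045000.2 12.3 2",   # class 2 = ground (ASPRS convention)
    "612000.4 6045000.9 15.8 5",   # class 5 = high vegetation
    "612001.0 6045001.1 12.1 2",
]
ground = list(filter_by_classification(points, {2}))
print(len(ground))  # 2 of the 3 points survive the filter
```

No geometry is computed at all, which is the point: a pass like this is I/O-bound, so minutes per few hundred MB is hard to explain by the filtering itself.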
The Scalable Terrain Model (a Descartes thingy) can NOT make a useful model from a point cloud!? Descartes simply uses the XYZs to make a top-down mesh of triangles between ALL the points (as seen from space, mind you), which is pretty much the same as a DTM, except it looks like poop because it automatically lowers the resolution!!?? This STM does not respect holes cut into the point cloud. You can exclude points by shapes in the STM, but the holes are filled when viewing and rendering the STM!! So you cannot use this with any other geometry!?
How hard can it be to make a surface and access that dynamically instead of the points? Just make the smallest possible triangles. And maybe get fancy and reduce the geometry somehow: if a side length is too long, then exclude the triangle. Or even reduce by neighbouring surface angles.. Or use a tree structure.. ;o)
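To sketch what I mean, here is a toy Python example under my own simplifying assumptions: the terrain points are already snapped to a regular grid, missing grid cells are holes, and a triangle is dropped when any side exceeds a length limit, so the holes stay open:

```python
import math

# Triangulate a regular grid of terrain points, dropping any triangle whose
# longest side exceeds max_side -- so gaps in the cloud stay open.
# pts: dict mapping (i, j) grid index -> (x, y, z); missing keys are holes.
def grid_mesh(pts, nx, ny, max_side):
    def side(a, b):
        return math.dist(pts[a], pts[b])
    def keep(tri):
        return (all(v in pts for v in tri) and
                max(side(tri[0], tri[1]),
                    side(tri[1], tri[2]),
                    side(tri[2], tri[0])) <= max_side)
    tris = []
    for i in range(nx - 1):
        for j in range(ny - 1):
            a, b, c, d = (i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)
            for tri in ((a, b, d), (a, d, c)):   # two triangles per cell
                if keep(tri):
                    tris.append(tri)
    return tris

# 3x3 grid at 1 m spacing with the centre point missing (a hole):
pts = {(i, j): (float(i), float(j), 0.0)
       for i in range(3) for j in range(3) if (i, j) != (1, 1)}
mesh = grid_mesh(pts, 3, 3, max_side=1.5)
print(len(mesh))  # 2: the 6 triangles touching the missing point are dropped
```

The same side-length test works on an irregular (Delaunay) triangulation too; the grid just keeps the toy short.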
I am new to working with point clouds, but have known of and read about them.
Please help me achieve (at least) a decent rendering:
Assuming a ginormous, classified, colored point cloud.
I speculate I need some of the following questions answered, but ANY hints/ideas that work fast and efficiently are much appreciated. The goal is that we don't model, we don't fix.. We clip the point cloud, add our project, and render!
A. Preferably I need a procedure that just clips and filters the point cloud on the go, so I can show anything except the ground as points.
B. Make a "2½D" regularly triangulated (not auto-scaled, as that clearly doesn't work in renderings) DTM that respects the holes in the point cloud, based on a classification filter.
C. Perhaps a procedure to split the point cloud and matching surface into smaller, manageable "tiles" that will cull when rendering?
PLEASE help me "Get more value out of the pointcloud data"... please... PLEASE!
p.s. This has been and will be heavily edited as I dig deeper.. Thus responses after this post might not reflect the current text.
p.p.s. http://www.isprs.org/proceedings/XXXVI/part5/paper/RABB_639.pdf - techniques to automate the generation of surfaces from a point cloud.
Yes, while point clouds are often jaw-dropping, they generally require post-processing steps (e.g. extraction, transformation, etc.) before something useful can be done with them.
This is quite similar to raster data (e.g. even if you see the pixels describing a road, you cannot do much with it without modeling the road with some kind of vector).
Some of your items are CRs (change requests), so I will forward your post to the product manager responsible for point clouds.
As for the resolution, it is normal that the full resolution is not available all the time. This is the same for rasters and STMs, two other kinds of data that can contain billions of pixels/points. Certainly, displaying the full resolution on a computer screen that has only a few million pixels is not useful and would kill the visualization performance.
The new Scalable Terrain Model is only a 2.5D terrain and thus should only be created from 2.5D data, like an aerial point cloud with points classified as terrain or post-processed to remove any above-ground features.
Finally, I would suggest that you check the WIKI web pages describing the new functionality in Descartes SELECTSeries 3 (see the SELECTSeries 3 section on communities.bentley.com/.../bentley-descartes.aspx).
I read: "Yes, we have NO working tools for point cloud visualization at the moment"
Is it really so? I got my bosses all hyped up, because you write all over the place that you can visualize point clouds with Descartes, and it turns out to be just talk?!
I have read and watched everything I could find on the Bentley site and the internet, and also the various help files included with Descartes, to no avail, but thanks for the link.
I want to love you, Bentley, you know that, but that love is getting some serious testing at the moment! (Probably thanks to marketing - and me being gullible!!?)
Please remember us little guys.. I have to tell my boss that what you (and therefore I) hinted at being possible is not really possible. Some day they'll believe your marketing over me, and then guess what!? (Same story with the missing traffic simulation and real Luxology shaders, btw..)
*Gone browsing... bbl*
System: Win7 64-bit, 16 GB RAM - MicroStation V8i SS3 08.11.09.578 + Pointools CONNECT - Intel i7-4800MQ CPU @ 2.70 GHz, 4 cores / 8 logical processors.
Sorry for the late reply. Last week we had our annual user event and this kept me busy. I am back now and will do my best to help you.
There is certainly room for improvement, as always, but we definitely have tools for point cloud visualization that are used by several Bentley users.
I am the Product Manager for Point Cloud in MicroStation and Descartes and will try to help you leverage what we provide today. We will need more details to understand some of your points below...
I am not a rendering expert and will ask some colleagues for assistance on the rendering side, but let me try to answer some of your questions.
Please have a look at my answers.
Hope this helps.
[Torben] I can NOT crop/mask the point cloud in real time using shapes? I can only mask it using something like a block!?
Using the Point Cloud clip tool, accessible in the Clip Manager, you can clip by "Block (oriented with the AccuDraw compass), Shape and Slab".
[Torben] It does NOT respond to lighting? EDIT: It does produce shadows from sunlight, it seems. Just as you would expect any 2D geometry to do. It does NOT care about lineweight, though.
I am afraid we don't handle the weight in Luxology rendering yet.
If the weight is a MUST for you, then you may consider rendering with the MicroStation native engine and not the Luxology engine (Utilities > Image > Save).
[Torben] It does NOT use the full resolution? If my view is set to show 20% of the points, then the render is just as bad! But even at 100% in a VIEW, not all points are showing! Also, the resolution rendered depends on the size of the VIEW being rendered!! Rendering with AA does not up the point count; only increasing the view size seems to.
Not sure I follow you here. What does AA mean?
The number of points loaded is always dependent on the viewpoint (i.e. let's say you are totally zoomed out on a point cloud of several hundred million points; we never load all the points, only a downsampled representation based on the viewer location).
Could you share images illustrating your problem? And why not some dataset? Please upload to ftp.bentley.com/pub/incoming and let me know the file name.
[Torben] I can NOT reclassify points using shapes/solids? I can reclassify by drawing a shape only!?
In the Edit Classification option you can select points with 4 methods:
- Block (you can set the orientation with AccuDraw compass shortcuts)
What else would you need?
[Torben] ..And then I have to export the entire point cloud, which seems to take A LOT of time.. a serious lot of time. 300 MB (4.5 million XYZs) takes 10+ minutes!!
I just exported a 10-million-point cloud from POD to POD and it took about 30 seconds on my 3-year-old laptop.
Do you export to POD or to XYZ?
Since I don't understand what is going wrong, I would need more details and ideally your dataset.
Thank you very, very much for taking the time to help me :) I have full confidence that you will help me with the procedures to create stunning renderings in nearly no time using Bentley's state-of-the-art technology!
One thing seriously missing to make it useful is crop/classify using an element, be that a 2D fence-like shape or a 3D solid volume.
We model using InRoads, and getting the exterior boundaries of our project is trivial. This exterior boundary is perfect for either masking the point cloud(s) or reclassifying (in order to remove points upon export).
As it is, it seems I cannot modify my clipping shape once it has been set, and I can only have one clipping shape, so making adjustments to a big project becomes very time-consuming, as a very complex shape needs to be redrawn on each modification?
It seems obvious that symbology should be supported on points. What makes these points differ from regular points? They are all XYZs? Lineweight support would be a very simple shortcut to producing better-looking renderings when the point density is not used to the fullest, or when rendering below the point density.
I speculate that control over points and their expression (matching classification to lineweight and transparency) would yield some interesting results (semi-transparent vegetation using bigger points on a reduced set).
..Turning the points into a mesh? Can you provide me with a procedure for making a reduced 3D mesh from the point cloud that does not fill internal holes (e.g. by disallowing triangle sides longer than, say, 1 m)? Note that the Descartes STM does not seem to work. I would prefer that the point cloud would just render with a "solid" appearance, as an on-the-fly created mesh, or maybe some kind of automatic per-pixel interpolation between points to fill in the blanks. But seeing that this is in the future (as many, many nice and wonderful things are), perhaps converting point clouds to (a lot of) meshes (groups of triangles) could be a way to go? MicroStation seems to be able to handle very large models, as long as the meshes are divided in such a way that they can be culled. We have access to MicroStation and InRoads, and will get Descartes IF we get to the point where rendering point clouds makes sense (read: reduces cost without sacrificing quality).
About the point density
IMHO, the point density should not reflect the view size, but the render size, and should always use the fullest resolution! If I resize a MicroStation view to be very small, say 100x75 pixels, and render a 2000x1500 image, the point density in the rendering reflects what points are seen in the view, resulting in a very blank image with just a few dots rendered. If I do the opposite and set the view to be big, say 2000x1500, and render a smaller image, I get a much bigger point density, and thus a much better rendering. AA is an abbreviation for anti-aliasing (it smooths pixels by rendering a bigger image and then downsampling), and I speculated that this could be a technical opening into sampling more points from the point cloud, but if it is, it is not in use.
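Just to pin down the terminology: the supersampling kind of AA means rendering bigger and averaging down. A toy Python sketch (the fixed 2x factor and grayscale nested lists are my simplifications, not how any real renderer is implemented):

```python
# Toy 2x supersampling AA: average each 2x2 block of the oversized
# "rendered" image down to one output pixel (a box filter).
def downsample_2x(img):
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [255, 255, 0, 0],
          [255, 255, 0, 0]]
print(downsample_2x(hi_res))  # [[0.0, 255.0], [255.0, 0.0]]
```

My speculation was only that, since the renderer already produces the bigger intermediate image, it could draw points at that bigger size too, which would pull more points from the cloud.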
About the exporting
I export POD to POD, but maybe XYZ to get the data available for some awesome DIY VBA (read: bad joke!). I will investigate the machine configuration on our testing machine to see if I can get faster export times. I will give it a try with rendering using the native engine (bad as it might be), and see about clearance for uploading images (I don't think so, but maybe I can find a free point cloud for testing). Regards, Torben
Thanks for the answers.
Let me give you a status, point by point.
We will add clip by element as well as new capabilities to combine display styles (Classification & Intensity, Intensity & Elevation). This will come in the next version of Descartes, but I cannot give any date for now.
I may ping you in the future to demonstrate an alpha version and get your thoughts, if you agree.
About rendering: lineweight & point density
I pushed that to our rendering team. I would appreciate it if you could share examples of the images you rendered. We can do it by email, and Bentley will use them only for internal validation/certification. I'd like to be 100% sure I understand your problem.. it could be a project-specific bug and not an overall problem related to our rendering. Please consider sending a few images by email.
Please update me.
1) Full 3D Meshing:
MicroStation Mesh elements do not scale to millions of points, so in my understanding the problem becomes to extract decimated points & lines from a point cloud and then build a MicroStation Mesh from these extracted elements.
The choice may vary depending on your dataset; here are 3 ways to consider:
1.1 Extract a grid of points using Snap element along any arbitrary direction (e.g. if you deal with a wall facade, your direction may be something like a front view).
An example along the Z axis is given here, but keep in mind we can achieve this along any arbitrary direction:
How to extract a regular grid of terrain points from a Point Cloud: http://communities.bentley.com/products/geospatial/desktop/w/geospatial_desktop__wiki/6296.aspx
1.2 Extract Sections and mesh between these sections.
Extract Section of Wall with overhangs
1.3 Extract breaklines
Use Visual Explorer to extract breaklines/edges and use these to build your mesh.
See "Introducing Flashlight Sections" (renamed Visual Explorer).
2) 2.5D Mesh = Terrain Surface
"e.g. by disallowing triangle sides longer than, say, 1 m. Note that the Descartes STM does not seem to work" .. we will look at this in Descartes. It should work.
Benoit asked me to answer you regarding the use of the STM for solving this request:
Can you provide me with a procedure for making a reduced 3D mesh from the point cloud that does not fill internal holes (e.g. by disallowing triangle sides longer than, say, 1 m)? Note that the Descartes STM does not seem to work.
Currently, the disallowing of triangle sides works the same way for an STM as for a TM. Unfortunately, this behavior can lead to a very awful-looking STM at low resolution. This is because at low resolution the STM is represented by fewer points, causing the point spacing to be greater and thus the triangle sides to be longer. So at low resolution, many triangle sides that might be wanted are removed.
To restrain the triangulation of the STM in order to avoid filling some internal holes, you could explicitly set multiple masks on a single STM, one mask for each hole. Or you could create Civil DTM features (e.g. void, island, hole, etc.), export them to a Civil DTM file and use that file as a source of terrain data for the STM.
Yes, Torben, you are right: currently the point density reflects the view size, which is a bug, so I have entered TR 339204.
The obvious workaround is to match the size of the MicroStation view to the size of the render view before doing a Luxology rendering featuring a point cloud.
As for using the full resolution, it is probably not wise to do that for all kinds of point clouds. For a very huge point cloud, using the full resolution will tremendously slow down the rendering process and likely bust the RAM before the process is complete. Maybe for a point cloud whose size allows the rendering process to complete without any memory allocation failure, the rendering could be done at full resolution, again at the cost of slower performance. For raster data draped on an STM, we have noticed that boosting the resolution can result in a better-looking rendering, which led to the creation of the STM_PRESENTATION_QUALITY environment variable. But even when this environment variable is set, the full resolution of the raster draped on an STM is not used, just a higher, more precise resolution.
"MicroStation Mesh elements do not scale to millions of points, so in my understanding the problem becomes to extract decimated points & lines from a point cloud and then build a MicroStation Mesh from these extracted elements."
Just thinking aloud....
What about using the flashlight function in a grid... and decimating the points based on the points on the grid? The decimation process should be independent enough to be processed concurrently..... on the GPU?
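As a sketch of the idea: grid decimation just bins every point into a regular cell and keeps one representative per cell. A single-threaded Python toy (the 1 m cell size and the keep-first-point policy are my own assumptions; real tools often keep the average or centroid):

```python
# Decimate a point cloud by snapping points to a regular 3D grid and
# keeping only the first point that lands in each cell. Each point's
# cell index is computed independently of all the others, which is why
# the binning step parallelises trivially (threads or a GPU kernel).
from math import floor

def decimate(points, cell=1.0):
    seen = {}
    for p in points:
        key = (floor(p[0] / cell), floor(p[1] / cell), floor(p[2] / cell))
        if key not in seen:          # keep one representative per cell
            seen[key] = p
    return list(seen.values())

cloud = [(0.1, 0.2, 0.0), (0.4, 0.9, 0.1),   # these two share a 1 m cell
         (1.5, 0.2, 0.0), (5.0, 5.0, 5.0)]
print(len(decimate(cloud, cell=1.0)))  # 3
```

Shrinking the cell size keeps more points; growing it decimates harder, so the same pass can feed both a coarse preview and a denser render.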