My Experience with Point Cloud Rendering in uStation v8i SS3
All across Bentley.com you read about Pointools, Descartes and "visualization" of point clouds... But I don't really see HOW to do that ANYWHERE?
I mean... I can load a point cloud in uStation (SS3 user here) but it's dumber than dumb when it comes to controlling it!? (Or I am!)
So what's getting me all worked up:
I can NOT crop/mask the point cloud in realtime using shapes? I can mask it using something like a block only!?
It does NOT respond to lighting? EDIT: It does produce shadows from sunlight, it seems, just as you would expect any 2D geometry to do. It does NOT care about lineweight though.
It does NOT use the full resolution? If my view is set to show 20% of the points, then the render is just as bad! But even at 100% in a VIEW, not all points are showing! Also, the resolution rendered depends on the size of the VIEW being rendered!! Rendering with AA does not up the point count; only increasing the view size seems to.
It does NOT render intelligently in any way? Points are rendered as tiny specks of dust only! I would expect at least an estimated closed surface by connecting close points, or at least the tools to do it swiftly. Respecting linescale goes without saying! (It does not do that.) It does respect "depth-of-field", but produces artifacts: black dots with white "ghosts". It does respect "contour shading", but... that's... I mean... dots with black outlines...
I can NOT reclassify points using shapes/solids? I can reclassify by drawing a shape only!? And then I have to export the entire point cloud, which seems to take A LOT of time... A serious lot of time. 300MB (4.5 million xyz's) takes 10+ minutes!! This is not rocket science and needs NO calculations of ANY kind; it's just a list of coordinates that needs to be either written or skipped (as all we can do is filter by classification!) - This is really bad! - I wonder what happens when we begin working on our multi-gigabyte sets?!!
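For what it's worth, the export/filter step really is just streaming I/O. Here is a minimal sketch in Python, assuming a plain-text XYZ export with the classification code in the last column (an assumption about the file layout, not Bentley's actual POD format; the class codes are ASPRS-style examples):

```python
# Stream-filter an ASCII point cloud by classification code.
# Assumes each line reads "x y z ... class" with the class code last;
# adjust class_col for your actual export layout.

KEEP = {2, 6}  # e.g. ground and building (ASPRS-style codes, illustrative)

def filter_points(in_path, out_path, keep=KEEP, class_col=-1):
    """Copy only lines whose classification code is in `keep`.
    Returns the number of points written."""
    kept = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            fields = line.split()
            if not fields:
                continue
            if int(fields[class_col]) in keep:
                dst.write(line)
                kept += 1
    return kept
```

Nothing here is compute-bound; on a 300MB file this is limited purely by disk speed, which supports the point that a 10+ minute export is unreasonable.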
Scalable Terrain Model (a Descartes thingy) can NOT make a useful model from a point cloud!? Descartes is simply using the XYZs to make a top-down mesh of triangles between ALL the points (as seen from space, mind you), which is pretty much the same as a DTM, except it looks like poop because it automatically lowers resolution!!?? This STM does not respect holes cut into the point cloud. You can exclude points by shapes in the STM, but the holes are filled when viewing and rendering the STM!! So you cannot use this with any other geometry!?
How hard can it be to make a surface and access that dynamically instead of the points? Just make the smallest triangles. And maybe get fancy and reduce geometry somehow: if a side length is too long, then exclude the triangle. Or even reduce by neighbouring surface angles... Or use a tree structure... ;o)
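The side-length rule suggested above is trivial to state. A sketch, assuming the mesh is already given as a point list and index triples (a hypothetical data layout, not a MicroStation API):

```python
import math

def filter_triangles(points, triangles, max_side=1.0):
    """Keep only triangles whose longest edge is <= max_side.
    points: list of (x, y, z) tuples; triangles: list of (i, j, k) index triples.
    max_side is in model units (e.g. metres)."""
    kept = []
    for i, j, k in triangles:
        a, b, c = points[i], points[j], points[k]
        longest = max(math.dist(a, b), math.dist(b, c), math.dist(c, a))
        if longest <= max_side:
            kept.append((i, j, k))
    return kept
```

Long skinny triangles bridging a hole in the point cloud get dropped, which is exactly the "disallow sides longer than 1 m" behaviour asked for later in this thread.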
I am new to working with point clouds, but I have known of and read about them.
Please help me achieve (at least) a decent rendering:
Assuming a ginormous classified, colored pointCloud.
I speculate I need some of the following questions answered, but ANY hints/ideas that work fast and efficiently are much appreciated. The goal is that we don't model, we don't fix... We clip the pointcloud, add our project, and render!
A. Preferably I need a procedure that just clips and filters the pointCloud on the go, so I can show anything except the ground as points.
B. Make a "2½D" regularly triangulated (not autoscaled, as that clearly doesn't work in renderings) DTM that respects the holes in the pointcloud, based on a classification filter.
C. Perhaps a procedure to split the pointCloud and matching surface into smaller manageable "tiles" that will cull when rendering?
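Item C, tiling, can at least be sketched as simple XY bucketing (assuming the points are available as plain (x, y, z) tuples; `tile_size` is a made-up parameter, not a MicroStation setting):

```python
from collections import defaultdict

def tile_points(points, tile_size=50.0):
    """Bucket (x, y, z) points into square XY tiles so each tile can be
    written out, and later loaded or culled, independently.
    tile_size is in model units."""
    tiles = defaultdict(list)
    for x, y, z in points:
        key = (int(x // tile_size), int(y // tile_size))
        tiles[key].append((x, y, z))
    return tiles
```

Each tile could then be saved as its own file and attached/detached per view, so the renderer only touches tiles that intersect the camera frustum.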
PLEASE help me "Get more value out of the pointcloud data"... please... PLEASE!
/Torben
p.s. This has been and will be heavily edited as I dig deeper... Thus responses after this post might not reflect the current text.
p.p.s. http://www.isprs.org/proceedings/XXXVI/part5/paper/RABB_639.pdf - Techniques to automate the generation of surfaces from a pointCloud.
Hi Torben,
Benoit asked me to answer you regarding the use of the STM for solving this request:
Can you provide me with a procedure for making a reduced 3D mesh from the pointCloud that does not fill internal holes? (For example, by disallowing triangle sides longer than, say, 1 m. Note that the Descartes STM does not seem to work.)
Currently the triangle-side disallowing functionality works the same way for an STM as for a TM. Unfortunately this behavior can lead to a very awful-looking STM at low resolution. This is because at low resolution the STM is represented by fewer points, causing the point spacing to be greater and thus the triangle sides to be longer. So at low resolution many triangle sides that might be wanted are removed.
To restrain the triangulation of the STM in order to avoid filling some internal holes, you could explicitly set multiple masks on a single STM, one mask for each hole. Or you could create Civil DTM features (e.g. void, island, hole, etc.), export them to a Civil DTM file and use that file as a source of terrain data for the STM.
Point Density
Yes Torben, you are right: currently the point density reflects the view size, which is a bug, so I have entered TR 339204.
The obvious workaround is to match the size of the MicroStation view to the size of the render view before doing a Luxology rendering featuring a point cloud.
As for using the full resolution, it is probably not wise to do that for all kinds of point clouds. For a very huge point cloud, using the full resolution will tremendously slow the rendering process and will likely bust the RAM before the process is complete. For a point cloud whose size allows the rendering process to complete without any memory allocation failure, the rendering could be better, again at the cost of slower performance. For raster data draped on an STM we have noticed that boosting the resolution can result in better-looking renderings, which led to the creation of the STM_PRESENTATION_QUALITY environment variable. But even when this environment variable is set, the full resolution of the raster draped on an STM is not used, just a higher, more precise resolution.
HTH,
Mathieu
"MicroStation Mesh elements do not scale with millions of points, so in my understanding the problem becomes to extract decimated points & lines from a point cloud and then build a MicroStation Mesh from these extracted elements."
Just thinking aloud....
What about using the flashlight function in a grid... and decimating the points based on the points on the grid? The decimation process should be independent enough to be processed concurrently... on the GPU?
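The grid-decimation idea above could look something like this as a sketch: a plain per-cell "first point wins" downsample, where each cell is independent of the others (which is what would make it parallelizable). None of this is MicroStation API; it is just the algorithm in isolation:

```python
def grid_decimate(points, cell=0.1):
    """Keep one representative (x, y, z) point per cubic grid cell.
    cell is the cell edge length in model units, e.g. 0.1 = 100 mm spacing."""
    seen = {}
    for p in points:
        # Map the point to its integer cell index in each axis.
        key = tuple(int(c // cell) for c in p)
        seen.setdefault(key, p)  # first point in a cell is kept, rest dropped
    return list(seen.values())
```

Averaging the points per cell instead of keeping the first would give a smoother result, at the cost of inventing coordinates that were never measured.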
Yes, I have been trying to look at this too. The best I could come up with was:
1. Decimate the points in Pointools and save out a low-count (100mm spacing) POD file.
2. Reference into MicroStation and export to an XYZ ASCII file.
3. Use Excel to strip off the RGB and other info and save to a CSV file.
4. In MicroStation, use the Dimension XYZ import points tool (key-in IMPORT POINTS).
5. Select a range of points (roughly) per face, flip those points onto the XY plane (around a known point), use the mesh-by-points tool, then flip the mesh back into the upright position (around the known point).
6. Then use the decimate mesh tool (repeatedly) to remove triangles with short edges / coplanar faces & edges, until left with 1 triangle per face.
7. Finally, use the surface trim tools to slice the triangles where they intersect.
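Step 3, the Excel pass, can be collapsed into a small script. A sketch, assuming a whitespace-separated export with x, y, z in the first three columns (the column layout is an assumption; check your own export):

```python
def xyz_strip_extras(in_path, out_path):
    """Rewrite an ASCII point export keeping only x, y, z and dropping
    trailing columns (RGB, intensity, etc.), writing comma-separated
    values suitable for a point-import tool."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            fields = line.split()
            if len(fields) >= 3:
                dst.write(",".join(fields[:3]) + "\n")
```

Unlike Excel, this streams the file line by line, so it is not limited by the spreadsheet row cap and handles multi-million-point exports without loading them into memory.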
By copying the result back to the point data I can see it's done a roughly OK job, but it would have been easier to simply draw 6 faces… and I have lost detail by decimating so heavily.
Maybe a cheat would be to only decimate the PC at the DV clipping plane.... using flashlight.....
The decimation would be based on a linear traversal of the 'section profile' of points. I guess there will be some clever-clogs algorithm that picks out the break points, corners, edges... curves and arcs etc. and strings them together with an Mstn smartline. Option for CVE processing?
I suspect this would be really helpful for reconstructing geometry from PCs. Probably a lot less costly than trying to reconstruct the whole 3D surface in one go.
Yes, something like that. I was thinking I could use GC to find the planar "trend" of a selection set of points from the point cloud. It's easy enough to find the intersections of planes in GC after that... The hard part is parsing the point cloud data and finding an algorithm to find the average planar "gradient"... Of course, by that stage I just wish Bentley had an app for that.
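The planar "trend" of a selection set is a straight least-squares fit. A sketch in plain Python, fitting z = a*x + b*y + c to the selected points (this stands in for what GC would compute; it assumes the patch is not near-vertical, since z must be a function of x and y):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a list of (x, y, z) points.
    Returns (a, b, c). Solves the 3x3 normal equations with Cramer's rule."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    d = det3(A)
    coeffs = []
    for i in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = b[r]
        coeffs.append(det3(m) / d)
    return tuple(coeffs)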