Rendering and working with point clouds is very, very poor?

My Experience with Point Cloud Rendering in uStation V8i SS3

All across Bentley.com I read about Pointools, Descartes and "visualization" of point clouds... But I don't really see HOW to do that ANYWHERE?

I mean.. I can load a pointcloud in uStation (SS3 user here), but it's dumber than dumb when it comes to controlling it!? (Or I am!)

So what's getting me all worked up:

I can NOT crop/mask the pointcloud in realtime using shapes?
I can mask it using something like a block only!?

It does NOT respond to lighting?
EDIT: It does produce shadows from sunlight, it seems, just as you would expect any 2D geometry to do. It does NOT care about lineweight, though.

It does NOT use the full resolution?
If my view is set to show 20% of the points, then the render is just as bad! But even at 100% in a VIEW, not all points are showing!
Also, the rendered resolution depends on the size of the VIEW being rendered!! Rendering with AA does not increase the point count; only increasing the view size seems to.

It does NOT render intelligently in any way?
Points are rendered as tiny specks of dust only!
I would expect at least an estimated closed surface built by connecting close points, or at least the tools to do it swiftly. Respecting line scale goes without saying! (It does not do that.)
It does respect "depth of field", but produces artifacts: black dots with white "ghosts".
It does respect "contour shading", but.. that's.. I mean.. dots with black outlines..

I can NOT reclassify points using shapes/solids?
I can reclassify by drawing a shape only!? And then I have to export the entire pointcloud, which seems to take a lot of time.. a serious lot of time. 300 MB (4.5 million XYZs) takes over 10 minutes!! This is not rocket science and needs NO calculations of ANY kind; it's just a list of coordinates that needs to be either written or skipped (as all we can do is filter by classification!). This is really bad! I wonder what happens when we begin working on our multi-gigabyte sets?!!
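For what it's worth, the "written or skipped" export described above really is just a streaming filter. A minimal sketch, assuming a whitespace-separated ASCII export with the classification code in the fourth column (a hypothetical layout; adjust `class_col` to match your actual format):

```python
# Sketch: stream-filter an ASCII point file by classification code.
# Assumes a "x y z class" column layout (an assumption, not a Bentley format).
def filter_by_classification(src_path, dst_path, keep_classes, class_col=3):
    """Copy only the lines whose classification code is in keep_classes."""
    keep = {str(c) for c in keep_classes}
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            fields = line.split()
            if len(fields) > class_col and fields[class_col] in keep:
                dst.write(line)  # pass the point through unchanged
```

For example, `filter_by_classification("cloud.xyz", "ground.xyz", {2})` would keep only class-2 points; since nothing is parsed numerically, this runs at close to disk speed even on large files.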

Scalable Terrain Model (a Descartes thingy) can NOT make a useful model from a point cloud!?
Descartes simply uses the XYZs to make a top-down mesh of triangles between ALL the points (as seen from space, mind you), which is pretty much the same as a DTM, except it looks like poop because it automatically lowers resolution!!?? This STM does not respect holes cut into the pointcloud. You can exclude points by shapes in the STM, but the holes are filled when viewing and rendering the STM!! So you cannot use this with any other geometry!?

How hard can it be to make a surface and access that dynamically instead of the points? Just make the smallest triangles. And maybe get fancy and reduce geometry somehow: if a side length is too long, then exclude the triangle. Or even reduce by neighbouring surface angles.. Or use a tree structure.. ;o)
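The "smallest triangles, drop the long ones" idea can be sketched generically. This is not a MicroStation or Descartes API, just a SciPy illustration of a 2.5D triangulation that keeps holes open; `max_edge` is a dataset-specific threshold you would have to tune:

```python
# Sketch: 2.5D Delaunay triangulation in plan view, discarding any
# triangle whose longest edge exceeds max_edge so gaps in the cloud
# are not bridged (the "exclude triangle if side too long" rule).
import numpy as np
from scipy.spatial import Delaunay

def triangulate_with_holes(points_xyz, max_edge):
    pts = np.asarray(points_xyz, dtype=float)
    tri = Delaunay(pts[:, :2])           # triangulate on XY only
    keep = []
    for simplex in tri.simplices:
        a, b, c = pts[simplex, :2]
        longest = max(np.linalg.norm(a - b),
                      np.linalg.norm(b - c),
                      np.linalg.norm(c - a))
        if longest <= max_edge:          # skip triangles spanning holes
            keep.append(simplex)
    return pts, np.array(keep)
```

A tree structure (octree/quadtree) would be the next step for level-of-detail, but even this flat version demonstrates that hole-respecting triangulation is not exotic.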

I am new to working with pointclouds, but I have known of and read about them for some time.

Please help me achieve (at least) a decent rendering:

Assuming a ginormous, classified, colored pointCloud:

I speculate I need some of the following questions answered, but ANY hints/ideas that work fast and efficiently are much appreciated.
The goal is that we don't model, we don't fix.. We clip the pointcloud, add our project, and render!

A. Preferably I need a procedure that just clips and filters the pointCloud on the go, so I can show anything except the ground as points.

B. Make a "2½D" regularly triangulated (not autoscaled, as that clearly doesn't work in renderings) DTM that respects the holes in the pointcloud, based on a classification filter.

C. Perhaps a procedure to split the pointCloud and matching surface into smaller, manageable "tiles" that will cull when rendering?
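Question C amounts to bucketing points into an XY grid so each tile can be referenced and culled independently. A minimal sketch (the tile size is a guess; a real workflow would write each bucket out as its own file or POD):

```python
# Sketch: bucket points into square XY tiles keyed by grid index.
# Each bucket can then be saved/referenced separately for view culling.
from collections import defaultdict

def tile_points(points_xyz, tile_size):
    tiles = defaultdict(list)
    for x, y, z in points_xyz:
        key = (int(x // tile_size), int(y // tile_size))  # grid cell index
        tiles[key].append((x, y, z))
    return tiles
```

With, say, `tile_points(points, 50.0)` you would get 50 x 50 m tiles; only the tiles intersecting the view frustum need to be attached when rendering.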

PLEASE help me "Get more value out of the pointcloud data"... please... PLEASE!

/Torben

p.s. This has been and will be heavily edited as I dig deeper.. Thus responses after this post might not reflect the current text.

p.p.s. http://www.isprs.org/proceedings/XXXVI/part5/paper/RABB_639.pdf - Techniques to automate the generation of surfaces from a pointCloud.

  • Hi Torben,

    Yes, while point clouds are often jaw-dropping, they generally require post-processing steps (e.g. extraction, transformation, etc.) before something useful can be done with them.

    This is quite similar to raster data (e.g. even if you can see the pixels describing a road, you cannot do much with it without modeling the road with some kind of vector).

    Some of your items are change requests (CRs), so I will forward your post to the product manager responsible for point clouds.

    As for the resolution, it is normal that the full resolution is not available all the time. The same is true for rasters and STMs, two other kinds of data that can contain billions of pixels/points. Displaying the full resolution on a computer screen that has only a few million pixels is not useful and would certainly kill visualization performance.

    The new Scalable Terrain Model is only a 2.5D terrain and thus should only be created from 2.5D data, like an aerial point cloud with points classified as terrain or post-processed to remove any above-ground features.

    Finally I would suggest that you check the WIKI web pages describing the new functionalities in Descartes SELECTSeries 3 (see the SELECTSeries 3 section on communities.bentley.com/.../bentley-descartes.aspx).

    Thanks,

    Mathieu



  • I read: "Yes, we have NO working tools for point cloud visualization at the moment"

    Is it really so? I got my bosses all hyped up, because you write all over that you can visualize pointclouds with Descartes, and it turns out to be just talk?!

    I have read and watched everything I could find on the Bentley site and the internet, as well as the various help files included with Descartes, to no avail. But thanks for the link.

    I want to love you Bentley, you know that, but that love is getting some serious testing at the moment! (probably thanks to marketing - and me being gullible!!?)

    Please remember us little guys.. I have to tell my boss that what you (and therefore me) hinted at being possible is not really possible. Some day they'll believe your marketing over me, and then guess what!? (Same story with the missing traffic simulation and real luxology shaders btw..)

    *Gone browsing... bbl*

    /T

    System: Win7 64bit 16GB Ram - microStation V8i SS3 08.11.09.578. + PoinTools CONNECT. - Intel i7-4800MQ CPU@2.70GHz, 4 core / 8 Logic proc.

  • "MicroStation Mesh elements does not scale with millions of Points so in my understanding the problem becomes to extract decimated points & lines from a point cloud and then build a microstation Mesh from these extracted elements."

    Just thinking aloud....

    What about using the flashlight function in a grid... and decimating the points based on the points on the grid? The decimation process should be independent enough to be processed concurrently..... on the GPU?
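The grid idea above can be read as voxel decimation: snap points to a regular 3D grid and keep one representative per occupied cell. A hedged sketch (CPU only; the per-cell independence is exactly what would make a GPU version embarrassingly parallel):

```python
# Sketch: grid/voxel decimation. cell_size sets the output spacing
# (e.g. 0.1 for roughly 100 mm point spacing). First point per cell wins.
def grid_decimate(points_xyz, cell_size):
    seen = {}
    for p in points_xyz:
        key = tuple(int(c // cell_size) for c in p)  # voxel index
        seen.setdefault(key, p)                      # keep one per voxel
    return list(seen.values())
```

Averaging the points in each cell instead of keeping the first would give a slightly smoother result at the cost of one more pass.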

    Yes, I have been trying to look at this too. The best I could come up with was:

    1. Decimate the points in Pointools and save out a low-count (100 mm spacing) POD file.

    2. Reference it into MicroStation and export to an ASCII xyz file.

    3. Use Excel to strip off the RGB and other info and save to a CSV file.

    4. In MicroStation, use the Dimension XYZ import points tool (key-in IMPORT POINTS).

    5. Select a range of points (roughly) per face, flip those points onto the XY plane (around a known point), use the mesh-by-points tool, then flip the mesh back into the upright position (around the known point).

    6. Then use the decimate mesh tool (repeatedly) to remove triangles with short edges / coplanar faces & edges until left with one triangle per face.

    7. Finally, use the surface trim tools to slice the triangles where they intersect.

    By copying the result back onto the point data I can see it's done a roughly OK job, but it would have been easier to simply draw 6 faces… and I have lost detail by decimating so heavily.
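Step 3 above (the Excel pass) can be scripted, which helps once the files get big. A minimal sketch that keeps only the first three columns of a whitespace-separated export, assuming x, y, z come first (adjust if your export orders columns differently):

```python
# Sketch: strip RGB/intensity columns from an ASCII xyz export,
# writing a CSV suitable for a point-import tool. Assumes the first
# three whitespace-separated fields on each line are x, y, z.
def strip_to_xyz(src_path, dst_path):
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            fields = line.split()
            if len(fields) >= 3:                       # skip malformed lines
                dst.write(",".join(fields[:3]) + "\n")
```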

    Maybe a cheat would be to only decimate the PC at the DV clipping plane.... using flashlight.....

    The decimation would be based on a linear traversal of the "section profile" of points. I guess there will be some clever-clogs algorithm that picks out the break points, corners, edges... curves and arcs, etc., and strings them together with an MStn smartline. Option for CVE processing?

    I suspect this would be really helpful for reconstructing geometry from PCs. Probably a lot less costly than trying to reconstruct the whole 3D surface in one go.

    Yes, something like that. I was thinking I could use GC to find the planar "trend" of a selection set of points from the point cloud. It's easy enough to find the intersections of planes in GC after that... The hard part is parsing the point cloud data and finding an algorithm to compute the average planar "gradient"... Of course, by that stage I just wish Bentley had an app for that.
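The "average planar trend" has a standard answer: a least-squares plane fit via SVD. A sketch, independent of GC or any Bentley API, that returns the centroid and unit normal of the best-fit plane through a selection of points:

```python
# Sketch: least-squares plane fit. The singular vector with the
# smallest singular value is normal to the best-fit plane, which
# passes through the centroid of the points.
import numpy as np

def fit_plane(points_xyz):
    pts = np.asarray(points_xyz, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]   # (point on plane, unit normal)
```

Intersecting two such fitted planes then gives the edge lines between faces, which is essentially the plane-intersection step mentioned above.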

    Here follow my latest, although sparse, findings:

    Speed is amazing!

    Using the "Save image" tool produces images very, very fast!
    I "rendered" a 400-frame 720x525 fly-around animation in just about 15 minutes!

    If I render the same image, it takes about 3 minutes, and the point cloud gains very little by being rendered.
    So what I basically have is a decrease in "rendering" time of about 80 times. I might as well render animations (where point clouds really shine in the first place) instead of single images!

    This is spectacular indeed!

    Geometry to pointcloud instead of the other way around?

    To utilize this amazing speed, I wonder if perhaps I should stop using geometry altogether and simply use only pointclouds?

    1. Can I somehow export the regular model (The Geometry project mesh Model) as a detailed pointcloud and thus gain some serious speed?

    2. Can I somehow artificially boost the density of my laser-scanned pointCloud / create interpolated points to up the definition for visualization purposes?

    The downside of losing geometry altogether (or is it really "not rendering"?) is of course the loss of pretty much ALL the latest fancy Luxology stuff such as shadows/DOF, which leads to another question.. Assuming "1" above is easy...

    3. Can we somehow recolor the generated pointCloud to reflect the lighting conditions in the model?
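Question 1 is a generic technique, independent of any MicroStation export tool: sample points uniformly on each mesh triangle using random barycentric coordinates. A hedged sketch for a single triangle (point count per triangle would normally be scaled by triangle area):

```python
# Sketch: uniform random sampling on a triangle via barycentric
# coordinates. Sampling every mesh triangle this way converts
# geometry into a point cloud of chosen density.
import random

def sample_triangle(a, b, c, n):
    pts = []
    for _ in range(n):
        u, v = random.random(), random.random()
        if u + v > 1.0:            # fold the unit square into the triangle
            u, v = 1.0 - u, 1.0 - v
        pts.append(tuple(a[i] + u * (b[i] - a[i]) + v * (c[i] - a[i])
                         for i in range(3)))
    return pts
```

Question 3 would then be a matter of baking the rendered lighting into each sampled point's color before export, which is effectively what texture baking does for meshes.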

    Regards,
    Torben

    System: Win7 64bit 16GB Ram - microStation V8i SS3 08.11.09.578. + PoinTools CONNECT. - Intel i7-4800MQ CPU@2.70GHz, 4 core / 8 Logic proc.

  • Not sure whether making everything points will speed things up. But I would have thought that since the hardware is based on polygons, there has always been a vertex buffer... I suspect the render-for-animation tool is much quicker due to the lack of overheads and the memory budget for the UI?

    Ran across this awesome clip about Kinect Fusion and point cloud segmentation. One thing about point clouds that I find bothersome is the lack of photo or texture information, which seems to be available when using Kinect/Kiwi.... in real time, even!

    For visualisation purposes, I can see compositing or mixing material rendered using Mstn/Lux with a '3d environment map' created with a Kinect-type device that combines infra red or laser scanning with video footage being very productive.

    Mixing 'fictional' and 'non-fictional' objects, as these Uclideon guys call it, should have a lot of application. As built environment designers, we are constantly having to work with(in) a context. Easier to scan and appropriate, than to re-construct, using API's like Kinect Fusion, and Photosynth3d.

    And, if the Uclideon guys are successful, then point clouds would be the preferred format to deal with 'unlimited detail'.

    NURBS and solids seem due to be radically re-implemented to take advantage of GPU advances. Things like using GPU texture buffers to store info for NURBS processing will need geometry library changes... Maybe a more HW-friendly, L-system-like procedural data structure built on points/meshes would host surfaces and solids in future.

  • Torben,

    We provide clients with visualizations using solely point clouds from MS SS3. At times, we have used something we call 'hybrid modeling' to enhance our visualizations. Here is a link to our YouTube Channel to see what some of these look like. videos

    And here is some imagery of a hybrid model - a combination of geometry with photo-textures provided from our laser scanner and point cloud data.

    Point Cloud Data

    Hybrid, photo-textured model

    Combined data.

    If you would like to learn more about this workflow, I will post more information for you.

    -Paul

  • Hi Paul,

    The videos on your YouTube channel look great. If I remember rightly, I saw some of these in Amsterdam at BE Inspired (I followed your presentation).

    Do you have experience with Descartes for the processing of point clouds? From what I know about it, this program could be a great addition for this kind of work.

    Regards, Louis

  • Hello Louis,

    Awesome that you were in Amsterdam as well! Such a beautiful place.

    I have used Descartes and find that it is a good complement when working with point clouds. It depends on the level of detail one is trying to extract from the clouds. The rendering engine is the same, however. For full-cloud feature extraction we like Geomagic, but Rapidform and Polyworks are fantastic for this as well. For the final visualization, bringing the meshes into MicroStation and applying materials is probably one's best bet.

  • Ran across this paper on 'Smart Boxes'.....

    Maybe good for filling in the blanks where the scan was obstructed...

    Maybe the points inside the boxes could be automatically moved to a separate 'level' or classification. This would make what is left to be modelled manually a little clearer?

    The box alignment and fitting is interesting. Could be helped with colour input? Maybe the generated planes should be left around to be tweaked on screen to allow the modeler to help the recognition process? The points "inside" the shapes would be used as repetition sets for pattern recognition of adjacent points.

    There seem to be a lot of pattern-recognition algorithms that would be useful for buildings: periodicity, rectangles, non-local filtering. But it seems that some of them could do with some user guidance or guide shapes.

    It could be a two way interactive process, where the user identifies the approximate planar shapes and the fitting algo cuts the CAD shapes boundaries back according to the best fit?

    Ducts and pipework.... are like trees? Windows?

    PS: Then there are also these shape grammar guys.