Utilities Example for ElementSensor

I recently got a request for an example of using Element Sensor.  I have posted something to the group files area.

3D Utilities Using ElementSensor

This “tubes up” a water/sewer network based on a single-line model from Quebec City that carries pipe diameter as business data.  Load utilities_08.dgn in GC v.08.11.08.228 or later.

Please post other examples of ElementSensor if you have them.  From the readme, a description of the feature.

ElementSensor

Provides a live and persistent connection between a raw DGN element and a GC feature.  The element’s essential geometry and metadata are exposed in GC.

• Element path is saved in GCT and can be reassigned to new elements

• Multiple elements can be selected in a single ElementSensor

• Works with reference files

• Create by drag-and-drop or by using the tool

• ElementSensor can be used as D-series geometry input to GC features

• If the raw DGN element has Business Data, it is exposed as output properties

• Note: currently works only on linear and elliptical geometry: line, polyline, arc, circle, ellipse, and elliptical arc

 -Makai

 

  • Hi Xun,

     

    1. Deep fetch: I was referring to the Mstn side of things. It sounded as though the EC info was buried too deep, so GC would be slowed down when trying to access it? The GC user would not see any change. Is EC designed for data-flow apps like GC that scan large numbers of elements, or only for manual editing of individual elements in the Elem Info panel?

    2. Element Writers: I wonder why this did not make the list? GC needs to communicate with raw elements and other non-GC smart objects. ElementSensor provides an input 'portal' into GC, so an output mechanism should be next, I would have thought. I can't see how making GC a set of little informational 'dead-end' spurs helps anyone. GC elements will tend to carry more design intent, and should be designed to drive dumb geometry. Information is always increasing, hence also the need to manage it. This means using the few (smart) to drive the many (raw), and working at higher or multiple levels of 'abstraction'.

    3. Notation: Yes, I see that there is a right-click menu for attaching a "notation property to all features of this type". The options are Boolean, double, int, object and string. Object sounds most interesting. What objects are available? There doesn't seem to be much in the documentation, if anything. EC Properties/Schemas are pretty complex and go beyond notation. Will GC be able to dynamically discover EC objects in the model and read/write to them? Dictionary? The examples I found on the forum focus on tagging structural 'knowledge representation' info like 'node/member' or material tags. I didn't find it in the documentation, but I assume the notation functions are available for scripting as well.

    Hopefully, the 'object' option means that things like local coordinate systems used and generated by the script can be exposed/published when the geometry is 'exported' or 'instanced' elsewhere. E.g. it should be standard practice to include local CS info with, say, any cladding panels generated. This would allow the follow-on detailing team to hook their working planes onto something, instead of having to guess and reconstruct their ACSs, refs, etc. with every iteration.

    4. Publishing: in Catia, you can publish any kind of geometry or parameter. Like most MCAD apps, Catia allows geometry to be linked/referenced across models. This is very powerful but leads to a lot of regen problems when the referenced/linked part(s) are changed.

    Publishing allows the user to name the referenced geometry, and 'duplicates' it in the part or assembly model specification tree. There is some control as to where the published info is 'declared'.

    When a published part is changed, Catia searches out the dependent models and updates/checks the dependencies... I guess this 'pushy' method is better than waiting for the referencing model to 'pull' the changes on loading, and potentially running into regen problems asynchronously. As mentioned elsewhere, trying to rebuild dependencies after the fact is a real pain. In V5, only the loaded models are checked. The onus is on the user to ensure that the new geometry is compatible with the 'published' geometry that will interface with the connecting models. There are some tools to bypass or replace failed or missing portions of the 'history tree'. PTC's WF5 apparently made a lot of changes here, and is probably a lot better at this... currently. Not sure how Catia V6 handles this.... in the cloud?

    Publishing also allows some end-user structuring of the dependencies/variables. Inter-model constraints tend to be published at the overarching 'context' or container model level. This allows the solver to prioritize and breakdown the constraint sets in a way that is aligned with the design intent. This also allows certain geometry or parameters to be exposed/published without having to load and synchronise the whole script/history tree. This saves a lot of time. Inventor's Derived Components goes further, and allows the user to pick which bits are to be published with 'full intelligence'. It could be something as simple as exposing certain local coordinate systems or parameters/constraints to participate in the loaded model, without forcing the system to 'play through' all the subservient solid modeling transactions.

    Catia has a scan mode that can display the product hierarchy/structure or the history/update sequence. The sequence in which the dependencies are processed (dataflow) is not necessarily the same as the model 'assembly:part' hierarchy (structure). Visualising the dependencies is key to effective info-model management.

    As a result, most MCAD models are based on 'skeleton' modeling, where the model 'structure' and 'dataflow' hierarchies are fairly closely aligned, and top-down in character. At the top, there is a 'skeleton' model that contains all the key geometry/parameters and is solved at that level, before the changes propagate downwards/downstream.

    A variation of this is 'adaptor' modeling, where there are some lateral dependencies in what is still mainly a top down 'tree' hierarchy. This is found a lot in automotive body/surface models, where links across the car body parts are required/unavoidable.

    There is also something called 'Functional Modeling' which is a lot less hierarchical. Catia's Imagine and Shape workbench uses this. DS demo'd a BIM app based on this called Live Buildings. Pretty powerful stuff. I suspect that publishing is still useful here but will need more behind the scenes management.

    So, to answer your question: I think publishing would apply to both the 'graph variables' which tend to be at the root or input end of the script(s), as well as the dependent parameters downstream 'in the leaves'.

    Root or input variables may need to drive other scripts in the same model. So publishing them is kinda like what Revit does with 'shared parameters'. PCS also allows for certain parameters to be 'booted up' to the top level so that they can be referenced by multiple components, and accessed after the component is inserted/instanced in Mstn/BA.

    I suppose publishing 'root' or 'leaf' variables/geometry is really just like the GFT's input and output properties. In the real engineering world, there will be multiple scripts that need to talk to each other in a robust way. The individual script needs to be encapsulated and participate through 'published' interfaces/ports with other 'scripts', non-GC smart geometry, lots of dumb geometry, and the user with his pesky model-based manipulations.... or not?

     

    Regards

    Dominic

     

  • Dominic,

    Thanks for the reply. Your information always takes me days to digest :).

    (1) Right now, the ElementSensor GC properties are intentionally designed to have the same structure as EC properties. This helps eliminate any inconsistency between the two worlds. As for whether EC is designed for data-flow apps like GC to scan large numbers of elements, I am not an expert on its design. But according to what I know of where EC has been used so far (plant, civil, etc.), it should have taken large amounts of element data into consideration.

    (2) It might be because we didn't see many use cases (requests) for an element writer. At least, it is not as urgent as the element sensor: bringing in raw elements is essential, but writing out raw elements is not (you can always use a feature-hosted element). Eventually, it will be there, just as the Excel feature has both ReadValue and WriteValue methods.

    (3) The Notation property's object type is a problem when it is written to an EC property. We need explicit conversion between the GC property and the EC property. If the Notation property is of object type and is actually an instance of a GC class, then we would need to find the right description of such a class in EC and convert it. In the published release, we currently support only several primitive types, such as string and int, but not object.
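
    A rough sketch of what I mean (the type and method names here are made up for illustration only; they are not the actual GC or EC API):

        // Hypothetical sketch: primitive types convert easily, but an
        // arbitrary object has no obvious EC counterpart.
        using System;

        static class NotationToEc
        {
            public static object ToEcValue(object notationValue)
            {
                switch (notationValue)
                {
                    case string s: return s;   // string maps directly
                    case int i:    return i;   // int maps directly
                    case double d: return d;   // double maps directly
                    case bool b:   return b;   // Boolean maps directly
                    default:
                        // An arbitrary GC object has no obvious EC class to map to;
                        // we would first have to find or create a matching ECClass
                        // and convert each member, which is not supported yet.
                        throw new NotSupportedException(
                            "No EC mapping for type " + notationValue?.GetType().Name);
                }
            }
        }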

    (4) The publishing mechanism is very interesting to know about. It feels a bit like an alias mechanism, and the GFT is similar to it, except that the input/output names are given by GC and are unchangeable. I haven't had a chance to use Catia yet, beyond reading the introductory texts. It seems that in Catia, everything that can be published has a host. For example, a plane to be published has Part1 as its owner and is identified as Part1\Geometry Set.1\Plane.2. I'm not sure whether Catia allows global variables to be published, or whether they are directly accessible? Also, from the MicroStation point of view, is it just like creating another ECClass and ECInstance (ECProperty) and making sure the ECProperty is calculated from (or references) part of, or the whole of, the raw element?

    Regards,

    -Xun

  • Hi Xun,

    Apologies for the big lag in replying.

    4. I think the Catia global variables are accessible. The parameter collections can be iterated through to find and access them using VB. See the examples on this site. They are also accessible via the Knowledge Advisor dialog boxes.

    The Catia examples raise a few questions for me: the way Catia's VB needs to loop through the 'parameter collection' and compare and match names sounds very slow and low-tech. Surely there are better ways of accessing user parameters? Houdini replaced its HScript with the Houdini Object Model a couple of years ago. Surely, if we need to access a parameter, we should just access it via an object interface, and not have to write loops (see the sketch below).
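
    Something like this is the difference I have in mind (the Parameter/ParameterSet types are invented purely for illustration, not any actual Catia or GC API):

        using System.Collections.Generic;
        using System.Linq;

        // Invented types, for illustration only.
        record Parameter(string Name, double Value);

        class ParameterSet
        {
            private readonly List<Parameter> items;
            private readonly Dictionary<string, Parameter> index;

            public ParameterSet(IEnumerable<Parameter> parameters)
            {
                items = parameters.ToList();
                index = items.ToDictionary(p => p.Name);
            }

            // 'Low-tech' style: loop through the collection and compare names,
            // as in the Catia VB examples.
            public Parameter FindByLoop(string name)
            {
                foreach (var p in items)
                    if (p.Name == name)
                        return p;
                return null;
            }

            // Object-interface style: direct, indexed access.
            public Parameter this[string name] => index[name];
        }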

    Houdini and a lot of other animation apps seem to be moving to Python. Even GH now has a Python plugin. One supposed advantage of Python is that it's more object-oriented and has dynamic typing. ACAD's new AssocFramework uses something called protocol extensions, which allows the user to extend pre-compiled types or classes when using its Parametric Drawing API. See AU2010 CP316 online. ACAD's Overrule API also allows the user to extend the behavior of the standard entities using dotNET. I think the Stickman example would benefit from dynamic typing. Will GC support dynamic typing, objects, extension methods, etc. in the future?
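
    Both of those already exist in plain C#, for what it's worth; the question is whether GC will surface them. A generic illustration (nothing GC-specific):

        using System;

        static class StringExtensions
        {
            // Extension method: adds a member to a pre-compiled type (string)
            // without modifying or subclassing it.
            public static string Shout(this string s) => s.ToUpperInvariant() + "!";
        }

        class Demo
        {
            static void Main()
            {
                Console.WriteLine("hello".Shout());   // HELLO!

                // Dynamic typing: member resolution is deferred to run time,
                // so the same variable can hold values of different types.
                dynamic value = "hello";
                Console.WriteLine(value.Length);      // 5
                value = 42;
                Console.WriteLine(value + 1);         // 43
            }
        }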

    Hosts + Aliases: interesting perspective. Do you mean like the containers + pointers that Makai mentioned previously, when the new-school GC was introduced? It sounds very similar to ACAD's AssocFramework Dependency Bodies? These are containers for linking and defining constraints and parametrics that tie raw elements together.

    3. GC v EC Properties: EC classes and properties sound like they are designed to be used in an OO setting. But hopefully not only as statically typed objects? XFM seems to cater for Inferred as well as Native Features, using EC. Like XFM, OpenPlant has a lot of machinery that maps classes to database schemas. I think scripting, by its nature, will need a lot of dynamically typed stuff/attributes/parameters made up by the user, which will need to interact with the classes/schemas and state info that Mstn or GC provide?

    XFM data apparently does not require an external DB. So how, or what, manages the dependencies, transactions, consistency/integrity and even performance issues? GC+LINQ?  I suppose GC can already use Excel as a solver and draw the results, so using an external DB as a 'node', where the input and output parameters are managed by GC, shouldn't be a problem. It looks like GH is also working to provide user data/dictionaries, in addition to Data Trees. And MS has helped here with things like LINQ, which should allow better DB access from scripts.
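
    For example, a plain LINQ-to-Objects query (the Pipe record is just made up to illustrate; with a suitable LINQ provider the same query shape could point at a database table instead of an in-memory list):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Invented record type, for illustration only.
        record Pipe(string Id, double Diameter, string Material);

        class Demo
        {
            static void Main()
            {
                var pipes = new List<Pipe>
                {
                    new Pipe("P1", 150, "PVC"),
                    new Pipe("P2", 300, "Concrete"),
                    new Pipe("P3", 200, "PVC"),
                };

                // Declarative query: large PVC pipes, ordered by diameter,
                // with no hand-written loops.
                var largePvc = pipes
                    .Where(p => p.Material == "PVC" && p.Diameter >= 150)
                    .OrderByDescending(p => p.Diameter);

                foreach (var p in largePvc)
                    Console.WriteLine($"{p.Id}: {p.Diameter} mm");
            }
        }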

    I think it is definitely worth looking at the simulation world, which has been dealing with the problems of dataflow computing AND state machines for a while, including what role functional languages can play. The graph-based approach has long been established there: solvers are strung together as nodes in a controlled state machine, using configurable, circuit-board-like ports or gateways (like an improved Catia relational modeling?!).

    2. The future read/write functionality sounds good. How would it work? It sounds like GC would write either to an EC class/instance or to a Mstn class/instance, similar to VBA's use of COM (hopefully without the 32-bit limitation). Converting between GC and EC properties: I guess Mstn and GC already provide the classes and metadata as part of dotNET assemblies for interop/P-Invoke? How will user-generated classes be handled dynamically? Design++'s frame-based classes/properties seem a lot more flexible and dynamic compared to the GFT. Combine with the XFM API for a GC/FS/DDD version of Geospatial Administrator?
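
    For reference, a P/Invoke declaration is just the foreign signature declared on the managed side; a generic Win32 example (nothing to do with the Mstn or GC assemblies specifically):

        using System;
        using System.Runtime.InteropServices;

        class Demo
        {
            // P/Invoke declaration: tells the runtime which native DLL and
            // entry point to call, and how to marshal the arguments.
            [DllImport("user32.dll", CharSet = CharSet.Unicode)]
            static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

            static void Main()
            {
                MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke sketch", 0);
            }
        }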

    Ideally, scripting should be able to access the underlying native code libraries and data as directly as possible. Currently, it seems that we need to switch to C# to get to various functions and a bit more speed.  This does not compare well to apps like MotionBuilder, where there is tight integration between Python and its C++ library OpenReality, and no 'in-between' language to learn.  Under dotNET, even C# is compiled only to intermediate language and JIT-compiled at run time, not to native code ahead of time. Some consider C# glorified scripting because of this (I think that is not necessarily all bad). Will GCScript and C# converge? As you know, this is one alleged advantage of DScript.

    Apparently, in F#, the programmer can select a block of code and run it without a separate compile step, even though F# is statically typed, making it feel very much like scripting. LINQPad has similar 'scratchpad' functionality. This is one of the big advantages(?) of scripting, besides not needing to think about resource management. NB: using C# doesn't guarantee that there won't be problems here, either.

    1. ElementSensor GC Properties v EC Properties: I suppose these will be the same thing at some point, when GC becomes pervasive? Bentley has been working on Engineering Component Modeling since Project + ComponentBank and CustomObjects, i.e. the '90s. Hopefully, Joe User will start to see more zip soon. Tipping point: V9?