Import GC Model Node: Game Changer?

Great series by Mary Chib showcasing the new Import GC Model Node.

The ability to break up the script using the same 'mental model' as Mstn's Ref attachments opens up a whole new dimension, IMO. The longstanding problem with GC and other algorithmic apps like it is the 'hard transition' that happens once the algorithmic model is thrown over the wall to the rest of the team. This is increasingly being recognised as a problem. See the comment in this recent write-up:

"explained how they keep the geometry in .... for as long as possible to enable dynamic changes to the building definition before handing it over to .... for detailed documentation."

The Import GC Model node should help break down the model and:

  1. Allow the separate disciplines to work concurrently (using the familiar federated CAD file structure)
  2. Support multiple variations (especially important for LOD management)

The example workflow above shows the use of a parametric '2d sketch' for the plan geometry that is imported (with a link back to the originating 'reference' GC model) into the active GC model. Pretty well understood workflow in the MCAD world. I can see multiple sketches used to define the tower's external envelope. KPF's CITIC tower would have 2-3 'key' plan sketches at the top, waist and bottom of the tower, for example.
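
Purely to illustrate the multi-sketch idea (nothing GC-specific, and all the profile data below is invented), here is a minimal Python sketch that derives per-floor plan profiles by interpolating between bottom / waist / top key sketches:

```python
def lerp_profile(a, b, t):
    """Linearly interpolate between two matching vertex lists."""
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(a, b)]

def profile_at(z, keys):
    """keys: list of (height, profile) pairs sorted by height."""
    for (z0, p0), (z1, p1) in zip(keys, keys[1:]):
        if z0 <= z <= z1:
            return lerp_profile(p0, p1, (z - z0) / (z1 - z0))
    raise ValueError("z outside key profile range")

# Invented data: square base, shifted waist, smaller top plan.
bottom = [(0, 0), (40, 0), (40, 40), (0, 40)]
waist  = [(5, 2), (38, 5), (35, 38), (2, 35)]
top    = [(12, 10), (30, 12), (28, 30), (10, 28)]
keys   = [(0.0, bottom), (150.0, waist), (300.0, top)]

floors = [profile_at(z, keys) for z in range(0, 301, 4)]  # 4 m floor-to-floor
print(len(floors), "floor profiles generated")
```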

The use of separate DGNs provides a lot more transparency / familiarity for the entire design team... and helps avoid things disappearing into a mega 'write-only' script that no one understands and everyone is afraid to modify later, after your scripting guru has left for a better-paying job somewhere else. The underlying problem is that after more than a decade of algorithmic tools being taught in the archie schools, most grads are still pretty ho-hum when it comes to scripting. So much for the 'tool maker' generation?

How teams will use the Export Node in combination with the IGCM node will be interesting. The dreaded 'hard transition' can be ameliorated by using Nodes that have equivalents in OBD like Walls, Slabs etc. Hopefully, the new Curtain Wall tool in OBD will have a GC Node at some point so that any cladding panels made by GC can be intelligently modified later using OBD tools. The lack of continuity when it comes to parametric smarts has always been a big stumbling block. A number of options here:

  1. Provide Item Types / Functional Cell support. So instead of generating a panel as a GNT here, the user would generate an FC using the new Parametric Solids / Constraints tools in Mstn.
  2. Even better if GC can also retain its parametric smarts in 'headless' (as a handler) mode.
  3. Use GC (in the exported model) to replace dumb-geometry cladding panels with smart ones based on either OBD Curtain Wall objects or Mstn FCs.
  4. ?
  • Export Node v. Imported GC Model

    As mentioned, the new IGCM node should help break GC-driven smarts down into much more usable chunks. I can see this being used very effectively in a production / team environment in conjunction with static geometry produced by the Export Node. For example, your cladding envelope team could be flexing their scripted model while Ref attaching a static model of the structural frame and the core. The other teams can update their static models independently without impacting others, as Ref attaching a model is pretty quick, per usual practice.

    The new IGCM node allows the teams to expose some of the parameters in the respective GC models by using the GC Model Output Property Node. The respective teams can then keep a parametric 'copy' of the Ref attached model that they can flex and adjust to suit the host / local model team's requirements, which should be a huge productivity booster / lubricant.

    The process of defining the GC Model Output Properties looks very similar to the GNT process. I wonder if the IGCM Node shouldn't supersede/subsume GNTs, especially when GNTs apparently do not improve performance.

    Presumably, Mstn's Activate tool will still work going forward, so the user could have the scripts in separate models in the active file or in external files and amend them from the active model.

    Component Center integration in due course?

  • Breaking the GC graph down into smaller graphs in multiple models should also allow easier parallel processing.

    Using Wayne's Gherkin example, each spiraling strip could be hived off into a separate model and processed separately. The base polygon model would stand alone, and each individual strip could go into its own model to be processed by its own thread etc. All those boolean ops for each panel will be time-consuming.

    Actually, in this case you could just process one strip and Ref attach the results back in as copies with different rotations.
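
    A rough Python sketch of both routes (the strip generator below is a stand-in for the expensive boolean/panel work, not GC API): farm genuinely independent strips out to a process pool, or exploit the rotational symmetry and generate one master strip that the others instance.

    ```python
    # Hypothetical sketch, not GC API.
    import math
    from concurrent.futures import ProcessPoolExecutor

    def generate_strip(strip_index):
        # Stand-in for the expensive per-strip boolean/panel work.
        return [f"panel_{strip_index}_{i}" for i in range(40)]

    def rotate_instance(panels, angle):
        # Cheap path: reference the same geometry with a rotation
        # applied, rather than regenerating it.
        return {"source": panels, "rotation": angle}

    N_STRIPS = 12

    if __name__ == "__main__":
        # Option A: genuinely independent strips -> separate processes.
        with ProcessPoolExecutor() as pool:
            strips = list(pool.map(generate_strip, range(N_STRIPS)))

        # Option B: rotational symmetry -> generate once, instance the rest.
        master = generate_strip(0)
        instances = [rotate_instance(master, i * 2 * math.pi / N_STRIPS)
                     for i in range(N_STRIPS)]
        print(len(strips), "computed strips;", len(instances), "instanced strips")
    ```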

    This should also allow the team to have separate models for different variations, for LOD or optioning reasons. The variation models could be based on generating the panels on a floor-by-floor basis. This would allow the team to selectively regen only the floors required instead of having to process the whole building. This kind of flexibility will be key to scalability / LOD management in a production environment.
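
    A minimal sketch of what floor-level selective regen could look like, assuming each floor's generated panels can be cached against a hash of that floor's inputs (all names below are hypothetical):

    ```python
    import hashlib, json

    cache = {}  # floor number -> (hash of inputs, generated panels)

    def input_hash(params):
        return hashlib.sha1(json.dumps(params, sort_keys=True).encode()).hexdigest()

    def regen_floor(floor, params):
        # Stand-in for the expensive per-floor panel generation.
        return [f"panel_{floor}_{i}" for i in range(params["panel_count"])]

    def update_floor(floor, params):
        h = input_hash(params)
        if cache.get(floor, (None, None))[0] != h:  # inputs changed -> regen
            cache[floor] = (h, regen_floor(floor, params))
        return cache[floor][1]

    update_floor(12, {"panel_count": 40})
    update_floor(12, {"panel_count": 40})  # cache hit: no regen
    update_floor(12, {"panel_count": 42})  # inputs changed: regen floor 12 only
    ```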

    Interesting vids from some researchers at SFU showing GC parallel processing.

    Maybe that investment in Scenario Services in the cloud (Azure?) will come in useful here.

  • A more distributed model-based approach also allows more realistic and productive workflows that mix parametric 'smart' and 'raw' geometry.

    The plan sketch has a parametric rectangle for the core. In reality, the cores will be a lot more complex, based on a number of lift shafts of differing heights bundled together to form the bulk of the core footprint. The graph for the core would be a lot different and would not really start or end with a provisional 'rectangle' perimeter.

    What would be more realistic would be to have GC just boolean subtract the core at every level using the 'raw' shaft model in the GC model, instead of trying to come up with a parametric approximation in plan. Cross-file associative links down to subelement level, like those provided by Associative Extractions tech or similar?

    This hybrid workflow also highlights the usefulness of using GC to process Ref attached models. A simple example would be calculating floor areas, which needs all the columns and other structural obstructions subtracted from the slab area in plan.
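
    A toy version of that calculation using shapely rather than GC (all the slab, shaft and column geometry below is invented for illustration): subtract the 'raw' core shafts and column obstructions from the slab outline and report the net area.

    ```python
    from shapely.geometry import Point, Polygon
    from shapely.ops import unary_union

    slab = Polygon([(0, 0), (40, 0), (40, 40), (0, 40)])

    # 'Raw' core: bundled lift shafts, not a neat parametric rectangle.
    shafts = [Polygon([(15, 15), (21, 15), (21, 25), (15, 25)]),
              Polygon([(21, 15), (26, 15), (26, 22), (21, 22)])]
    columns = [Point(x, y).buffer(0.4) for x in (5, 20, 35) for y in (5, 35)]

    obstructions = unary_union(shafts + columns)
    net = slab.difference(obstructions)
    print(f"gross {slab.area:.1f} m2, net {net.area:.1f} m2")
    ```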

    A more useful example would be to dynamically replace lower LOD elements that have been superseded by higher LOD ones... on the fly. This example shows AECOsim structural elements being associatively linked to separate higher LOD elements in ProConcrete. What would be good is to have the option to dynamically delete/filter out superseded / duplicated elements in the Ref attachment so that hybrid compositions can be generated without duplication for drawing production, quantification etc.
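
    A minimal sketch of the filtering idea, assuming the lower and higher LOD elements are linked by a shared id (the element data is invented):

    ```python
    low_lod  = {"B-101": "steel beam (LOD 300)", "C-07": "column (LOD 300)"}
    high_lod = {"B-101": "fabrication beam (LOD 400)"}  # supersedes B-101

    # Keep low-LOD elements only where no higher-LOD replacement exists.
    composite = {k: v for k, v in low_lod.items() if k not in high_lod}
    composite.update(high_lod)
    print(composite)
    # {'C-07': 'column (LOD 300)', 'B-101': 'fabrication beam (LOD 400)'}
    ```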

    This kind of pre/post-processing is commonplace in the rendering world, which is also graph-based... like GC.

  • GC has an Optimizer Node which is based on Genetic Algorithm processing. I wonder if there shouldn't be a Compression or Instantiation Optimiser Node as well.

    Looking at this impressive Zahner presentation, I get the impression that the instantiation process is going to be the most compute/time-intensive part. Generating the geometric setting-out info or inputs is fairly quick in most cases, but generating all those downstream cladding panels using boolean operations etc. at ever-increasing LOD is going to need some optimising tools to keep things moving / feasible.

    Automatic duplicate / instance identification: I guess this would be like BricsCAD's Blockify or a data compression algorithm, and would be most useful when used in conjunction with the Export Node? The Optimiser Node would automatically manage a table of Shared Cells for each panel, and maybe even look for nested elements that are duplicated, like fixings etc.?
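
    A toy sketch of what that identification could look like: hash each panel's geometry in a canonical, position-independent form, keep one 'shared cell' definition per distinct hash, and record placements. (A real version would also need to normalise rotation; everything here is hypothetical.)

    ```python
    import hashlib

    def canonical_signature(points, precision=3):
        # Translate so the first vertex sits at the origin, then hash
        # the rounded coordinates.
        x0, y0, z0 = points[0]
        rel = [(round(x - x0, precision), round(y - y0, precision),
                round(z - z0, precision)) for x, y, z in points]
        return hashlib.sha1(repr(rel).encode()).hexdigest()

    shared_cells = {}  # signature -> definition geometry
    placements = []    # (signature, origin)

    def register_panel(points):
        sig = canonical_signature(points)
        shared_cells.setdefault(sig, points)  # define once
        placements.append((sig, points[0]))   # instance everywhere else

    # Two translated duplicates and one distinct panel:
    register_panel([(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)])
    register_panel([(5, 5, 0), (7, 5, 0), (7, 6, 0), (5, 6, 0)])
    register_panel([(0, 0, 0), (3, 0, 0), (3, 1, 0), (0, 1, 0)])
    print(len(shared_cells), "definitions for", len(placements), "placements")
    # -> 2 definitions for 3 placements
    ```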

    Pattern Recognition: Could the Optimiser Node function like a compiler and automatically change what the script passes to dotNET etc.? Say the script is feeding the cladding panel GNT a series of points as inputs, and many of the four-point lists are dimensionally equivalent. It would be good to generate the panel once and transform / copy-on-write the rest instead of re-generating each panel in place. Using the Gherkin example, the panels would vary from floor to floor due to the profile of the building, but there would be a lot of repetition going around each floorplate.
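
    A minimal sketch of that 'compiler' pass: group the four-point inputs by a congruence signature (the six pairwise distances, sorted), so that only dimensionally distinct panels hit the expensive generation path while repeats become cheap transformed copies.

    ```python
    import math
    from itertools import combinations

    def dim_signature(quad, precision=3):
        # Congruence signature: the six pairwise distances, sorted.
        return tuple(sorted(round(math.dist(a, b), precision)
                            for a, b in combinations(quad, 2)))

    built = {}    # signature -> the one panel actually generated
    emitted = []  # (signature, quad) pairs to place as transformed copies

    def feed(quad):
        sig = dim_signature(quad)
        if sig not in built:
            built[sig] = f"generate_panel({quad})"  # expensive path, once
        emitted.append((sig, quad))                 # cheap transform path

    # Around a floorplate: many congruent quads, the odd distinct one.
    feed([(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)])
    feed([(2, 0, 0), (4, 0, 0), (4, 1, 0), (2, 1, 0)])      # congruent repeat
    feed([(4, 0, 0), (6.5, 0, 0), (6.5, 1, 0), (4, 1, 0)])  # distinct
    print(len(built), "generated,", len(emitted), "placed")
    # -> 2 generated, 3 placed
    ```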

    The history-based, solid-modeling data-flow by which most 'parts' are generated could also be mined for patterns? Lots of parts are identical, or are variations that could be 'in-lined' for instantiation before the next pass of operations, like penetrations for fixings or trims at the perimeter of the tiling pattern.

    Bifurcation: as long as what is generated is geometrically equivalent, it should not matter if the low-level algo differs, especially for 'static' geometry generated via the Export Node.

  • Zahner's multi-layered top-down set-up is interesting.

    1. Driver Model

    This model (or folder of models?) contains the key surfaces, grids, levels, boundaries etc. Easily replicated using a combination of Building Nodes, ElementSensor'd Mstn elements etc.

    2. Generative Model

    This model contains lists of key geometric info and a library of models that outputs different info. Easily replicated using a combination of Functional Nodes etc. Not sure what the fastest way is to store the lists so they can be read quickly by other external models.

    3. Parametric Model

    This model (library?) includes the parametric components. The GC equivalent would be GNTs and Mstn's Functional Cells.

    4. Automation Model

    This assembly model is where the parametric components are fed the inputs from the Generative Model, and it stores the instantiated components with VBA scripts embedded within the parametric components (Knowledge Pattern). Must be huge in toto and probably built up from sub-models. Mstn's Ref attachments should allow the same thing. Interesting that a lot of the parts were generated in-file as much as possible.

    2D documents are also produced from the model.

    5. Integration Model

    This model looks to be the equivalent of the Navisworks / Navigator-type federated model for review purposes.

    The speaker makes a very important point about building in the ability to 'sculpt' the model... to accommodate other trades. As mentioned above, the importance of keeping the model dynamic for as long as possible is slowly being recognised.