The attached document discusses enhancements to Reality Mesh (Acute3D 3MX) support in the upcoming MicroStation Connect Update.
If you are interested in previewing a pre-release version of this, please contact Marco Salino (email@example.com).
I welcome your comments or suggestions.
Reality Model Classification.pdf
To date, Reality Models have been used primarily to provide context for their engineering counterparts. Advancements in the production of Reality Models have greatly improved the fidelity of these models, but their value has been somewhat limited by treating them as monoliths. With the Acute3D ContextCapture tools, it is possible to create a reality mesh that very accurately represents an entire city, plant, or other large infrastructure project. However, cities are made up of many parcels, and plants consist of many components. The reality model may capture each of these parcels or components accurately, but without some method to distinguish them from the rest of the reality model, or to tie them to their underlying engineering or GIS data, the value of the model is limited.
We introduce the concept of using engineering or GIS objects to spatially classify a reality model into a discrete set of volumes that represent those objects.
GIS data provides an obvious and readily available classification source for many reality models, as it provides both the underlying boundaries to spatially classify the data and information attached to those boundaries. In the example below, a model of central Philadelphia captured to assist in the 2015 Papal visit is displayed with the building footprints (available to the public at OpenDataPhilly.org). As both the reality model and the GIS data are geographically located, it is straightforward to overlay this data accurately. The building footprint can therefore be used to provide a template for spatially classifying the reality model into discrete buildings. Each building footprint is automatically associated with the volume above it in the reality model, and conversely, reality mesh geometry is associated with the footprint below it.
3D Reality Model of Philadelphia with 2D building footprints
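The projection step described above can be sketched as a point-in-polygon test: each mesh triangle is assigned to the footprint whose 2D outline contains its projected centroid. This is only an illustrative sketch with hypothetical names, not the ContextCapture or MicroStation implementation.

```python
# Illustrative sketch (not the MicroStation API): classify mesh triangles
# by projecting their centroids onto 2D building-footprint polygons.

def point_in_polygon(pt, poly):
    """Ray-casting test: is 2D point pt inside polygon poly (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def classify_triangles(triangles, footprints):
    """Map each triangle (3 xyz vertices) to the footprint id below it, or None."""
    result = {}
    for tri_id, tri in triangles.items():
        cx = sum(v[0] for v in tri) / 3.0  # project to 2D: drop z, use centroid
        cy = sum(v[1] for v in tri) / 3.0
        result[tri_id] = next(
            (fid for fid, poly in footprints.items()
             if point_in_polygon((cx, cy), poly)), None)
    return result

footprints = {"parcel_17": [(0, 0), (10, 0), (10, 10), (0, 10)]}
triangles = {"t1": [(1, 1, 30), (2, 1, 31), (1, 2, 32)],    # above the parcel
             "t2": [(50, 50, 5), (51, 50, 5), (50, 51, 5)]}  # outside it
print(classify_triangles(triangles, footprints))
# {'t1': 'parcel_17', 't2': None}
```

In a production mesh the test would run over millions of triangles per tile, but the principle is the same: the 2D boundary implicitly defines an unbounded vertical volume.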
By using the building footprints to spatially classify the reality model we can access the reality model as discrete buildings with GIS data rather than a monolithic and unintelligent mesh.
Highlighted building with associated data from associated GIS data
The link between the GIS data and the classified reality model is bidirectional. It is possible to access the underlying data by selecting the classified reality mesh or to access the reality mesh classifications by selecting the GIS data.
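The bidirectional link can be pictured as a pair of lookups maintained together, one from GIS element to mesh volume and one in reverse. The class and attribute names below are hypothetical, not the MicroStation object model.

```python
# Illustrative sketch: a classification maintains lookups in both
# directions, so selecting mesh geometry finds the GIS element and
# selecting the GIS element finds its mesh volumes.
# (Hypothetical names, not the MicroStation object model.)

class Classification:
    def __init__(self):
        self.mesh_by_element = {}   # GIS element id -> list of mesh volume ids
        self.element_by_mesh = {}   # mesh volume id -> GIS element id

    def link(self, element_id, mesh_id):
        self.mesh_by_element.setdefault(element_id, []).append(mesh_id)
        self.element_by_mesh[mesh_id] = element_id

c = Classification()
c.link("footprint_42", "mesh_vol_7")
print(c.element_by_mesh["mesh_vol_7"])    # footprint_42
print(c.mesh_by_element["footprint_42"])  # ['mesh_vol_7']
```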
Other types of GIS data such as commercial corridors are available to provide alternate classifiers for our Philadelphia example. Below the “Parkway/Logan Square” corridor is displayed in isolation.
The initial version of Reality Mesh support in the MicroStation Connect Edition referred to Reality Meshes as Acute3D Meshes, and the tools for manipulating them were primarily accessible through key-ins. In MicroStation Connect Update 2, a complete GUI is provided for attaching and manipulating reality meshes. Additionally, support for spatially classifying reality meshes has been added, allowing reality meshes to be classified by and associated with design or GIS data.
The Reality Mesh Attachments dialog box is available from the Attach Tools option on the Home ribbon bar tab.
The Hierarchy item controls whether the Hierarchy panel is displayed. This panel allows navigation to referenced models.
The Hilite Mode item controls the display of the currently selected attachment.
The upper list includes the reality mesh attachments for the current model. The columns for this dialog include:
FileName: The file name for the reality mesh attachment.
Active Classifier: The name of the Active Classifier (or none).
Export Resolution: Controls the resolution used when the reality mesh is exported.
Transparency: The reality mesh transparency.
Display: Turn Display off to omit the reality mesh display in all views.
Snap: Turn Snap off to make the reality mesh unsnappable.
Locate: Turn Locate off to make the reality mesh not locatable by tools.
All of the entries in the attachments list are editable. Double click on the File Name entry to select an alternate reality mesh file.
The lower list includes all the classifiers for the selected reality mesh.
Name: The classification name. This can be any unique value describing the classification.
Target: The classification target Model, Level, Named Group or Element.
Margin: The Margin distance specifies a distance to expand (if positive) or reduce (if negative) the classification boundary. For certain boundaries, such as building footprints that may lie directly on the edge of the enclosed volume, specifying a small positive margin can avoid undesirable clipping at the boundary edge. A negative margin distance will omit a small volume between adjacent boundaries.
Inside (On|Off|Dimmed|Hilite): Controls the display mode for reality mesh inside classification boundary.
Outside (On|Off|Dimmed|Hilite): Controls the display mode for reality mesh outside classification boundaries.
Selected (On|Off|Dimmed|Hilite): Controls the display mode for reality mesh elements inside selected classification boundaries.
Note that for classifications with many elements, displaying the Inside and Outside geometry separately can slow display performance. To maintain acceptable display speed, element ranges rather than actual boundaries may be used, or the separated display may be omitted entirely.
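The effect of the Margin setting can be sketched for the simple case of an axis-aligned rectangular footprint; offsetting arbitrary polygon boundaries is more involved, so this is an illustrative simplification rather than the actual implementation.

```python
# Illustrative sketch of the Margin setting: grow (positive) or shrink
# (negative) a rectangular footprint before it is used as a
# classification boundary. Real boundary offsetting of arbitrary
# polygons is more complex; an axis-aligned rectangle is assumed here.

def apply_margin(rect, margin):
    """rect = (xmin, ymin, xmax, ymax); returns the offset rectangle."""
    xmin, ymin, xmax, ymax = rect
    return (xmin - margin, ymin - margin, xmax + margin, ymax + margin)

footprint = (0.0, 0.0, 10.0, 10.0)
print(apply_margin(footprint, 1.0))   # (-1.0, -1.0, 11.0, 11.0)  expanded
print(apply_margin(footprint, -0.5))  # (0.5, 0.5, 9.5, 9.5)      inset
```

A positive margin, as described above, keeps mesh geometry that hangs slightly over the footprint edge; a negative one leaves a sliver of unclassified mesh between adjacent boundaries.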
Split Reality Mesh
Mask Reality Mesh
Clip Reality Mesh
Detach Reality Mesh
Attach Reality Mesh
Select Reality Mesh
Attach Classifier
The tools in the upper pane were provided in previous versions:
Attach Reality Mesh: Attach a new reality mesh.
Detach Reality Mesh: Detach an existing reality mesh.
Clip Reality Mesh: Clip a reality mesh to the interior of an existing element.
Mask Reality Mesh: Mask a reality mesh with an existing element, retaining only the mesh outside the element.
The tools in the lower pane provide support for the new Classification tools.
Attach Reality Mesh Classifier: Attaches a new classification to the selected reality mesh by selecting any element within the classification. The Type setting controls whether the entire model (or a referenced model) is used for the classification, or the element's Level, Named Group, or just the element itself.
Detach Reality Mesh Classifier: Detaches an existing classifier from the selected reality mesh.
Select Reality Mesh Classifier: Allows selection of the Classification boundaries by picking the reality mesh directly. As the cursor is moved over the reality mesh, the reality mesh volumes are flashed to indicate the current classification contents. Entering a data point will select the indicated classification boundary element. Entering reset will remove the last selection and a data point with the Control key will add a classification boundary to the existing selection.
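The selection behavior described for the Select Reality Mesh Classifier tool can be sketched as a small state machine: a data point replaces the selection, a data point with Control adds to it, and a reset removes the most recent pick. The event names here are hypothetical, not MicroStation's input API.

```python
# Illustrative sketch of Select Reality Mesh Classifier interaction
# (hypothetical event names, not MicroStation's input API).

def handle_event(selection, event, boundary=None, ctrl=False):
    if event == "datapoint":
        if not ctrl:
            selection = [boundary]                   # plain pick replaces
        elif boundary not in selection:
            selection = selection + [boundary]       # Ctrl + pick adds
    elif event == "reset" and selection:
        selection = selection[:-1]                   # reset removes last pick
    return selection

sel = []
sel = handle_event(sel, "datapoint", "bldg_1")
sel = handle_event(sel, "datapoint", "bldg_2", ctrl=True)
print(sel)  # ['bldg_1', 'bldg_2']
sel = handle_event(sel, "reset")
print(sel)  # ['bldg_1']
```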
The bottom portion of the Reality Mesh Attachments dialog box supports Classifiers. A Classifier is a set of one or more elements that are linked to the reality mesh to provide a method for identifying portions of the mesh that represent physical entities. Two-dimensional shapes from GIS (Geographic Information Systems) applications are particularly convenient and valuable, as they typically are geographically located and contain valuable property data. By using design data to classify the reality model, we are able to access the reality model as a set of intelligent components rather than a single, isolated entity. The link between the reality model and the underlying design data is bidirectional, so we can either query the GIS model to locate the associated reality mesh or select locations on the reality model to access the underlying design data.
In 2015, a reality model of Philadelphia was created to support the Pope's visit. To demonstrate Reality Model Classification, we will link this model with GIS data and explore how valuable this combination can be. Like most municipalities, Philadelphia makes its GIS data readily available. The website www.opendataphilly.org provides over 300 different datasets; we'll start by downloading the building footprint data in SHP format.
The SHP data can be opened directly in MicroStation Connect and saved to DGN format for convenience. We also directly open the reality model in the “.3mx” format created by ContextCapture and save that to DGN.
Both of the DGN files are geographically located; however, they may not use the same geographic coordinate system. Fortunately, MicroStation can reproject the GIS data to match the coordinate system of the reality model. It is not possible, however, to efficiently reproject reality model data between geographic coordinate systems (the sheer volume of data makes this operation impractical). Therefore it is important to use the reality model as the master and attach as a reference (and potentially reproject) the design data to the reality model's geographic coordinate system.
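The asymmetry above comes down to vertex counts: reprojection cost scales with the number of vertices transformed, and a city-scale mesh dwarfs its footprint data. The counts below are hypothetical, purely to illustrate the order-of-magnitude argument.

```python
# Illustrative back-of-the-envelope (hypothetical counts): the cheaper
# side to reproject is the one with fewer vertices, which is why the
# design/GIS data is reprojected to the reality model's coordinate
# system and not the other way around.

def cheaper_to_reproject(gis_vertex_count, mesh_vertex_count):
    return "gis" if gis_vertex_count <= mesh_vertex_count else "mesh"

gis_vertices = 500_000        # e.g. ~25k footprints x ~20 vertices each
mesh_vertices = 500_000_000   # a city-scale reality mesh

print(cheaper_to_reproject(gis_vertices, mesh_vertices))  # gis
```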
After attaching the building footprint data, we can view the model from below and observe that the footprint data matches our reality model very well, confirming that both the reality model and the GIS data were geospatially accurate.
The next step is to attach the building footprint reference as a Classifier to the reality model using the Attach Reality Mesh Classifier tool. In the tool settings, we select the Model type so that all of the building footprints in the referenced model are part of the classification, and we set the name to “Building Footprints”. A margin setting of 1.0 (meters) designates that each classification volume will be one meter larger than its building footprint. A single element within the footprint reference attachment is selected to designate that this is the desired model for the classification.
After attaching the classification it will appear in the lower Classifiers list and the newly attached classification is set to be the Active Classifier for the reality model attachment.
To interactively view the result of our newly attached classification, we use the Select Reality Mesh Classifier tool and move the cursor over our reality model. When the reality mesh below the cursor is within a classification boundary, the classified volume is flashed in the highlight color. Picking the reality mesh classification will add the classification boundary object to the current selection set, and the Properties box can be used to view the properties of the object.
Now that we have selected a classification object, we can use the Classifier settings to control how it is displayed. The Inside and Outside settings control how reality mesh data inside or outside of the active classification boundaries is displayed. The Selected setting controls how the reality mesh for selected classification boundaries is depicted. By turning both Inside and Outside to Off and changing the Selected setting to On, we can display only the reality mesh for the selected classification boundary (see below). Note that the Select Reality Mesh Classifier tool continues to function with the Inside and Outside settings off, so in this mode it is possible to continue to interactively select from the (now invisible) reality mesh.
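The interaction of these three settings can be sketched as a simple lookup: the Selected mode wins for selected boundaries, otherwise the volume's draw mode depends on whether it falls inside the classifier. This is a hypothetical sketch of the rule, not MicroStation's display code.

```python
# Illustrative sketch of the Inside/Outside/Selected display settings
# (hypothetical function, not MicroStation's display pipeline). Each
# setting is one of "On", "Off", "Dimmed", or "Hilite".

def display_mode(is_classified, is_selected,
                 inside="On", outside="On", selected="Hilite"):
    if is_selected:
        return selected                       # Selected mode takes priority
    return inside if is_classified else outside

# Isolate the selected building: Inside Off, Outside Off, Selected On
print(display_mode(True, True, inside="Off", outside="Off", selected="On"))   # On
print(display_mode(True, False, inside="Off", outside="Off", selected="On"))  # Off
```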
With different settings of the Inside and Outside display settings we can view the classification data in interesting ways. With Inside: On and Outside: Off we can display only the buildings.
By setting the Outside: Dimmed we can see the buildings against a dimmed background of the unclassified reality mesh. Note that in this case the dimmed building in the center is new and missing from the building footprint data.
Missing From GIS Data…
Many different classifications can be attached to a single reality model; this provides many different ways of viewing the same model. By attaching a Classifier of the commercial corridors, we view the same model as a set of zones rather than individual buildings. Note that in this case we use a negative margin distance to inset the commercial zones slightly, producing a small Outside zone at their boundaries.
In most cases, it is preferable to use Classifiers to control how the reality model is viewed without changing the reality model attachment. It is, however, possible to use the reality mesh clipping tools to permanently clip the model at classification boundaries. For example in the zoning example above, it is possible to create separate attachments for the commercial zones by selecting the zones (Control-Click will allow selection of more than one classification boundary) and using the Split Reality Mesh Attachment tool to create separate attachments for each zone and a single attachment for areas outside the selected boundaries.
2D linear geometry is important in GIS, civil, and other disciplines, as it is often used to represent roads, railroad tracks, waterways, or boundaries. Point data is frequently used to represent locations of interest or vertical assets such as street signs or telephone poles. Although this geometry alone does not enclose an area that can be projected to produce a classification volume, it may be combined with the Margin setting to produce classification volumes that enclose and surround the geometry.
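For linear geometry, the margin effectively builds a corridor around the centerline: mesh geometry is classified to the line if it lies within the margin distance of any segment. The sketch below illustrates that distance test in 2D; it is an assumption about the approach, not the MicroStation implementation.

```python
# Illustrative sketch: a road centerline has no area, so a positive
# Margin builds a corridor around it. A point belongs to the road's
# classification if it lies within the margin of any segment.
# (Not the MicroStation implementation.)
import math

def dist_point_segment(p, a, b):
    """Distance from 2D point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def in_corridor(p, polyline, margin):
    return any(dist_point_segment(p, polyline[i], polyline[i + 1]) <= margin
               for i in range(len(polyline) - 1))

road = [(0, 0), (100, 0)]                  # street centerline
print(in_corridor((50, 3), road, 5.0))     # True: within the 5 m corridor
print(in_corridor((50, 12), road, 5.0))    # False: outside the corridor
```

Point classifiers work the same way with a degenerate "polyline" of one location: the margin becomes a radius around the point.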
Example: Philadelphia with Arterial Streets Classifier
In the example to the left, the streets are used as classifiers for the Philadelphia Reality model.
At first glance, it appears that there are streets that are incorrectly placed through buildings.
On closer inspection (below), we see that these are roads that actually pass through the structures – an anomaly that would be difficult to discern without a 3D reality model.
The example below illustrates the use of a point classifier representing the location of street poles in the Philadelphia reality model.
In the previous examples, 2D GIS data has been used to classify the reality meshes. In this case the 2D geometry is projected to classify all reality model geometry above (or below). It is also possible to use 3D geometry to more precisely enclose reality mesh volumes. Currently volumes can be specified by slabs, cylinders and non-parametric extrusions. If other 3D geometry is used then the classified volume is determined by the range of the element.
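Unlike a projected 2D boundary, a 3D slab bounds the volume in z as well, so each mesh point is tested against the slab's full extents. The sketch below assumes axis-aligned slabs for simplicity; the names are hypothetical.

```python
# Illustrative sketch: a 3D slab classifier encloses a volume directly,
# so mesh points are tested against the slab's extents rather than a
# projected 2D boundary. (Axis-aligned slabs assumed for simplicity.)

def in_slab(pt, slab):
    """slab = ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    lo, hi = slab
    return all(lo[i] <= pt[i] <= hi[i] for i in range(3))

# One slab per tower floor, stacked in z (4 m floor height, hypothetical):
floors = {f"floor_{n}": ((0, 0, n * 4.0), (30, 20, (n + 1) * 4.0))
          for n in range(3)}

pt = (10.0, 5.0, 9.0)  # a mesh vertex 9 m above ground
print([name for name, slab in floors.items() if in_slab(pt, slab)])
# ['floor_2']
```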
In the example to the right, classification of the individual floors of a building, rather than the entire building, is illustrated. A series of slabs provides the classifiers. Properties of the floors (occupants, rental rates, etc.) can be accessed by interactively selecting the reality model through these classifiers.
Activating the “Tower Floor” classifier and Selecting Dimmed for the Outside display mode highlights the building of interest. Using the Select Reality Mesh Classifier tool allows selection of the individual tower floors.
The reality mesh classification uses the standard MicroStation selection set system. It is possible to use MicroStation Explorer's Advanced Search to search for classification elements and to reflect the search results on the reality mesh. This is illustrated using a reality mesh created for the city of Coatesville, Pennsylvania. A Classifier with parcel data is attached to the reality mesh, and a search on all parcels where the owner matches “CITY OF COATESVILLE” is performed. The city-owned portions of the reality model are displayed below.
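Conceptually, the Advanced Search step is an attribute filter over the classifier elements; the matching parcels then drive which mesh volumes are highlighted. The data below is hypothetical and this is not the MicroStation Explorer API, just a sketch of the idea.

```python
# Illustrative sketch of the Advanced Search workflow (hypothetical
# data, not the MicroStation Explorer API): filter classifier elements
# by an attribute, then use the matches to highlight mesh volumes.

parcels = [
    {"id": "p1", "owner": "CITY OF COATESVILLE"},
    {"id": "p2", "owner": "SMITH, JOHN"},
    {"id": "p3", "owner": "CITY OF COATESVILLE"},
]

matches = [p["id"] for p in parcels if p["owner"] == "CITY OF COATESVILLE"]
print(matches)  # ['p1', 'p3']
```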