Answers to commonly asked questions regarding ContextCapture and ContextCapture Center
Yes, it is possible to mix sources captured at different resolutions. For the process to be fully automatic (no manual stitching of the photos), your photos must meet certain specifications. The matching of photos during aerotriangulation (the first step of the process) is based on feature detection using “key points”. To create a “tie point”, the same point must be matched in several photos, and the resolution and viewing angle of those photos must not be too different. This is why transitional photos may be required to automatically mix photos with large differences in resolution (more than approximately 5x).
We can process images captured by a panoramic camera, e.g. a Ladybug, but you must use the individual images from each camera. The stitched 360° image cannot be used.
See 2). ContextCapture needs the images taken from each of the 4 fisheye-lens cameras. The stitched 360° image cannot be used.
Yes. The ContextCapture user manual contains the list of RAW photo formats supported by the latest version. The RAW formats compatible with ContextCapture are RW2 (Panasonic), CRW (Canon), CR2 (Canon), NEF (Nikon), ARW (Sony) and 3FR (Hasselblad). ContextCapture can also input 16-bit TIFFs, but they will be converted to 8-bit TIFFs for 3D model generation and texturing.
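As a pre-flight convenience, a photo folder can be checked against that list before submitting a job. The extension list below comes from the answer above; the helper function itself is illustrative, not part of ContextCapture:

```python
from pathlib import Path

# RAW extensions listed as supported in the answer above
SUPPORTED_RAW = {".rw2", ".crw", ".cr2", ".nef", ".arw", ".3fr"}

def is_supported_raw(filename: str) -> bool:
    """True if the file extension is one of the supported RAW formats."""
    return Path(filename).suffix.lower() in SUPPORTED_RAW

print(is_supported_raw("IMG_0042.CR2"))  # -> True
print(is_supported_raw("IMG_0042.DNG"))  # -> False (DNG is not in the list)
```

A check like this catches unsupported formats (e.g. DNG) before processing time is wasted.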
Yes. The video formats supported by the latest version of ContextCapture are AVI, MPG, MP4, WMV and MOV.
To preserve the geometric accuracy of the photos (and the quality of the final 3D model), only radiometric editing (contrast, brightness, saturation, etc.) is allowed. All geometric editing, such as cropping, stretching or distortion removal, is strictly forbidden.
The minimum required overlap is 50%; however, 70% overlap is recommended.
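The overlap ratio translates directly into how far apart consecutive shots can be. The sketch below assumes a simple ground-footprint model; the function and values are illustrative:

```python
def shot_spacing(footprint_m: float, overlap: float) -> float:
    """Distance to move between consecutive photos so that each adjacent
    pair shares the given fraction (0.0-1.0) of the ground footprint."""
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must be in [0, 1)")
    return footprint_m * (1.0 - overlap)

# Example: a 60 m ground footprint with the recommended 70% overlap
print(round(shot_spacing(60.0, 0.70), 1))  # -> 18.0 (metres between shots)
```

In other words, going from 50% to 70% overlap roughly halves the spacing and so increases the photo count accordingly.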
No specific brand of camera or lens is recommended for ContextCapture. Of course, the quality of the 3D model will depend on photo quality and geometric precision. The best photos will be acquired with a camera that has a large sensor and a good-quality lens. We usually recommend a DSLR camera with a fixed lens. The focal length to use will depend on the kind of project and environment you are working in. In narrow places you could use a fisheye lens, and if standing far away from the subject, a long focal-length zoom may be needed to achieve the expected resolution. It is not necessary to have a calibrated camera (especially if your camera body and lens are of good quality), but it is mandatory in certain cases (e.g. drone acquisition of nadir-only photographs).
Ground control points are not a mandatory requirement to generate a 3D model. However, if you expect precise geo-referencing, then ground control points are recommended.
Geo-referencing can be done using ground control points or using the photo tags (this is generally less accurate). The geo-referencing using photo tags can be enabled if at least 3 photos have geo-coordinates.
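The three-geotag rule above can be sketched as a simple check over photo metadata. The dictionaries below are illustrative stand-ins for real EXIF GPS records, not a ContextCapture API:

```python
def can_georeference_from_tags(exif_records: list) -> bool:
    """Rule of thumb from the answer above: geo-referencing from photo
    tags needs at least 3 photos carrying latitude/longitude."""
    tagged = sum(1 for exif in exif_records
                 if "GPSLatitude" in exif and "GPSLongitude" in exif)
    return tagged >= 3

photos = [
    {"GPSLatitude": 48.85, "GPSLongitude": 2.35},
    {"GPSLatitude": 48.86, "GPSLongitude": 2.36},
    {},  # this shot has no geotag
    {"GPSLatitude": 48.87, "GPSLongitude": 2.37},
]
print(can_georeference_from_tags(photos))  # -> True (3 tagged photos)
```

In practice the coordinates would be read from each image's EXIF GPS block rather than hand-written dictionaries.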
No, ContextCapture does not rely on photo names for aerotriangulation. It uses several photo recognition and matching algorithms that do not require the photos to have specific IDs. In any case, having photos correctly stored in sequence helps the software during ‘pair’ detection in the aerotriangulation process.
As long as they are of similar resolution and quality, they should be processable together without difficulty. NB: always remember that the viewing angle should never be too different; otherwise transitional photos or manual stitching will be required.
No, ContextCapture currently has no application for flight planning.
In ContextCapture we have our own custom algorithms to help automatically match nadir and oblique photos.
You can use a drone that automatically records its GPS location to each image's geotag, or use ground control points to georeference the scene.
Background data is automatically ignored if it is out of focus; otherwise it will also be modelled, provided there is enough information to form the background model.
We have a workflow that allows the exported 3D model to be edited in third-party software and then re-imported into the ContextCapture workflow to produce/update the final output.
Yes, this can be done using third party software. See above
This is not a function of ContextCapture or its viewer. However, it can be achieved if the model is imported into a 3D GIS platform.
There are many variables affecting the accuracy of the model, but generally you can expect 2-3 times the average pixel resolution (ground sample distance) for an exhaustive data acquisition.
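The "average pixel resolution" here is the ground sample distance (GSD), which follows from the sensor pixel size, the focal length and the distance to the subject. A worked sketch with illustrative values:

```python
def ground_sample_distance(pixel_size_mm: float,
                           focal_length_mm: float,
                           distance_m: float) -> float:
    """GSD in metres: the ground footprint covered by one sensor pixel."""
    return pixel_size_mm / focal_length_mm * distance_m

# Example: 0.004 mm (4 micron) pixels, 35 mm lens, shooting from 100 m
gsd = ground_sample_distance(0.004, 35.0, 100.0)
print(round(gsd * 100, 2), "cm/pixel")  # -> 1.14 cm/pixel
# Expected model accuracy per the answer above: 2-3 x GSD
print(round(2 * gsd * 100, 1), "-", round(3 * gsd * 100, 1), "cm")
```

So for this example setup, an accuracy in the 2-3.5 cm range would be a reasonable expectation with exhaustive coverage.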
Having the camera positions accurately surveyed will increase the precision of the georeferencing of your 3D model (absolute precision) and the success rate of the Aerotriangulation. The precision of the 3D model will only depend on the quality of the photos and the acquisition exhaustiveness.
Generally, a high-quality full-frame or mid-frame camera system with a professional premium-class lens, such as Leica, Zeiss or Canon Red Ring. A better sensor ensures a high standard of image quality, and a premium lens ensures as little distortion as possible, which minimizes aerotriangulation error.
Reflective materials are not recommended in the photogrammetry process, as they can cause modelling errors. However, a non-reflective patterned target will help to enhance accuracy during aerotriangulation and thus improves geometric accuracy.
3MX files need to be imported into MicroStation and then exported to LumenRT via the plugin. LumenRT currently will not directly import 3MX files.
CityGML has no standard for 3D mesh modelling. ContextCapture produces complex 3D meshes that are not compliant with this format.
The use of imported 3D scan data as reference data to enhance model accuracy is in our development roadmap. This functionality is targeted to be available before the end of 2016.
ContextCapture can produce 3D mesh models in Google Earth KML format. These can easily be read by Google Earth.
There are quite a number of GIS platforms that support our Collada (LODTree) and OSGB formats, including SpaceEye3D, Agency9 3DMaps, DigiNext VirtualGEO and Skyline Globe. However, if you are considering our DOM/DSM/point cloud formats, they work with all existing GIS platforms. From Q2 2016, ContextCapture will be able to export the I3S format, which can be used in ESRI products.
ContextCapture can produce 3D meshes in 3MX format that are compatible with MicroStation CONNECT edition and the V8i SS4 platform.
The STL files that ContextCapture produces are an industry-standard format and should work on virtually every 3D printer; however, this must be confirmed with the manufacturer. OBJ files can also be used (STL does not support textures).
The size of output files is largely affected by the output format. From previous experience, as a rough estimate, a dataset produced from 200 x 25 MP images in 3MX format should be within 200 MB.
Yes, absolutely (LAS or POD format).
See item 5). The software that is fully compatible with our 3D mesh models may be limited, but most packages should work with our neutral, industry-standard output formats. We also have LumenRT from Bentley as a very good simulation tool.
To be confirmed.
The standard reference functionality works with 3MX files. Currently, there is no specific integration with ProjectWise.
ContextCapture cannot import points from scanners. However, this functionality is on our product roadmap for 2016.
In ContextCapture, all that is required are some ground control points to reposition the model at any location. When importing into MicroStation, you can achieve the same result by alternative means.
We do not currently have collision detection incorporated in the WebGL viewer.
If the input photos are fully compliant with our data acquisition guide, the process will be 99% automatic.
Yes, a report is produced for the aerotriangulation once the process is completed.
Geotags in photos, ground control points, or by importing a dataset with full POS metadata.
Absolutely
ContextCapture includes a user interface that allows you to import your control point coordinates and perform measurements in the photos.
Scaling can be done with user tie points: a user can input several tie points manually and add distance, axis and plane constraints. If the model is georeferenced using geotags or ground control points, it will be scaled automatically.
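For the distance-constraint case, the underlying arithmetic is a single scale factor applied to the whole model. The sketch below is illustrative, not ContextCapture's internal implementation:

```python
def scale_factor(measured_model_distance: float,
                 known_real_distance: float) -> float:
    """Factor to apply to the whole model so that the distance measured
    between two user tie points matches the known real-world distance."""
    if measured_model_distance <= 0:
        raise ValueError("measured distance must be positive")
    return known_real_distance / measured_model_distance

# A span measuring 2.5 units in the unscaled model is known to be 10 m:
s = scale_factor(2.5, 10.0)
print(s)  # -> 4.0 (every model coordinate is multiplied by 4)
```

One well-surveyed distance therefore fixes the scale of the entire reconstruction; axis and plane constraints fix its orientation in the same way.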
Using our control point GUI, import the GCP position and select the corresponding pixel in 3-5 photos. The rest of the process is fully automated.
You will need to export the coloured point cloud to a point cloud format which can then be used in Bentley Pointools, Descartes or other point cloud software. ContextCapture is positioned purely as a 3D data production tool.
Same as answer 9): not within ContextCapture.
Yes, volume calculations can be done in the Acute3D Viewer
In the Acute3D Viewer, draw a polygon on the model. The software samples the model within the polygon and calculates the volume (cut and fill volumes).
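The sampling can be pictured as a simple grid integration: compare the sampled surface heights inside the polygon against a reference plane and sum the signed column volumes. This is a minimal sketch under those assumptions, not the viewer's actual implementation:

```python
def cut_and_fill(heights, base_level, cell_area):
    """Grid-based volume estimate. 'heights' holds one sampled surface
    height per grid cell of area 'cell_area' inside the polygon.
    Returns (cut, fill): material above / below the reference plane."""
    cut = sum((h - base_level) * cell_area for h in heights if h > base_level)
    fill = sum((base_level - h) * cell_area for h in heights if h < base_level)
    return cut, fill

# 4 cells of 1 m^2 each, reference plane at 10 m elevation
cut, fill = cut_and_fill([12.0, 11.0, 9.5, 10.0], base_level=10.0, cell_area=1.0)
print(cut, fill)  # -> 3.0 0.5 (cubic metres)
```

A finer sampling grid gives a more accurate estimate at the cost of more samples, which is the usual trade-off in such tools.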
No, our 3D formats (3MX and S3C) are fully scalable and are designed to handle territory of unlimited size.
Generally, when a single pass fails. Please refer to the user manual for more details on multi-pass processing.
This is a popular request and we are looking into it.
Capturing a good dataset to get an optimal 3D model
Reconstructing interiors using photogrammetry is a difficult task. The short distance from the subject and the numerous occluding objects drastically increase the number of photos needed to reconstruct the scene properly. Another common issue is the lack of texture on walls, which may lead to holes in the 3D model or failure during aerotriangulation. A fisheye lens can be useful in circumstances where the distance from the scene (such as interiors) is limited.
The recommended specification is a high-end i7 processor with multiple high-GHz cores, 32+ GB RAM, and an NVIDIA GTX 980 Ti or Titan X graphics card.
Please refer to your local sales rep. or use the link below: https://www.bentley.com/en/about-us/contact-us/sales-contact-request
Same as above, please contact us to discuss.
In specific pre-sales cases, it is possible. For commercial processing the general answer is no.
See the chart below: