Answers to commonly asked questions regarding ContextCapture and ContextCapture Center
1) Can you mix photos from different sources at different resolutions? E.g. aerial photos with photos from ground level?
Yes, it is possible to mix sources created at different resolutions. For the process to be fully automatic (no manual stitching of the photos) your photos must meet some specifications. The matching of the photos during aerotriangulation (first step of the process) is based on feature detection on the photos using “key points”. To create a “tie point” the same point must be matched in several photos and the resolution and viewing angle of the two photos must not be too different. This is why transitional photos may be required to automatically mix photos that have huge differences in resolution (more than 5x approx.).
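As a rough planning aid, the ~5x rule above can be checked by comparing ground sample distances (GSD). This is a minimal sketch using a simple pinhole-camera model; the camera parameters are illustrative assumptions, not ContextCapture requirements:

```python
def ground_sample_distance(sensor_width_mm, image_width_px, focal_mm, distance_m):
    """Ground footprint of one pixel (m/px), simple pinhole-camera model."""
    pixel_size_mm = sensor_width_mm / image_width_px
    return pixel_size_mm * distance_m / focal_mm  # mm units cancel

# Hypothetical full-frame camera: 36 mm sensor, 7360 px wide, 50 mm lens.
aerial = ground_sample_distance(36.0, 7360, 50.0, 100.0)  # shot from 100 m up
ground = ground_sample_distance(36.0, 7360, 50.0, 10.0)   # shot 10 m from a facade

ratio = max(aerial, ground) / min(aerial, ground)
print(f"aerial {aerial*100:.2f} cm/px, ground {ground*100:.2f} cm/px, ratio {ratio:.1f}x")
if ratio > 5:
    print("gap exceeds ~5x: plan transitional photos at intermediate distances")
```

In this example the resolution gap is 10x, so transitional photos taken at intermediate distances would be needed to bridge the two datasets.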
2) What about panoramas? Can they be used?
We can process images captured by a panorama camera, e.g. the Ladybug, but you have to use the individual images from each camera. Stitched 360 images cannot be used.
3) Is it possible to use 360 cameras, such as the Nctech iris360?
See 2). ContextCapture needs to use the images taken from each of the 4 fisheye lens cameras. The stitched 360 image cannot be used.
4) Is it possible to use RAW photos (14bits, 16bits, HDR)?
Yes. The ContextCapture user manual includes a list of the RAW photo formats supported by the latest version. The RAW formats compatible with ContextCapture are RW2 (Panasonic), CRW (Canon), CR2 (Canon), NEF (Nikon), ARW (Sony) and 3FR (Hasselblad). Currently, ContextCapture can input 16-bit TIFFs, but they will be converted to 8-bit TIFFs for 3D model generation and texturing.
5) Is it possible to make a 3D model from video files?
Yes. The video formats supported by the latest version of ContextCapture are AVI, MPG, MP4, WMV and MOV.
6) Can ContextCapture create models with images that have been fully edited and modified in Photoshop or other programs or does it need the raw image files/info unedited?
In order to preserve the geometric accuracy of the photos (and the quality of the final 3D model), only radiometric editing (contrast, brightness, saturation, etc.) is allowed. Geometric editing such as cropping, stretching or distortion removal is strictly forbidden.
7) Can ContextCapture use point clouds instead of or as well as images to create models?
Yes, you can now use point clouds as a data source for your 3D model. The input format for the point cloud must be E57, PTX, LAS or LAZ for static scans, and E57, LAS or LAZ for mobile scans.
Data acquisition:
1) How much overlap between pictures is needed to achieve better accuracy?
Minimum overlap required is 50%. However, 70% overlap is recommended.
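For flight or shot planning, these overlap figures translate directly into exposure spacing. A small sketch follows; the footprint formula assumes a nadir-pointing pinhole camera and all numbers are illustrative:

```python
def exposure_spacing(footprint_m, overlap):
    """Distance between consecutive camera positions for a given forward overlap."""
    return footprint_m * (1.0 - overlap)

# Illustrative nadir case: 36 mm sensor width, 50 mm lens, flying 100 m up.
footprint = 36.0 / 50.0 * 100.0                  # ground covered by one photo, ~72 m
recommended = exposure_spacing(footprint, 0.70)  # 70% recommended overlap
minimum = exposure_spacing(footprint, 0.50)      # 50% minimum overlap
print(f"shoot every {recommended:.1f} m (recommended) / {minimum:.1f} m (minimum)")
```

Higher overlap means more photos per flight line, but it markedly improves the chance that every surface point is matched in several photos.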
2) What are the requirements of the camera / photos - calibrated, fixed lens etc.?
No specific brand of camera/lens is recommended for ContextCapture. Of course, the quality of the 3D model will depend on the photo quality and geometric precision. The best photos will be acquired with a camera that has a large sensor and a good-quality lens. We usually recommend a DSLR camera with a fixed lens. The focal length to use will depend on the kind of project and environment you are working in. In narrow places you could use fish-eye lenses, and if standing far away from the subject, a long focal length zoom may be needed to achieve the expected resolution. It is not necessary to have a calibrated camera (especially if your camera and lens models are of good quality), but it is mandatory in certain cases (e.g. drone acquisition of nadir-only photographs).
3) Do you need surveyed ground control?
Ground control points are not a mandatory requirement to generate a 3D model. However, if you expect precise geo-referencing, then ground control points are recommended.
4) Should all photos be geotagged, or is one enough, and will the software handle it?
Geo-referencing can be done using ground control points or using the photo tags (this is generally less accurate). The geo-referencing using photo tags can be enabled if at least 3 photos have geo-coordinates.
5) Is the photo ID required to stitch together the photos?
No, with ContextCapture it is not necessary to use the names of the photos for Aerotriangulation. ContextCapture uses several photo recognition and matching algorithms that do not require the photos to have specific IDs. In any case, having photos correctly stored in sequence helps the software during 'pair' detection in the Aerotriangulation process.
6) What Is the workflow for flying over a site to capture images? I've flown over a few at a height of about 30 meters and wondered how to also capture the vertical faces of buildings. Can I fly a second flight path and incorporate those images into my model easily?
As long as they are of similar resolution and quality, they should be able to be processed together without difficulty. NB. Always remember that the viewing angle should never be too different, otherwise transitional photos or manual stitching will be required.
7) Does ContextCapture have additional applications for flight planning? Specifically, a drone?
No, ContextCapture currently has no application for flight planning.
8) How are you handling the obliquity in the image samples?
In ContextCapture we have our own custom algorithms to help automatically match nadir and oblique photos.
9) Does a camera mounted on a drone need to have a known coordinate or geolocation in order to represent true scale and location?
You can use a drone which automatically records its GPS location to an image geotag, or use ground control points to georeference the scene.
10) How does the software handle background information that is not relevant during the imagery acquisition?
Background data is automatically ignored if it is out of focus; otherwise it will also be modelled, provided there is enough information to reconstruct it.
3D Mesh Editing:
1) Are the 3D meshes produced fully editable or do they work like fixed blocks?
We have a workflow that allows the exported 3D model to be edited in 3rd-party software and then re-imported into the ContextCapture workflow to produce / update the final output.
2) Can you edit the 3D models & change the geometry?
Yes, this can be done using third-party software; see above.
3) Can landmarks inside the 3D mesh be selected and manipulated?
This is not a function of ContextCapture or its viewer. However, if imported into a 3D GIS platform application this can be achieved.
Accuracy:
1) What is the accuracy of ContextCapture models?
There are many variables affecting the accuracy of the model, but generally we can promise 2-3 times the average pixel resolution for an exhaustive data acquisition.
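This rule of thumb can be expressed numerically; here is a toy sketch, where the 1 cm/px ground sample distance is an assumed example value:

```python
def expected_accuracy_m(gsd_m):
    """Rule of thumb: model accuracy is roughly 2-3x the average pixel resolution."""
    return 2.0 * gsd_m, 3.0 * gsd_m

best, worst = expected_accuracy_m(0.01)   # 1 cm/px average ground sample distance
print(f"expect roughly {best*100:.0f}-{worst*100:.0f} cm accuracy")
```

In other words, the achievable accuracy is set mainly by how finely the photos sample the subject, which is why flying lower or moving closer improves results.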
2) Can accuracy be improved if the camera positions can be surveyed accurately?
Having the camera positions accurately surveyed will increase the precision of the georeferencing of your 3D model (absolute precision) and the success rate of the Aerotriangulation. The precision of the 3D model will only depend on the quality of the photos and the acquisition exhaustiveness.
3) Do you have a recommended list of cameras and types of photos to use to get a certain level of accuracy?
Generally, high-quality full-frame or mid-frame camera systems with professional premium-class lenses such as Leica, Zeiss or Canon red-ring lenses. A better sensor ensures a high standard of image quality, and a premium lens ensures as little distortion as possible, which minimizes aerotriangulation error.
4) Are users able to add reflective dot targets to scenes to enhance accuracy?
Reflective materials are not recommended in the photogrammetry process as they can cause modelling errors. However, a non-reflective special pattern target will help to enhance accuracy during aerotriangulation and thus helps geometric accuracy.
Export & compatibility:
1) Can 3MX be exported to LumenRT?
3MX files need to be imported in MicroStation and then exported to LumenRT running as a plugin. LumenRT currently will not directly import 3MX files.
2) Is there an export option to CityGML?
CityGML has no standard for 3D mesh modelling. ContextCapture produces complex 3D meshes that are not compliant with this format.
3) Can these models be integrated with a 3D scanner (i.e. Leica P40)?
The use of imported 3D scan data as reference data to enhance model accuracy is in our development roadmap. This functionality is targeted to be available before the end of 2016.
4) Integration with Google Earth?
ContextCapture can produce 3D mesh models in Google Earth KML format. These can easily be read by Google Earth.
5) What GIS software are the ContextCapture outputs compatible with?
There are quite a number of GIS platforms which support our Collada (LODTree) and OSGB formats, including SpaceEye3D, Agency9 3DMaps, DigiNext VirtualGEO, Skyline Globe etc. However, if you are considering our DOM/DSM/point cloud formats, they work with all existing GIS. From Q2 2016, ContextCapture will be able to export the I3S format, which can be used in ESRI products.
6) In which Bentley software can we export the 3D mesh?
ContextCapture can produce 3D meshes in 3MX format that are compatible with MicroStation CONNECT edition and the V8i SS4 platform.
7) Is there a list of compatible 3D printers?
The STL format that ContextCapture produces is an industry-standard format that should work with virtually every 3D printer; however, this must be confirmed with the manufacturer. OBJ files can also be used (STL does not support textures).
8) Do you have an idea of a file size? For example, if I had 200 pictures at 25Megapixels each.
The size of the output files is largely affected by the output format. From previous experience, as a rough estimate, a dataset produced from 200 x 25 MP images in 3MX format should be within 200 MB.
9) Is it possible to export as a coloured point cloud?
Yes, absolutely (LAS or POD format).
10) Where can I get the list of 3rd party software that we can use after for example the water simulation etc.?
See item 5). The software that is fully compatible with our 3D mesh models may be limited, but most packages should work with our neutral, industry-standard output formats. We also have LumenRT from Bentley as a very good simulation tool.
11) Can Bentley's design tools use the mesh as a surface or terrain, like in power inroads?
To be confirmed.
12) Does referencing 3MX and 3SM files in MicroStation use the standard reference functionality, or does it use raster manager functionality and is this fully integrated in ProjectWise?
3MX and 3SM meshes are attached using the Reality Mesh Management tool. Currently, there is no specific integration with ProjectWise.
13) How can we combine points generated by scanner and photos?
ContextCapture can now import points from scanner and combine them with points generated from photos.
14) Can created 3D models be placed into different geographical zones?
In ContextCapture, all that is required are some ground control points to relocate the model to any location. When importing into MicroStation you can achieve the same result by alternative means.
WebGL Viewer:
1) For the WebGL Viewer, is it possible to take collision into account?
We do not currently have collision detection incorporated in the WebGL viewer.
Processing & analysis:
1) How automatic is the process of processing multiple photos?
If the input photos are fully compliant with our data acquisition guide, the process will be 99% automatic.
2) Regarding photo control does the software prepare an Aerotriangulation solution report?
Yes, a report is produced for the Aerotriangulation once the process is completed.
3) When you say, "Production time" is that clock time or CPU processing time?
4) How do you georeference the mesh in ContextCapture?
Geotags in photos, Ground Control Points or by importing a dataset with full POS metadata.
5) Is it possible to georeference the model using GPS points captured by the camera?
Yes, see question 4) above: geotags in photos can be used for georeferencing.
6) How do you add control to photos?
ContextCapture includes a user interface that allows you to import your control point coordinates and perform measurements in the photos.
7) How does one scale the contents?
Scaling can be done with the use of user tie points. A user can input several tie points manually and add distance, axis and plane constraints. If the model is georeferenced using geotags or ground control points, it will be scaled automatically.
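The distance-constraint idea can be illustrated with a small sketch: given one pair of user tie points whose real separation has been measured, a uniform scale factor maps model units onto metres. The coordinates and distance below are made up for illustration:

```python
import math

def scale_from_distance(p1, p2, true_distance_m):
    """Uniform scale factor from one known distance between two tie points."""
    return true_distance_m / math.dist(p1, p2)

# Two tie points in the unscaled model; their real separation was surveyed as 12.5 m.
s = scale_from_distance((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), 12.5)
scaled = tuple(s * c for c in (5.0, 0.0, 0.0))
print(s, scaled)  # a scale factor of 2.5 brings the model into metres
```

A single distance fixes only scale; axis and plane constraints are what fix the model's orientation, as described above.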
8) What is the process of matching photos with ground control?
Using our Control Point GUI, import the GCP position and select the pixel on 3 – 5 photos. The rest of the process is fully automated.
9) Is there a function to manually classify the 3D point cloud for a 3rd party survey?
You will need to export the coloured point cloud to a point cloud format which can then be used in ContextCapture Editor, Bentley Pointools, Descartes or other point cloud software. ContextCapture is positioned purely as a 3D data production tool.
10) Do you have object classification tools?
Same as answer 9): not within ContextCapture. ContextCapture Editor is included with ContextCapture.
11) Do you have direct volume calculations in ContextCapture?
Yes, volume calculations can be done in the Acute3D Viewer
12) How do you calculate volumes?
In the Acute3D Viewer draw a polygon in the model. The software takes a sampling with the polygon and calculates the volume within (cut and fill volumes).
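The sampling approach described above can be sketched as a grid sum. This is a simplified illustration against a flat reference plane with made-up heights, not the viewer's actual algorithm:

```python
def cut_fill_volume(heights, reference_m, cell_area_m2):
    """Sum signed column volumes of sampled heights against a flat reference plane."""
    cut = fill = 0.0
    for row in heights:
        for h in row:
            v = (h - reference_m) * cell_area_m2
            if v > 0:
                fill += v   # material above the plane
            else:
                cut -= v    # material missing below the plane
    return cut, fill

# 3x3 grid of heights (m) sampled inside the polygon, 1 m^2 cells, plane at 10 m.
grid = [[10.5, 11.0, 10.0],
        [ 9.5, 10.0,  9.0],
        [10.2,  9.8, 10.3]]
cut, fill = cut_fill_volume(grid, 10.0, 1.0)
print(f"cut {cut:.2f} m^3, fill {fill:.2f} m^3")
```

The accuracy of such an estimate depends on the sampling density within the polygon; finer grids capture more terrain detail at higher computation cost.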
13) Are there size limits in terms of MB/GB & poly count?
No, our 3D formats (3MX and S3C) are fully scalable and are designed to handle territory of unlimited size.
14) In what circumstances would it be best to use multi-pass processing instead of single pass? Is there a major difference in the end result?
Generally, when a single pass fails. Please refer to the user manual for more details on multi-pass.
15) Are you going to ease the process to produce orthophotos according different axes?
This is a popular request and we are considering it.
16) What are the greatest challenges users may encounter in the production of 3D models with ContextCapture?
Capturing a good dataset to get an optimal 3D model.
17) Is there a method for using this software for interior scenes?
Reconstructing interiors using photogrammetry is a difficult task. The short distance from the subject and the numerous occluding objects drastically increase the number of photos needed to reconstruct the scene properly. Another common issue is the lack of texture on walls, which may lead to holes in the 3D model or failure during aerotriangulation. A fish-eye lens can be useful in circumstances where the distance from the scene (such as in interiors) is limited.
18) What is the recommended specification of a PC running ContextCapture?
The recommended spec is: A high-end i7 processor with high Ghz multi-cores, 32+GB RAM, an NVidia GTX 1080Ti or TitanX graphic card.
Licensing:
1) Can you tell us something about licensing prices?
Please refer to your local sales rep. or use the link below: https://www.bentley.com/en/about-us/contact-us/sales-contact-request
2) Is there any educational version?
Same as above, please contact us to discuss.
3) Are you doing processing for your customers?
In specific pre-sales cases, it is possible; for commercial processing the general answer is no. We also offer a Cloud Processing Service using ContextCapture Mobile and the ContextCapture Desktop Console applications.
4) What is the difference between ContextCapture and ContextCapture Center?
See the chart below:
Utilizing User Tie Points:
1) When should I use user tie points?
User tie points should be used only to solve aerotriangulation issues. In all cases, before acquiring user tie points, we recommend running a first aerotriangulation.
2) Can I reincorporate those failed photos using tie points or are they gone forever?
Of course, you can try to reincorporate the missing photos using tie points. The goal is to place at least 3 tie points per missing photo, and each of those tie points must be placed in 3 or 4 other calibrated photos.
Follow-up: When you run the first AT, you do not have to provide georeference data; it comes from the files, right?
The georeference data comes from the metadata of the images (EXIF) or from a dedicated file if you have one. If the project is georeferenced, you can simply choose the final geocoordinate system (we call it SRS in ContextCapture) in the production settings.
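The rule of thumb above (at least 3 tie points per missing photo, each placed in 3 or 4 calibrated photos) can be checked programmatically. A toy sketch with made-up photo IDs, not a ContextCapture API:

```python
def can_reincorporate(missing_photo, tie_points, calibrated):
    """Rule of thumb: a missing photo needs at least 3 tie points,
    each of which is also placed in 3+ already-calibrated photos."""
    usable = [
        tp_id for tp_id, photos in tie_points.items()
        if missing_photo in photos and len(photos & calibrated) >= 3
    ]
    return len(usable) >= 3

calibrated = {"img_01", "img_02", "img_03", "img_04"}
ties = {  # tie point id -> set of photos where the user placed it
    "tp1": {"img_99", "img_01", "img_02", "img_03"},
    "tp2": {"img_99", "img_01", "img_02", "img_04"},
    "tp3": {"img_99", "img_02", "img_03", "img_04"},
}
print(can_reincorporate("img_99", ties, calibrated))  # expect True here
```

Here "img_99" has three tie points, each shared with three calibrated photos, so a new aerotriangulation run should be able to bring it back into the main component.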
3) How do I know that I have reincorporated the missing pictures? It looks like I have, but from the report where 84 were removed, is there a way to analyse which ones are missing, then purposely go in and find those images and make sure they are tied?
You can check the number of photos present in the main component between 2 ATs (General tab) or in the acquisition report. You can also use the Quality Report if you want to know exactly which photos are still missing: open/generate the quality report (available in CC Update 8), click the Photo report link, then check the 'Distance to input position per photo' table for photos that do not have computed XYZ positions.