I have a use case where I need to update the metadata for each photo from a data file that has more accurate position and yaw/pitch/roll data than what is contained in each photo's EXIF data.
Through the SDK, I am able to modify each photo's PoseMetadata Point3d `center` and Matrix3 `rotation` attributes. To change `rotation`, I need the transformation from yaw/pitch/roll to a ContextCapture rotation matrix.
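For reference, a generic yaw/pitch/roll composition from elementary axis rotations looks like the sketch below. The Z (yaw) → X (pitch) → Y (roll) axis order here is my own assumption, not necessarily ContextCapture's convention; `ypr_to_matrix` is a helper name I made up:

```python
import numpy as np

def rot_x(a):
    """Elementary rotation about the X axis (angle in radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Elementary rotation about the Y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    """Elementary rotation about the Z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def ypr_to_matrix(yaw_deg, pitch_deg, roll_deg):
    """Compose yaw/pitch/roll (degrees) into one rotation matrix.

    The Z -> X -> Y axis order is an assumption, not necessarily
    what ContextCapture uses internally.
    """
    y, p, r = np.deg2rad([yaw_deg, pitch_deg, roll_deg])
    return rot_z(y) @ rot_x(p) @ rot_y(r)
```

Any such composition reduces to the identity at 0/0/0, which is why the non-identity matrix returned by the SDK (shown below) surprised me.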
However, the rotation matrix formula provided in the CC user guide does not appear to be correct (docs.bentley.com/.../GUID-2D452A8A-A4FE-450D-A0CA-9336DCF1238A.html).
For instance, if I add a photo with 0 yaw / 0 pitch / 0 roll EXIF data (confirmed through the CC SDK via photo.exifData.yawPitchRoll), the photo's poseMetadata rotation matrix is [[0.9703417, -0.24173759, -0.00000276], [0.20574631, 0.8258776, -0.52497107], [0.12690751, 0.5094007, 0.85112005]], whereas the provided formula yields [[1, -0, 0], [0, 0, -1], [0, 1, 0]].
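To see what angles the SDK's matrix would correspond to under the manual's convention, the documented formula can be inverted (valid away from pitch = ±90°). `matrix_to_yaw_pitch_roll` is my own helper, derived from the manual's entries M_20 = cosP·sinY, M_21 = cosP·cosY, M_22 = sinP, M_02 = cosP·sinR, and M_12 = -cosR·cosP:

```python
import math

def matrix_to_yaw_pitch_roll(M):
    """Invert the user-manual yaw/pitch/roll formula; returns degrees.

    Only valid away from the gimbal-lock case pitch = +/-90 deg,
    where cos(P) = 0 and yaw/roll become indeterminate.
    """
    pitch = math.degrees(math.asin(M[2][2]))            # M_22 = sin(P)
    yaw = math.degrees(math.atan2(M[2][0], M[2][1]))    # M_20/M_21 = tan(Y)
    roll = math.degrees(math.atan2(M[0][2], -M[1][2]))  # M_02/(-M_12) = tan(R)
    return yaw, pitch, roll
```

Feeding the pose matrix above into this inversion gives angles far from 0/0/0, so whatever convention the SDK is using, its matrix is not the manual's formula evaluated at the EXIF angles.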
I have created the following minimal reproducible example using any image named "example_image_0.jpg" with yaw/pitch/roll EXIF data.
```python
import math

import ccmasterkernel
import numpy as np

np.set_printoptions(suppress=True)


def compare_rotation_matrices():
    print("MasterKernel version %s" % ccmasterkernel.version())
    print()

    project = ccmasterkernel.Project()
    block = ccmasterkernel.Block(project)
    project.addBlock(block)
    photogroups = block.getPhotogroups()

    path = "example_image_0.jpg"
    print("----------", path, "----------")
    photo = photogroups.addPhotoInAutoMode(path)

    yaw, pitch, roll = photo.exifData.yawPitchRoll
    print("--- PHOTO EXIFDATA ---")
    print("Yaw:", yaw, "\tPitch:", pitch, "\tRoll:", roll)
    print()

    # Rotation matrix as stored in the photo's pose metadata.
    rot_mat = np.asarray(
        photo.poseMetadata.rotation.getElements(), dtype=np.float32
    ).reshape((3, 3))
    print("--- CONTEXTCAPTURE ROTATION MATRIX ---")
    print(rot_mat)
    print()

    # Rotation matrix recomputed from the EXIF yaw/pitch/roll.
    calculated_rot_mat = yaw_pitch_roll_to_rotation_matrix(yaw=yaw, pitch=pitch, roll=roll)
    print("--- CALCULATED ROTATION MATRIX ---")
    print(calculated_rot_mat)
    print()


def yaw_pitch_roll_to_rotation_matrix(yaw, pitch, roll):
    """
    Formula from the ContextCapture User Manual.
    https://docs.bentley.com/LiveContent/web/ContextCapture%20Help-v10/en/GUID-2D452A8A-A4FE-450D-A0CA-9336DCF1238A.html
    """
    P = np.deg2rad(pitch)
    R = np.deg2rad(roll)
    Y = np.deg2rad(yaw)
    M_00 = math.cos(R) * math.cos(Y) - math.sin(R) * math.sin(P) * math.sin(Y)
    M_01 = -math.cos(R) * math.sin(Y) - math.cos(Y) * math.sin(R) * math.sin(P)
    M_02 = math.cos(P) * math.sin(R)
    M_10 = math.cos(Y) * math.sin(R) + math.cos(R) * math.sin(P) * math.sin(Y)
    M_11 = math.cos(R) * math.cos(Y) * math.sin(P) - math.sin(R) * math.sin(Y)
    M_12 = -math.cos(R) * math.cos(P)
    M_20 = math.cos(P) * math.sin(Y)
    M_21 = math.cos(P) * math.cos(Y)
    M_22 = math.sin(P)
    return np.array([
        [M_00, M_01, M_02],
        [M_10, M_11, M_12],
        [M_20, M_21, M_22],
    ])


if __name__ == "__main__":
    compare_rotation_matrices()
```
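As a sanity check on the numbers themselves, I verified that both matrices quoted above are at least valid rotations (orthonormal, determinant +1), so this looks like a convention mismatch rather than corrupted data. A quick check, with the matrix values copied from my example:

```python
import numpy as np

# Pose matrix reported by the ContextCapture SDK for the 0/0/0 photo.
sdk_matrix = np.array([
    [0.9703417, -0.24173759, -0.00000276],
    [0.20574631, 0.8258776, -0.52497107],
    [0.12690751, 0.5094007, 0.85112005],
])

# Matrix produced by the user-manual formula at yaw = pitch = roll = 0.
manual_matrix = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 0.0, -1.0],
    [0.0, 1.0, 0.0],
])

for name, M in [("SDK", sdk_matrix), ("manual", manual_matrix)]:
    # A proper rotation satisfies M @ M.T = I and det(M) = +1.
    assert np.allclose(M @ M.T, np.eye(3), atol=1e-4), name
    assert abs(np.linalg.det(M) - 1.0) < 1e-3, name
print("both matrices are valid rotations")
```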
Here is the output using two example images I have:
As can be seen, the matrices are not the same, so I cannot update the photo rotation with confidence.
Please get back to me. Thank you in advance.