[question] Projecting Meshroom 3D Mesh Points onto Images #2595
Comments
Maybe the width and height get inverted due to the vertical image.
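One quick way to test this suspicion for portrait shots is to compare the stored intrinsic dimensions against the actual decoded image size. A small hypothetical helper (names and call site assumed, not from the original script):

```python
def reconcile_dims(intr_w, intr_h, img_w, img_h):
    """Return (width, height) in image order, swapping the intrinsic values
    if they appear transposed (portrait photo stored with landscape metadata)."""
    if (intr_w, intr_h) == (img_w, img_h):
        return intr_w, intr_h
    if (intr_w, intr_h) == (img_h, img_w):
        return intr_h, intr_w  # transposed: swap before building K
    raise ValueError("intrinsic size does not match image size at all")

# A 3000x4000 portrait image whose metadata stored 4000x3000:
print(reconcile_dims(4000, 3000, 3000, 4000))  # (3000, 4000)
```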
Thank you for your reply!

```python
width, height = float(intrinsic_data['width']), float(intrinsic_data['height'])  # in pixels
sensor_width = float(intrinsic_data['sensorWidth'])    # in mm
sensor_height = float(intrinsic_data['sensorHeight'])  # in mm
print(f"width {width}")
print(f"height {height}")
print(f"sensor_width {sensor_width}")
print(f"sensor_height {sensor_height}")
```

The output was:

If I understood correctly, the sensor width seems correct, but it appears that the image width and height might have been swapped? So I swapped them and rebuilt the intrinsics:

```python
# Swapped: treat the stored 'width' as height and vice versa
height, width = float(intrinsic_data['width']), float(intrinsic_data['height'])  # in pixels
sensor_width = float(intrinsic_data['sensorWidth'])    # in mm
sensor_height = float(intrinsic_data['sensorHeight'])  # in mm
print(f"width {width}")
print(f"height {height}")
print(f"sensor_width {sensor_width}")
print(f"sensor_height {sensor_height}")

# Compute fx and fy in pixels
fx = (focal_length / sensor_width) * width
fy = (focal_length / sensor_height) * height

# Express the principal point in pixel coordinates (offset from image center)
cx = principal_point[0] + width / 2
cy = principal_point[1] + height / 2

distortion_params = intrinsic_data['distortionParams']
k1 = float(distortion_params[0])
k2 = float(distortion_params[1])
k3 = float(distortion_params[2])
dist_coeffs = np.array([k1, k2, 0, 0, k3])  # OpenCV order: (k1, k2, p1, p2, k3)

# Construct the intrinsic matrix K
K = np.array([
    [fx, 0, cx],
    [0, fy, cy],
    [0,  0,  1]
])
```

However, the projected points are still not aligned correctly. Could there be another aspect of the extrinsic or intrinsic parameters that might be causing this issue?
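A frequent cause of a residual misalignment is the extrinsic convention rather than K: in AliceVision's `cameras.sfm`, each pose stores a rotation and a camera *center*, not a translation vector. A minimal sketch, assuming the convention `x_cam = R · (X − C)` and ignoring distortion; if the projected points come out rotated or mirrored, the stored rotation may be the transpose, so trying `R.T` is worthwhile:

```python
import numpy as np

def project_points(points_world, K, R, C):
    """Project (N, 3) world points to (M, 2) pixel coordinates.
    Assumes x_cam = R @ (X - C): R maps world to camera and C is the
    camera center in world coordinates (distortion omitted for brevity)."""
    pts_cam = (points_world - C) @ R.T          # world -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]        # keep points in front of camera
    uv = pts_cam[:, :2] / pts_cam[:, 2:3]       # perspective divide
    return uv * np.array([K[0, 0], K[1, 1]]) + K[:2, 2]  # apply fx, fy, cx, cy

# Sanity check with an identity pose: a point straight ahead lands on (cx, cy)
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_points(np.array([[0.0, 0.0, 2.0]]), K, np.eye(3), np.zeros(3)))
# [[320. 240.]]
```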
Hello,
I'm attempting to project the 3D points from a mesh generated by Meshroom's Texturing node onto each of the original images I captured. However, the projected points are not aligning correctly with the images. Here is the script I use:
Unfortunately, the results are incorrect. I’m unsure whether the issue lies with the extrinsic and intrinsic parameters or the point cloud from the mesh. I’ve tried various transformations on the point cloud, but the projected points remain inaccurate. This is one of the closest results I’ve managed to achieve:
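Since the script itself isn't included above, one detail worth double-checking is which 3D points are actually being projected. A minimal, hypothetical loader for the vertex positions of the Texturing node's OBJ output (only `v` lines are parsed; the file path is an assumption):

```python
import numpy as np

def load_obj_vertices(path):
    """Collect the 'v x y z' lines of a Wavefront OBJ into an (N, 3) float array."""
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith('v '):  # vertex position (skips vn/vt/f lines)
                _, x, y, z, *_rest = line.split()
                verts.append((float(x), float(y), float(z)))
    return np.asarray(verts, dtype=np.float64)
```

If the reconstruction has not been re-aligned afterwards (e.g. by an SfMTransform node), these vertices should already be in the same world frame as the camera poses, so no extra transform should be needed before projection.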
Meshroom Version: 2023.3.0
Thank you in advance for your time!