Published at CVPR 2024. An open access version is available here. A preprint is available at arXiv:2311.16304.
The iterative focal method has been integrated into the PoseLib master branch. The method can be used by calling `focals_from_fundamental_iterative`. Note that this version is tuned to have better numerics than the C++ code present in this repo.
Python bindings are also available, so you can call the method like this:

```python
import numpy as np
import poselib

# Priors: focal length and principal point for each camera.
prior_cam1 = {'model': 'SIMPLE_PINHOLE', 'width': -1, 'height': -1,
              'params': [f1_prior, p1_prior[0], p1_prior[1]]}
prior_cam2 = {'model': 'SIMPLE_PINHOLE', 'width': -1, 'height': -1,
              'params': [f2_prior, p2_prior[0], p2_prior[1]]}
cam1, cam2, num_iters = poselib.focals_from_fundamental_iterative(
    F, prior_cam1, prior_cam2, max_iters=50,
    weights=np.array([5.0e-4, 1.0, 5.0e-4, 1.0]))
# Extract the estimated principal points and focal lengths.
p1 = np.array(cam1.params[1:])
p2 = np.array(cam2.params[1:])
f1 = cam1.focal()
f2 = cam2.focal()
```
If you want to use the code in your own codebase and do not want to compile the whole PoseLib library, you can simply include the files `decompositions.cc` and `decompositions.h` in your project.
- Clone the repo:
  ```shell
  git clone https://github.com/kocurvik/robust_self_calibration
  cd robust_self_calibration
  ```
- Use conda/mamba to install the dependencies from `env.yaml`
- Install the MATLAB Engine API for Python
- Install pybind11:
  ```shell
  pip install pybind11
  ```
- Compile the C++ code:
  ```shell
  cd cxx
  python setup.py install
  cd ..
  ```
- Clone and install the following repositories, which implement RFC:
  ```shell
  git clone https://github.com/kocurvik/opencv
  # ... follow install instructions ...
  git clone https://github.com/kocurvik/PoseLib
  cd PoseLib
  git checkout onefocal
  # ... follow install instructions ...
  cd ..
  git clone https://github.com/kocurvik/vsac
  cd vsac
  git checkout version2
  # ... follow install instructions ...
  ```
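After these steps, a quick import check can confirm that the compiled components are visible to Python. This is a minimal sketch; it assumes the builds above install their bindings under the standard module names:

```python
# Sanity check for the compiled dependencies (a sketch; module names
# assume the builds above completed successfully).
import cv2              # custom OpenCV fork implementing RFC
import poselib          # PoseLib fork (onefocal branch)
import iterative_focal  # package built from the cxx/ directory

print("imports OK, OpenCV version:", cv2.__version__)
```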
First you have to prepare the datasets:
```shell
cd /path/to/the/clone/robust_self_calibration/
export PYTHONPATH=/path/to/the/clone/robust_self_calibration/
python datasets/prepare_im.py -n 1000 -m loftr1024 /path/to/phototourism/dataset
python datasets/prepare_im.py -n 1000 -m loftr1024 /path/to/aachen/dataset
python datasets/prepare_single.py -n 1000 -m loftr1024 /path/to/ETH3D/multiview/dataset
```
You can alternatively download the matches from this address: TBA
Then you can run the evaluation scripts:
```shell
python eval/uncal.py -nw 4 -m loftr1024 phototourism
python eval/uncal.py -nw 1 -m loftr1024 aachen
python eval/uncal.py -nw 4 -m loftr1024 eth3d_multiview
```
You can also run the scripts in the `eval/synth` folder to get the outputs for the synthetic experiments.
The graph comparing RFC runtimes and accuracy can be generated by running:
```shell
python eval/rfd.py -nw 1 -m loftr1024 phototourism
```
Note that you can change the correspondences to SP+SG by setting `-m sg2048`. You have to install the networks from https://github.com/magicleap/SuperGluePretrainedNetwork and potentially modify the paths in `utils/matching.py`.
If you want to use PoseLib for estimating the fundamental matrices, you have to comment and uncomment the relevant parts of the eval scripts. The `-nw` parameter sets the number of worker processes for multiprocessing; set it to 1 to use only a single process.
Deprecated: If you want to use the C++ version, you can look at how it is used in `methods/ours.py`. Since the C++ version is installed using a `setup.py` script, the package `iterative_focal` should be available in the whole environment.
The tuned version of the C++ implementation is available in the PoseLib master branch (see above for instructions).
If you want to use the Matlab version, you should check out `matlab_utils/engine_calls.py`. Note that when you start the engine you have to add the relevant folder to the engine's path:
```python
import matlab.engine

eng = matlab.engine.start_matlab()
s = eng.genpath('path/to/robust_self_calibration/matlab_utils')
eng.addpath(s, nargout=0)
```
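Once the path is added, functions from that folder can be called through the `eng` handle; numpy arrays have to be converted to `matlab.double` first. A minimal sketch follows; `estimate_focals` is a hypothetical placeholder name (the actual entry points and signatures are in `matlab_utils/engine_calls.py`):

```python
import numpy as np
import matlab

F = np.eye(3)  # placeholder: your estimated fundamental matrix

# The engine cannot consume numpy arrays directly; convert to matlab.double.
F_matlab = matlab.double(F.tolist())

# `estimate_focals` is a hypothetical name -- see matlab_utils/engine_calls.py
# for the real functions called by the repo.
f1, f2 = eng.estimate_focals(F_matlab, nargout=2)
```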
Note: During experiments we always shifted all of the point correspondences so that the priors for the principal points were at (0, 0). Without this shift the method is significantly less stable. If you already have F, you can transform it using eq. (1) from Peter Sturm's paper; after you obtain estimates for the principal points, add back the values you used in the translation matrices.
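For reference, a minimal numpy sketch of this shift (the helper name and signature are ours, not part of the repo). Shifting image points by the priors, x_i' = x_i - p_i, corresponds to x_i' = T_i x_i for translation matrices T_i, so the fundamental matrix transforms as F' = T2^{-T} F T1^{-1}:

```python
import numpy as np

def shift_fundamental(F, p1_prior, p2_prior):
    """Transform F to coordinates where the principal point priors sit at (0, 0).

    Shifting points by x_i' = x_i - p_i is x_i' = T_i x_i with a translation
    matrix T_i, hence F' = T2^{-T} @ F @ T1^{-1}.
    """
    T1_inv = np.array([[1.0, 0.0, p1_prior[0]],
                       [0.0, 1.0, p1_prior[1]],
                       [0.0, 0.0, 1.0]])
    T2_inv = np.array([[1.0, 0.0, p2_prior[0]],
                       [0.0, 1.0, p2_prior[1]],
                       [0.0, 0.0, 1.0]])
    return T2_inv.T @ F @ T1_inv

# After estimation in the shifted frame, add the priors back to the
# estimated principal points (translations do not affect focal lengths):
# p1_final = p1_estimated + p1_prior
# p2_final = p2_estimated + p2_prior
```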
If you find this repository useful, please consider citing:
```bibtex
@inproceedings{kocur2024robust,
  title={Robust Self-calibration of Focal Lengths from the Fundamental Matrix},
  author={Kocur, Viktor and Kyselica, Daniel and K{\'u}kelov{\'a}, Zuzana},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5220--5229},
  year={2024}
}
```