Please check out our paper and supplementary material here.
If you use our code/data, please cite us as follows:
@inproceedings{tinchev2020xrcnet,
title={{$\mathbb{X}$}Resolution Correspondence Networks},
author={Tinchev, Georgi and Li, Shuda and Han, Kai and Mitchell, David and Kouskouridas, Rigas},
booktitle={Proceedings of British Machine Vision Conference (BMVC)},
year={2021}
}
- Install conda.
- Run:
conda env create --name <environment_name> --file asset/xrcnet.txt
- To activate the environment, run:
conda activate xrcnet
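If the environment was created successfully, you can sanity-check that PyTorch sees the GPU (assuming PyTorch is among the packaged dependencies; this one-liner is only an illustration):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"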
We train our model on the MegaDepth dataset. To prepare the data, you need to download the MegaDepth SfM models from the MegaDepth website, download training_pairs.txt from here, and download validation_pairs.txt from here.
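Any layout works as long as the paths in the training configuration point to it; the structure below is only an illustrative example, not a requirement:
/data/megadepth/              # SfM models from the MegaDepth website
/data/training_pairs.txt
/data/validation_pairs.txt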
- After downloading the training data, edit the config/train.sh file to specify the dataset location and the paths to the training_pairs.txt and validation_pairs.txt files downloaded above (an illustrative sketch follows these steps).
- Run:
cd config;
bash train.sh -g <gpu_id> -c configs/xrcnet.json
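The exact variable names inside config/train.sh may differ; the excerpt below is only a hypothetical sketch of the paths you would fill in, followed by an example launch on GPU 0:
# Hypothetical excerpt of config/train.sh -- adjust the names to the actual script
DATASET_ROOT=/data/megadepth
TRAINING_PAIRS=/data/training_pairs.txt
VALIDATION_PAIRS=/data/validation_pairs.txt
# Example launch:
cd config;
bash train.sh -g 0 -c configs/xrcnet.json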
We also provide our pre-trained model. You can download xrcnet.pth.tar from here and place it under the trained_models directory.
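Assuming the checkpoint was downloaded to the repository root (the paths below are only illustrative), placing it amounts to:
mkdir -p trained_models
mv xrcnet.pth.tar trained_models/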
To evaluate on HPatches, download the dataset from the HPatches repo; you need the HPatches full sequences.
After downloading the dataset:
- Browse to HPatches/.
- Run:
python eval_hpatches.py --checkpoint path/to/model --root path/to/parent/directory/of/hpatches_sequences
This will generate a text file with the results in the current directory.
- Open draw_graph.py, change the relevant paths accordingly, and run the script to plot the results (a concrete example invocation follows these steps).
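As a concrete example, assuming the pre-trained checkpoint sits under trained_models/ and HPatches was extracted to /data/hpatches_sequences (both paths are only illustrative):
python eval_hpatches.py --checkpoint trained_models/xrcnet.pth.tar --root /data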
We provide results of XRCNet alongside other baseline methods in the cache-top directory.
In order to run the InLoc evaluation, you first need to clone the InLoc demo repo, and download and compile all the required dependencies. Then:
- Browse to inloc/.
- Run python eval_inloc_extract.py, adjusting the checkpoint and experiment name. This will generate a series of match files in the inloc/matches/ directory that then need to be fed to the InLoc evaluation Matlab code.
- Modify the inloc/eval_inloc_compute_poses.m file provided to indicate the path of the InLoc demo repo and the name of the experiment (the particular directory name inside inloc/matches/), and run it using Matlab.
- Use the inloc/eval_inloc_generate_plot.m file to plot the results from the shortlist file generated in the previous stage: /your_path_to/InLoc_demo_old/experiment_name/shortlist_densePV.mat. Precomputed shortlist files are provided in inloc/shortlist.
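For orientation, the overall InLoc flow looks roughly like this; the checkpoint and experiment name are set inside the scripts, and the Matlab steps are indicated as comments (paths are illustrative):
cd inloc/
python eval_inloc_extract.py          # writes match files to inloc/matches/<experiment_name>/
# In Matlab, after setting the InLoc demo path and the experiment name:
#   eval_inloc_compute_poses.m        # produces .../experiment_name/shortlist_densePV.mat
#   eval_inloc_generate_plot.m        # plots the results from the shortlist file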
In order to run the Aachen Day-Night evaluation, you first need to clone the visual localization benchmark repo, and download and compile all the required dependencies (note that you'll need to compile Colmap if you have not done so yet). Then:
- Browse to aachen_day_and_night/.
- Run python eval_aachen_extract.py, adjusting the checkpoint and experiment name.
- Copy the eval_aachen_reconstruct.py file to visuallocalizationbenchmark/local_feature_evaluation and run it in the following way:
python eval_aachen_reconstruct.py \
    --dataset_path /path_to_aachen/aachen \
    --colmap_path /local/colmap/build/src/exe \
    --method_name experiment_name
- Upload the file /path_to_aachen/aachen/Aachen_eval_[experiment_name].txt to https://www.visuallocalization.net/ to get the results on this benchmark.
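To make the substitution explicit, with a hypothetical experiment name of xrcnet_aachen the reconstruction step and the resulting file to upload would be (the Aachen and Colmap paths are the same placeholders as above):
cd visuallocalizationbenchmark/local_feature_evaluation
python eval_aachen_reconstruct.py \
    --dataset_path /path_to_aachen/aachen \
    --colmap_path /local/colmap/build/src/exe \
    --method_name xrcnet_aachen
# then upload /path_to_aachen/aachen/Aachen_eval_xrcnet_aachen.txt to https://www.visuallocalization.net/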
Our code is based on the code provided by DualRCNet, NCNet, Sparse-NCNet, and ANC-Net.