X Resolution Correspondence Networks

Paper

Please check out our paper and supplementary material here.

Bibtex

If you use our code/data, please consider citing us as follows:

@inproceedings{tinchev2020xrcnet, 
    title={{$\mathbb{X}$}Resolution Correspondence Networks}, 
    author={Tinchev, Georgi and Li, Shuda and Han, Kai and Mitchell, David and Kouskouridas, Rigas}, 
    booktitle={Proceedings of British Machine Vision Conference (BMVC)},
    year={2021} 
}

Dependency

  1. Install conda

  2. Run:

conda env create --name <environment_name> --file asset/xrcnet.txt

To activate the environment, run

conda activate xrcnet
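
To quickly check that the environment is usable, you can run a minimal verification; this assumes the environment ships PyTorch, which this README does not state explicitly:

conda activate xrcnet
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # assumes PyTorch is installed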

Preparing data

We train our model on the MegaDepth dataset. To prepare the data, download the MegaDepth SfM models from the MegaDepth website, then download training_pairs.txt from here and validation_pairs.txt from here.
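
One reasonable way to lay out the downloads is sketched below; the root directory and structure are assumptions of this example, not requirements of the code, so point the training configuration at wherever you actually place the files:

# Hypothetical data layout (all paths are placeholders):
mkdir -p /data/megadepth
# MegaDepth SfM models extracted under:
#   /data/megadepth/MegaDepth_v1_SfM/
# Pair lists placed alongside:
#   /data/megadepth/training_pairs.txt
#   /data/megadepth/validation_pairs.txt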

Training

  1. After downloading the training data, edit the config/train.sh file to specify the dataset location and the paths to the training_pairs.txt and validation_pairs.txt files downloaded above (a hypothetical sketch of these settings appears after the command below).
  2. Run:
cd config;
bash train.sh -g <gpu_id> -c configs/xrcnet.json
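
As an illustration, the edits to config/train.sh usually amount to pointing a few path settings at the data prepared above; the variable names below are hypothetical placeholders, not the actual contents of train.sh:

# Hypothetical excerpt of config/train.sh (variable names are assumptions):
DATASET_ROOT=/data/megadepth/MegaDepth_v1_SfM
TRAIN_PAIRS=/data/megadepth/training_pairs.txt
VAL_PAIRS=/data/megadepth/validation_pairs.txt

# A concrete launch on GPU 0 with the provided config then looks like:
cd config
bash train.sh -g 0 -c configs/xrcnet.json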

Pre-trained model

We also provide our pre-trained model. You can download xrcnet.pth.tar from here and place it under the directory trained_models.
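
For example, assuming the checkpoint was saved to your Downloads folder (the source path is an assumption):

mkdir -p trained_models
mv ~/Downloads/xrcnet.pth.tar trained_models/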

Evaluation on HPatches

The dataset can be downloaded from the HPatches repo. You need to download the HPatches full sequences.
After downloading the dataset:

  1. Browse to HPatches/
  2. Run python eval_hpatches.py --checkpoint path/to/model --root path/to/parent/directory/of/hpatches_sequences. This will generate a text file with the results in the current directory.
  3. Open draw_graph.py, change the relevant paths accordingly, and run the script to plot the results.
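
A concrete run might look like the following; the checkpoint path, the data root, and the hpatches-sequences-release folder name are assumptions to adjust to your setup:

cd HPatches
python eval_hpatches.py \
    --checkpoint ../trained_models/xrcnet.pth.tar \
    --root /data   # assumes /data/hpatches-sequences-release exists
python draw_graph.py   # after editing its paths as described in step 3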

We provide results of XRCNet alongside other baseline methods in the cache-top directory.

Evaluation on InLoc

In order to run the InLoc evaluation, you first need to clone the InLoc demo repo, and download and compile all the required dependencies. Then:

  1. Browse to inloc/.
  2. Run python eval_inloc_extract.py adjusting the checkpoint and experiment name. This will generate a series of matches files in the inloc/matches/ directory that then need to be fed to the InLoc evaluation Matlab code.
  3. Modify the inloc/eval_inloc_compute_poses.m file provided to indicate the path of the InLoc demo repo, and the name of the experiment (the particular directory name inside inloc/matches/), and run it using Matlab.
  4. Use the inloc/eval_inloc_generate_plot.m file to plot the results from the shortlist file generated in the previous stage: /your_path_to/InLoc_demo_old/experiment_name/shortlist_densePV.mat. Precomputed shortlist files are provided in inloc/shortlist.
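
Putting the steps together, a session could look roughly like the sketch below; the assumption that the checkpoint and experiment name are edited inside the scripts (rather than passed as flags), and the headless Matlab invocation, are guesses rather than documented behaviour:

cd inloc
# Edit eval_inloc_extract.py to set the checkpoint and experiment name, then:
python eval_inloc_extract.py
# Edit eval_inloc_compute_poses.m to point at your InLoc demo checkout and the
# experiment directory under inloc/matches/, then run the Matlab steps, e.g.:
matlab -nodisplay -nosplash -r "run('eval_inloc_compute_poses.m'); exit"
matlab -nodisplay -nosplash -r "run('eval_inloc_generate_plot.m'); exit"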

Evaluation on Aachen Day-Night

In order to run the Aachen Day-Night evaluation, you first need to clone the visual localization benchmark repo, and download and compile all the required dependencies (note that you'll need to compile COLMAP if you have not done so yet). Then:

  1. Browse to aachen_day_and_night/.
  2. Run python eval_aachen_extract.py adjusting the checkpoint and experiment name.
  3. Copy the eval_aachen_reconstruct.py file to visuallocalizationbenchmark/local_feature_evaluation and run it in the following way:
python eval_aachen_reconstruct.py \
	--dataset_path /path_to_aachen/aachen \
	--colmap_path /local/colmap/build/src/exe \
	--method_name experiment_name
  4. Upload the file /path_to_aachen/aachen/Aachen_eval_[experiment_name].txt to https://www.visuallocalization.net/ to get the results on this benchmark.
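
For completeness, steps 3 and 4 can be sketched as follows; every path here is a placeholder:

# Step 3: copy the reconstruction script into the benchmark repo:
cp aachen_day_and_night/eval_aachen_reconstruct.py \
   /path/to/visuallocalizationbenchmark/local_feature_evaluation/
# Step 4: once the reconstruction finishes, the results file
#   /path_to_aachen/aachen/Aachen_eval_[experiment_name].txt
# is uploaded manually at https://www.visuallocalization.net/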

Acknowledgement

Our code is based on the code provided by DualRCNet, NCNet, Sparse-NCNet, and ANC-Net.
