Code for the paper "Triangulation Residual Loss for Data-efficient 3D Pose Estimation". The TR loss enables self-supervision with global 3D geometric consistency by minimizing the smallest singular value of the triangulation matrix. Specifically, it minimizes the weighted sum of distances from the current 3D estimate to all view rays, so that the view rays converge to a stable 3D point.
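As a sketch of the construction (the notation below follows standard DLT triangulation and is an assumption, not copied from the paper): for a keypoint detected at $(u_i, v_i)$ in each of $N$ views with projection matrices $P_i$ and confidence weights $w_i$, the triangulation matrix $A$ (Eq. (13) in the paper) stacks two weighted DLT rows per view, and the TR loss is its smallest singular value:

$$
A = \begin{bmatrix} \vdots \\ w_i \, (u_i \mathbf{p}_i^{3\top} - \mathbf{p}_i^{1\top}) \\ w_i \, (v_i \mathbf{p}_i^{3\top} - \mathbf{p}_i^{2\top}) \\ \vdots \end{bmatrix}, \qquad \mathcal{L}_{\mathrm{TR}} = \sigma_{\min}(A),
$$

where $\mathbf{p}_i^{k\top}$ denotes the $k$-th row of $P_i$.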
An example video is available here.
- Install mmcv==1.7.0 and mmpose==0.29.0 following their official installation guides.
- Clone this project and install the requirements.
- Calm21 (MARS) dataset: the images can be downloaded from here, and the annotations from Google Drive or BaiduNetdisk with password: n0b4.
- DANNCE and THM datasets: the images and annotations used can be downloaded from Google Drive or BaiduNetdisk with password: n0b4.
- Human3.6M dataset: the annotations are formulated in the COCO annotation format (see the sketch after this list).
- Download the datasets to your local computer, then modify the `data_root` field in the config file to the downloaded path (see the config sketch after this list).
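For reference, here is a minimal sketch of a COCO-style keypoint annotation file; the field values, category name, and keypoint names are illustrative placeholders, not taken from the released annotations:

```python
# Hypothetical COCO-style annotation layout (all values are placeholders).
annotation = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 1920, "height": 1080},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Flattened [x, y, visibility] triplets, one per keypoint.
            "keypoints": [512.0, 384.0, 2, 530.0, 401.0, 2],
            "num_keypoints": 2,
        },
    ],
    "categories": [
        {"id": 1, "name": "subject", "keypoints": ["nose", "left_ear"]},
    ],
}
```

Since mmpose configs are plain Python files, pointing `data_root` at the download location looks roughly like this (the path is a placeholder):

```python
# In the dataset config file: point data_root at your local copy.
data_root = '/path/to/Calm21'
```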
The pretrained backbone and models can be downloaded from here.
The core code of the TR loss is in TRL/models/heads/triangulate_head.py, as follows:
u, s, v = torch.svd(A.view(-1, 4))  # A is the triangulation matrix defined in Eq. (13); torch.svd returns (U, S, V)
res_triang = s[-1]  # singular values come in descending order, so s[-1] is the smallest: the TR loss
Then add the TR loss to your total loss and perform gradient backpropagation.
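For context, here is a self-contained sketch of how such a matrix can be assembled and the loss backpropagated. The function name, the DLT row construction, and the confidence weighting are illustrative assumptions, not the repository's exact implementation:

```python
import torch

def tr_loss(proj_mats, kpts_2d, weights=None):
    """Illustrative TR loss for one keypoint observed from N views.

    proj_mats: (N, 3, 4) camera projection matrices.
    kpts_2d:   (N, 2) predicted 2D keypoints (u, v), one per view.
    weights:   optional (N,) per-view confidence weights.
    """
    rows = []
    for i in range(proj_mats.shape[0]):
        P = proj_mats[i]
        u, v = kpts_2d[i, 0], kpts_2d[i, 1]
        w = 1.0 if weights is None else weights[i]
        # Standard DLT rows: each view adds two constraints on the
        # homogeneous 3D point.
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = torch.stack(rows)        # (2N, 4) triangulation matrix
    s = torch.linalg.svdvals(A)  # singular values, descending order
    return s[-1]                 # smallest singular value = TR loss

# Usage sketch: scale the term and add it to the existing losses.
# loss = supervised_loss + lambda_tr * tr_loss(proj_mats, kpts_2d, weights)
# loss.backward()
```

The SVD is differentiable in PyTorch, so gradients flow from the smallest singular value back to the 2D predictions (and to the weights, if they are learned).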