
DenseVideoCaptioning

TensorFlow implementation of the paper "Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning" by Jingwen Wang et al., CVPR 2018.


Data Preparation

Please download the annotation data and C3D features from the ActivityNet Captions website. The ActivityNet C3D features with a stride of 64 frames (used in my paper) can be found here.

Please follow the script dataset/ActivityNet_Captions/preprocess/anchors/get_anchors.py to obtain clustered anchors and their pos/neg weights (for handling the class-imbalance problem). I have already put the generated files in dataset/ActivityNet_Captions/preprocess/anchors/.
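The anchors come from clustering ground-truth segment lengths. As a rough illustration of the idea (not the repo's get_anchors.py, whose exact procedure, K, and inputs may differ), a 1-D k-means over segment lengths looks like:

```python
import random

def cluster_anchors(lengths, k=5, iters=50, seed=0):
    """Cluster 1-D segment lengths into k anchor sizes with k-means."""
    random.seed(seed)
    centers = sorted(random.sample(lengths, k))  # random initial centers
    for _ in range(iters):
        # Assign each length to its nearest center.
        buckets = [[] for _ in range(k)]
        for x in lengths:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            buckets[i].append(x)
        # Recompute centers; keep the old center if a bucket is empty.
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)

# Illustrative ground-truth segment lengths (in frames) from annotations.
lengths = [12, 15, 18, 40, 44, 90, 96, 100, 200, 210]
anchors = cluster_anchors(lengths, k=3)
```

Each resulting center can serve as one anchor length for the proposal module.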

Please follow the script dataset/ActivityNet_Captions/preprocess/build_vocab.py to build the word dictionary and the encoded train/val/test sentence data.
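For illustration, a minimal vocabulary builder and sentence encoder in the same spirit (the special tokens and frequency threshold below are common conventions, not necessarily build_vocab.py's exact choices):

```python
from collections import Counter

def build_vocab(sentences, min_count=2):
    """Map words above a frequency threshold to integer ids.
    Ids 0-3 are reserved for special tokens (a common convention)."""
    counts = Counter(w for s in sentences for w in s.lower().split())
    vocab = {'<pad>': 0, '<bos>': 1, '<eos>': 2, '<unk>': 3}
    for w, c in counts.most_common():
        if c >= min_count:
            vocab[w] = len(vocab)
    return vocab

def encode(sentence, vocab):
    """Encode a sentence as <bos> + word ids + <eos>, mapping OOV to <unk>."""
    unk = vocab['<unk>']
    return ([vocab['<bos>']]
            + [vocab.get(w, unk) for w in sentence.lower().split()]
            + [vocab['<eos>']])

sents = ["a man plays guitar", "a woman plays piano"]
vocab = build_vocab(sents, min_count=1)
ids = encode("a man plays piano", vocab)
```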

Hyperparameters

The configuration (from my experiments) is given in opt.py, including model setup, training options, and testing options.

Training

Train dense-captioning model using the script train.py.

First pre-train the proposal module for around 5 epochs by setting train_proposal=True and train_caption=False. Then train the whole dense-captioning model by setting train_proposal=True and train_caption=True. To understand the proposal module, I refer you to the original SST paper and also to my TensorFlow implementation of SST.
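The two-stage schedule above can be sketched as follows; the flag names train_proposal and train_caption mirror opt.py, but the epoch bookkeeping here is purely illustrative:

```python
def training_stages(pretrain_epochs=5, total_epochs=20):
    """Return per-epoch flag settings for the two-stage schedule:
    proposal-only pre-training, then joint proposal + caption training."""
    schedule = []
    for epoch in range(total_epochs):
        schedule.append({
            'epoch': epoch,
            'train_proposal': True,  # proposal branch trains in both stages
            'train_caption': epoch >= pretrain_epochs,  # captions only after pre-training
        })
    return schedule

schedule = training_stages()
```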

Prediction

Follow the script test.py to make proposal predictions and to evaluate the predictions.
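Proposal predictions are typically scored against ground-truth segments by temporal IoU; a minimal sketch of that criterion (an assumption about the matching rule, not the repo's evaluation code):

```python
def temporal_iou(a, b):
    """IoU of two temporal segments, each given as (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# Overlap of 5 units over a union of 15 units.
iou = temporal_iou((0.0, 10.0), (5.0, 15.0))  # → 5/15 ≈ 0.333
```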

Evaluation

Please note that the official evaluation metric has been updated (Line 194). The paper reports the old metric; results from different methods remain comparable, since all CVPR 2018 papers report the old metric.

Results

The predicted results for val/test set can be found here.

Dependencies

tensorflow==1.0.1

python==2.7.5

Other versions may also work.

Update:

  1. I corrected some naming errors and simplified the proposal loss using a TensorFlow built-in function.
  2. I uploaded the C3D features with a stride of 64 frames (used in my paper). You can find them here.
  3. I uploaded val/test results both without and with joint ranking.
  4. I uploaded video_fps.json and updated test.py.
  5. Due to the large-file constraint, you may need to download data/paraphrase-en.gz here and put it in densevid_eval-master/coco-caption/pycocoevalcap/meteor/data/.
  6. I corrected a multi-RNN mistake caused by the get_rnn_cell() function (see model.py).
