- Install requirements
- Download the language model and vocabulary from LibriSpeech
- Download model checkpoints
```shell
pip install -r ./requirements.txt

# download the LM (3-gram.pruned.3e-7.arpa) and vocabulary (librispeech-vocab.txt) from https://www.openslr.org/11/
wget https://www.openslr.org/resources/11/3-gram.pruned.3e-7.arpa.gz
wget https://www.openslr.org/resources/11/librispeech-vocab.txt
# the LM archive is gzipped, so unpack it first
gunzip 3-gram.pruned.3e-7.arpa.gz

# then add the paths to both files to your config; see configs/for_testing.json and the sketch below
# download checkpoints from Google Drive via gdown
pip install gdown
gdown https://drive.google.com/drive/folders/1OW__4Bd8HzeFildGCimCCctqksv0GeK3 --folder
```
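The snippet below is a minimal sketch of how the downloaded paths might be wired into `configs/for_testing.json`. The `text_encoder` section and the `lm_path`/`vocab_path` argument names are assumptions for illustration; check the actual config for the exact keys your text encoder expects:

```json
{
  "text_encoder": {
    "type": "CTCCharTextEncoder",
    "args": {
      "lm_path": "3-gram.pruned.3e-7.arpa",
      "vocab_path": "librispeech-vocab.txt"
    }
  }
}
```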
To run training, use the config `real_training.json`; for testing, use `for_testing.json`.
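For example, assuming `train.py` accepts the config path via `-c` (matching the `test.py` invocation further below) and the configs live under `configs/`, a training run might look like this:

```shell
python train.py -c configs/real_training.json
```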
I've implemented LM-augmented beam search and added tests both for it and for regular beam search. The LM test is commented out, since it relies on hardcoded paths.
You might be a little intimidated by the number of folders and classes. Try to follow these steps to gradually understand the workflow.
- ✅ Test `hw_asr/tests/test_dataset.py` and `hw_asr/tests/test_config.py` and make sure everything works for you
- ✅ Implement missing functions to fix tests in `hw_asr/tests/test_text_encoder.py`
- ✅ Implement missing functions to fix tests in `hw_asr/tests/test_dataloader.py`
- ✅ Implement functions in `hw_asr/metric/utils.py` (write it yourself; a sketch follows this list)
- ✅ Implement the missing function to run `train.py` with a baseline model
- Write your own model and try to overfit it on a single batch
- Implement CTC beam search and add metrics to calculate WER and CER over hypotheses obtained from beam search (a sketch also follows this list)
- ~~Pain and suffering~~ Implement your own models and train them. You've mastered this template when you can tune your experimental setup just by tuning the `.json` configs and running `train.py`
- Don't forget to write a report about your work
- Get hired by Google the next day
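For reference, here is a minimal sketch of the WER/CER helpers for `hw_asr/metric/utils.py`, assuming the signatures `calc_cer(target_text, predicted_text)` / `calc_wer(target_text, predicted_text)` and the `editdistance` package; both the names and the dependency are assumptions, not the template's mandated API:

```python
import editdistance


def calc_cer(target_text: str, predicted_text: str) -> float:
    # Character error rate: character-level edit distance,
    # normalized by the target length.
    if len(target_text) == 0:
        return 1.0
    return editdistance.eval(target_text, predicted_text) / len(target_text)


def calc_wer(target_text: str, predicted_text: str) -> float:
    # Word error rate: same idea over whitespace-split words.
    target_words = target_text.split()
    if len(target_words) == 0:
        return 1.0
    return editdistance.eval(target_words, predicted_text.split()) / len(target_words)
```

And a compact sketch of CTC beam search over per-frame character probabilities. This is a generic prefix beam search that merges hypotheses collapsing to the same text, not this repo's exact implementation; the `"^"` blank token and the `(prefix, last_char)` state layout are illustration choices:

```python
from collections import defaultdict

EMPTY_TOK = "^"  # assumed CTC blank token


def ctc_beam_search(probs, alphabet, beam_size=10):
    # probs: [T, len(alphabet)] per-frame character probabilities.
    # State (decoded_prefix, last_char) -> probability; tracking last_char
    # distinguishes "a^a" (decodes to "aa") from "aa" (decodes to "a").
    dp = {("", EMPTY_TOK): 1.0}
    for frame in probs:
        new_dp = defaultdict(float)
        for (prefix, last_char), prefix_prob in dp.items():
            for char, char_prob in zip(alphabet, frame):
                if char == last_char or char == EMPTY_TOK:
                    new_prefix = prefix  # repeat or blank: text unchanged
                else:
                    new_prefix = prefix + char
                new_dp[(new_prefix, char)] += prefix_prob * char_prob
        # keep only the beam_size most probable states
        dp = dict(sorted(new_dp.items(), key=lambda kv: kv[1], reverse=True)[:beam_size])
    # merge states that decode to the same text
    hypos = defaultdict(float)
    for (prefix, _), prob in dp.items():
        hypos[prefix] += prob
    return sorted(hypos.items(), key=lambda kv: kv[1], reverse=True)
```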
- Make sure your project runs on a new machine after completing the installation guide, or by running it in a Docker container.
- Search the project for `# TODO: your code here` and implement the missing functionality
- Make sure all tests run without errors
  ```shell
  python -m unittest discover hw_asr/tests
  ```
- Make sure `test.py` works fine and behaves as expected. You should create the file `default_test_config.json`, and your installation guide should download your model checkpoint and config to `default_test_model/checkpoint.pth` and `default_test_model/config.json`.
  ```shell
  python test.py \
     -c default_test_config.json \
     -r default_test_model/checkpoint.pth \
     -t test_data \
     -o test_result.json
  ```
- Use `train.py` for training
This repository is based on a heavily modified fork of the [pytorch-template](https://github.com/victoresque/pytorch-template) repository.
You can use this project with Docker. Quick start:

```shell
docker build -t my_hw_asr_image .
docker run \
   --gpus '"device=0"' \
   -it --rm \
   -v /path/to/local/storage/dir:/repos/asr_project_template/data/datasets \
   -e WANDB_API_KEY=<your_wandb_api_key> \
   my_hw_asr_image python -m unittest
```
Notes:

- `-v /out/of/container/path:/inside/container/path` -- bind-mount a path so you don't have to download datasets at the start of every docker run
- `-e WANDB_API_KEY=<your_wandb_api_key>` -- set the environment variable for wandb (if you want to use it). You can find your API key here: https://wandb.ai/authorize
These barebones could use more tests. We highly encourage students to create pull requests that add more tests or new functionality. Current demands:
- Tests for beam search
- A README section describing the folder structure
- A notebook showing how to work with `ConfigParser` and `config_parser.init_obj(...)` (a small sketch follows below)
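Until such a notebook exists, here is a minimal sketch of the `init_obj` pattern as it works in pytorch-template-style projects: a config section holds a `type` (class name) and an `args` dict, and `init_obj` instantiates that class from a given module. The simplified `ConfigParser` below and its config keys are assumptions modeled on the upstream template, not necessarily this fork's exact code:

```python
import torch.optim as optim_module


class ConfigParser:
    """Simplified stand-in for the template's ConfigParser."""

    def __init__(self, config: dict):
        self.config = config

    def __getitem__(self, name):
        return self.config[name]

    def init_obj(self, name, module, *args, **kwargs):
        # Read {"type": "ClassName", "args": {...}} from the config section
        # and build module.ClassName(*args, **config_args, **kwargs).
        obj_cfg = self[name]
        obj_args = dict(obj_cfg.get("args", {}))
        obj_args.update(kwargs)
        return getattr(module, obj_cfg["type"])(*args, **obj_args)


# usage sketch: build an optimizer from a config section
# config_parser = ConfigParser({"optimizer": {"type": "Adam", "args": {"lr": 3e-4}}})
# optimizer = config_parser.init_obj("optimizer", optim_module, model.parameters())
```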