B-SOiD v1.2: Updates with adaptive high-pass for signal occlusion, improved handling of larger datasets, and frame-shifted prediction for millisecond neurobehavioral alignment.
B-SOiD v1.2 update improves user-friendliness by incorporating the following:
1. Data-driven determination of likelihood cutoff for possible signal occlusion.
2. Adaptive t-SNE parameters (learning rate, exaggeration, and perplexity) for improved handling of larger datasets.
3. Frame-shifted machine learning prediction to identify behavioral transitions at temporal resolutions up to your camera's frame rate.
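To illustrate item 1, here is a minimal sketch of one data-driven way to choose a likelihood cutoff from the pose-estimation confidence distribution; the function name and the valley-finding heuristic are assumptions for illustration, not the exact rule in the v1.2 MATLAB source:

```python
import numpy as np

def adaptive_likelihood_cutoff(likelihoods, bins=50):
    """Pick a cutoff at the sparsest histogram bin between the
    low-confidence (occluded) and high-confidence tracking modes.
    Illustrative heuristic only, not B-SOiD's exact rule."""
    counts, edges = np.histogram(likelihoods, bins=bins, range=(0.0, 1.0))
    lo_peak = np.argmax(counts[: bins // 2])               # occlusion mode
    hi_peak = np.argmax(counts[bins // 2 :]) + bins // 2   # confident mode
    # Cutoff at the emptiest bin separating the two modes.
    valley = np.argmin(counts[lo_peak : hi_peak + 1]) + lo_peak
    return edges[valley]
```

Frames with likelihood below the returned cutoff would then be treated as occluded and interpolated rather than trusted.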
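For item 2, the sketch below shows what "adaptive" hyperparameters could look like using scikit-learn's t-SNE; the scaling rules here (e.g., the n/12 learning-rate heuristic) are common published conventions, not necessarily the formulas v1.2 uses:

```python
import numpy as np
from sklearn.manifold import TSNE

def adaptive_tsne(features):
    """Embed pose features with t-SNE hyperparameters scaled to
    dataset size. Illustrative heuristics only; v1.2's exact rules
    live in the MATLAB source."""
    n = len(features)
    return TSNE(
        perplexity=max(30, np.sqrt(n)),        # more neighbors for bigger data
        early_exaggeration=max(12, n / 1000),  # hypothetical scaling
        learning_rate=max(200, n / 12),        # common n/12 heuristic
    ).fit_transform(features)
```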
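Item 3 can be sketched as follows: if features are binned over several camera frames, predictions come out below the camera's frame rate; shifting the bin boundaries one frame at a time and interleaving the resulting predictions recovers a label for every frame. The function and its interface are illustrative assumptions, not the v1.2 implementation:

```python
import numpy as np

def frameshift_predict(frames, window, predict):
    """Interleave predictions from window-shifted copies of the feature
    stream so behavioral labels land on every camera frame.

    frames  : (n_frames, n_features) array of pose features
    window  : camera frames per downsampled feature bin
    predict : callable mapping binned features -> one label per bin
    """
    n = len(frames)
    labels = np.full(n, -1)  # trailing frames without a full bin stay -1
    for shift in range(window):
        usable = (n - shift) // window * window
        # Average each window-frame chunk of the shifted stream.
        binned = (frames[shift : shift + usable]
                  .reshape(-1, window, frames.shape[1])
                  .mean(axis=1))
        # Each bin's label is assigned to the frame where the bin starts.
        labels[shift : shift + usable : window] = predict(binned)
    return labels
```

Because each shift contributes labels at a different phase, a behavioral transition can be localized to within one camera frame rather than one feature bin.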
We have updated our README.md to reflect the newest changes. Once you have set the parameters and have a folder containing the frames extracted from one of your video datasets, simply run bsoid_master_v1p2.m and follow the prompts to select your files/folders. We want to make this as user-friendly as possible, so please do not hesitate to open an issue!
The Python 3 version (Google Colab, free for all) will be released as a separate branch before the end of February 2020. We envision B-SOiD running on the cloud one day!