This repository contains a model and a sample application that utilizes object detection techniques to identify animals in wildlife footage.
Videos sourced from the MammalCam YouTube channel.
The "Detecting Animals in Wildlife" project automates the identification and tracking of animals in wildlife footage. Built on YOLOv8 object detection, the application aims to make wildlife monitoring more efficient by sparing users from having to review camera footage manually.
Python Version: 3.10.12
- aiofiles==23.2.1
- asynctest==0.13.0
- fastapi==0.111.0
- httpx==0.27.0
- matplotlib==3.8.3
- moviepy==1.0.3
- numpy==1.26.4
- opencv_python==4.9.0.80
- opencv_python_headless==4.9.0.80
- pandas==2.2.2
- pydantic==2.7.1
- pytest==8.1.1
- starlette==0.37.2
- ultralytics==8.1.29
- openpyxl
The dataset was created by me. You can access the dataset here
The dataset consists of images captured under various conditions (day/night, different seasons, angles), which may affect detection accuracy. If you notice any inaccuracies in the predictions, please report them; I will update the model based on feedback.
- Coyote
- Deer
- Turkey
- Raccoon
- Total images: 5932
- Real images: 2924
- Augmented images: 3008
- Deer 1318
- Turkey 1270
- Raccoon 954
- Coyote 900
- Flip: Horizontal, Vertical
- 90° Rotate: Clockwise, Counter-Clockwise, Upside Down
- Crop: 0% Minimum Zoom, 20% Maximum Zoom
- Rotation: Between -15° and +15°
- Shear: ±10° Horizontal, ±10° Vertical
- Grayscale: Apply to 15% of images
- Hue: Between -15° and +15°
- Saturation: Between -25% and +25%
- Brightness: Between -15% and +15%
- Exposure: Between -10% and +10%
- Blur: Up to 1.5px
- Noise: Up to 0.1% of pixels
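For reference, a roughly equivalent pipeline can be sketched with the albumentations library. This is only an illustration of the settings listed above, not the tool actually used to build the dataset, and the parameter mapping is approximate (the 0-20% crop/zoom step is omitted):

```python
import albumentations as A

# Illustrative sketch only: approximates the augmentation settings above.
augmentations = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomRotate90(p=0.5),                        # 90° rotations
        A.Rotate(limit=15, p=0.5),                      # rotation between -15° and +15°
        A.Affine(shear={"x": (-10, 10), "y": (-10, 10)}, p=0.5),
        A.ToGray(p=0.15),                               # grayscale for ~15% of images
        A.HueSaturationValue(p=0.5),                    # approximate hue/saturation jitter
        A.RandomBrightnessContrast(brightness_limit=0.15, p=0.5),
        A.Blur(blur_limit=3, p=0.3),                    # light blur
        A.GaussNoise(p=0.3),                            # light pixel noise
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
```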
- The trained model can be found in the 'model' directory. This model uses YOLOv8 small for detection.
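For a quick standalone check outside the web application, the weights can be loaded directly with the ultralytics API. The weight path matches the repository layout; the sample image path and confidence threshold are placeholders:

```python
from ultralytics import YOLO

# Load the trained YOLOv8-small weights from the 'model' directory.
model = YOLO("model/YOLOv8_small.pt")

# Run detection on a single image (placeholder path).
results = model.predict(source="sample_frame.jpg", conf=0.25)

# Print detected class names and confidences as a quick sanity check.
for result in results:
    for box in result.boxes:
        print(result.names[int(box.cls)], float(box.conf))
```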
To start the web application on your local machine, you'll need to run the following command in your terminal. Ensure you are in the project's root directory where main.py is located:
uvicorn main:app
Open a web browser and navigate to http://127.0.0.1:8000/. This will load the web interface where you can upload videos for processing.
URL: http://127.0.0.1:8000/upload_and_track/
Parameters:
- `file` (UploadFile): The video file in MP4 or AVI format.
- `every_n_frame` (int): Controls how often frames are analyzed by the model, letting you trade processing speed against detection accuracy. A lower value means more frames are analyzed, giving higher accuracy but slower processing. The default is 3, which yields good results under most conditions.
Upload a video file in MP4 or AVI format.
Choose how often frames should be processed (every_n_frame). A lower value analyzes more frames for higher accuracy; a higher value skips more frames for faster processing.
Once processing is complete, you will receive three download links: the annotated video, detailed results (predictions from each frame), and a summary showing the most frequently detected animals.
Detailed Results: Predictions from each frame
Summary: The most frequently detected category for each tracked animal
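The endpoint can also be called without the web interface, for example with httpx (already listed in the requirements). The sketch below assumes the multipart field names match the documented parameter names:

```python
import httpx

# Call /upload_and_track/ directly; the exact request layout is an assumption.
url = "http://127.0.0.1:8000/upload_and_track/"

with open("test-video.mp4", "rb") as f:
    response = httpx.post(
        url,
        files={"file": ("test-video.mp4", f, "video/mp4")},
        data={"every_n_frame": 3},
        timeout=None,  # video processing can take a while
    )

response.raise_for_status()
print(response.json())  # expected to contain the three download links
```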
URL: http://127.0.0.1:8000/upload_and_track_multiple/
Parameters:
- `files` (List[UploadFile]): Video files in MP4 or AVI format.
- `preference` (str): Options include:
  - `keep_summary`: Generate a summary of detections.
  - `generate_annotated_video`: Create an annotated video with bounding boxes.
  - `keep_detailed_results`: Generate detailed results of detections.
- `every_n_frame` (int): Controls how often frames are analyzed by the model (same as in the single-video endpoint). The default is 3.
Response: A JSON response containing the session ID, paths to the processed files (organized based on detection results), and a summary in CSV and Excel format. Errors are also returned in the response if any occur during processing.
Summary CSV: This file contains information about each processed video, including the video name, whether animals were detected (boolean), the categories of detected animals, and the count of each animal category.
Upload one or more video files in MP4 or AVI format.
Choose the preferences you want.
Set how often frames should be processed (same as in the first endpoint).
After completion, you will receive a zip file containing two folders: one for videos where animals were detected and another for videos without them. Additionally, the root directory of the zip file will include a detailed summary.csv file.
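A batch request can be sketched in the same way; the field names and response handling below are assumptions based on the description above:

```python
import httpx

# Call /upload_and_track_multiple/ with several videos.
url = "http://127.0.0.1:8000/upload_and_track_multiple/"
video_paths = ["test-video.mp4", "test-video-2.mp4"]

# One ("files", ...) entry per video; handles stay open until the request is sent.
files = [("files", (path, open(path, "rb"), "video/mp4")) for path in video_paths]
data = {"preference": "keep_summary", "every_n_frame": 3}

response = httpx.post(url, files=files, data=data, timeout=None)
response.raise_for_status()
print(response.json())  # session ID, paths to processed files, and any errors
```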
```
├─ app
│   ├── file_management.py
│   ├── initialization.py
│   ├── reporting.py
│   ├── result_handling.py
│   ├── video_processing.py
│   └── zipping_json.py
├─ static
│   ├── css
│   └── js
├─ templates
│   └── index.html
├─ tests
│   ├── corrupt-video.mp4
│   ├── test-video-2.mp4
│   ├── test-video.mp4
│   ├── test_file_management.py
│   ├── test_initialization.py
│   ├── test_main.py
│   ├── test_reporting.py
│   ├── test_result_handling.py
│   ├── test_video_processing.py
│   └── test_zipping_json.py
├─ main.py
├─ config.py
└─ model
    └── YOLOv8_small.pt
```