Feat/add dependabot #123

Merged (2 commits) on Dec 19, 2024
10 changes: 10 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,10 @@
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    target-branch: "main"
    # Labels on pull requests for version updates only
    labels:
      - "pip dependencies"
54 changes: 49 additions & 5 deletions README.md
@@ -8,7 +8,18 @@ This is a command-line tool that simplifies the conversion process of YOLO model
> [!WARNING]
> Please note that for the moment, we support conversion of YOLOv9 weights only from [Ultralytics](https://docs.ultralytics.com/models/yolov9/#performance-on-ms-coco-dataset).

## Running
## 📜 Table of contents

- [💻 How to run](#run)
- [⚙️ Arguments](#arguments)
- [🧰 Supported Models](#supported-models)
- [📝 Credits](#credits)
- [📄 License](#license)
- [🤝 Contributing](#contributing)

<a name="run"></a>

## 💻 How to run

You can export a model stored either in the cloud (e.g. S3) or locally, and you can install the toolkit through pip or run it with Docker. Both options are described in the sections below.
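
For orientation, the Docker route boils down to making the weights visible to the container and invoking the CLI service on them. The sketch below is pieced together from the `docker compose` command and folder names that appear elsewhere in this README; treat the service name and paths as assumptions rather than exact instructions:

```bash
# Rough sketch of the Docker route (service name and folders taken from the
# compose invocation shown further down; details may differ in practice).
cp yolov6nr4.pt shared_with_container/models/   # make the weights visible to the container
docker compose run tools_cli shared_with_container/models/yolov6nr4.pt
# The exported files end up in the shared output folder mentioned below.
```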

@@ -54,21 +65,54 @@ docker compose run tools_cli shared_with_container/models/yolov6nr4.pt

The output files will be placed in the `shared-component/output` folder.

### Arguments
<a name="arguments"></a>

## ⚙️ Arguments

- `model: str` = Path to the model.
- `imgsz: str` = Image input shape in the format `width height` or `width`. Default value `"416 416"`.
- `version: Optional[str]` =
- `version: Optional[str]` = Version of the YOLO model. Default value `None`. If not specified, the version will be detected automatically. Supported versions: `yolov5`, `yolov6r1`, `yolov6r3`, `yolov6r4`, `yolov7`, `yolov8`, `yolov9`, `yolov10`, `yolov11`, `goldyolo`.
- `use_rvc2: bool` = Whether to export for RVC2 or RVC3 devices. Default value `True`.
- `class_names: Optional[str]` = Optional comma-separated list of class names, e.g. `"person, dog, cat"`.
- `output_remote_url: Optional[str]` = Optional remote URL for the output `.onnx` model.
- `config_path: Optional[str]` = Optional path to a configuration file.
- `put_file_plugin: Optional[str]` = Optional name of the plugin to use for uploading (putting) the output file.
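
To make the argument list concrete, here is a hypothetical invocation. The flag names simply mirror the parameters above, and the exact CLI syntax is an assumption rather than something taken from this PR:

```bash
# Hypothetical example: export a local YOLOv6 v4.0-release model at 640x640.
# Flag names mirror the arguments listed above; the real CLI syntax may differ.
docker compose run tools_cli shared_with_container/models/yolov6nr4.pt \
  --imgsz "640 640" \
  --version yolov6r4 \
  --class_names "person, dog, cat"
```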

## Credits
<a name="supported-models"></a>

## 🧰 Supported models

Currently, the following models are supported:

| Model version | Supported variants |
| ------------- | ------------------ |
| `yolov5` | YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x, YOLOv5n6, YOLOv5s6, YOLOv5m6, YOLOv5l6 |
| `yolov6r1` | **v1.0 release:** YOLOv6n, YOLOv6t, YOLOv6s |
| `yolov6r3` | **v2.0 release:** YOLOv6n, YOLOv6t, YOLOv6s, YOLOv6m, YOLOv6l <br/> **v2.1 release:** YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l <br/> **v3.0 release:** YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l |
| `yolov6r4` | **v4.0 release:** YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l |
| `yolov7` | YOLOv7-tiny, YOLOv7, YOLOv7x |
| `yolov8` | **Detection:** YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x, YOLOv3-tinyu, YOLOv5nu, YOLOv5n6u, YOLOv5s6u, YOLOv5su, YOLOv5m6u, YOLOv5mu, YOLOv5l6u, YOLOv5lu <br/> **Instance Segmentation, Pose, Oriented Detection, Classification:** YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x |
| `yolov9` | YOLOv9t, YOLOv9s, YOLOv9m, YOLOv9c |
| `yolov10` | YOLOv10n, YOLOv10s, YOLOv10m, YOLOv10b, YOLOv10l, YOLOv10x |
| `yolov11` | **Detection, Instance Segmentation, Pose, Oriented Detection, Classification:** YOLO11n, YOLO11s, YOLO11m, YOLO11l, YOLO11x |
| `goldyolo` | Gold-YOLO-N, Gold-YOLO-S, Gold-YOLO-M, Gold-YOLO-L |

If you don't find your model in the list, it may still be convertible, but this is not guaranteed.

<a name="credits"></a>

## 📝 Credits

This application uses source code from the following repositories: [YOLOv5](https://github.com/ultralytics/yolov5), [YOLOv6](https://github.com/meituan/YOLOv6), [GoldYOLO](https://github.com/huawei-noah/Efficient-Computing), [YOLOv7](https://github.com/WongKinYiu/yolov7), and [Ultralytics](https://github.com/ultralytics/ultralytics) (see each of them for more information).

## License
<a name="license"></a>

## 📄 License

This application is available under the **AGPL-3.0** license (see the [LICENSE](https://github.com/luxonis/tools/blob/master/LICENSE) file for details).

<a name="contributing"></a>

## 🤝 Contributing

We welcome contributions! Whether it's reporting bugs, adding features, or improving documentation, your help is much appreciated. Please create a pull request ([here's how to do it](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)) and assign anyone from the Luxonis team to review the suggested changes. Cheers!