DOCS-2552: Add images user journey #2991

Merged

15 commits merged on Jun 21, 2024
Binary file added assets/use-cases/ml-vision-diagram.png
115 changes: 76 additions & 39 deletions docs/use-cases/deploy-ml.md
@@ -1,77 +1,114 @@
---
title: "Train and deploy image classification models"
linkTitle: "Train and deploy classification models"
weight: 50
title: "How to train and deploy ML/computer vision models"
linkTitle: "Train computer vision models"
weight: 20
type: "docs"
tags: ["data management", "data", "services"]
no_list: true
description: "Use Viam's machine learning capabilities to train image classification models and deploy these models to your machines."
images: ["/platform/ml.svg"]
imageAlt: "Machine Learning"
tags: ["vision", "data", "services"]
images: ["/services/ml/collect.svg"]
description: "Collect images and do interesting things with computer vision, ML, and webhooks."
---

You can use Viam's built-in tools to train a machine learning (ML) model on your images and then deploy computer vision on your machines.

![Diagram of the camera component to data management service to ML model service to vision service pipeline.](/use-cases/ml-vision-diagram.png)

For example, you can train a model to recognize your dog and detect whether they are sitting or standing.
Then, you can configure your machine to [capture images](/use-cases/image-data/) only when your dog is in the camera frame so you don't capture hundreds of photos of an empty room.
You can then get even more image data of your dog and improve your ML model by training it on the larger dataset.

You can do all of this using the [Viam app](https://app.viam.com) user interface.
You will not need to write any code.

{{< alert title="In this page" color="tip" >}}

1. [Create a dataset and label data](#create-a-dataset-and-label-data)
2. [Train and test a machine learning (ML) model](#train-and-test-a-machine-learning-ml-model)

{{< /alert >}}

## Create a dataset and label data

{{< table >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/collect.svg" class="fill alignright" style="max-width: 300px" declaredimensions=true alt="Collect data">}}
**1. Collect**
{{<imgproc src="/services/icons/data-capture.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Collect data">}}
**1. Collect images**

Start by collecting images from your cameras and syncing them to the [Viam app](https://app.viam.com).
See [Collect image data and sync it to the cloud](/use-cases/image-data/#collect-image-data-and-sync-it-to-the-cloud) for instructions.

<br>
{{% alert title="Tip" color="tip" %}}
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Please move this under the steps for creating and labeling data. You might need to adjust the text a little but I think since it's more of an aside anyway it can go at the bottom of this segment

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Even though they'd need to do this during this step, before collecting the data?

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

If they end up getting to the end of this and then having a few images that don't have the label I think that's ok. So yes

To keep your data organized, configure a tag in your data management service config panel.
This tag will be applied to all data synced from that machine.
If you apply the same tag to all data gathered from all machines that you want to use in your dataset, you can filter by that tag in the Viam app **DATA** tab to make the next steps easier.

This is not required, since you can use other filters like time or machine ID in the **DATA** tab to isolate your data.
{{% /alert %}}

JessamyT marked this conversation as resolved.
Show resolved Hide resolved
{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/collect.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Label data">}}
**2. Label your images [_(i)_](/services/data/dataset/)**

Once you have enough images of the objects you'd like to classify, use the interface on the **DATA** tab to label your data.
If you want to train an image classifier, use image tags.
For an object detector, use bounding boxes.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/label.svg" class="fill alignleft" style="max-width: 300px" declaredimensions=true alt="Label data">}}
**2. Create a dataset and label**
{{<imgproc src="/services/ml/label.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Label data">}}
**3. Create a dataset [_(i)_](/services/data/dataset/)**

Use the interface on the **DATA** tab (or the [`viam data dataset add` command](/cli/#data)) to add all images you want to train the model on to a dataset.

{{< /tablestep >}}
{{< /table >}}
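
If you want to see what the tag tip above corresponds to in your machine's raw JSON config, a minimal sketch is shown below. You don't need to edit this by hand; the builder UI writes it for you. The service name and the `dog-images` tag are placeholder values, and only the tag-related attributes are shown.

```json
{
  "services": [
    {
      "name": "data_manager-1",
      "type": "data_manager",
      "attributes": {
        "sync_interval_mins": 5,
        "tags": ["dog-images"]
      }
    }
  ]
}
```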

## Train and test a machine learning (ML) model

{{< table >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/train.svg" class="fill alignright" style="max-width: 300px" declaredimensions=true alt="Train models">}}
**3. Train an ML model**
{{<imgproc src="/services/ml/train.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**1. Train an ML model [_(i)_](/services/ml/train-model/)**

In the Viam app, navigate to your list of [**DATASETS**](https://app.viam.com/services/data/datasets) and select the one you want to train on.
Click **Train model** and follow the prompts.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/registry/upload-module.svg" class="fill alignleft" style="max-width: 200px" declaredimensions=true alt="Train models">}}
**4. Deploy your ML model**
{{<imgproc src="/registry/upload-module.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**2. Deploy your ML model**

On the **Configure** page, add the built-in [ML model service](/services/ml/deploy/); the service deploys and runs the model.
Once you've added it to your machine, choose your newly trained model from the dropdown menu in the ML model service's configuration card (a JSON sketch of this step and the next appears after this table).

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/configure.svg" class="fill alignright" style="max-width: 300px" declaredimensions=true alt="Configure a service">}}
**5. Configure an <code>mlmodel</code> vision service**
{{<imgproc src="/services/icons/vision.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Configure a service">}}
**3. Configure an <code>mlmodel</code> vision service [_(i)_](/services/vision/)**

The vision service takes the ML model and applies it to the stream of images from your camera.

Add the `vision / ML model` service to your machine.
Then, from the **Select model** dropdown, select the name of the ML model service you configured in the last step (for example, `mlmodel-1`).

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/deploy.svg" class="fill alignleft" style="max-width: 300px" declaredimensions=true alt="Deploy your model">}}
**6. Test your classifier**
{{<imgproc src="/services/ml/deploy.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Deploy your model">}}
**4. Test your classifier [_(i)_](/services/vision/mlmodel/#test-your-detector-or-classifier)**

Test your ML model classifier with [existing images in the Viam app](/services/vision/mlmodel/#existing-images-in-the-cloud), [live camera footage](/services/vision/mlmodel/#live-camera-footage), or [existing images on a computer](/services/vision/mlmodel/#existing-images-on-your-machine).

{{< /tablestep >}}
{{< /table >}}
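
As referenced above, the following is a rough sketch of how the ML model service and the `mlmodel` vision service pair might look in your machine's JSON config. The service names, the `tflite_cpu` model, and the file paths are assumptions for illustration; when you deploy a model trained in the Viam app, the app fills in the model details for you.

```json
{
  "services": [
    {
      "name": "mlmodel-1",
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "model_path": "/path/to/model.tflite",
        "label_path": "/path/to/labels.txt",
        "num_threads": 1
      }
    },
    {
      "name": "vision-1",
      "type": "vision",
      "model": "mlmodel",
      "attributes": {
        "mlmodel_name": "mlmodel-1"
      }
    }
  ]
}
```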

## Next steps

See the following tutorials for examples of how to use the tools described on this page:

{{< cards >}}
{{% card link="/tutorials/services/data-mlmodel-tutorial/" %}}
{{% card link="/tutorials/projects/helmet/" %}}
{{% card link="/tutorials/projects/verification-system/" %}}
{{% card link="/tutorials/projects/pet-treat-dispenser/" customTitle="Smart Pet Feeder" %}}
{{% card link="/registry/examples/tflite-module/" %}}
{{< /cards >}}
115 changes: 93 additions & 22 deletions docs/use-cases/image-data.md
@@ -1,70 +1,141 @@
---
title: "Capture and sync image data"
title: "How to capture, filter, and sync image data"
linkTitle: "Capture and sync image data"
weight: 20
type: "docs"
tags: ["data management", "data", "services"]
images: ["/services/ml/collect.svg"]
description: "Capture image data from a camera on your machine and sync that data to the cloud."
description: "Capture images from a camera on your machine and selectively sync images to the cloud with filtering."
---

You can use Viam's built-in data management service to capture images from a camera on your machine and sync the images to the cloud.

If you want to capture only certain images, such as those containing a person, you can use a "filtering camera" to selectively capture images based on a computer vision model.

With your images synced to the cloud, you can view images from all your machines in one Viam app interface.
From there, you can use your image data to do things like [train ML models](/use-cases/deploy-ml/).

{{< alert title="In this page" color="tip" >}}

1. [Collect image data and sync it to the cloud](#collect-image-data-and-sync-it-to-the-cloud)
2. [Use filtering to collect and sync only certain images](#use-filtering-to-collect-and-sync-only-certain-images)

{{< /alert >}}

## Prerequisites

{{% expand "A running machine connected to the Viam app. Click to see instructions." %}}

{{% snippet "setup.md" %}}

{{% /expand%}}

## Collect image data and sync it to the cloud

{{< table >}}
{{< tablestep >}}
{{<imgproc src="/icons/components/camera.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="configure a camera component">}}
**1. Configure a camera [_(i)_](/components/camera/)**

Configure a camera component, such as a [webcam](/components/camera/webcam/), on your machine.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/icons/data-management.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Collect data">}}
**2. Enable the data management service [_(i)_](/services/data/)**

In your camera component configuration panel, find the **Data capture** section.
Click **Add method** and follow the prompt to **Create a data management service**.
You can leave the default data manager settings.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/icons/data-capture.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Collect data">}}
**3. Capture data**

With the data management service configured on your machine, you can continue configuring how the camera component itself captures data.
In the **Data capture** panel of your camera's config, select **Read image** from the method selector, and set your desired capture frequency.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/collect.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**4. View data in the Viam app**

Once you have synced images, you can [view those images in the Viam app](/services/data/view/) from the **DATA** tab in the top navigation bar.

You can also [export your data from the Viam app](/services/data/export/) to a deployed machine, or to any computer.

{{< /tablestep >}}
{{< /table >}}
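
The steps above are all done in the builder UI, but if you're curious, a minimal sketch of the resulting JSON config is shown below: a webcam with a `ReadImage` capture method and a data management service that syncs to the cloud. The component name, `video_path`, capture frequency, and sync interval are placeholder values; adjust them for your hardware.

```json
{
  "components": [
    {
      "name": "my-webcam",
      "type": "camera",
      "model": "webcam",
      "attributes": {
        "video_path": "video0"
      },
      "service_configs": [
        {
          "type": "data_manager",
          "attributes": {
            "capture_methods": [
              {
                "method": "ReadImage",
                "capture_frequency_hz": 0.333,
                "additional_params": {
                  "mime_type": "image/jpeg"
                }
              }
            ]
          }
        }
      ]
    }
  ],
  "services": [
    {
      "name": "data_manager-1",
      "type": "data_manager",
      "attributes": {
        "sync_interval_mins": 5
      }
    }
  ]
}
```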

## Use filtering to collect and sync only certain images

You can use filtering to selectively capture images using a machine learning (ML) model, for example to only capture images with people in them.

Contributors have written several filtering {{< glossary_tooltip term_id="module" text="modules" >}} that you can use to filter image capture.
The following steps use the [`filtered_camera`](https://github.com/erh/filtered_camera) module:

{{< table >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/configure.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**5. Filter data by common search criteria**
{{<imgproc src="/services/ml/train.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**1. Add an ML model to your machine [_(i)_](/services/ml/deploy/)**

Configure an ML model service on your machine that is compatible with the ML model you want to use, for example [TFLite CPU](/services/ml/deploy/tflite_cpu/).

From the **Model** dropdown, select the preexisting model you want to use, or click **Add new model** to upload your own.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/icons/vision.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**2. Add a vision service to use with the ML model [_(i)_](/services/vision/)**

You can think of the vision service as the bridge between the ML model service and the output from your camera.

Configure the `vision / ML model` service on your machine.
From the **Select model** dropdown, select the name of your ML model service (for example, `mlmodel-1`).

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/icons/modular-registry.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**3. Configure the filtered camera**

The `filtered-camera` {{< glossary_tooltip term_id="modular-resource" text="modular component" >}} pulls the stream of images from the camera you configured earlier, and applies the vision service to it.

Configure a `filtered-camera` component on your machine, following the [attribute guide in the README](https://github.com/erh/filtered_camera?tab=readme-ov-file#configure-your-filtered-camera) to specify the names of your webcam and vision service, and add classification and object detection filters.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/icons/data-capture.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**4. Configure data capture and sync on the filtered camera**

Configure data capture and sync just as you would for a webcam.
The filtered camera will only capture image data that passes the filters you configured in the previous step.

Turn off data capture on your webcam if you haven't already, so that you don't capture duplicate or unfiltered images.

{{< /tablestep >}}
{{< tablestep >}}
{{<imgproc src="/services/ml/configure.svg" class="fill alignleft" style="max-width: 150px" declaredimensions=true alt="Train models">}}
**5. (Optional) Trigger sync with custom logic**

By default, the captured data syncs at the regular interval you specified in the data capture config.
If you need to trigger sync in a different way, see [Trigger cloud sync conditionally](/services/data/trigger-sync/) for a documented example of syncing data only at certain times of day.

{{< /tablestep >}}
{{< /table >}}
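
Putting the filtering steps together, the sketch below shows roughly how the filtered camera might look in your machine's JSON config. The model triplet and the attribute names (`camera`, `vision`, `window_seconds`, `classifications`) are taken from the module README at the time of writing and may change, so treat this as an assumption and follow the README linked above for the current schema. Names such as `my-webcam` and `vision-1` are placeholders.

```json
{
  "components": [
    {
      "name": "filtered-cam",
      "type": "camera",
      "model": "erh:camera:filtered-camera",
      "attributes": {
        "camera": "my-webcam",
        "vision": "vision-1",
        "window_seconds": 3,
        "classifications": {
          "person": 0.8
        }
      }
    }
  ]
}
```

With a config like this, you would then enable data capture and sync on `filtered-cam` rather than on the webcam, so only images that pass the `person` filter at 0.8 confidence or higher are captured.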

## Next steps

Now that you have collected image data, you can [train new computer vision models](/use-cases/deploy-ml/) or [programmatically access your data](/services/data/export/):

{{< cards >}}
{{% card link="/use-cases/deploy-ml/" %}}
{{% card link="/services/data/query/" %}}
{{% card link="/services/ml/" %}}
{{% card link="/tutorials/" %}}
{{< /cards >}}

To see image data filtering in action, check out these tutorials:

{{< cards >}}
{{% card link="/tutorials/projects/filtered-camera/" %}}
{{% card link="/tutorials/configure/pet-photographer.md" %}}
{{< /cards >}}