docs
kozlov721 committed Jan 25, 2025
1 parent fca9f2c commit 01de24b
Showing 5 changed files with 84 additions and 5 deletions.
43 changes: 43 additions & 0 deletions luxonis_train/attached_modules/losses/README.md
@@ -12,6 +12,7 @@ List of all the available loss functions.
- [`AdaptiveDetectionLoss`](#adaptivedetectionloss)
- [`EfficientKeypointBBoxLoss`](#efficientkeypointbboxloss)
- [`FOMOLocalizationLoss`](#fomolocalizationloss)
- [Embedding Losses](#embedding-losses)

## `CrossEntropyLoss`

@@ -121,3 +122,45 @@ Adapted from [here](https://arxiv.org/abs/2108.07610).
| Key | Type | Default value | Description |
| --------------- | ------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `object_weight` | `float` | `1000` | Weight for the objects in the loss calculation. Training with a larger `object_weight` in the loss parameters may result in more false positives (FP), but it will improve accuracy. |
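The trade-off behind `object_weight` can be sketched with a weighted BCE-style term: up-weighting the positive (object) term makes missed objects far more costly than spurious detections, so the model predicts objects more eagerly. The function below is purely illustrative, not the actual `FOMOLocalizationLoss` formulation:

```python
import math

def weighted_bce(pred, target, object_weight=1000.0):
    # BCE with the positive (object) term up-weighted.
    # Illustrative sketch only, not the exact loss used in luxonis-train.
    eps = 1e-7
    pos = -object_weight * target * math.log(pred + eps)
    neg = -(1.0 - target) * math.log(1.0 - pred + eps)
    return pos + neg

# A missed object (target 1, low confidence) is penalized far more
# heavily than a false positive of comparable confidence (target 0).
print(weighted_bce(0.1, 1.0) > weighted_bce(0.9, 0.0))  # → True
```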

## Embedding Losses

We support the following losses taken from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/):

- [AngularLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#angularloss)
- [CircleLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss)
- [ContrastiveLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss)
- [DynamicSoftMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss)
- [FastAPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#fastaploss)
- [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss)
- [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss)
- [IntraPairVarianceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#intrapairvarianceloss)
- [GeneralizedLiftedStructureLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#generalizedliftedstructureloss)
- [LiftedStructureLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#liftedstructureloss)
- [MarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#marginloss)
- [MultiSimilarityLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#multisimilarityloss)
- [NPairsLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#npairsloss)
- [NCALoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ncaloss)
- [NTXentLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss)
- [PNPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss)
- [RankedListLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss)
- [SignalToNoiseRatioContrastiveLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#signaltonoiseratiocontrastiveloss)
- [SupConLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss)
- [ThresholdConsistentMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#thresholdconsistentmarginloss)
- [TripletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tripletmarginloss)
- [TupletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tupletmarginloss)

**Parameters:**

For loss specific parameters, see the documentation pages linked above. In addition to the loss specific parameters, the following parameters are available:

| Key | Type | Default value | Description |
| -------------------- | ------ | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `miner` | `str` | `None` | Name of the miner to use with the loss. If `None`, no miner is used. All miners from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/) are supported. |
| `miner_params` | `dict` | `None` | Parameters for the miner. |
| `distance` | `str` | `None` | Name of the distance metric to use with the loss. If `None`, no distance metric is used. All distance metrics from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/distances/) are supported. |
| `distance_params` | `dict` | `None` | Parameters for the distance metric. |
| `reducer` | `str` | `None` | Name of the reducer to use with the loss. If `None`, no reducer is used. All reducers from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/) are supported. |
| `reducer_params` | `dict` | `None` | Parameters for the reducer. |
| `regularizer` | `str` | `None` | Name of the regularizer to use with the loss. If `None`, no regularizer is used. All regularizers from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/regularizers/) are supported. |
| `regularizer_params` | `dict` | `None` | Parameters for the regularizer. |
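To make the composition concrete: the `distance` defines how embeddings are compared, the loss scores tuples of embeddings under that distance, and the `miner` selects which informative tuples are scored. A minimal pure-Python sketch of a triplet-margin loss (helper names here are illustrative, not the pytorch-metric-learning internals):

```python
import math

def euclidean(a, b):
    # Distance between two embedding vectors (the "distance" role).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the positive/negative distance gap, as in TripletMarginLoss.
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# A tight anchor-positive pair with a distant negative incurs zero loss.
a, p, n = [0.0, 0.0], [0.1, 0.0], [1.0, 1.0]
print(triplet_margin_loss(a, p, n))  # → 0.0
```

Swapping the positive and negative roles yields a large loss, which is exactly the gradient signal that pulls same-identity embeddings together.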
12 changes: 12 additions & 0 deletions luxonis_train/attached_modules/metrics/README.md
@@ -8,6 +8,8 @@ List of all the available metrics.
- [ObjectKeypointSimilarity](#objectkeypointsimilarity)
- [MeanAveragePrecision](#meanaverageprecision)
- [MeanAveragePrecisionKeypoints](#meanaverageprecisionkeypoints)
- [ClosestIsPositiveAccuracy](#closestispositiveaccuracy)
- [MedianDistances](#mediandistances)

## Torchmetrics

@@ -63,3 +65,13 @@ Evaluation leverages COCO evaluation framework (COCOeval) to assess mAP performa
| `area_factor` | `float` | `0.53` | Factor by which to multiply the bounding box area |
| `max_dets` | `int` | `20` | Maximum number of detections per image |
| `box_format` | `Literal["xyxy", "xywh", "cxcywh"]` | `"xyxy"` | Format of the bounding boxes |

## ClosestIsPositiveAccuracy

Computes the fraction of query samples whose closest sample in the embedding space is a positive (same-identity) sample.
Needs to be connected to the `GhostFaceNetHead` node.
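What this metric measures can be sketched as a nearest-neighbour check over the embedding batch (the function below is an illustrative stand-in, not the actual implementation):

```python
def closest_is_positive_accuracy(embeddings, labels):
    # Fraction of samples whose nearest other sample shares their label.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hits = 0
    for i, query in enumerate(embeddings):
        others = [(dist2(query, e), labels[j])
                  for j, e in enumerate(embeddings) if j != i]
        _, nearest_label = min(others)
        hits += nearest_label == labels[i]
    return hits / len(embeddings)

# Two well-separated identity clusters: every nearest neighbour is positive.
embs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
print(closest_is_positive_accuracy(embs, [0, 0, 1, 1]))  # → 1.0
```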

## MedianDistances

Computes the median embedding distance between the query and its positive samples.
Needs to be connected to the `GhostFaceNetHead` node.
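A rough sketch of the underlying computation, assuming Euclidean distances (function name is illustrative, not the actual implementation):

```python
import statistics

def median_positive_distance(query, positives):
    # Median Euclidean distance from the query embedding to its positives;
    # an illustrative sketch of the quantity MedianDistances reports.
    dists = [sum((x - y) ** 2 for x, y in zip(query, p)) ** 0.5
             for p in positives]
    return statistics.median(dists)

print(median_positive_distance([0.0, 0.0],
                               [[3.0, 4.0], [0.0, 1.0], [6.0, 8.0]]))  # → 5.0
```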
10 changes: 10 additions & 0 deletions luxonis_train/attached_modules/visualizers/README.md
@@ -7,6 +7,8 @@ Visualizers are used to render the output of a node. They are used in the `visua
- [`BBoxVisualizer`](#bboxvisualizer)
- [`ClassificationVisualizer`](#classificationvisualizer)
- [`KeypointVisualizer`](#keypointvisualizer)
- [`SegmentationVisualizer`](#segmentationvisualizer)
- [`EmbeddingsVisualizer`](#embeddingsvisualizer)
- [`MultiVisualizer`](#multivisualizer)

## `BBoxVisualizer`
@@ -72,6 +74,14 @@ Visualizer for bounding boxes.

![class_viz_example](https://github.com/luxonis/luxonis-train/blob/main/media/example_viz/class.png)

## `EmbeddingsVisualizer`

**Parameters:**

| Key | Type | Default value | Description |
| ------------------- | ------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `z_score_threshold` | `float` | `3.0` | Threshold for z-score filtering. Embeddings with a z-score higher than this value are considered outliers and are not drawn. |
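Z-score filtering keeps only values within a set number of standard deviations of the mean. A minimal sketch over scalar values (the visualizer applies the same idea to projected embeddings; this is not the actual implementation):

```python
import statistics

def filter_outliers(values, z_score_threshold=3.0):
    # Keep values whose z-score is within the threshold;
    # illustrative sketch of the visualizer's outlier filtering.
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / std <= z_score_threshold]

# Twenty inliers around 1.0 plus one extreme value: only the 100.0 is dropped.
data = [1.0, 1.2, 0.8, 1.1, 0.9] * 4 + [100.0]
print(len(filter_outliers(data)))  # → 20
```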

## `MultiVisualizer`

Special type of meta-visualizer that combines several visualizers into one. The combined visualizers share a canvas.
18 changes: 18 additions & 0 deletions luxonis_train/nodes/README.md
@@ -18,6 +18,7 @@ arbitrarily as long as the two nodes are compatible with each other. We've group
- [`DDRNet`](#ddrnet)
- [`RecSubNet`](#recsubnet)
- [`EfficientViT`](#efficientvit)
- [`GhostFaceNetV2`](#ghostfacenetv2)
- [Necks](#necks)
- [`RepPANNeck`](#reppanneck)
- [Heads](#heads)
@@ -29,6 +30,7 @@ arbitrarily as long as the two nodes are compatible with each other. We've group
- [`DDRNetSegmentationHead`](#ddrnetsegmentationhead)
- [`DiscSubNetHead`](#discsubnet)
- [`FOMOHead`](#fomohead)
- [`GhostFaceNetHead`](#ghostfacenethead)
Every node takes these parameters:

| Key | Type | Default value | Description |
@@ -186,6 +188,14 @@ Adapted from [here](https://arxiv.org/abs/2205.14756)
| `expand_ratio` | `int` | `4` | Factor by which channels expand in the local module |
| `dim` | `int` | `None` | Dimension size for each attention head |

### `GhostFaceNetV2`

**Parameters:**

| Key | Type | Default value | Description |
| --------- | --------------- | ------------- | --------------------------- |
| `variant` | `Literal["V2"]` | `"V2"` | The variant of the network. |

## Necks

### `RepPANNeck`
@@ -290,3 +300,11 @@ Adapted from [here](https://arxiv.org/abs/2108.07610).
| `num_conv_layers` | `int` | `3` | Number of convolutional layers to use in the model. |
| `conv_channels` | `int` | `16` | Number of output channels for each convolutional layer. |
| `use_nms` | `bool` | `False` | If True, enable NMS. This can reduce FP, but it will also reduce TP for close neighbors. |

### `GhostFaceNetHead`

**Parameters:**

| Key | Type | Default value | Description |
| ---------------- | ----- | ------------- | ---------------------------------------- |
| `embedding_size` | `int` | `512` | The size of the output embedding vector. |
6 changes: 1 addition & 5 deletions luxonis_train/nodes/backbones/ghostfacenet/ghostfacenet.py
@@ -15,11 +15,7 @@ class GhostFaceNetV2(BaseNode[Tensor, Tensor]):
in_channels: int
in_width: int

def __init__(
self,
variant: Literal["V2"] = "V2",
**kwargs,
):
def __init__(self, variant: Literal["V2"] = "V2", **kwargs):
"""GhostFaceNetsV2 backbone.
GhostFaceNetsV2 is a convolutional neural network architecture focused on face recognition, but it is
