From 9aa42d0e260272a333b09172bdd6790e418a39d8 Mon Sep 17 00:00:00 2001
From: Martin Kozlovsky
Date: Wed, 8 May 2024 02:02:57 +0200
Subject: [PATCH] updated readme

---
 configs/README.md | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/configs/README.md b/configs/README.md
index 27e2fb6e..c1f4889b 100644
--- a/configs/README.md
+++ b/configs/README.md
@@ -142,23 +142,24 @@ To store and load the data we use LuxonisDataset and LuxonisLoader. For specific
 
 Here you can change everything related to actual training of the model.
 
-| Key                     | Type                                    | Default value | Description                                                                                                                                        |
-| ----------------------- | --------------------------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
-| batch_size              | int                                     | 32            | batch size used for training                                                                                                                       |
-| accumulate_grad_batches | int                                     | 1             | number of batches for gradient accumulation                                                                                                        |
-| use_weighted_sampler    | bool                                    | False         | bool if use WeightedRandomSampler for training, only works with classification tasks                                                              |
-| epochs                  | int                                     | 100           | number of training epochs                                                                                                                          |
-| num_workers             | int                                     | 2             | number of workers for data loading                                                                                                                 |
-| train_metrics_interval  | int                                     | -1            | frequency of computing metrics on train data, -1 if don't perform                                                                                  |
-| validation_interval     | int                                     | 1             | frequency of computing metrics on validation data                                                                                                  |
-| num_log_images          | int                                     | 4             | maximum number of images to visualize and log                                                                                                      |
-| skip_last_batch         | bool                                    | True          | whether to skip last batch while training                                                                                                          |
-| accelerator             | Literal\["auto", "cpu", "gpu"\]         | "auto"        | What accelerator to use for training.                                                                                                              |
-| devices                 | int \| list\[int\] \| str               | "auto"        | Either specify how many devices to use (int), list specific devices, or use "auto" for automatic configuration based on the selected accelerator  |
-| strategy                | Literal\["auto", "ddp"\]                | "auto"        | What strategy to use for training.                                                                                                                 |
-| num_sanity_val_steps    | int                                     | 2             | Number of sanity validation steps performed before training.                                                                                       |
-| profiler                | Literal\["simple", "advanced"\] \| None | None          | PL profiler for GPU/CPU/RAM utilization analysis                                                                                                   |
-| verbose                 | bool                                    | True          | Print all intermediate results to console.                                                                                                         |
+| Key                     | Type                                           | Default value | Description                                                                                                                                        |
+| ----------------------- | ---------------------------------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
+| batch_size              | int                                            | 32            | batch size used for training                                                                                                                       |
+| accumulate_grad_batches | int                                            | 1             | number of batches for gradient accumulation                                                                                                        |
+| use_weighted_sampler    | bool                                           | False         | whether to use WeightedRandomSampler for training; only works with classification tasks                                                           |
+| epochs                  | int                                            | 100           | number of training epochs                                                                                                                          |
+| num_workers             | int                                            | 2             | number of workers for data loading                                                                                                                 |
+| train_metrics_interval  | int                                            | -1            | frequency of computing metrics on train data; -1 to disable                                                                                        |
+| validation_interval     | int                                            | 1             | frequency of computing metrics on validation data                                                                                                  |
+| num_log_images          | int                                            | 4             | maximum number of images to visualize and log                                                                                                      |
+| skip_last_batch         | bool                                           | True          | whether to skip the last batch while training                                                                                                      |
+| accelerator             | Literal\["auto", "cpu", "gpu"\]                | "auto"        | What accelerator to use for training.                                                                                                              |
+| devices                 | int \| list\[int\] \| str                      | "auto"        | Either specify how many devices to use (int), list specific devices, or use "auto" for automatic configuration based on the selected accelerator  |
+| matmul_precision        | Literal\["medium", "high", "highest"\] \| None | None          | Sets the internal precision of float32 matrix multiplications.                                                                                     |
+| strategy                | Literal\["auto", "ddp"\]                       | "auto"        | What strategy to use for training.                                                                                                                 |
+| num_sanity_val_steps    | int                                            | 2             | Number of sanity validation steps performed before training.                                                                                       |
+| profiler                | Literal\["simple", "advanced"\] \| None        | None          | PL profiler for GPU/CPU/RAM utilization analysis                                                                                                   |
+| verbose                 | bool                                           | True          | Print all intermediate results to console.                                                                                                         |
 
 ### Preprocessing
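
For orientation, a minimal trainer section exercising the keys documented in the updated table might look like the sketch below. This is illustrative only and not part of the patch; the YAML layout and the top-level `trainer:` key are assumptions based on how this README presents configuration files, with values taken from the table's defaults where possible.

```yaml
# Minimal sketch of a trainer config using the keys from the table above.
# The surrounding layout (YAML format, top-level `trainer:` section) is assumed,
# not taken from the patch itself.
trainer:
  batch_size: 32
  accumulate_grad_batches: 1
  epochs: 100
  num_workers: 2
  train_metrics_interval: -1      # -1 disables metric computation on train data
  validation_interval: 1
  num_log_images: 4
  accelerator: "gpu"
  devices: [0]                    # int, list of ints, or "auto"
  matmul_precision: "high"        # newly documented key: "medium" | "high" | "highest"
  strategy: "auto"
  num_sanity_val_steps: 2
  verbose: true
```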