Update some docs (#852)
* update some docs

* update the lr adjusting rule
hellock authored Jun 22, 2019
1 parent d95727b commit ae856e1
Showing 2 changed files with 15 additions and 5 deletions.
13 changes: 8 additions & 5 deletions GETTING_STARTED.md
@@ -27,7 +27,7 @@ python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [-
Optional arguments:
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values are: `proposal_fast`, `proposal`, `bbox`, `segm`, `keypoints`.
- `--show`: If specified, detection results will be plotted on the images and shown in a new window. Only applicable for single GPU testing.
- `--show`: If specified, detection results will be plotted on the images and shown in a new window. (Only applicable for single GPU testing.)

Examples:

@@ -90,9 +90,8 @@ which uses `MMDistributedDataParallel` and `MMDataParallel` respectively.
All outputs (log files and checkpoints) will be saved to the working directory,
which is specified by `work_dir` in the config file.

**\*Important\***: The default learning rate in config files is for 8 GPUs.
If you use fewer or more than 8 GPUs, you need to set the learning rate proportional
to the number of GPUs, e.g., 0.01 for 4 GPUs and 0.04 for 16 GPUs.
**\*Important\***: The default learning rate in config files is for 8 GPUs and 2 img/gpu (batch size = 8*2 = 16).
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you need to set the learning rate proportional to the batch size if you use a different number of GPUs or images per GPU, e.g., lr=0.01 for 4 GPUs * 2 img/gpu and lr=0.08 for 16 GPUs * 4 img/gpu.
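
In other words, lr = base_lr * (num_gpus * img/gpu) / 16. A minimal sketch of that arithmetic, assuming the common default of lr=0.02 for the 8 GPUs * 2 img/gpu baseline (the function and variable names here are illustrative, not part of mmdetection):

```python
# Linear Scaling Rule sketch (illustrative names, not mmdetection API).
BASE_LR = 0.02        # assumed default lr for the 8 GPUs * 2 img/gpu baseline
BASE_BATCH_SIZE = 16  # 8 GPUs * 2 img/gpu

def scaled_lr(num_gpus, imgs_per_gpu):
    """Scale the learning rate linearly with the total batch size."""
    return BASE_LR * (num_gpus * imgs_per_gpu) / BASE_BATCH_SIZE

print(scaled_lr(4, 2))   # 0.01, matching the 4 GPUs * 2 img/gpu example
print(scaled_lr(16, 4))  # 0.08, matching the 16 GPUs * 4 img/gpu example
```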

### Train with a single GPU

@@ -110,10 +109,14 @@ If you want to specify the working directory in the command, you can add an argu

Optional arguments are:

- `--validate` (recommended): Perform evaluation every k (default=1) epochs during training.
- `--validate` (**strongly recommended**): Perform evaluation every k epochs during training; the default value is 1 and can be modified like [this](configs/mask_rcnn_r50_fpn_1x.py#L174) (see the sketch after this list).
- `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
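
The evaluation interval mentioned for `--validate` is controlled from the config file. A minimal sketch of the relevant line, assuming the `evaluation` field used by configs such as `configs/mask_rcnn_r50_fpn_1x.py` (surrounding fields omitted):

```python
# Sketch of the evaluation setting in a config file (other fields omitted).
evaluation = dict(interval=1)  # run validation every `interval` epochs when --validate is passed
```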

Difference between `resume_from` and `load_from`:
`resume_from` loads both the model weights and the optimizer state, and the epoch number is also inherited from the specified checkpoint. It is usually used for resuming a training process that was interrupted accidentally.
`load_from` only loads the model weights, and training starts from epoch 0. It is usually used for fine-tuning.
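
These can also be set directly in the config file; a minimal sketch of the relevant fields, with hypothetical checkpoint paths:

```python
# Sketch of the related config fields (paths are hypothetical examples).
resume_from = 'work_dirs/mask_rcnn_r50_fpn_1x/latest.pth'  # restore weights, optimizer state, and epoch
load_from = None  # or a checkpoint path to load weights only, starting again from epoch 0
```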

### Train with multiple machines

If you run mmdetection on a cluster managed with [slurm](https://slurm.schedmd.com/), you can just use the script `slurm_train.sh`.
7 changes: 7 additions & 0 deletions TECHNICAL_DETAILS.md
@@ -36,6 +36,9 @@ FPN structure in [Path Aggregation Network for Instance Segmentation](https://ar
1. Create a new file `mmdet/models/necks/pafpn.py`.

```python
import torch.nn as nn  # assumed import; the snippet references nn.Module

from ..registry import NECKS


# Registering the neck lets it be referenced by its class name in config files.
@NECKS.register_module
class PAFPN(nn.Module):

    def __init__(self,
                 ...
```
@@ -97,3 +100,7 @@ Model parameters are only synchronized once at the beginning.
After a forward and backward pass, gradients will be allreduced among all GPUs,
and the optimizer will update model parameters.
Since the gradients are allreduced, the model parameters stay the same across all processes after each iteration.
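
The core of that step can be illustrated with a bare `torch.distributed` call. This is a simplified sketch of gradient averaging, not mmdetection's actual `MMDistributedDataParallel` implementation:

```python
import torch.distributed as dist

def allreduce_grads(model):
    """Average gradients across all processes after the backward pass (simplified sketch)."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum this gradient over every process, then divide to get the mean.
            dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
            param.grad.data.div_(world_size)
```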

## Other information

For more information, please refer to our [technical report](https://arxiv.org/abs/1906.07155).
