File: mmsegmentation-master/configs/twins/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k.py

_base_ = ['./twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[96, 192, 384, 768],
num_heads=[3, 6, 12, 24],
depths=[2, 2, 18, 2]),
neck=dict(in_channels=[96, 192, 384, 768]),
)
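These Twins configs rely entirely on MMCV-style `_base_` inheritance: the child file restates only the keys that differ from its base. As a minimal sketch of inspecting the merged result, assuming the pre-2.0 `mmcv` package these configs target and the repo root as working directory (an illustration, not part of the repo):

```python
from mmcv import Config

# mmcv resolves _base_ recursively, then merges the child's backbone/neck
# keys over the inherited twins_svt-s settings.
cfg = Config.fromfile(
    'configs/twins/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k.py')
print(cfg.model.backbone.embed_dims)  # [96, 192, 384, 768], from this file
print(cfg.model.backbone.type)        # 'SVT', inherited via the base chain
```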

File: mmsegmentation-master/configs/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k.py

_base_ = ['./twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[96, 192, 384, 768],
num_heads=[3, 6, 12, 24],
depths=[2, 2, 18, 2]),
decode_head=dict(in_channels=[96, 192, 384, 768]),
    auxiliary_head=dict(in_channels=384))  # stage-3 feature width, embed_dims[2]

File: mmsegmentation-master/configs/twins/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k.py

_base_ = ['./twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_large_20220308-fb5936f3.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[128, 256, 512, 1024],
num_heads=[4, 8, 16, 32],
depths=[2, 2, 18, 2],
drop_path_rate=0.3),
neck=dict(in_channels=[128, 256, 512, 1024]),
)

File: mmsegmentation-master/configs/twins/twins_svt-l_uperhead_8x2_512x512_160k_ade20k.py

_base_ = ['./twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_large_20220308-fb5936f3.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[128, 256, 512, 1024],
num_heads=[4, 8, 16, 32],
depths=[2, 2, 18, 2],
drop_path_rate=0.3),
decode_head=dict(in_channels=[128, 256, 512, 1024]),
auxiliary_head=dict(in_channels=512))

File: mmsegmentation-master/configs/twins/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py

_base_ = [
'../_base_/models/twins_pcpvt-s_fpn.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_small_20220308-7e1c3695.pth' # noqa
model = dict(
backbone=dict(
type='SVT',
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[64, 128, 256, 512],
num_heads=[2, 4, 8, 16],
mlp_ratios=[4, 4, 4, 4],
depths=[2, 2, 10, 4],
        windiow_sizes=[7, 7, 7, 7],  # sic: spelling matches the SVT backbone's keyword argument
norm_after_stage=True),
neck=dict(in_channels=[64, 128, 256, 512], out_channels=256, num_outs=4),
decode_head=dict(num_classes=150),
)
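# _delete_=True tells MMCV's config merger to discard the optimizer inherited
# from the base schedule file and use this AdamW dict as-is, rather than
# merging keys into the inherited optimizer.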
optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, weight_decay=0.0001)

File: mmsegmentation-master/configs/twins/twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py

_base_ = [
'../_base_/models/twins_pcpvt-s_upernet.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_small_20220308-7e1c3695.pth' # noqa
model = dict(
backbone=dict(
type='SVT',
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[64, 128, 256, 512],
num_heads=[2, 4, 8, 16],
mlp_ratios=[4, 4, 4, 4],
depths=[2, 2, 10, 4],
        windiow_sizes=[7, 7, 7, 7],  # sic: spelling matches the SVT backbone's keyword argument
norm_after_stage=True),
decode_head=dict(in_channels=[64, 128, 256, 512]),
auxiliary_head=dict(in_channels=256))
optimizer = dict(
_delete_=True,
type='AdamW',
lr=0.00006,
betas=(0.9, 0.999),
weight_decay=0.01,
paramwise_cfg=dict(custom_keys={
'pos_block': dict(decay_mult=0.),
'norm': dict(decay_mult=0.)
}))
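# The custom_keys above zero out weight decay for Twins' position-encoding
# blocks ('pos_block') and all normalization layers, a common practice when
# fine-tuning transformer backbones with AdamW.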
lr_config = dict(
_delete_=True,
policy='poly',
warmup='linear',
warmup_iters=1500,
warmup_ratio=1e-6,
power=1.0,
min_lr=0.0,
by_epoch=False)
data = dict(samples_per_gpu=2, workers_per_gpu=2)
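For intuition, a small self-contained sketch of the schedule the `lr_config` above describes, following MMCV's linear-warmup and poly-decay formulas (an illustration of the semantics, not the hook implementation itself):

```python
def lr_at(i, base_lr=6e-5, max_iters=160000, warmup_iters=1500,
          warmup_ratio=1e-6, power=1.0, min_lr=0.0):
    """Learning rate at iteration i under linear warmup + poly decay."""
    if i < warmup_iters:
        # linear ramp from base_lr * warmup_ratio up to base_lr
        k = (1 - i / warmup_iters) * (1 - warmup_ratio)
        return base_lr * (1 - k)
    return (base_lr - min_lr) * (1 - i / max_iters) ** power + min_lr

print(lr_at(0))       # 6e-11 = base_lr * warmup_ratio
print(lr_at(1499))    # just under 6e-05 at the end of warmup
print(lr_at(160000))  # 0.0, fully decayed to min_lr
```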

File: mmsegmentation-master/configs/unet/README.md

# UNet
[U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
## Introduction
<!-- [ALGORITHM] -->
<a href="http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/backbones/unet.py#L225">Code Snippet</a>
## Abstract
<!-- [ABSTRACT] -->
There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at [this http URL](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/).
<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/24582831/142902977-20fe689d-a147-4d92-9690-dbfde8b68dbe.png" width="70%"/>
</div>
## Citation
```bibtex
@inproceedings{ronneberger2015u,
title={U-net: Convolutional networks for biomedical image segmentation},
author={Ronneberger, Olaf and Fischer, Philipp and Brox, Thomas},
booktitle={International Conference on Medical image computing and computer-assisted intervention},
pages={234--241},
year={2015},
organization={Springer}
}
```
## Results and models
### Cityscapes
| Method | Backbone | Loss | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ---------- | ----------- | ------------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UNet + FCN | UNet-S5-D16 | Cross Entropy | 512x1024 | 160000 | 17.91 | 3.05 | 69.10 | 71.05 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes_20211210_145204-6860854e.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes_20211210_145204.log.json) |
### DRIVE
| Method | Backbone | Loss | Image Size | Crop Size | Stride | Lr schd | Mem (GB) | Inf time (fps) | mDice | Dice | config | download |
| ---------------- | ----------- | -------------------- | ---------- | --------- | -----: | ------- | -------- | -------------: | ----: | ----: | ---------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UNet + FCN | UNet-S5-D16 | Cross Entropy | 584x565 | 64x64 | 42x42 | 40000 | 0.680 | - | 88.38 | 78.67 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_64x64_40k_drive.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_64x64_40k_drive/fcn_unet_s5-d16_64x64_40k_drive_20201223_191051-5daf6d3b.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/unet_s5-d16_64x64_40k_drive/unet_s5-d16_64x64_40k_drive-20201223_191051.log.json) |
| UNet + FCN | UNet-S5-D16 | Cross Entropy + Dice | 584x565 | 64x64 | 42x42 | 40000 | 0.582 | - | 88.71 | 79.32 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201820-785de5c2.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201820.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy | 584x565 | 64x64 | 42x42 | 40000 | 0.599 | - | 88.35 | 78.62 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_64x64_40k_drive.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_64x64_40k_drive/pspnet_unet_s5-d16_64x64_40k_drive_20201227_181818-aac73387.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_64x64_40k_drive/pspnet_unet_s5-d16_64x64_40k_drive-20201227_181818.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy + Dice | 584x565 | 64x64 | 42x42 | 40000 | 0.585 | - | 88.76 | 79.42 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201821-22b3e3ba.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201821.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy | 584x565 | 64x64 | 42x42 | 40000 | 0.596 | - | 88.38 | 78.69 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_64x64_40k_drive.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_64x64_40k_drive/deeplabv3_unet_s5-d16_64x64_40k_drive_20201226_094047-0671ff20.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_64x64_40k_drive/deeplabv3_unet_s5-d16_64x64_40k_drive-20201226_094047.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy + Dice | 584x565 | 64x64 | 42x42 | 40000 | 0.582 | - | 88.84 | 79.56 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201825-6bf0efd7.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201825.log.json) |
### STARE
| Method | Backbone | Loss | Image Size | Crop Size | Stride | Lr schd | Mem (GB) | Inf time (fps) | mDice | Dice | config | download |
| ---------------- | ----------- | -------------------- | ---------- | --------- | -----: | ------- | -------- | -------------: | ----: | ----: | ------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| UNet + FCN | UNet-S5-D16 | Cross Entropy | 605x700 | 128x128 | 85x85 | 40000 | 0.968 | - | 89.78 | 81.02 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_128x128_40k_stare.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_128x128_40k_stare/fcn_unet_s5-d16_128x128_40k_stare_20201223_191051-7d77e78b.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/unet_s5-d16_128x128_40k_stare/unet_s5-d16_128x128_40k_stare-20201223_191051.log.json) |
| UNet + FCN | UNet-S5-D16 | Cross Entropy + Dice | 605x700 | 128x128 | 85x85 | 40000 | 0.986 | - | 90.65 | 82.70 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201821-f75705a9.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201821.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy | 605x700 | 128x128 | 85x85 | 40000 | 0.982 | - | 89.89 | 81.22 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_128x128_40k_stare/pspnet_unet_s5-d16_128x128_40k_stare_20201227_181818-3c2923c4.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_128x128_40k_stare/pspnet_unet_s5-d16_128x128_40k_stare-20201227_181818.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy + Dice | 605x700 | 128x128 | 85x85 | 40000 | 1.028 | - | 90.72 | 82.84 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201823-f1063ef7.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201823.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy | 605x700 | 128x128 | 85x85 | 40000 | 0.999 | - | 89.73 | 80.93 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_stare.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_128x128_40k_stare/deeplabv3_unet_s5-d16_128x128_40k_stare_20201226_094047-93dcb93c.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_128x128_40k_stare/deeplabv3_unet_s5-d16_128x128_40k_stare-20201226_094047.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy + Dice | 605x700 | 128x128 | 85x85 | 40000 | 1.010 | - | 90.65 | 82.71 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201825-21db614c.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201825.log.json) |
### CHASE_DB1
| Method | Backbone | Loss | Image Size | Crop Size | Stride | Lr schd | Mem (GB) | Inf time (fps) | mDice | Dice | config | download |
| ---------------- | ----------- | -------------------- | ---------- | --------- | -----: | ------- | -------- | -------------: | ----: | ----: | ---------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UNet + FCN | UNet-S5-D16 | Cross Entropy | 960x999 | 128x128 | 85x85 | 40000 | 0.968 | - | 89.46 | 80.24 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_128x128_40k_chase_db1.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_128x128_40k_chase_db1/fcn_unet_s5-d16_128x128_40k_chase_db1_20201223_191051-11543527.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/unet_s5-d16_128x128_40k_chase_db1/unet_s5-d16_128x128_40k_chase_db1-20201223_191051.log.json) |
| UNet + FCN | UNet-S5-D16 | Cross Entropy + Dice | 960x999 | 128x128 | 85x85 | 40000 | 0.986 | - | 89.52 | 80.40 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201821-1c4eb7cf.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201821.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy | 960x999 | 128x128 | 85x85 | 40000 | 0.982 | - | 89.52 | 80.36 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1/pspnet_unet_s5-d16_128x128_40k_chase_db1_20201227_181818-68d4e609.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1/pspnet_unet_s5-d16_128x128_40k_chase_db1-20201227_181818.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy + Dice | 960x999 | 128x128 | 85x85 | 40000 | 1.028 | - | 89.45 | 80.28 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201823-c0802c4d.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201823.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy | 960x999 | 128x128 | 85x85 | 40000 | 0.999 | - | 89.57 | 80.47 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1/deeplabv3_unet_s5-d16_128x128_40k_chase_db1_20201226_094047-4c5aefa3.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1/deeplabv3_unet_s5-d16_128x128_40k_chase_db1-20201226_094047.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy + Dice | 960x999 | 128x128 | 85x85 | 40000 | 1.010 | - | 89.49 | 80.37 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201825-4ef29df5.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201825.log.json) |
### HRF
| Method | Backbone | Loss | Image Size | Crop Size | Stride | Lr schd | Mem (GB) | Inf time (fps) | mDice | Dice | config | download |
| ---------------- | ----------- | -------------------- | ---------- | --------- | ------: | ------- | -------- | -------------: | ----: | ----: | ---------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UNet + FCN | UNet-S5-D16 | Cross Entropy | 2336x3504 | 256x256 | 170x170 | 40000 | 2.525 | - | 88.92 | 79.45 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_256x256_40k_hrf/fcn_unet_s5-d16_256x256_40k_hrf_20201223_173724-d89cf1ed.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/unet_s5-d16_256x256_40k_hrf/unet_s5-d16_256x256_40k_hrf-20201223_173724.log.json) |
| UNet + FCN | UNet-S5-D16 | Cross Entropy + Dice | 2336x3504 | 256x256 | 170x170 | 40000 | 2.623 | - | 89.64 | 80.87 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_201821-c314da8a.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_201821.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy | 2336x3504 | 256x256 | 170x170 | 40000 | 2.588 | - | 89.24 | 80.07 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_256x256_40k_hrf.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_256x256_40k_hrf/pspnet_unet_s5-d16_256x256_40k_hrf_20201227_181818-fdb7e29b.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_256x256_40k_hrf/pspnet_unet_s5-d16_256x256_40k_hrf-20201227_181818.log.json) |
| UNet + PSPNet | UNet-S5-D16 | Cross Entropy + Dice | 2336x3504 | 256x256 | 170x170 | 40000 | 2.798 | - | 89.69 | 80.96 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_201823-53d492fa.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_201823.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy | 2336x3504 | 256x256 | 170x170 | 40000 | 2.604 | - | 89.32 | 80.21 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf/deeplabv3_unet_s5-d16_256x256_40k_hrf_20201226_094047-3a1fdf85.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf/deeplabv3_unet_s5-d16_256x256_40k_hrf-20201226_094047.log.json) |
| UNet + DeepLabV3 | UNet-S5-D16 | Cross Entropy + Dice | 2336x3504 | 256x256 | 170x170 | 40000 | 2.607 | - | 89.56 | 80.71 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_202032-59daf7a4.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_202032.log.json) |
Note:
- In the `DRIVE`, `STARE`, `CHASE_DB1`, and `HRF` datasets, `mDice` is the mean Dice over the background and vessel classes, while `Dice` is the Dice score of the vessel (foreground) class only; see the sketch below.
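For clarity, a minimal NumPy sketch of how the two columns relate (an illustration of the definitions above, not mmseg's evaluation code):

```python
import numpy as np

def dice(pred, gt, cls):
    """Dice coefficient of one class given integer label maps."""
    p, g = pred == cls, gt == cls
    return 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum())

pred = np.array([[0, 1], [1, 1]])                      # toy prediction
gt = np.array([[0, 1], [0, 1]])                        # toy ground truth
vessel_dice = dice(pred, gt, 1)                        # the `Dice` column
m_dice = np.mean([dice(pred, gt, c) for c in (0, 1)])  # the `mDice` column
```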

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py

_base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py',
'../_base_/datasets/chase_db1.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
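The `test_cfg` above selects sliding-window inference: 128x128 crops scored at an 85-pixel stride and stitched back together. A quick sketch of the resulting tiling on a 960x999 CHASE_DB1 image, using the usual ceil-then-clamp grid arithmetic (illustrative, not mmseg's own `slide_inference`):

```python
import math

h, w = 960, 999          # CHASE_DB1 image size
crop, stride = 128, 85   # from test_cfg above
h_grids = max(math.ceil((h - crop) / stride) + 1, 1)
w_grids = max(math.ceil((w - crop) / stride) + 1, 1)
print(h_grids, w_grids, h_grids * w_grids)  # 11 12 132 overlapping windows
```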

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_stare.py

_base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py', '../_base_/datasets/stare.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf.py

_base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py', '../_base_/datasets/hrf.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_64x64_40k_drive.py

_base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py', '../_base_/datasets/drive.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(64, 64), stride=(42, 42)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py

_base_ = './deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
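With a list in `loss_decode`, the decode head optimizes a weighted sum, here 1.0 * CE + 3.0 * Dice. A rough PyTorch sketch of that combination for the binary vessel case (illustrative only; mmseg's `DiceLoss` differs in details such as class averaging and smoothing):

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, ce_w=1.0, dice_w=3.0, eps=1e-5):
    """Weighted cross-entropy plus soft Dice on the foreground channel."""
    ce = F.cross_entropy(logits, target)
    fg_prob = logits.softmax(dim=1)[:, 1]
    fg = (target == 1).float()
    inter = (fg_prob * fg).sum()
    dice = 1 - (2 * inter + eps) / (fg_prob.sum() + fg.sum() + eps)
    return ce_w * ce + dice_w * dice

logits = torch.randn(2, 2, 64, 64)         # N, C, H, W
target = torch.randint(0, 2, (2, 64, 64))  # binary vessel mask
print(combined_loss(logits, target))
```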

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py

_base_ = './deeplabv3_unet_s5-d16_128x128_40k_stare.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py

_base_ = './deeplabv3_unet_s5-d16_256x256_40k_hrf.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py

_base_ = './deeplabv3_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_128x128_40k_chase_db1.py

_base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/chase_db1.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_128x128_40k_stare.py

_base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/stare.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py

_base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/hrf.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py

_base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
decode_head=dict(num_classes=19),
auxiliary_head=dict(num_classes=19),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
)

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_64x64_40k_drive.py

_base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/drive.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(64, 64), stride=(42, 42)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py

_base_ = './fcn_unet_s5-d16_128x128_40k_chase_db1.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py

_base_ = './fcn_unet_s5-d16_128x128_40k_stare.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py

_base_ = './fcn_unet_s5-d16_256x256_40k_hrf.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py

_base_ = './fcn_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1.py

_base_ = [
'../_base_/models/pspnet_unet_s5-d16.py',
'../_base_/datasets/chase_db1.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py

_base_ = [
'../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/stare.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_256x256_40k_hrf.py

_base_ = [
'../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/hrf.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_64x64_40k_drive.py

_base_ = [
'../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/drive.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(64, 64), stride=(42, 42)))
evaluation = dict(metric='mDice')

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py

_base_ = './pspnet_unet_s5-d16_128x128_40k_chase_db1.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py

_base_ = './pspnet_unet_s5-d16_128x128_40k_stare.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py

_base_ = './pspnet_unet_s5-d16_256x256_40k_hrf.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py

_base_ = './pspnet_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))

File: mmsegmentation-master/configs/unet/unet.yml

Collections:
- Name: UNet
Metadata:
Training Data:
- Cityscapes
- DRIVE
- STARE
- CHASE_DB1
- HRF
Paper:
URL: https://arxiv.org/abs/1505.04597
Title: 'U-Net: Convolutional Networks for Biomedical Image Segmentation'
README: configs/unet/README.md
Code:
URL: https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/backbones/unet.py#L225
Version: v0.17.0
Converted From:
Code: http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net
Models:
- Name: fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (512,1024)
lr schd: 160000
inference time (ms/im):
- value: 327.87
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,1024)
Training Memory (GB): 17.91
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 69.1
mIoU(ms+flip): 71.05
Config: configs/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes_20211210_145204-6860854e.pth
- Name: fcn_unet_s5-d16_64x64_40k_drive
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (64,64)
lr schd: 40000
Training Memory (GB): 0.68
Results:
- Task: Semantic Segmentation
Dataset: DRIVE
Metrics:
Dice: 78.67
Config: configs/unet/fcn_unet_s5-d16_64x64_40k_drive.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_64x64_40k_drive/fcn_unet_s5-d16_64x64_40k_drive_20201223_191051-5daf6d3b.pth
- Name: fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (64,64)
lr schd: 40000
Training Memory (GB): 0.582
Results:
- Task: Semantic Segmentation
Dataset: DRIVE
Metrics:
Dice: 79.32
Config: configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201820-785de5c2.pth
- Name: pspnet_unet_s5-d16_64x64_40k_drive
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (64,64)
lr schd: 40000
Training Memory (GB): 0.599
Results:
- Task: Semantic Segmentation
Dataset: DRIVE
Metrics:
Dice: 78.62
Config: configs/unet/pspnet_unet_s5-d16_64x64_40k_drive.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_64x64_40k_drive/pspnet_unet_s5-d16_64x64_40k_drive_20201227_181818-aac73387.pth
- Name: pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (64,64)
lr schd: 40000
Training Memory (GB): 0.585
Results:
- Task: Semantic Segmentation
Dataset: DRIVE
Metrics:
Dice: 79.42
Config: configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201821-22b3e3ba.pth
- Name: deeplabv3_unet_s5-d16_64x64_40k_drive
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (64,64)
lr schd: 40000
Training Memory (GB): 0.596
Results:
- Task: Semantic Segmentation
Dataset: DRIVE
Metrics:
Dice: 78.69
Config: configs/unet/deeplabv3_unet_s5-d16_64x64_40k_drive.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_64x64_40k_drive/deeplabv3_unet_s5-d16_64x64_40k_drive_20201226_094047-0671ff20.pth
- Name: deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (64,64)
lr schd: 40000
Training Memory (GB): 0.582
Results:
- Task: Semantic Segmentation
Dataset: DRIVE
Metrics:
Dice: 79.56
Config: configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive_20211210_201825-6bf0efd7.pth
- Name: fcn_unet_s5-d16_128x128_40k_stare
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.968
Results:
- Task: Semantic Segmentation
Dataset: STARE
Metrics:
Dice: 81.02
Config: configs/unet/fcn_unet_s5-d16_128x128_40k_stare.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_128x128_40k_stare/fcn_unet_s5-d16_128x128_40k_stare_20201223_191051-7d77e78b.pth
- Name: fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.986
Results:
- Task: Semantic Segmentation
Dataset: STARE
Metrics:
Dice: 82.7
Config: configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201821-f75705a9.pth
- Name: pspnet_unet_s5-d16_128x128_40k_stare
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.982
Results:
- Task: Semantic Segmentation
Dataset: STARE
Metrics:
Dice: 81.22
Config: configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_128x128_40k_stare/pspnet_unet_s5-d16_128x128_40k_stare_20201227_181818-3c2923c4.pth
- Name: pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 1.028
Results:
- Task: Semantic Segmentation
Dataset: STARE
Metrics:
Dice: 82.84
Config: configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201823-f1063ef7.pth
- Name: deeplabv3_unet_s5-d16_128x128_40k_stare
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.999
Results:
- Task: Semantic Segmentation
Dataset: STARE
Metrics:
Dice: 80.93
Config: configs/unet/deeplabv3_unet_s5-d16_128x128_40k_stare.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_128x128_40k_stare/deeplabv3_unet_s5-d16_128x128_40k_stare_20201226_094047-93dcb93c.pth
- Name: deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 1.01
Results:
- Task: Semantic Segmentation
Dataset: STARE
Metrics:
Dice: 82.71
Config: configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare_20211210_201825-21db614c.pth
- Name: fcn_unet_s5-d16_128x128_40k_chase_db1
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.968
Results:
- Task: Semantic Segmentation
Dataset: CHASE_DB1
Metrics:
Dice: 80.24
Config: configs/unet/fcn_unet_s5-d16_128x128_40k_chase_db1.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_128x128_40k_chase_db1/fcn_unet_s5-d16_128x128_40k_chase_db1_20201223_191051-11543527.pth
- Name: fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.986
Results:
- Task: Semantic Segmentation
Dataset: CHASE_DB1
Metrics:
Dice: 80.4
Config: configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201821-1c4eb7cf.pth
- Name: pspnet_unet_s5-d16_128x128_40k_chase_db1
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.982
Results:
- Task: Semantic Segmentation
Dataset: CHASE_DB1
Metrics:
Dice: 80.36
Config: configs/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1/pspnet_unet_s5-d16_128x128_40k_chase_db1_20201227_181818-68d4e609.pth
- Name: pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 1.028
Results:
- Task: Semantic Segmentation
Dataset: CHASE_DB1
Metrics:
Dice: 80.28
Config: configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201823-c0802c4d.pth
- Name: deeplabv3_unet_s5-d16_128x128_40k_chase_db1
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 0.999
Results:
- Task: Semantic Segmentation
Dataset: CHASE_DB1
Metrics:
Dice: 80.47
Config: configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1/deeplabv3_unet_s5-d16_128x128_40k_chase_db1_20201226_094047-4c5aefa3.pth
- Name: deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (128,128)
lr schd: 40000
Training Memory (GB): 1.01
Results:
- Task: Semantic Segmentation
Dataset: CHASE_DB1
Metrics:
Dice: 80.37
Config: configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1_20211210_201825-4ef29df5.pth
- Name: fcn_unet_s5-d16_256x256_40k_hrf
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (256,256)
lr schd: 40000
Training Memory (GB): 2.525
Results:
- Task: Semantic Segmentation
Dataset: HRF
Metrics:
Dice: 79.45
Config: configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_256x256_40k_hrf/fcn_unet_s5-d16_256x256_40k_hrf_20201223_173724-d89cf1ed.pth
- Name: fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (256,256)
lr schd: 40000
Training Memory (GB): 2.623
Results:
- Task: Semantic Segmentation
Dataset: HRF
Metrics:
Dice: 80.87
Config: configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_201821-c314da8a.pth
- Name: pspnet_unet_s5-d16_256x256_40k_hrf
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (256,256)
lr schd: 40000
Training Memory (GB): 2.588
Results:
- Task: Semantic Segmentation
Dataset: HRF
Metrics:
Dice: 80.07
Config: configs/unet/pspnet_unet_s5-d16_256x256_40k_hrf.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_256x256_40k_hrf/pspnet_unet_s5-d16_256x256_40k_hrf_20201227_181818-fdb7e29b.pth
- Name: pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (256,256)
lr schd: 40000
Training Memory (GB): 2.798
Results:
- Task: Semantic Segmentation
Dataset: HRF
Metrics:
Dice: 80.96
Config: configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_201823-53d492fa.pth
- Name: deeplabv3_unet_s5-d16_256x256_40k_hrf
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (256,256)
lr schd: 40000
Training Memory (GB): 2.604
Results:
- Task: Semantic Segmentation
Dataset: HRF
Metrics:
Dice: 80.21
Config: configs/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf/deeplabv3_unet_s5-d16_256x256_40k_hrf_20201226_094047-3a1fdf85.pth
- Name: deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf
In Collection: UNet
Metadata:
backbone: UNet-S5-D16
crop size: (256,256)
lr schd: 40000
Training Memory (GB): 2.607
Results:
- Task: Semantic Segmentation
Dataset: HRF
Metrics:
Dice: 80.71
Config: configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf_20211210_202032-59daf7a4.pth

File: mmsegmentation-master/configs/upernet/README.md

# UPerNet
[Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/pdf/1807.10221.pdf)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/CSAILVision/unifiedparsing">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/decode_heads/uper_head.py#L13">Code Snippet</a>
## Abstract
<!-- [ABSTRACT] -->
Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. Models are available at [this https URL](https://github.com/CSAILVision/unifiedparsing).
<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/24582831/142903077-44e8e0da-7276-4bda-bd2b-0df1680ca845.png" width="70%"/>
</div>
## Citation
```bibtex
@inproceedings{xiao2018unified,
title={Unified perceptual parsing for scene understanding},
author={Xiao, Tete and Liu, Yingcheng and Zhou, Bolei and Jiang, Yuning and Sun, Jian},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
pages={418--434},
year={2018}
}
```
## Results and models
### Cityscapes
| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UPerNet | R-18 | 512x1024 | 40000 | 4.8 | 4.47 | 75.39 | 77.0 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r18_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x1024_40k_cityscapes/upernet_r18_512x1024_40k_cityscapes_20220615_113231-12ee861d.pth) \|[log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x1024_40k_cityscapes/upernet_r18_512x1024_40k_cityscapes_20220615_113231.log.json) |
| UPerNet | R-50 | 512x1024 | 40000 | 6.4 | 4.25 | 77.10 | 78.37 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x1024_40k_cityscapes/upernet_r50_512x1024_40k_cityscapes_20200605_094827-aa54cb54.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x1024_40k_cityscapes/upernet_r50_512x1024_40k_cityscapes_20200605_094827.log.json) |
| UPerNet | R-101 | 512x1024 | 40000 | 7.4 | 3.79 | 78.69 | 80.11 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x1024_40k_cityscapes/upernet_r101_512x1024_40k_cityscapes_20200605_094933-ebce3b10.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x1024_40k_cityscapes/upernet_r101_512x1024_40k_cityscapes_20200605_094933.log.json) |
| UPerNet | R-50 | 769x769 | 40000 | 7.2 | 1.76 | 77.98 | 79.70 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_769x769_40k_cityscapes/upernet_r50_769x769_40k_cityscapes_20200530_033048-92d21539.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_769x769_40k_cityscapes/upernet_r50_769x769_40k_cityscapes_20200530_033048.log.json) |
| UPerNet | R-101 | 769x769 | 40000 | 8.4 | 1.56 | 79.03 | 80.77 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_769x769_40k_cityscapes/upernet_r101_769x769_40k_cityscapes_20200530_040819-83c95d01.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_769x769_40k_cityscapes/upernet_r101_769x769_40k_cityscapes_20200530_040819.log.json) |
| UPerNet | R-18 | 512x1024 | 80000 | - | - | 76.02 | 77.38 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r18_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x1024_80k_cityscapes/upernet_r18_512x1024_80k_cityscapes_20220614_110712-c89a9188.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x1024_80k_cityscapes/upernet_r18_512x1024_80k_cityscapes_20220614_110712.log.json) |
| UPerNet | R-50 | 512x1024 | 80000 | - | - | 78.19 | 79.19 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x1024_80k_cityscapes/upernet_r50_512x1024_80k_cityscapes_20200607_052207-848beca8.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x1024_80k_cityscapes/upernet_r50_512x1024_80k_cityscapes_20200607_052207.log.json) |
| UPerNet | R-101 | 512x1024 | 80000 | - | - | 79.40 | 80.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x1024_80k_cityscapes/upernet_r101_512x1024_80k_cityscapes_20200607_002403-f05f2345.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x1024_80k_cityscapes/upernet_r101_512x1024_80k_cityscapes_20200607_002403.log.json) |
| UPerNet | R-50 | 769x769 | 80000 | - | - | 79.39 | 80.92 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_769x769_80k_cityscapes/upernet_r50_769x769_80k_cityscapes_20200607_005107-82ae7d15.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_769x769_80k_cityscapes/upernet_r50_769x769_80k_cityscapes_20200607_005107.log.json) |
| UPerNet | R-101 | 769x769 | 80000 | - | - | 80.10 | 81.49 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_769x769_80k_cityscapes/upernet_r101_769x769_80k_cityscapes_20200607_001014-082fc334.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_769x769_80k_cityscapes/upernet_r101_769x769_80k_cityscapes_20200607_001014.log.json) |
### ADE20K
| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UPerNet | R-18 | 512x512 | 80000 | 6.6 | 24.76 | 38.76 | 39.81 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r18_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_80k_ade20k/upernet_r18_512x512_80k_ade20k_20220614_110319-22e81719.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_80k_ade20k/upernet_r18_512x512_80k_ade20k_20220614_110319.log.json) |
| UPerNet | R-50 | 512x512 | 80000 | 8.1 | 23.40 | 40.70 | 41.81 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_80k_ade20k/upernet_r50_512x512_80k_ade20k_20200614_144127-ecc8377b.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_80k_ade20k/upernet_r50_512x512_80k_ade20k_20200614_144127.log.json) |
| UPerNet | R-101 | 512x512 | 80000 | 9.1 | 20.34 | 42.91 | 43.96 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_80k_ade20k/upernet_r101_512x512_80k_ade20k_20200614_185117-32e4db94.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_80k_ade20k/upernet_r101_512x512_80k_ade20k_20200614_185117.log.json) |
| UPerNet | R-18 | 512x512 | 160000 | - | - | 39.23 | 39.97 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r18_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_160k_ade20k/upernet_r18_512x512_160k_ade20k_20220615_113300-791c3f3e.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_160k_ade20k/upernet_r18_512x512_160k_ade20k_20220615_113300.log.json) |
| UPerNet | R-50 | 512x512 | 160000 | - | - | 42.05 | 42.78 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_160k_ade20k/upernet_r50_512x512_160k_ade20k_20200615_184328-8534de8d.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_160k_ade20k/upernet_r50_512x512_160k_ade20k_20200615_184328.log.json) |
| UPerNet | R-101 | 512x512 | 160000 | - | - | 43.82 | 44.85 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_160k_ade20k/upernet_r101_512x512_160k_ade20k_20200615_161951-91b32684.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_160k_ade20k/upernet_r101_512x512_160k_ade20k_20200615_161951.log.json) |
### Pascal VOC 2012 + Aug
| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UPerNet | R-18 | 512x512 | 20000 | 4.8 | 25.80 | 72.9 | 74.71 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r18_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_20k_voc12aug/upernet_r18_512x512_20k_voc12aug_20220614_123910-ed66e455.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_20k_voc12aug/upernet_r18_512x512_20k_voc12aug_20220614_123910.log.json) |
| UPerNet | R-50 | 512x512 | 20000 | 6.4 | 23.17 | 74.82 | 76.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_20k_voc12aug/upernet_r50_512x512_20k_voc12aug_20200617_165330-5b5890a7.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_20k_voc12aug/upernet_r50_512x512_20k_voc12aug_20200617_165330.log.json) |
| UPerNet | R-101 | 512x512 | 20000 | 7.5 | 19.98 | 77.10 | 78.29 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_20k_voc12aug/upernet_r101_512x512_20k_voc12aug_20200617_165629-f14e7f27.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_20k_voc12aug/upernet_r101_512x512_20k_voc12aug_20200617_165629.log.json) |
| UPerNet | R-18 | 512x512 | 40000 | - | - | 73.71 | 74.61 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r18_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_40k_voc12aug/upernet_r18_512x512_40k_voc12aug_20220614_153605-fafeb868.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_40k_voc12aug/upernet_r18_512x512_40k_voc12aug_20220614_153605.log.json) |
| UPerNet | R-50 | 512x512 | 40000 | - | - | 75.92 | 77.44 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r50_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_40k_voc12aug/upernet_r50_512x512_40k_voc12aug_20200613_162257-ca9bcc6b.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_40k_voc12aug/upernet_r50_512x512_40k_voc12aug_20200613_162257.log.json) |
| UPerNet | R-101 | 512x512 | 40000 | - | - | 77.43 | 78.56 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet/upernet_r101_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_40k_voc12aug/upernet_r101_512x512_40k_voc12aug_20200613_163549-e26476ac.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_40k_voc12aug/upernet_r101_512x512_40k_voc12aug_20200613_163549.log.json) |
| 17,297 | 229.64 | 851 | md |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet.yml | Collections:
- Name: UPerNet
Metadata:
Training Data:
- Cityscapes
- ADE20K
- Pascal VOC 2012 + Aug
Paper:
URL: https://arxiv.org/pdf/1807.10221.pdf
Title: Unified Perceptual Parsing for Scene Understanding
README: configs/upernet/README.md
Code:
URL: https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/decode_heads/uper_head.py#L13
Version: v0.17.0
Converted From:
Code: https://github.com/CSAILVision/unifiedparsing
Models:
- Name: upernet_r18_512x1024_40k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-18
crop size: (512,1024)
lr schd: 40000
inference time (ms/im):
- value: 223.71
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,1024)
Training Memory (GB): 4.8
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 75.39
mIoU(ms+flip): 77.0
Config: configs/upernet/upernet_r18_512x1024_40k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x1024_40k_cityscapes/upernet_r18_512x1024_40k_cityscapes_20220615_113231-12ee861d.pth
- Name: upernet_r50_512x1024_40k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (512,1024)
lr schd: 40000
inference time (ms/im):
- value: 235.29
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,1024)
Training Memory (GB): 6.4
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 77.1
mIoU(ms+flip): 78.37
Config: configs/upernet/upernet_r50_512x1024_40k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x1024_40k_cityscapes/upernet_r50_512x1024_40k_cityscapes_20200605_094827-aa54cb54.pth
- Name: upernet_r101_512x1024_40k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (512,1024)
lr schd: 40000
inference time (ms/im):
- value: 263.85
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,1024)
Training Memory (GB): 7.4
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 78.69
mIoU(ms+flip): 80.11
Config: configs/upernet/upernet_r101_512x1024_40k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x1024_40k_cityscapes/upernet_r101_512x1024_40k_cityscapes_20200605_094933-ebce3b10.pth
- Name: upernet_r50_769x769_40k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (769,769)
lr schd: 40000
inference time (ms/im):
- value: 568.18
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (769,769)
Training Memory (GB): 7.2
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 77.98
mIoU(ms+flip): 79.7
Config: configs/upernet/upernet_r50_769x769_40k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_769x769_40k_cityscapes/upernet_r50_769x769_40k_cityscapes_20200530_033048-92d21539.pth
- Name: upernet_r101_769x769_40k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (769,769)
lr schd: 40000
inference time (ms/im):
- value: 641.03
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (769,769)
Training Memory (GB): 8.4
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 79.03
mIoU(ms+flip): 80.77
Config: configs/upernet/upernet_r101_769x769_40k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_769x769_40k_cityscapes/upernet_r101_769x769_40k_cityscapes_20200530_040819-83c95d01.pth
- Name: upernet_r18_512x1024_80k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-18
crop size: (512,1024)
lr schd: 80000
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 76.02
mIoU(ms+flip): 77.38
Config: configs/upernet/upernet_r18_512x1024_80k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x1024_80k_cityscapes/upernet_r18_512x1024_80k_cityscapes_20220614_110712-c89a9188.pth
- Name: upernet_r50_512x1024_80k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (512,1024)
lr schd: 80000
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 78.19
mIoU(ms+flip): 79.19
Config: configs/upernet/upernet_r50_512x1024_80k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x1024_80k_cityscapes/upernet_r50_512x1024_80k_cityscapes_20200607_052207-848beca8.pth
- Name: upernet_r101_512x1024_80k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (512,1024)
lr schd: 80000
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 79.4
mIoU(ms+flip): 80.46
Config: configs/upernet/upernet_r101_512x1024_80k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x1024_80k_cityscapes/upernet_r101_512x1024_80k_cityscapes_20200607_002403-f05f2345.pth
- Name: upernet_r50_769x769_80k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (769,769)
lr schd: 80000
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 79.39
mIoU(ms+flip): 80.92
Config: configs/upernet/upernet_r50_769x769_80k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_769x769_80k_cityscapes/upernet_r50_769x769_80k_cityscapes_20200607_005107-82ae7d15.pth
- Name: upernet_r101_769x769_80k_cityscapes
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (769,769)
lr schd: 80000
Results:
- Task: Semantic Segmentation
Dataset: Cityscapes
Metrics:
mIoU: 80.1
mIoU(ms+flip): 81.49
Config: configs/upernet/upernet_r101_769x769_80k_cityscapes.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_769x769_80k_cityscapes/upernet_r101_769x769_80k_cityscapes_20200607_001014-082fc334.pth
- Name: upernet_r18_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: R-18
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 40.39
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 6.6
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 38.76
mIoU(ms+flip): 39.81
Config: configs/upernet/upernet_r18_512x512_80k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_80k_ade20k/upernet_r18_512x512_80k_ade20k_20220614_110319-22e81719.pth
- Name: upernet_r50_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 42.74
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 8.1
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 40.7
mIoU(ms+flip): 41.81
Config: configs/upernet/upernet_r50_512x512_80k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_80k_ade20k/upernet_r50_512x512_80k_ade20k_20200614_144127-ecc8377b.pth
- Name: upernet_r101_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 49.16
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 9.1
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 42.91
mIoU(ms+flip): 43.96
Config: configs/upernet/upernet_r101_512x512_80k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_80k_ade20k/upernet_r101_512x512_80k_ade20k_20200614_185117-32e4db94.pth
- Name: upernet_r18_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: R-18
crop size: (512,512)
lr schd: 160000
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 39.23
mIoU(ms+flip): 39.97
Config: configs/upernet/upernet_r18_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_160k_ade20k/upernet_r18_512x512_160k_ade20k_20220615_113300-791c3f3e.pth
- Name: upernet_r50_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (512,512)
lr schd: 160000
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 42.05
mIoU(ms+flip): 42.78
Config: configs/upernet/upernet_r50_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_160k_ade20k/upernet_r50_512x512_160k_ade20k_20200615_184328-8534de8d.pth
- Name: upernet_r101_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (512,512)
lr schd: 160000
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 43.82
mIoU(ms+flip): 44.85
Config: configs/upernet/upernet_r101_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_160k_ade20k/upernet_r101_512x512_160k_ade20k_20200615_161951-91b32684.pth
- Name: upernet_r18_512x512_20k_voc12aug
In Collection: UPerNet
Metadata:
backbone: R-18
crop size: (512,512)
lr schd: 20000
inference time (ms/im):
- value: 38.76
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 4.8
Results:
- Task: Semantic Segmentation
Dataset: Pascal VOC 2012 + Aug
Metrics:
mIoU: 72.9
mIoU(ms+flip): 74.71
Config: configs/upernet/upernet_r18_512x512_20k_voc12aug.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_20k_voc12aug/upernet_r18_512x512_20k_voc12aug_20220614_123910-ed66e455.pth
- Name: upernet_r50_512x512_20k_voc12aug
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (512,512)
lr schd: 20000
inference time (ms/im):
- value: 43.16
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 6.4
Results:
- Task: Semantic Segmentation
Dataset: Pascal VOC 2012 + Aug
Metrics:
mIoU: 74.82
mIoU(ms+flip): 76.35
Config: configs/upernet/upernet_r50_512x512_20k_voc12aug.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_20k_voc12aug/upernet_r50_512x512_20k_voc12aug_20200617_165330-5b5890a7.pth
- Name: upernet_r101_512x512_20k_voc12aug
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (512,512)
lr schd: 20000
inference time (ms/im):
- value: 50.05
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 7.5
Results:
- Task: Semantic Segmentation
Dataset: Pascal VOC 2012 + Aug
Metrics:
mIoU: 77.1
mIoU(ms+flip): 78.29
Config: configs/upernet/upernet_r101_512x512_20k_voc12aug.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_20k_voc12aug/upernet_r101_512x512_20k_voc12aug_20200617_165629-f14e7f27.pth
- Name: upernet_r18_512x512_40k_voc12aug
In Collection: UPerNet
Metadata:
backbone: R-18
crop size: (512,512)
lr schd: 40000
Results:
- Task: Semantic Segmentation
Dataset: Pascal VOC 2012 + Aug
Metrics:
mIoU: 73.71
mIoU(ms+flip): 74.61
Config: configs/upernet/upernet_r18_512x512_40k_voc12aug.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r18_512x512_40k_voc12aug/upernet_r18_512x512_40k_voc12aug_20220614_153605-fafeb868.pth
- Name: upernet_r50_512x512_40k_voc12aug
In Collection: UPerNet
Metadata:
backbone: R-50
crop size: (512,512)
lr schd: 40000
Results:
- Task: Semantic Segmentation
Dataset: Pascal VOC 2012 + Aug
Metrics:
mIoU: 75.92
mIoU(ms+flip): 77.44
Config: configs/upernet/upernet_r50_512x512_40k_voc12aug.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r50_512x512_40k_voc12aug/upernet_r50_512x512_40k_voc12aug_20200613_162257-ca9bcc6b.pth
- Name: upernet_r101_512x512_40k_voc12aug
In Collection: UPerNet
Metadata:
backbone: R-101
crop size: (512,512)
lr schd: 40000
Results:
- Task: Semantic Segmentation
Dataset: Pascal VOC 2012 + Aug
Metrics:
mIoU: 77.43
mIoU(ms+flip): 78.56
Config: configs/upernet/upernet_r101_512x512_40k_voc12aug.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/upernet/upernet_r101_512x512_40k_voc12aug/upernet_r101_512x512_40k_voc12aug_20200613_163549-e26476ac.pth
| 13,543 | 31.714976 | 172 | yml |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x1024_40k_cityscapes.py | _base_ = './upernet_r50_512x1024_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 132 | 43.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x1024_80k_cityscapes.py | _base_ = './upernet_r50_512x1024_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 132 | 43.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_160k_ade20k.py | _base_ = './upernet_r50_512x512_160k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 128 | 42 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_20k_voc12aug.py | _base_ = './upernet_r50_512x512_20k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 129 | 42.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_40k_voc12aug.py | _base_ = './upernet_r50_512x512_40k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 129 | 42.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_80k_ade20k.py | _base_ = './upernet_r50_512x512_80k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 127 | 41.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_769x769_40k_cityscapes.py | _base_ = './upernet_r50_769x769_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_769x769_80k_cityscapes.py | _base_ = './upernet_r50_769x769_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x1024_40k_cityscapes.py | _base_ = './upernet_r50_512x1024_40k_cityscapes.py'
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512]),
auxiliary_head=dict(in_channels=256))
| 236 | 32.857143 | 54 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x1024_80k_cityscapes.py | _base_ = './upernet_r50_512x1024_80k_cityscapes.py'
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512]),
auxiliary_head=dict(in_channels=256))
| 236 | 32.857143 | 54 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_classes=150),
auxiliary_head=dict(in_channels=256, num_classes=150))
| 377 | 36.8 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_20k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_classes=21),
auxiliary_head=dict(in_channels=256, num_classes=21))
| 388 | 34.363636 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_40k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_classes=21),
auxiliary_head=dict(in_channels=256, num_classes=21))
| 388 | 34.363636 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_classes=150),
auxiliary_head=dict(in_channels=256, num_classes=150))
| 376 | 36.7 | 73 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x1024_40k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
| 162 | 31.6 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x1024_80k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
| 162 | 31.6 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 250 | 34.857143 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_20k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 261 | 31.75 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_40k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 261 | 31.75 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 249 | 34.714286 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_769x769_40k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
| 349 | 34 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_769x769_80k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
| 349 | 34 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/vit/README.md | # Vision Transformer
[An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf)
## Introduction
<!-- [BACKBONE] -->
<a href="https://github.com/google-research/vision_transformer">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/backbones/vit.py#L98">Code Snippet</a>
## Abstract
<!-- [ABSTRACT] -->
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/24582831/142903144-f80a12cc-8698-48ab-843c-49dedf558121.png" width="70%"/>
</div>
## Citation
```bibtex
@article{dosovitskiy2020,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={arXiv preprint arXiv:2010.11929},
year={2020}
}
```
## Usage
To use pre-trained models from other repositories, you first need to convert their checkpoint keys.
We provide a script [`vit2mmseg.py`](../../tools/model_converters/vit2mmseg.py) in the tools directory to convert the keys of models from [timm](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py) to MMSegmentation style.
```shell
python tools/model_converters/vit2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
```
E.g.
```shell
python tools/model_converters/vit2mmseg.py https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth pretrain/jx_vit_base_p16_224-80ecf9dd.pth
```
This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
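For reference, such a converter is essentially a renaming of checkpoint keys. The sketch below is illustrative only: the rename it applies (timm-style `blocks.*` to MMSegmentation-style `layers.*`) is an assumption for demonstration, and the authoritative mapping is the one implemented in `vit2mmseg.py`.
```python
# Illustrative sketch of a timm -> MMSegmentation key converter.
# The rename rule below is an assumption for demonstration; see
# tools/model_converters/vit2mmseg.py for the authoritative mapping.
from collections import OrderedDict

import torch


def convert_vit_keys(src_path, dst_path):
    checkpoint = torch.load(src_path, map_location='cpu')
    # Some checkpoints wrap the weights in a 'state_dict' or 'model' field.
    if 'state_dict' in checkpoint:
        checkpoint = checkpoint['state_dict']
    elif 'model' in checkpoint:
        checkpoint = checkpoint['model']
    new_state_dict = OrderedDict()
    for key, value in checkpoint.items():
        # Hypothetical rename: timm-style 'blocks.N.*' -> mmseg-style 'layers.N.*'.
        new_state_dict[key.replace('blocks.', 'layers.')] = value
    torch.save(new_state_dict, dst_path)
```
In practice, prefer the provided script, since the complete mapping covers more than the single rename shown here.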
## Results and models
### ADE20K
| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------- | ----------------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UPerNet | ViT-B + MLN | 512x512 | 80000 | 9.20 | 6.94 | 47.71 | 49.51 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_vit-b16_mln_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_mln_512x512_80k_ade20k/upernet_vit-b16_mln_512x512_80k_ade20k_20210624_130547-0403cee1.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_mln_512x512_80k_ade20k/20210624_130547.log.json) |
| UPerNet | ViT-B + MLN | 512x512 | 160000 | 9.20 | 7.58 | 46.75 | 48.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_vit-b16_mln_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_mln_512x512_160k_ade20k/upernet_vit-b16_mln_512x512_160k_ade20k_20210624_130547-852fa768.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_mln_512x512_160k_ade20k/20210623_192432.log.json) |
| UPerNet | ViT-B + LN + MLN | 512x512 | 160000 | 9.21 | 6.82 | 47.73 | 49.95 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k/upernet_vit-b16_ln_mln_512x512_160k_ade20k_20210621_172828-f444c077.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k/20210621_172828.log.json) |
| UPerNet | DeiT-S | 512x512 | 80000 | 4.68 | 29.85 | 42.96 | 43.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-s16_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_512x512_80k_ade20k/upernet_deit-s16_512x512_80k_ade20k_20210624_095228-afc93ec2.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_512x512_80k_ade20k/20210624_095228.log.json) |
| UPerNet | DeiT-S | 512x512 | 160000 | 4.68 | 29.19 | 42.87 | 43.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-s16_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_512x512_160k_ade20k/upernet_deit-s16_512x512_160k_ade20k_20210621_160903-5110d916.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_512x512_160k_ade20k/20210621_160903.log.json) |
| UPerNet | DeiT-S + MLN | 512x512 | 160000 | 5.69 | 11.18 | 43.82 | 45.07 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-s16_mln_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_mln_512x512_160k_ade20k/upernet_deit-s16_mln_512x512_160k_ade20k_20210621_161021-fb9a5dfb.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_mln_512x512_160k_ade20k/20210621_161021.log.json) |
| UPerNet | DeiT-S + LN + MLN | 512x512 | 160000 | 5.69 | 12.39 | 43.52 | 45.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k/upernet_deit-s16_ln_mln_512x512_160k_ade20k_20210621_161021-c0cd652f.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k/20210621_161021.log.json) |
| UPerNet | DeiT-B | 512x512 | 80000 | 7.75 | 9.69 | 45.24 | 46.73 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-b16_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_512x512_80k_ade20k/upernet_deit-b16_512x512_80k_ade20k_20210624_130529-1e090789.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_512x512_80k_ade20k/20210624_130529.log.json) |
| UPerNet | DeiT-B | 512x512 | 160000 | 7.75 | 10.39 | 45.36 | 47.16 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-b16_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_512x512_160k_ade20k/upernet_deit-b16_512x512_160k_ade20k_20210621_180100-828705d7.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_512x512_160k_ade20k/20210621_180100.log.json) |
| UPerNet | DeiT-B + MLN | 512x512 | 160000 | 9.21 | 7.78 | 45.46 | 47.16 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-b16_mln_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_mln_512x512_160k_ade20k/upernet_deit-b16_mln_512x512_160k_ade20k_20210621_191949-4e1450f3.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_mln_512x512_160k_ade20k/20210621_191949.log.json) |
| UPerNet | DeiT-B + LN + MLN | 512x512 | 160000 | 9.21 | 7.75 | 45.37 | 47.23 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k/upernet_deit-b16_ln_mln_512x512_160k_ade20k_20210623_153535-8a959c14.pth) \| [log](https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k/20210623_153535.log.json) |
| 9,860 | 137.887324 | 854 | md |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1),
neck=None)
| 187 | 25.857143 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_512x512_80k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_80k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1),
neck=None)
| 186 | 25.714286 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1, final_norm=True))
| 189 | 30.666667 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1),
)
| 174 | 24 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(num_heads=6, embed_dims=384, drop_path_rate=0.1),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=None,
auxiliary_head=dict(num_classes=150, in_channels=384))
| 349 | 37.888889 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_512x512_80k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_80k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(num_heads=6, embed_dims=384, drop_path_rate=0.1),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=None,
auxiliary_head=dict(num_classes=150, in_channels=384))
| 348 | 37.777778 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(
num_heads=6, embed_dims=384, drop_path_rate=0.1, final_norm=True),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=dict(in_channels=[384, 384, 384, 384], out_channels=384),
auxiliary_head=dict(num_classes=150, in_channels=384))
| 427 | 41.8 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(num_heads=6, embed_dims=384, drop_path_rate=0.1),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=dict(in_channels=[384, 384, 384, 384], out_channels=384),
auxiliary_head=dict(num_classes=150, in_channels=384))
| 401 | 43.666667 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_vit-b16_ln_mln.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='pretrain/vit_base_patch16_224.pth',
backbone=dict(drop_path_rate=0.1, final_norm=True),
decode_head=dict(num_classes=150),
auxiliary_head=dict(num_classes=150))
# AdamW optimizer, no weight decay for position embedding & layer norm
# in backbone
optimizer = dict(
_delete_=True,
type='AdamW',
lr=0.00006,
betas=(0.9, 0.999),
weight_decay=0.01,
paramwise_cfg=dict(
custom_keys={
'pos_embed': dict(decay_mult=0.),
'cls_token': dict(decay_mult=0.),
'norm': dict(decay_mult=0.)
}))
lr_config = dict(
_delete_=True,
policy='poly',
warmup='linear',
warmup_iters=1500,
warmup_ratio=1e-6,
power=1.0,
min_lr=0.0,
by_epoch=False)
# By default, models are trained on 8 GPUs with 2 images per GPU
data = dict(samples_per_gpu=2)
| 1,044 | 25.125 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_vit-b16_mln_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_vit-b16_ln_mln.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='pretrain/vit_base_patch16_224.pth',
decode_head=dict(num_classes=150),
auxiliary_head=dict(num_classes=150))
# AdamW optimizer, no weight decay for position embedding & layer norm
# in backbone
optimizer = dict(
_delete_=True,
type='AdamW',
lr=0.00006,
betas=(0.9, 0.999),
weight_decay=0.01,
paramwise_cfg=dict(
custom_keys={
'pos_embed': dict(decay_mult=0.),
'cls_token': dict(decay_mult=0.),
'norm': dict(decay_mult=0.)
}))
lr_config = dict(
_delete_=True,
policy='poly',
warmup='linear',
warmup_iters=1500,
warmup_ratio=1e-6,
power=1.0,
min_lr=0.0,
by_epoch=False)
# By default, models are trained on 8 GPUs with 2 images per GPU
data = dict(samples_per_gpu=2)
| 988 | 24.358974 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_vit-b16_mln_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/upernet_vit-b16_ln_mln.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
model = dict(
pretrained='pretrain/vit_base_patch16_224.pth',
decode_head=dict(num_classes=150),
auxiliary_head=dict(num_classes=150))
# AdamW optimizer, no weight decay for position embedding & layer norm
# in backbone
optimizer = dict(
_delete_=True,
type='AdamW',
lr=0.00006,
betas=(0.9, 0.999),
weight_decay=0.01,
paramwise_cfg=dict(
custom_keys={
'pos_embed': dict(decay_mult=0.),
'cls_token': dict(decay_mult=0.),
'norm': dict(decay_mult=0.)
}))
lr_config = dict(
_delete_=True,
policy='poly',
warmup='linear',
warmup_iters=1500,
warmup_ratio=1e-6,
power=1.0,
min_lr=0.0,
by_epoch=False)
# By default, models are trained on 8 GPUs with 2 images per GPU
data = dict(samples_per_gpu=2)
| 987 | 24.333333 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/vit/vit.yml | Models:
- Name: upernet_vit-b16_mln_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: ViT-B + MLN
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 144.09
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 9.2
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 47.71
mIoU(ms+flip): 49.51
Config: configs/vit/upernet_vit-b16_mln_512x512_80k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_mln_512x512_80k_ade20k/upernet_vit-b16_mln_512x512_80k_ade20k_20210624_130547-0403cee1.pth
- Name: upernet_vit-b16_mln_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: ViT-B + MLN
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 131.93
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 9.2
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 46.75
mIoU(ms+flip): 48.46
Config: configs/vit/upernet_vit-b16_mln_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_mln_512x512_160k_ade20k/upernet_vit-b16_mln_512x512_160k_ade20k_20210624_130547-852fa768.pth
- Name: upernet_vit-b16_ln_mln_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: ViT-B + LN + MLN
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 146.63
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 9.21
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 47.73
mIoU(ms+flip): 49.95
Config: configs/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k/upernet_vit-b16_ln_mln_512x512_160k_ade20k_20210621_172828-f444c077.pth
- Name: upernet_deit-s16_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-S
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 33.5
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 4.68
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 42.96
mIoU(ms+flip): 43.79
Config: configs/vit/upernet_deit-s16_512x512_80k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_512x512_80k_ade20k/upernet_deit-s16_512x512_80k_ade20k_20210624_095228-afc93ec2.pth
- Name: upernet_deit-s16_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-S
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 34.26
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 4.68
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 42.87
mIoU(ms+flip): 43.79
Config: configs/vit/upernet_deit-s16_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_512x512_160k_ade20k/upernet_deit-s16_512x512_160k_ade20k_20210621_160903-5110d916.pth
- Name: upernet_deit-s16_mln_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-S + MLN
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 89.45
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 5.69
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 43.82
mIoU(ms+flip): 45.07
Config: configs/vit/upernet_deit-s16_mln_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_mln_512x512_160k_ade20k/upernet_deit-s16_mln_512x512_160k_ade20k_20210621_161021-fb9a5dfb.pth
- Name: upernet_deit-s16_ln_mln_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-S + LN + MLN
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 80.71
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 5.69
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 43.52
mIoU(ms+flip): 45.01
Config: configs/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k/upernet_deit-s16_ln_mln_512x512_160k_ade20k_20210621_161021-c0cd652f.pth
- Name: upernet_deit-b16_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-B
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 103.2
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 7.75
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 45.24
mIoU(ms+flip): 46.73
Config: configs/vit/upernet_deit-b16_512x512_80k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_512x512_80k_ade20k/upernet_deit-b16_512x512_80k_ade20k_20210624_130529-1e090789.pth
- Name: upernet_deit-b16_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-B
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 96.25
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 7.75
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 45.36
mIoU(ms+flip): 47.16
Config: configs/vit/upernet_deit-b16_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_512x512_160k_ade20k/upernet_deit-b16_512x512_160k_ade20k_20210621_180100-828705d7.pth
- Name: upernet_deit-b16_mln_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-B + MLN
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 128.53
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 9.21
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 45.46
mIoU(ms+flip): 47.16
Config: configs/vit/upernet_deit-b16_mln_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_mln_512x512_160k_ade20k/upernet_deit-b16_mln_512x512_160k_ade20k_20210621_191949-4e1450f3.pth
- Name: upernet_deit-b16_ln_mln_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: DeiT-B + LN + MLN
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 129.03
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
Training Memory (GB): 9.21
Results:
- Task: Semantic Segmentation
Dataset: ADE20K
Metrics:
mIoU: 45.37
mIoU(ms+flip): 47.23
Config: configs/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k.py
Weights: https://download.openmmlab.com/mmsegmentation/v0.5/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k/upernet_deit-b16_ln_mln_512x512_160k_ade20k_20210623_153535-8a959c14.pth
| 7,742 | 30.733607 | 182 | yml |
mmsegmentation | mmsegmentation-master/demo/image_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
from argparse import ArgumentParser
from mmcv.cnn.utils.sync_bn import revert_sync_batchnorm
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
from mmseg.core.evaluation import get_palette
def main():
parser = ArgumentParser()
parser.add_argument('img', help='Image file')
parser.add_argument('config', help='Config file')
parser.add_argument('checkpoint', help='Checkpoint file')
parser.add_argument('--out-file', default=None, help='Path to output file')
parser.add_argument(
'--device', default='cuda:0', help='Device used for inference')
parser.add_argument(
'--palette',
default='cityscapes',
help='Color palette used for segmentation map')
parser.add_argument(
'--opacity',
type=float,
default=0.5,
help='Opacity of painted segmentation map. In (0, 1] range.')
args = parser.parse_args()
# build the model from a config file and a checkpoint file
model = init_segmentor(args.config, args.checkpoint, device=args.device)
if args.device == 'cpu':
model = revert_sync_batchnorm(model)
# test a single image
result = inference_segmentor(model, args.img)
# show the results
show_result_pyplot(
model,
args.img,
result,
get_palette(args.palette),
opacity=args.opacity,
out_file=args.out_file)
if __name__ == '__main__':
main()
| 1,499 | 30.914894 | 79 | py |
mmsegmentation | mmsegmentation-master/demo/video_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
from argparse import ArgumentParser
import cv2
from mmseg.apis import inference_segmentor, init_segmentor
from mmseg.core.evaluation import get_palette
def main():
parser = ArgumentParser()
parser.add_argument('video', help='Video file or webcam id')
parser.add_argument('config', help='Config file')
parser.add_argument('checkpoint', help='Checkpoint file')
parser.add_argument(
'--device', default='cuda:0', help='Device used for inference')
parser.add_argument(
'--palette',
default='cityscapes',
help='Color palette used for segmentation map')
parser.add_argument(
'--show', action='store_true', help='Whether to show draw result')
parser.add_argument(
'--show-wait-time', default=1, type=int, help='Wait time after imshow')
parser.add_argument(
'--output-file', default=None, type=str, help='Output video file path')
parser.add_argument(
'--output-fourcc',
default='MJPG',
type=str,
help='Fourcc of the output video')
parser.add_argument(
'--output-fps', default=-1, type=int, help='FPS of the output video')
parser.add_argument(
'--output-height',
default=-1,
type=int,
help='Frame height of the output video')
parser.add_argument(
'--output-width',
default=-1,
type=int,
help='Frame width of the output video')
parser.add_argument(
'--opacity',
type=float,
default=0.5,
help='Opacity of painted segmentation map. In (0, 1] range.')
args = parser.parse_args()
assert args.show or args.output_file, \
'At least one output should be enabled.'
# build the model from a config file and a checkpoint file
model = init_segmentor(args.config, args.checkpoint, device=args.device)
# build input video
cap = cv2.VideoCapture(args.video)
    assert cap.isOpened()
input_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
input_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
input_fps = cap.get(cv2.CAP_PROP_FPS)
# init output video
writer = None
output_height = None
output_width = None
if args.output_file is not None:
fourcc = cv2.VideoWriter_fourcc(*args.output_fourcc)
output_fps = args.output_fps if args.output_fps > 0 else input_fps
output_height = args.output_height if args.output_height > 0 else int(
input_height)
output_width = args.output_width if args.output_width > 0 else int(
input_width)
writer = cv2.VideoWriter(args.output_file, fourcc, output_fps,
(output_width, output_height), True)
# start looping
try:
while True:
flag, frame = cap.read()
if not flag:
break
# test a single image
result = inference_segmentor(model, frame)
# blend raw image and prediction
draw_img = model.show_result(
frame,
result,
palette=get_palette(args.palette),
show=False,
opacity=args.opacity)
if args.show:
cv2.imshow('video_demo', draw_img)
cv2.waitKey(args.show_wait_time)
if writer:
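                # resize the blended frame when it does not match the requested output size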
if draw_img.shape[0] != output_height or draw_img.shape[
1] != output_width:
draw_img = cv2.resize(draw_img,
(output_width, output_height))
writer.write(draw_img)
finally:
if writer:
writer.release()
cap.release()
if __name__ == '__main__':
main()
| 3,787 | 32.522124 | 79 | py |
mmsegmentation | mmsegmentation-master/docker/serve/entrypoint.sh | #!/bin/bash
set -e
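# Start TorchServe when the container is run with "serve"; otherwise execute the given command.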
if [[ "$1" = "serve" ]]; then
shift 1
torchserve --start --ts-config /home/model-server/config.properties
else
eval "$@"
fi
# prevent docker exit
tail -f /dev/null
| 197 | 14.230769 | 71 | sh |
mmsegmentation | mmsegmentation-master/docs/en/changelog.md | ## Changelog
### V0.30.0 (01/09/2023)
**New Features**
- Support Delving into High-Quality Synthetic Face Occlusion Segmentation Datasets ([#2194](https://github.com/open-mmlab/mmsegmentation/pull/2194))
**Bug Fixes**
- Fix incorrect `test_cfg` setting in UNet base configs ([#2347](https://github.com/open-mmlab/mmsegmentation/pull/2347))
- Fix KNet `IterativeDecodeHead` bug in master branch ([#2333](https://github.com/open-mmlab/mmsegmentation/pull/2333))
- Fix deadlock issue related to MMSegWandbHook ([#2398](https://github.com/open-mmlab/mmsegmentation/pull/2398))
**Enhancement**
- Update CI and pre-commit checking ([#2309](https://github.com/open-mmlab/mmsegmentation/pull/2309),[#2331](https://github.com/open-mmlab/mmsegmentation/pull/2331))
- Add `Projects/` folder, and the first example project in 0.x ([#2457](https://github.com/open-mmlab/mmsegmentation/pull/2457))
- Fix the deprecation of `np.float` and CI configuration problems ([#2451](https://github.com/open-mmlab/mmsegmentation/pull/2451))
**Documentation**
- Add high quality synthetic face occlusion dataset link to readme ([#2453](https://github.com/open-mmlab/mmsegmentation/pull/2453))
- Fix the docstring error in the `PascalContextDataset59` class ([#2450](https://github.com/open-mmlab/mmsegmentation/pull/2450))
**Contributors**
- @smttsp made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2347
- @MilkClouds made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2398
- @Spritea made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2450
### V0.29.1 (11/3/2022)
**New Features**
- Add model ensemble tools ([#2218](https://github.com/open-mmlab/mmsegmentation/pull/2218))
**Bug Fixes**
- Use SyncBN in MobileNetV2 ([#2207](https://github.com/open-mmlab/mmsegmentation/pull/2207))
**Documentation**
- Update FAQ doc about binary segmentation and ReduceZeroLabel ([#2206](https://github.com/open-mmlab/mmsegmentation/pull/2206))
- Fix typos ([#2249](https://github.com/open-mmlab/mmsegmentation/pull/2249))
- Fix model results ([#2190](https://github.com/open-mmlab/mmsegmentation/pull/2190), [#2114](https://github.com/open-mmlab/mmsegmentation/pull/2114))
**Contributors**
- @isLinXu made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2219
- @zhijiejia made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2218
- @lee-jinhee made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2249
### V0.29.0 (10/10/2022)
**New Features**
- Support PoolFormer (CVPR'2022) ([#1537](https://github.com/open-mmlab/mmsegmentation/pull/1537))
**Enhancement**
- Improve structure and readability for FCNHead ([#2142](https://github.com/open-mmlab/mmsegmentation/pull/2142))
- Support IterableDataset in distributed training ([#2151](https://github.com/open-mmlab/mmsegmentation/pull/2151))
- Upgrade .dev scripts ([#2020](https://github.com/open-mmlab/mmsegmentation/pull/2020))
- Upgrade pre-commit hooks ([#2155](https://github.com/open-mmlab/mmsegmentation/pull/2155))
**Bug Fixes**
- Fix `inference_segmentor` in `mmseg.apis.inference` ([#1849](https://github.com/open-mmlab/mmsegmentation/pull/1849))
- Fix `label_map` handling in the evaluation part ([#2075](https://github.com/open-mmlab/mmsegmentation/pull/2075))
- Add missing dependencies to the TorchServe Docker file ([#2133](https://github.com/open-mmlab/mmsegmentation/pull/2133))
- Fix DDP unittest ([#2060](https://github.com/open-mmlab/mmsegmentation/pull/2060))
**Contributors**
- @jinwonkim93 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1849
- @rlatjcj made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2075
- @ShirleyWangCVR made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2151
- @mangelroman made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/2133
### V0.28.0 (9/8/2022)
**New Features**
- Support Tversky Loss ([#1986](https://github.com/open-mmlab/mmsegmentation/pull/1986))
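For reference, the new loss drops into a decode head like any other mmseg loss. A minimal config sketch (the `alpha`/`beta` values are the library defaults, shown only for illustration):

```python
# Minimal sketch: plug Tversky loss into a decode head.
# alpha weights false positives and beta weights false negatives;
# alpha + beta is conventionally kept at 1.
model = dict(
    decode_head=dict(
        loss_decode=dict(
            type='TverskyLoss', alpha=0.3, beta=0.7, loss_weight=1.0)))
```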
**Bug Fixes**
- Fix binary segmentation ([#2016](https://github.com/open-mmlab/mmsegmentation/pull/2016))
- Fix config files ([#1901](https://github.com/open-mmlab/mmsegmentation/pull/1901), [#1893](https://github.com/open-mmlab/mmsegmentation/pull/1893), [#1871](https://github.com/open-mmlab/mmsegmentation/pull/1871))
- Revise documentation ([#1844](https://github.com/open-mmlab/mmsegmentation/pull/1844), [#1980](https://github.com/open-mmlab/mmsegmentation/pull/1980), [#2025](https://github.com/open-mmlab/mmsegmentation/pull/2025), [#1982](https://github.com/open-mmlab/mmsegmentation/pull/1982))
- Fix confusion matrix calculation ([#1992](https://github.com/open-mmlab/mmsegmentation/pull/1992))
- Fix decode head forward_train error ([#1997](https://github.com/open-mmlab/mmsegmentation/pull/1997))
**Contributors**
- @suchot made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1844
- @TimoK93 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1992
### V0.27.0 (7/28/2022)
**Enhancement**
- Add Swin-L Transformer models ([#1471](https://github.com/open-mmlab/mmsegmentation/pull/1471))
- Update ERFNet results ([#1744](https://github.com/open-mmlab/mmsegmentation/pull/1744))
**Bug Fixes**
- Revise documentation ([#1761](https://github.com/open-mmlab/mmsegmentation/pull/1761), [#1755](https://github.com/open-mmlab/mmsegmentation/pull/1755), [#1802](https://github.com/open-mmlab/mmsegmentation/pull/1802))
- Fix colab tutorial ([#1779](https://github.com/open-mmlab/mmsegmentation/pull/1779))
- Fix segformer checkpoint url ([#1785](https://github.com/open-mmlab/mmsegmentation/pull/1785))
**Contributors**
- @DataSttructure made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1802
- @AkideLiu made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1785
- @mawanda-jun made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1761
- @Yan-Daojiang made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1755
### V0.26.0 (7/1/2022)
**Highlights**
- Update New SegFormer models on ADE20K ([1705](https://github.com/open-mmlab/mmsegmentation/pull/1705))
- Dedicated MMSegWandbHook for MMSegmentation ([1603](https://github.com/open-mmlab/mmsegmentation/pull/1603))
**New Features**
- Update New SegFormer models on ADE20K ([1705](https://github.com/open-mmlab/mmsegmentation/pull/1705))
- Dedicated MMSegWandbHook for MMSegmentation ([1603](https://github.com/open-mmlab/mmsegmentation/pull/1603))
- Add UPerNet r18 results ([1669](https://github.com/open-mmlab/mmsegmentation/pull/1669))
**Enhancement**
- Keep dimension of `cls_token_weight` for easier ONNX deployment ([1642](https://github.com/open-mmlab/mmsegmentation/pull/1642))
- Support inference with padding ([1607](https://github.com/open-mmlab/mmsegmentation/pull/1607))
**Bug Fixes**
- Fix typos ([#1640](https://github.com/open-mmlab/mmsegmentation/pull/1640), [#1667](https://github.com/open-mmlab/mmsegmentation/pull/1667), [#1656](https://github.com/open-mmlab/mmsegmentation/pull/1656), [#1699](https://github.com/open-mmlab/mmsegmentation/pull/1699), [#1702](https://github.com/open-mmlab/mmsegmentation/pull/1702), [#1695](https://github.com/open-mmlab/mmsegmentation/pull/1695), [#1707](https://github.com/open-mmlab/mmsegmentation/pull/1707), [#1708](https://github.com/open-mmlab/mmsegmentation/pull/1708), [#1721](https://github.com/open-mmlab/mmsegmentation/pull/1721))
**Documentation**
- Fix `mdformat` version to support python3.6 and remove ruby installation ([1672](https://github.com/open-mmlab/mmsegmentation/pull/1672))
**Contributors**
- @RunningLeon made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1642
- @zhouzaida made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1655
- @tkhe made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1667
- @rotorliu made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1656
- @EvelynWang-0423 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1679
- @ZhaoYi1222 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1616
- @Sanster made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1704
- @ayulockin made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1603
### V0.25.0 (6/2/2022)
**Highlights**
- Support PyTorch backend on MLU ([1515](https://github.com/open-mmlab/mmsegmentation/pull/1515))
**Bug Fixes**
- Fix the error of BCE loss when batch size is 1 ([1629](https://github.com/open-mmlab/mmsegmentation/pull/1629))
- Fix bug of `resize` function when align_corners is True ([1592](https://github.com/open-mmlab/mmsegmentation/pull/1592))
- Fix Dockerfile to run demo script in docker container ([1568](https://github.com/open-mmlab/mmsegmentation/pull/1568))
- Correct inference_demo.ipynb path ([1576](https://github.com/open-mmlab/mmsegmentation/pull/1576))
- Fix the `build_segmentor` in colab demo ([1551](https://github.com/open-mmlab/mmsegmentation/pull/1551))
- Fix md2yml script ([1633](https://github.com/open-mmlab/mmsegmentation/pull/1633), [1555](https://github.com/open-mmlab/mmsegmentation/pull/1555))
- Fix main line link in MAE README.md ([1556](https://github.com/open-mmlab/mmsegmentation/pull/1556))
- Fix FastFCN `crop_size` in README.md ([1597](https://github.com/open-mmlab/mmsegmentation/pull/1597))
- Pip upgrade when testing windows platform ([1610](https://github.com/open-mmlab/mmsegmentation/pull/1610))
**Improvements**
- Delete DS_Store file ([1549](https://github.com/open-mmlab/mmsegmentation/pull/1549))
- Revise owners.yml ([1621](https://github.com/open-mmlab/mmsegmentation/pull/1621), [1543](https://github.com/open-mmlab/mmsegmentation/pull/1543))
**Documentation**
- Rewrite the installation guidance ([1630](https://github.com/open-mmlab/mmsegmentation/pull/1630))
- Format readme ([1635](https://github.com/open-mmlab/mmsegmentation/pull/1635))
- Replace markdownlint with mdformat to avoid ruby installation ([1591](https://github.com/open-mmlab/mmsegmentation/pull/1591))
- Add explanation and usage instructions for data configuration ([1548](https://github.com/open-mmlab/mmsegmentation/pull/1548))
- Configure Myst-parser to parse anchor tag ([1589](https://github.com/open-mmlab/mmsegmentation/pull/1589))
- Update QR code and link for QQ group ([1598](https://github.com/open-mmlab/mmsegmentation/pull/1598), [1574](https://github.com/open-mmlab/mmsegmentation/pull/1574))
**Contributors**
- @atinfinity made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1568
- @DoubleChuang made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1576
- @alpha-baymax made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1515
- @274869388 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1629
### V0.24.1 (5/1/2022)
**Bug Fixes**
- Fix `LayerDecayOptimizerConstructor` for MAE training ([#1539](https://github.com/open-mmlab/mmsegmentation/pull/1539), [#1540](https://github.com/open-mmlab/mmsegmentation/pull/1540))
### V0.24.0 (4/29/2022)
**Highlights**
- Support MAE: Masked Autoencoders Are Scalable Vision Learners
- Support Resnet strikes back
**New Features**
- Support MAE: Masked Autoencoders Are Scalable Vision Learners ([1307](https://github.com/open-mmlab/mmsegmentation/pull/1307), [1523](https://github.com/open-mmlab/mmsegmentation/pull/1523))
- Support Resnet strikes back ([1390](https://github.com/open-mmlab/mmsegmentation/pull/1390))
- Support extra dataloader settings in configs ([1435](https://github.com/open-mmlab/mmsegmentation/pull/1435))
**Bug Fixes**
- Fix input previous results for the last cascade_decode_head ([#1450](https://github.com/open-mmlab/mmsegmentation/pull/1450))
- Fix validation loss logging ([#1494](https://github.com/open-mmlab/mmsegmentation/pull/1494))
- Fix the bug in binary_cross_entropy ([1527](https://github.com/open-mmlab/mmsegmentation/pull/1527))
- Support single channel prediction for Binary Cross Entropy Loss ([#1454](https://github.com/open-mmlab/mmsegmentation/pull/1454))
- Fix potential bugs in accuracy.py ([1496](https://github.com/open-mmlab/mmsegmentation/pull/1496))
- Avoid converting label ids twice by label map during evaluation ([1417](https://github.com/open-mmlab/mmsegmentation/pull/1417))
- Fix bug about label_map ([1445](https://github.com/open-mmlab/mmsegmentation/pull/1445))
- Fix image save path bug in Windows ([1423](https://github.com/open-mmlab/mmsegmentation/pull/1423))
- Fix MMSegmentation Colab demo ([1501](https://github.com/open-mmlab/mmsegmentation/pull/1501), [1452](https://github.com/open-mmlab/mmsegmentation/pull/1452))
- Migrate azure blob for beit checkpoints ([1503](https://github.com/open-mmlab/mmsegmentation/pull/1503))
- Fix bug in `tools/analyze_logs.py` caused by wrong plot_iter in some cases ([1428](https://github.com/open-mmlab/mmsegmentation/pull/1428))
**Improvements**
- Merge BEiT and ConvNext's LR decay optimizer constructors ([#1438](https://github.com/open-mmlab/mmsegmentation/pull/1438))
- Register optimizer constructor with mmseg ([#1456](https://github.com/open-mmlab/mmsegmentation/pull/1456))
- Refactor transformer encode layer in ViT and BEiT backbone ([#1481](https://github.com/open-mmlab/mmsegmentation/pull/1481))
- Add `build_pos_embed` and `build_layers` for BEiT ([1517](https://github.com/open-mmlab/mmsegmentation/pull/1517))
- Add `with_cp` to mit and vit ([1431](https://github.com/open-mmlab/mmsegmentation/pull/1431))
- Fix inconsistent dtype of `seg_label` in stdc decode ([1463](https://github.com/open-mmlab/mmsegmentation/pull/1463))
- Delete random seed for training in `dist_train.sh` ([1519](https://github.com/open-mmlab/mmsegmentation/pull/1519))
- Revise high `workers_per_gpu` in config files ([#1506](https://github.com/open-mmlab/mmsegmentation/pull/1506))
- Add GPG keys and del mmcv version in Dockerfile ([1534](https://github.com/open-mmlab/mmsegmentation/pull/1534))
- Update checkpoint for model in deeplabv3plus ([#1487](https://github.com/open-mmlab/mmsegmentation/pull/1487))
- Add `DistSamplerSeedHook` to set epoch number to dataloader when runner is `EpochBasedRunner` ([1449](https://github.com/open-mmlab/mmsegmentation/pull/1449))
- Provide URLs of Swin Transformer pretrained models ([1389](https://github.com/open-mmlab/mmsegmentation/pull/1389))
- Update Dockerfiles in the docker directory and `get_started.md` to the latest stable versions of Python, PyTorch and MMCV ([1446](https://github.com/open-mmlab/mmsegmentation/pull/1446))
**Documentation**
- Add a clearer statement about CPU training/inference ([1518](https://github.com/open-mmlab/mmsegmentation/pull/1518))
**Contributors**
- @jiangyitong made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1431
- @kahkeng made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1447
- @Nourollah made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1446
- @androbaza made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1452
- @Yzichen made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1445
- @whu-pzhang made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1423
- @panfeng-hover made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1417
- @Johnson-Wang made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1496
- @jere357 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1460
- @mfernezir made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1494
- @donglixp made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1503
- @YuanLiuuuuuu made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1307
- @Dawn-bin made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1527
### V0.23.0 (4/1/2022)
**Highlights**
- Support BEiT: BERT Pre-Training of Image Transformers
- Support K-Net: Towards Unified Image Segmentation
- Add `avg_non_ignore` of CELoss to support average loss over non-ignored elements
- Support dataset initialization with file client
**New Features**
- Support BEiT: BERT Pre-Training of Image Transformers ([#1404](https://github.com/open-mmlab/mmsegmentation/pull/1404))
- Support K-Net: Towards Unified Image Segmentation ([#1289](https://github.com/open-mmlab/mmsegmentation/pull/1289))
- Support dataset initialization with file client ([#1402](https://github.com/open-mmlab/mmsegmentation/pull/1402))
- Add class name function for STARE datasets ([#1376](https://github.com/open-mmlab/mmsegmentation/pull/1376))
- Support different seeds on different ranks when distributed training ([#1362](https://github.com/open-mmlab/mmsegmentation/pull/1362))
- Add `nlc2nchw2nlc` and `nchw2nlc2nchw` to simplify tensor with different dimension operation ([#1249](https://github.com/open-mmlab/mmsegmentation/pull/1249))
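A usage sketch for the two helpers (argument names follow `mmseg.models.utils`; worth double-checking against the installed release):

```python
import torch
from torch import nn

from mmseg.models.utils import nchw2nlc2nchw, nlc2nchw2nlc

x_nchw = torch.rand(2, 64, 32, 32)
# Run an NLC-style module (LayerNorm over channels) on an NCHW tensor.
out_nchw = nchw2nlc2nchw(nn.LayerNorm(64), x_nchw)

x_nlc = torch.rand(2, 32 * 32, 64)
# Run an NCHW-style module (a conv) on an NLC tensor; the spatial shape is
# needed to unflatten the sequence back into a feature map.
out_nlc = nlc2nchw2nlc(nn.Conv2d(64, 64, 3, padding=1), x_nlc, hw_shape=(32, 32))
```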
**Improvements**
- Synchronize random seed for distributed sampler ([#1411](https://github.com/open-mmlab/mmsegmentation/pull/1411))
- Add script and documentation for multi-machine distributed training ([#1383](https://github.com/open-mmlab/mmsegmentation/pull/1383))
**Bug Fixes**
- Add `avg_non_ignore` of CELoss to support average loss over non-ignored elements; see the config sketch after this list ([#1409](https://github.com/open-mmlab/mmsegmentation/pull/1409))
- Fix some wrong URLs of models or logs in `./configs` ([#1433](https://github.com/open-mmlab/mmsegmentation/pull/1433))
- Add title and color theme arguments to plot function in `tools/confusion_matrix.py` ([#1401](https://github.com/open-mmlab/mmsegmentation/pull/1401))
- Fix outdated link in Colab demo ([#1392](https://github.com/open-mmlab/mmsegmentation/pull/1392))
- Fix typos ([#1424](https://github.com/open-mmlab/mmsegmentation/pull/1424), [#1405](https://github.com/open-mmlab/mmsegmentation/pull/1405), [#1371](https://github.com/open-mmlab/mmsegmentation/pull/1371), [#1366](https://github.com/open-mmlab/mmsegmentation/pull/1366), [#1363](https://github.com/open-mmlab/mmsegmentation/pull/1363))
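For the `avg_non_ignore` addition above, the flag is switched on in the loss config; a minimal sketch:

```python
# Minimal sketch: average cross-entropy only over pixels whose label is not
# ignore_index, instead of over all pixels in the batch.
model = dict(
    decode_head=dict(
        loss_decode=dict(
            type='CrossEntropyLoss',
            use_sigmoid=False,
            loss_weight=1.0,
            avg_non_ignore=True)))
```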
**Documentation**
- Add FAQ document ([#1420](https://github.com/open-mmlab/mmsegmentation/pull/1420))
- Fix the config name style description in official docs([#1414](https://github.com/open-mmlab/mmsegmentation/pull/1414))
**Contributors**
- @kinglintianxia made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1371
- @CCODING04 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1376
- @mob5566 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1401
- @xiongnemo made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1392
- @Xiangxu-0103 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1405
### V0.22.1 (3/9/2022)
**Bug Fixes**
- Fix the ZeroDivisionError raised when all pixels in one image are ignored. ([#1336](https://github.com/open-mmlab/mmsegmentation/pull/1336))
**Improvements**
- Provide URLs of STDC, Segmenter and Twins pretrained models ([#1357](https://github.com/open-mmlab/mmsegmentation/pull/1357))
### V0.22 (3/04/2022)
**Highlights**
- Support ConvNeXt: A ConvNet for the 2020s. Please use the latest MMClassification (0.21.0) to try it out.
- Support iSAID aerial Dataset.
- Officially Support inference on Windows OS.
**New Features**
- Support ConvNeXt: A ConvNet for the 2020s. ([#1216](https://github.com/open-mmlab/mmsegmentation/pull/1216))
- Support iSAID aerial Dataset. ([#1115](https://github.com/open-mmlab/mmsegmentation/pull/1115))
- Generating and plotting confusion matrix. ([#1301](https://github.com/open-mmlab/mmsegmentation/pull/1301))
**Improvements**
- Refactor 4 decoder heads (ASPP, FCN, PSP, UPer): Split forward function into `_forward_feature` and `cls_seg`. ([#1299](https://github.com/open-mmlab/mmsegmentation/pull/1299))
- Add `min_size` arg in `Resize` to keep the shape after resize bigger than the slide window; see the pipeline sketch after this list. ([#1318](https://github.com/open-mmlab/mmsegmentation/pull/1318))
- Revise pre-commit-hooks. ([#1315](https://github.com/open-mmlab/mmsegmentation/pull/1315))
- Add win-ci. ([#1296](https://github.com/open-mmlab/mmsegmentation/pull/1296))
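For the `min_size` argument above, a pipeline sketch (scales and crop size are illustrative, not recommendations):

```python
# Minimal sketch: clamp random rescaling so an image never ends up smaller
# than the sliding-window crop size.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(
        type='Resize',
        img_scale=(2048, 512),
        ratio_range=(0.5, 2.0),
        min_size=512),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
]
```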
**Bug Fixes**
- Fix `mlp_ratio` type in Swin Transformer. ([#1274](https://github.com/open-mmlab/mmsegmentation/pull/1274))
- Fix path errors in `./demo`. ([#1269](https://github.com/open-mmlab/mmsegmentation/pull/1269))
- Fix bug in conversion of potsdam. ([#1279](https://github.com/open-mmlab/mmsegmentation/pull/1279))
- Make accuracy take into account `ignore_index`. ([#1259](https://github.com/open-mmlab/mmsegmentation/pull/1259))
- Add Pytorch HardSwish assertion in unit test. ([#1294](https://github.com/open-mmlab/mmsegmentation/pull/1294))
- Fix wrong palette value in vaihingen. ([#1292](https://github.com/open-mmlab/mmsegmentation/pull/1292))
- Fix the bug that SETR cannot load pretrain. ([#1293](https://github.com/open-mmlab/mmsegmentation/pull/1293))
- Update the correct `In Collection` field in the metafile of each config. ([#1239](https://github.com/open-mmlab/mmsegmentation/pull/1239))
- Upload completed STDC models. ([#1332](https://github.com/open-mmlab/mmsegmentation/pull/1332))
- Fix type `Cast` error when exporting `DNLHead` to ONNX. ([#1161](https://github.com/open-mmlab/mmsegmentation/pull/1161))
**Contributors**
- @JiaYanhao made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1269
- @andife made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1281
- @SBCV made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1279
- @HJoonKwon made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1259
- @Tsingularity made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1290
- @Waterman0524 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1115
- @MeowZheng made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1315
- @linfangjian01 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1318
### V0.21.1 (2/9/2022)
**Bug Fixes**
- Fix typos in docs. ([#1263](https://github.com/open-mmlab/mmsegmentation/pull/1263))
- Fix repeated logging caused by `setup_multi_processes`. ([#1267](https://github.com/open-mmlab/mmsegmentation/pull/1267))
- Upgrade isort in pre-commit hook. ([#1270](https://github.com/open-mmlab/mmsegmentation/pull/1270))
**Improvements**
- Use MMCV load_state_dict func in ViT/Swin. ([#1272](https://github.com/open-mmlab/mmsegmentation/pull/1272))
- Add exception for PointRend to support CPU-only mode. ([#1271](https://github.com/open-mmlab/mmsegmentation/pull/1271))
### V0.21 (1/29/2022)
**Highlights**
- Officially Support CPU training and inference; please use the latest MMCV (1.4.4) to try it out.
- Support Segmenter: Transformer for Semantic Segmentation (ICCV'2021).
- Support ISPRS Potsdam and Vaihingen Dataset.
- Add Mosaic transform and `MultiImageMixDataset` class in `dataset_wrappers`.
**New Features**
- Support Segmenter: Transformer for Semantic Segmentation (ICCV'2021) ([#955](https://github.com/open-mmlab/mmsegmentation/pull/955))
- Support ISPRS Potsdam and Vaihingen Dataset ([#1097](https://github.com/open-mmlab/mmsegmentation/pull/1097), [#1171](https://github.com/open-mmlab/mmsegmentation/pull/1171))
- Add SegFormer's benchmark on Cityscapes ([#1155](https://github.com/open-mmlab/mmsegmentation/pull/1155))
- Add auto resume ([#1172](https://github.com/open-mmlab/mmsegmentation/pull/1172))
- Add Mosaic transform and `MultiImageMixDataset` class in `dataset_wrappers`; see the sketch after this list ([#1093](https://github.com/open-mmlab/mmsegmentation/pull/1093), [#1105](https://github.com/open-mmlab/mmsegmentation/pull/1105))
- Add log collector ([#1175](https://github.com/open-mmlab/mmsegmentation/pull/1175))
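For the Mosaic pieces above, a config sketch. The transform is registered as `RandomMosaic` in the 0.x pipelines; treat the exact argument set as an assumption to verify against the release:

```python
# Minimal sketch: MultiImageMixDataset lets RandomMosaic fetch the extra
# images it stitches into each training sample.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='RandomMosaic', prob=1.0, img_scale=(512, 512)),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
data = dict(
    train=dict(
        type='MultiImageMixDataset',
        dataset=dict(
            type='CityscapesDataset',
            data_root='data/cityscapes/',
            img_dir='leftImg8bit/train',
            ann_dir='gtFine/train',
            pipeline=[
                dict(type='LoadImageFromFile'),
                dict(type='LoadAnnotations'),
            ]),
        pipeline=train_pipeline))
```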
**Improvements**
- New-style CPU training and inference ([#1251](https://github.com/open-mmlab/mmsegmentation/pull/1251))
- Add UNet benchmark with multiple losses supervision ([#1143](https://github.com/open-mmlab/mmsegmentation/pull/1143))
**Bug Fixes**
- Fix the model statistics in doc for readthedoc ([#1153](https://github.com/open-mmlab/mmsegmentation/pull/1153))
- Set random seed for `palette` if not given ([#1152](https://github.com/open-mmlab/mmsegmentation/pull/1152))
- Add `COCOStuffDataset` in `class_names.py` ([#1222](https://github.com/open-mmlab/mmsegmentation/pull/1222))
- Fix bug in non-distributed multi-gpu training/testing ([#1247](https://github.com/open-mmlab/mmsegmentation/pull/1247))
- Delete unnecessary lines of STDCHead ([#1231](https://github.com/open-mmlab/mmsegmentation/pull/1231))
**Contributors**
- @jbwang1997 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1152
- @BeaverCC made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1206
- @Echo-minn made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1214
- @rstrudel made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/955
### V0.20.2 (12/15/2021)
**Bug Fixes**
- Revise --option to --options to avoid BC-breaking. ([#1140](https://github.com/open-mmlab/mmsegmentation/pull/1140))
### V0.20.1 (12/14/2021)
**Improvements**
- Change options to cfg-options ([#1129](https://github.com/open-mmlab/mmsegmentation/pull/1129))
**Bug Fixes**
- Fix `<!-- [ABSTRACT] -->` in metafile. ([#1127](https://github.com/open-mmlab/mmsegmentation/pull/1127))
- Fix correct `num_classes` of HRNet in `LoveDA` dataset ([#1136](https://github.com/open-mmlab/mmsegmentation/pull/1136))
### V0.20 (12/10/2021)
**Highlights**
- Support Twins ([#989](https://github.com/open-mmlab/mmsegmentation/pull/989))
- Support a real-time segmentation model STDC ([#995](https://github.com/open-mmlab/mmsegmentation/pull/995))
- Support a widely-used segmentation model in lane detection ERFNet ([#960](https://github.com/open-mmlab/mmsegmentation/pull/960))
- Support A Remote Sensing Land-Cover Dataset LoveDA ([#1028](https://github.com/open-mmlab/mmsegmentation/pull/1028))
- Support focal loss ([#1024](https://github.com/open-mmlab/mmsegmentation/pull/1024))
**New Features**
- Support Twins ([#989](https://github.com/open-mmlab/mmsegmentation/pull/989))
- Support a real-time segmentation model STDC ([#995](https://github.com/open-mmlab/mmsegmentation/pull/995))
- Support a widely-used segmentation model in lane detection ERFNet ([#960](https://github.com/open-mmlab/mmsegmentation/pull/960))
- Add SETR cityscapes benchmark ([#1087](https://github.com/open-mmlab/mmsegmentation/pull/1087))
- Add BiSeNetV1 COCO-Stuff 164k benchmark ([#1019](https://github.com/open-mmlab/mmsegmentation/pull/1019))
- Support focal loss ([#1024](https://github.com/open-mmlab/mmsegmentation/pull/1024))
- Add Cutout transform ([#1022](https://github.com/open-mmlab/mmsegmentation/pull/1022))
**Improvements**
- Set a random seed when the user does not set a seed ([#1039](https://github.com/open-mmlab/mmsegmentation/pull/1039))
- Add CircleCI setup ([#1086](https://github.com/open-mmlab/mmsegmentation/pull/1086))
- Skip CI when only ignored paths are changed ([#1078](https://github.com/open-mmlab/mmsegmentation/pull/1078))
- Add abstract and image for every paper ([#1060](https://github.com/open-mmlab/mmsegmentation/pull/1060))
- Create a symbolic link on Windows ([#1090](https://github.com/open-mmlab/mmsegmentation/pull/1090))
- Support video demo using trained model ([#1014](https://github.com/open-mmlab/mmsegmentation/pull/1014))
**Bug Fixes**
- Fix incorrectly loading init_cfg or pretrained models of several transformer models ([#999](https://github.com/open-mmlab/mmsegmentation/pull/999), [#1069](https://github.com/open-mmlab/mmsegmentation/pull/1069), [#1102](https://github.com/open-mmlab/mmsegmentation/pull/1102))
- Fix EfficientMultiheadAttention in SegFormer ([#1037](https://github.com/open-mmlab/mmsegmentation/pull/1037))
- Remove `fp16` folder in `configs` ([#1031](https://github.com/open-mmlab/mmsegmentation/pull/1031))
- Fix several typos in .yml file (Dice Metric [#1041](https://github.com/open-mmlab/mmsegmentation/pull/1041), ADE20K dataset [#1120](https://github.com/open-mmlab/mmsegmentation/pull/1120), Training Memory (GB) [#1083](https://github.com/open-mmlab/mmsegmentation/pull/1083))
- Fix test error when using `--show-dir` ([#1091](https://github.com/open-mmlab/mmsegmentation/pull/1091))
- Fix dist training infinite waiting issue ([#1035](https://github.com/open-mmlab/mmsegmentation/pull/1035))
- Change the upper version of mmcv to 1.5.0 ([#1096](https://github.com/open-mmlab/mmsegmentation/pull/1096))
- Fix symlink failure on Windows ([#1038](https://github.com/open-mmlab/mmsegmentation/pull/1038))
- Cancel previous runs that are not completed ([#1118](https://github.com/open-mmlab/mmsegmentation/pull/1118))
- Unified links of readthedocs in docs ([#1119](https://github.com/open-mmlab/mmsegmentation/pull/1119))
**Contributors**
- @Junjue-Wang made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1028
- @ddebby made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1066
- @del-zhenwu made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1078
- @KangBK0120 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1106
- @zergzzlun made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1091
- @fingertap made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1035
- @irvingzhang0512 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1014
- @littleSunlxy made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/989
- @lkm2835
- @RockeyCoss
- @MengzhangLI
- @Junjun2016
- @xiexinch
- @xvjiarui
### V0.19 (11/02/2021)
**Highlights**
- Support TIMMBackbone wrapper ([#998](https://github.com/open-mmlab/mmsegmentation/pull/998))
- Support custom hook ([#428](https://github.com/open-mmlab/mmsegmentation/pull/428))
- Add codespell pre-commit hook ([#920](https://github.com/open-mmlab/mmsegmentation/pull/920))
- Add FastFCN benchmark on ADE20K ([#972](https://github.com/open-mmlab/mmsegmentation/pull/972))
**New Features**
- Support TIMMBackbone wrapper ([#998](https://github.com/open-mmlab/mmsegmentation/pull/998))
- Support custom hook; see the sketch after this list ([#428](https://github.com/open-mmlab/mmsegmentation/pull/428))
- Add FastFCN benchmark on ADE20K ([#972](https://github.com/open-mmlab/mmsegmentation/pull/972))
- Add codespell pre-commit hook and fix typos ([#920](https://github.com/open-mmlab/mmsegmentation/pull/920))
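A sketch of the custom-hook workflow enabled here; the hook body is illustrative, only the registry mechanics come from MMCV:

```python
import torch
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class CheckInvalidLossHook(Hook):
    """Illustrative hook: warn when the training loss becomes non-finite."""

    def after_train_iter(self, runner):
        if not torch.isfinite(runner.outputs['loss']):
            runner.logger.warning('loss became non-finite at iter %d',
                                  runner.iter)


# In a config, assuming the module defining the hook is importable:
# custom_hooks = [dict(type='CheckInvalidLossHook', priority='VERY_LOW')]
```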
**Improvements**
- Make inputs & channels smaller in unittests ([#1004](https://github.com/open-mmlab/mmsegmentation/pull/1004))
- Change `self.loss_decode` back to `dict` in Single Loss situation ([#1002](https://github.com/open-mmlab/mmsegmentation/pull/1002))
**Bug Fixes**
- Fix typo in usage example ([#1003](https://github.com/open-mmlab/mmsegmentation/pull/1003))
- Add contiguous after permutation in ViT ([#992](https://github.com/open-mmlab/mmsegmentation/pull/992))
- Fix the invalid link ([#985](https://github.com/open-mmlab/mmsegmentation/pull/985))
- Fix bug in CI with python 3.9 ([#994](https://github.com/open-mmlab/mmsegmentation/pull/994))
- Fix bug when loading class names from file in custom dataset ([#923](https://github.com/open-mmlab/mmsegmentation/pull/923))
**Contributors**
- @ShoupingShan made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/923
- @RockeyCoss made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/954
- @HarborYuan made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/992
- @lkm2835 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1003
- @gszh made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/428
- @VVsssssk
- @MengzhangLI
- @Junjun2016
### V0.18 (10/07/2021)
**Highlights**
- Support three real-time segmentation models (ICNet [#884](https://github.com/open-mmlab/mmsegmentation/pull/884), BiSeNetV1 [#851](https://github.com/open-mmlab/mmsegmentation/pull/851), and BiSeNetV2 [#804](https://github.com/open-mmlab/mmsegmentation/pull/804))
- Support one efficient segmentation model (FastFCN [#885](https://github.com/open-mmlab/mmsegmentation/pull/885))
- Support one efficient non-local/self-attention based segmentation model (ISANet [#70](https://github.com/open-mmlab/mmsegmentation/pull/70))
- Support COCO-Stuff 10k and 164k datasets ([#625](https://github.com/open-mmlab/mmsegmentation/pull/625))
- Support evaluating concatenated datasets separately ([#833](https://github.com/open-mmlab/mmsegmentation/pull/833))
- Support loading GT for evaluation from multi-file backend ([#867](https://github.com/open-mmlab/mmsegmentation/pull/867))
**New Features**
- Support three real-time segmentation models (ICNet [#884](https://github.com/open-mmlab/mmsegmentation/pull/884), BiSeNetV1 [#851](https://github.com/open-mmlab/mmsegmentation/pull/851), and BiSeNetV2 [#804](https://github.com/open-mmlab/mmsegmentation/pull/804))
- Support one efficient segmentation model (FastFCN [#885](https://github.com/open-mmlab/mmsegmentation/pull/885))
- Support one efficient non-local/self-attention based segmentation model (ISANet [#70](https://github.com/open-mmlab/mmsegmentation/pull/70))
- Support COCO-Stuff 10k and 164k datasets ([#625](https://github.com/open-mmlab/mmsegmentation/pull/625))
- Support evaluating concatenated datasets separately ([#833](https://github.com/open-mmlab/mmsegmentation/pull/833))
**Improvements**
- Support loading GT for evaluation from multi-file backend ([#867](https://github.com/open-mmlab/mmsegmentation/pull/867))
- Automatically convert SyncBN to BN when training on DP ([#772](https://github.com/open-mmlab/mmsegmentation/pull/772))
- Refactor Swin-Transformer ([#800](https://github.com/open-mmlab/mmsegmentation/pull/800))
**Bug Fixes**
- Update mmcv installation in dockerfile ([#860](https://github.com/open-mmlab/mmsegmentation/pull/860))
- Fix number of iteration bug when resuming checkpoint in distributed train ([#866](https://github.com/open-mmlab/mmsegmentation/pull/866))
- Fix argument parsing in `val_step` ([#906](https://github.com/open-mmlab/mmsegmentation/pull/906))
### V0.17 (09/01/2021)
**Highlights**
- Support SegFormer
- Support DPT
- Support Dark Zurich and Nighttime Driving datasets
- Support progressive evaluation
**New Features**
- Support SegFormer ([#599](https://github.com/open-mmlab/mmsegmentation/pull/599))
- Support DPT ([#605](https://github.com/open-mmlab/mmsegmentation/pull/605))
- Support Dark Zurich and Nighttime Driving datasets ([#815](https://github.com/open-mmlab/mmsegmentation/pull/815))
- Support progressive evaluation ([#709](https://github.com/open-mmlab/mmsegmentation/pull/709))
**Improvements**
- Add multiscale_output interface and unittests for HRNet ([#830](https://github.com/open-mmlab/mmsegmentation/pull/830))
- Support inheriting the Cityscapes dataset ([#750](https://github.com/open-mmlab/mmsegmentation/pull/750))
- Fix some typos in README.md ([#824](https://github.com/open-mmlab/mmsegmentation/pull/824))
- Delete convert function and add instruction to ViT/Swin README.md ([#791](https://github.com/open-mmlab/mmsegmentation/pull/791))
- Add vit/swin/mit convert weight scripts ([#783](https://github.com/open-mmlab/mmsegmentation/pull/783))
- Add copyright files ([#796](https://github.com/open-mmlab/mmsegmentation/pull/796))
**Bug Fixes**
- Fix invalid checkpoint link in inference_demo.ipynb ([#814](https://github.com/open-mmlab/mmsegmentation/pull/814))
- Ensure that items in dataset have the same order across multiple machines ([#780](https://github.com/open-mmlab/mmsegmentation/pull/780))
- Fix the log error ([#766](https://github.com/open-mmlab/mmsegmentation/pull/766))
### V0.16 (08/04/2021)
**Highlights**
- Support PyTorch 1.9
- Support SegFormer backbone MiT
- Support md2yml pre-commit hook
- Support frozen stage for HRNet
**New Features**
- Support SegFormer backbone MiT ([#594](https://github.com/open-mmlab/mmsegmentation/pull/594))
- Support md2yml pre-commit hook ([#732](https://github.com/open-mmlab/mmsegmentation/pull/732))
- Support mim ([#717](https://github.com/open-mmlab/mmsegmentation/pull/717))
- Add mmseg2torchserve tool ([#552](https://github.com/open-mmlab/mmsegmentation/pull/552))
**Improvements**
- Support hrnet frozen stage ([#743](https://github.com/open-mmlab/mmsegmentation/pull/743))
- Add template of reimplementation questions ([#741](https://github.com/open-mmlab/mmsegmentation/pull/741))
- Output pdf and epub formats for readthedocs ([#742](https://github.com/open-mmlab/mmsegmentation/pull/742))
- Refine the docstring of ResNet ([#723](https://github.com/open-mmlab/mmsegmentation/pull/723))
- Replace interpolate with resize ([#731](https://github.com/open-mmlab/mmsegmentation/pull/731))
- Update resource limit ([#700](https://github.com/open-mmlab/mmsegmentation/pull/700))
- Update config.md ([#678](https://github.com/open-mmlab/mmsegmentation/pull/678))
**Bug Fixes**
- Fix ATTENTION registry ([#729](https://github.com/open-mmlab/mmsegmentation/pull/729))
- Fix analyze log script ([#716](https://github.com/open-mmlab/mmsegmentation/pull/716))
- Fix doc api display ([#725](https://github.com/open-mmlab/mmsegmentation/pull/725))
- Fix patch_embed and pos_embed mismatch error ([#685](https://github.com/open-mmlab/mmsegmentation/pull/685))
- Fix efficient test for multi-node ([#707](https://github.com/open-mmlab/mmsegmentation/pull/707))
- Fix init_cfg in resnet backbone ([#697](https://github.com/open-mmlab/mmsegmentation/pull/697))
- Fix efficient test bug ([#702](https://github.com/open-mmlab/mmsegmentation/pull/702))
- Fix url error in config docs ([#680](https://github.com/open-mmlab/mmsegmentation/pull/680))
- Fix mmcv installation ([#676](https://github.com/open-mmlab/mmsegmentation/pull/676))
- Fix torch version ([#670](https://github.com/open-mmlab/mmsegmentation/pull/670))
**Contributors**
@sshuair @xiexinch @Junjun2016 @mmeendez8 @xvjiarui @sennnnn @puhsu @BIGWangYuDong @keke1u @daavoo
### V0.15 (07/04/2021)
**Highlights**
- Support ViT, SETR, and Swin-Transformer
- Add Chinese documentation
- Unified parameter initialization
**Bug Fixes**
- Fix typo and links ([#608](https://github.com/open-mmlab/mmsegmentation/pull/608))
- Fix Dockerfile ([#607](https://github.com/open-mmlab/mmsegmentation/pull/607))
- Fix ViT init ([#609](https://github.com/open-mmlab/mmsegmentation/pull/609))
- Fix mmcv version compatible table ([#658](https://github.com/open-mmlab/mmsegmentation/pull/658))
- Fix model links of DMNet ([#660](https://github.com/open-mmlab/mmsegmentation/pull/660))
**New Features**
- Support loading DeiT weights ([#538](https://github.com/open-mmlab/mmsegmentation/pull/538))
- Support SETR ([#531](https://github.com/open-mmlab/mmsegmentation/pull/531), [#635](https://github.com/open-mmlab/mmsegmentation/pull/635))
- Add config and models for ViT backbone with UperHead ([#520](https://github.com/open-mmlab/mmsegmentation/pull/520), [#635](https://github.com/open-mmlab/mmsegmentation/pull/635))
- Support Swin-Transformer ([#511](https://github.com/open-mmlab/mmsegmentation/pull/511))
- Add higher accuracy FastSCNN ([#606](https://github.com/open-mmlab/mmsegmentation/pull/606))
- Add Chinese documentation ([#666](https://github.com/open-mmlab/mmsegmentation/pull/666))
**Improvements**
- Unified parameter initialization ([#567](https://github.com/open-mmlab/mmsegmentation/pull/567))
- Separate CUDA and CPU in github action CI ([#602](https://github.com/open-mmlab/mmsegmentation/pull/602))
- Support persistent dataloader worker ([#646](https://github.com/open-mmlab/mmsegmentation/pull/646))
- Update meta file fields ([#661](https://github.com/open-mmlab/mmsegmentation/pull/661), [#664](https://github.com/open-mmlab/mmsegmentation/pull/664))
### V0.14 (06/02/2021)
**Highlights**
- Support ONNX to TensorRT
- Support MIM
**Bug Fixes**
- Fix ONNX to TensorRT verify ([#547](https://github.com/open-mmlab/mmsegmentation/pull/547))
- Fix save best for EvalHook ([#575](https://github.com/open-mmlab/mmsegmentation/pull/575))
**New Features**
- Support loading DeiT weights ([#538](https://github.com/open-mmlab/mmsegmentation/pull/538))
- Support ONNX to TensorRT ([#542](https://github.com/open-mmlab/mmsegmentation/pull/542))
- Support output results for ADE20k ([#544](https://github.com/open-mmlab/mmsegmentation/pull/544))
- Support MIM ([#549](https://github.com/open-mmlab/mmsegmentation/pull/549))
**Improvements**
- Add option for ViT output shape ([#530](https://github.com/open-mmlab/mmsegmentation/pull/530))
- Infer batch size using len(result) ([#532](https://github.com/open-mmlab/mmsegmentation/pull/532))
- Add compatible table between MMSeg and MMCV ([#558](https://github.com/open-mmlab/mmsegmentation/pull/558))
### V0.13 (05/05/2021)
**Highlights**
- Support Pascal Context Class-59 dataset.
- Support Visual Transformer Backbone.
- Support mFscore metric.
**Bug Fixes**
- Fixed Colaboratory tutorial ([#451](https://github.com/open-mmlab/mmsegmentation/pull/451))
- Fixed mIoU calculation range ([#471](https://github.com/open-mmlab/mmsegmentation/pull/471))
- Fixed sem_fpn, unet README.md ([#492](https://github.com/open-mmlab/mmsegmentation/pull/492))
- Fixed `num_classes` in FCN for Pascal Context 60-class dataset ([#488](https://github.com/open-mmlab/mmsegmentation/pull/488))
- Fixed FP16 inference ([#497](https://github.com/open-mmlab/mmsegmentation/pull/497))
**New Features**
- Support dynamic export and visualize to pytorch2onnx ([#463](https://github.com/open-mmlab/mmsegmentation/pull/463))
- Support export to torchscript ([#469](https://github.com/open-mmlab/mmsegmentation/pull/469), [#499](https://github.com/open-mmlab/mmsegmentation/pull/499))
- Support Pascal Context Class-59 dataset ([#459](https://github.com/open-mmlab/mmsegmentation/pull/459))
- Support Visual Transformer backbone ([#465](https://github.com/open-mmlab/mmsegmentation/pull/465))
- Support UpSample Neck ([#512](https://github.com/open-mmlab/mmsegmentation/pull/512))
- Support mFscore metric ([#509](https://github.com/open-mmlab/mmsegmentation/pull/509))
**Improvements**
- Add more CI for PyTorch ([#460](https://github.com/open-mmlab/mmsegmentation/pull/460))
- Add print model graph args for tools/print_config.py ([#451](https://github.com/open-mmlab/mmsegmentation/pull/451))
- Add cfg links in modelzoo README.md ([#469](https://github.com/open-mmlab/mmsegmentation/pull/469))
- Add BaseSegmentor import to segmentors/__init__.py ([#495](https://github.com/open-mmlab/mmsegmentation/pull/495))
- Add MMOCR, MMGeneration links ([#501](https://github.com/open-mmlab/mmsegmentation/pull/501), [#506](https://github.com/open-mmlab/mmsegmentation/pull/506))
- Add Chinese QR code ([#506](https://github.com/open-mmlab/mmsegmentation/pull/506))
- Use MMCV MODEL_REGISTRY ([#515](https://github.com/open-mmlab/mmsegmentation/pull/515))
- Add ONNX testing tools ([#498](https://github.com/open-mmlab/mmsegmentation/pull/498))
- Replace the hard-coded `'img'` key access of `data_dict` to support MMDet3D ([#514](https://github.com/open-mmlab/mmsegmentation/pull/514))
- Support reading `class_weight` from file in loss function; see the sketch after this list ([#513](https://github.com/open-mmlab/mmsegmentation/pull/513))
- Make tags as comments ([#505](https://github.com/open-mmlab/mmsegmentation/pull/505))
- Use MMCV EvalHook ([#438](https://github.com/open-mmlab/mmsegmentation/pull/438))
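For the class-weight-from-file support above, a sketch (the `.npy` path is hypothetical; the file should hold one weight per class):

```python
# Minimal sketch: point class_weight at a saved array instead of an inline
# list; the loss loads the file when it is built.
model = dict(
    decode_head=dict(
        loss_decode=dict(
            type='CrossEntropyLoss',
            use_sigmoid=False,
            loss_weight=1.0,
            class_weight='data/voc_class_weight.npy')))
```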
### V0.12 (04/03/2021)
**Highlights**
- Support FCN-Dilate 6 model.
- Support Dice Loss.
**Bug Fixes**
- Fixed PhotoMetricDistortion Doc ([#388](https://github.com/open-mmlab/mmsegmentation/pull/388))
- Fixed install scripts ([#399](https://github.com/open-mmlab/mmsegmentation/pull/399))
- Fixed Dice Loss multi-class ([#417](https://github.com/open-mmlab/mmsegmentation/pull/417))
**New Features**
- Support Dice Loss ([#396](https://github.com/open-mmlab/mmsegmentation/pull/396))
- Add plot logs tool ([#426](https://github.com/open-mmlab/mmsegmentation/pull/426))
- Add opacity option to show_result ([#425](https://github.com/open-mmlab/mmsegmentation/pull/425))
- Speed up mIoU metric ([#430](https://github.com/open-mmlab/mmsegmentation/pull/430))
**Improvements**
- Refactor unittest file structure ([#440](https://github.com/open-mmlab/mmsegmentation/pull/440))
- Fix typos in the repo ([#449](https://github.com/open-mmlab/mmsegmentation/pull/449))
- Include class-level metrics in the log ([#445](https://github.com/open-mmlab/mmsegmentation/pull/445))
### V0.11 (02/02/2021)
**Highlights**
- Support memory efficient test, add more UNet models.
**Bug Fixes**
- Fixed TTA resize scale ([#334](https://github.com/open-mmlab/mmsegmentation/pull/334))
- Fixed CI for pip 20.3 ([#307](https://github.com/open-mmlab/mmsegmentation/pull/307))
- Fixed ADE20k test ([#359](https://github.com/open-mmlab/mmsegmentation/pull/359))
**New Features**
- Support memory efficient test ([#330](https://github.com/open-mmlab/mmsegmentation/pull/330))
- Add more UNet benchmarks ([#324](https://github.com/open-mmlab/mmsegmentation/pull/324))
- Support Lovasz Loss ([#351](https://github.com/open-mmlab/mmsegmentation/pull/351))
**Improvements**
- Move train_cfg/test_cfg inside model ([#341](https://github.com/open-mmlab/mmsegmentation/pull/341))
### V0.10 (01/01/2021)
**Highlights**
- Support MobileNetV3, DMNet, APCNet. Add models of ResNet18V1b, ResNet18V1c, ResNet50V1b.
**Bug Fixes**
- Fixed CPU TTA ([#276](https://github.com/open-mmlab/mmsegmentation/pull/276))
- Fixed CI for pip 20.3 ([#307](https://github.com/open-mmlab/mmsegmentation/pull/307))
**New Features**
- Add ResNet18V1b, ResNet18V1c, ResNet50V1b, ResNet101V1b models ([#316](https://github.com/open-mmlab/mmsegmentation/pull/316))
- Support MobileNetV3 ([#268](https://github.com/open-mmlab/mmsegmentation/pull/268))
- Add 4 retinal vessel segmentation benchmarks ([#315](https://github.com/open-mmlab/mmsegmentation/pull/315))
- Support DMNet ([#313](https://github.com/open-mmlab/mmsegmentation/pull/313))
- Support APCNet ([#299](https://github.com/open-mmlab/mmsegmentation/pull/299))
**Improvements**
- Refactor Documentation page ([#311](https://github.com/open-mmlab/mmsegmentation/pull/311))
- Support resize data augmentation according to original image size ([#291](https://github.com/open-mmlab/mmsegmentation/pull/291))
### V0.9 (30/11/2020)
**Highlights**
- Support more transforms (RandomRotate, RGB2Gray, Rerange, Adjust Gamma, CLAHE) and the Dice evaluation metric.
**New Features**
- Support RandomRotate transform ([#215](https://github.com/open-mmlab/mmsegmentation/pull/215), [#260](https://github.com/open-mmlab/mmsegmentation/pull/260))
- Support RGB2Gray transform ([#227](https://github.com/open-mmlab/mmsegmentation/pull/227))
- Support Rerange transform ([#228](https://github.com/open-mmlab/mmsegmentation/pull/228))
- Support ignore_index for BCE loss ([#210](https://github.com/open-mmlab/mmsegmentation/pull/210))
- Add modelzoo statistics ([#263](https://github.com/open-mmlab/mmsegmentation/pull/263))
- Support Dice evaluation metric ([#225](https://github.com/open-mmlab/mmsegmentation/pull/225))
- Support Adjust Gamma transform ([#232](https://github.com/open-mmlab/mmsegmentation/pull/232))
- Support CLAHE transform ([#229](https://github.com/open-mmlab/mmsegmentation/pull/229))
**Bug Fixes**
- Fixed detail API link ([#267](https://github.com/open-mmlab/mmsegmentation/pull/267))
### V0.8 (03/11/2020)
**Highlights**
- Support 4 medical datasets, UNet and CGNet.
**New Features**
- Support customize runner ([#118](https://github.com/open-mmlab/mmsegmentation/pull/118))
- Support UNet ([#162](https://github.com/open-mmlab/mmsegmentation/pull/162))
- Support CHASE_DB1, DRIVE, STARE, HRF ([#203](https://github.com/open-mmlab/mmsegmentation/pull/203))
- Support CGNet ([#223](https://github.com/open-mmlab/mmsegmentation/pull/223))
### V0.7 (07/10/2020)
**Highlights**
- Support Pascal Context dataset and customizing class dataset.
**Bug Fixes**
- Fixed CPU inference ([#153](https://github.com/open-mmlab/mmsegmentation/pull/153))
**New Features**
- Add DeepLab OS16 models ([#154](https://github.com/open-mmlab/mmsegmentation/pull/154))
- Support Pascal Context dataset ([#133](https://github.com/open-mmlab/mmsegmentation/pull/133))
- Support customizing dataset classes ([#71](https://github.com/open-mmlab/mmsegmentation/pull/71))
- Support customizing dataset palette ([#157](https://github.com/open-mmlab/mmsegmentation/pull/157))
**Improvements**
- Support 4D tensor output in ONNX ([#150](https://github.com/open-mmlab/mmsegmentation/pull/150))
- Remove redundancies in ONNX export ([#160](https://github.com/open-mmlab/mmsegmentation/pull/160))
- Migrate to MMCV DepthwiseSeparableConv ([#158](https://github.com/open-mmlab/mmsegmentation/pull/158))
- Migrate to MMCV collect_env ([#137](https://github.com/open-mmlab/mmsegmentation/pull/137))
- Use img_prefix and seg_prefix for loading ([#153](https://github.com/open-mmlab/mmsegmentation/pull/153))
### V0.6 (10/09/2020)
**Highlights**
- Support new methods i.e. MobileNetV2, EMANet, DNL, PointRend, Semantic FPN, Fast-SCNN, ResNeSt.
**Bug Fixes**
- Fixed sliding inference ONNX export ([#90](https://github.com/open-mmlab/mmsegmentation/pull/90))
**New Features**
- Support MobileNet v2 ([#86](https://github.com/open-mmlab/mmsegmentation/pull/86))
- Support EMANet ([#34](https://github.com/open-mmlab/mmsegmentation/pull/34))
- Support DNL ([#37](https://github.com/open-mmlab/mmsegmentation/pull/37))
- Support PointRend ([#109](https://github.com/open-mmlab/mmsegmentation/pull/109))
- Support Semantic FPN ([#94](https://github.com/open-mmlab/mmsegmentation/pull/94))
- Support Fast-SCNN ([#58](https://github.com/open-mmlab/mmsegmentation/pull/58))
- Support ResNeSt backbone ([#47](https://github.com/open-mmlab/mmsegmentation/pull/47))
- Support ONNX export (experimental) ([#12](https://github.com/open-mmlab/mmsegmentation/pull/12))
**Improvements**
- Support Upsample in ONNX ([#100](https://github.com/open-mmlab/mmsegmentation/pull/100))
- Support Windows install (experimental) ([#75](https://github.com/open-mmlab/mmsegmentation/pull/75))
- Add more OCRNet results ([#20](https://github.com/open-mmlab/mmsegmentation/pull/20))
- Add PyTorch 1.6 CI ([#64](https://github.com/open-mmlab/mmsegmentation/pull/64))
- Get version and githash automatically ([#55](https://github.com/open-mmlab/mmsegmentation/pull/55))
### v0.5.1 (11/08/2020)
**Highlights**
- Support FP16 and more generalized OHEM
**Bug Fixes**
- Fixed Pascal VOC conversion script (#19)
- Fixed OHEM weight assign bug (#54)
- Fixed palette type when palette is not given (#27)
**New Features**
- Support FP16 (#21)
- Generalized OHEM (#54)
**Improvements**
- Add load-from flag (#33)
- Fixed training tricks doc about setting different learning rates for parts of the model (#26)
# File: mmsegmentation-master/docs/en/conf.py
# Copyright (c) OpenMMLab. All rights reserved.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import subprocess
import sys
import pytorch_sphinx_theme
sys.path.insert(0, os.path.abspath('../../'))
# -- Project information -----------------------------------------------------
project = 'MMSegmentation'
copyright = '2020-2021, OpenMMLab'
author = 'MMSegmentation Authors'
version_file = '../../mmseg/version.py'


def get_version():
    with open(version_file, 'r') as f:
        exec(compile(f.read(), version_file, 'exec'))
    return locals()['__version__']


# The full version, including alpha/beta/rc tags
release = get_version()
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
]
autodoc_mock_imports = [
'matplotlib', 'pycocotools', 'mmseg.version', 'mmcv.ops'
]
# Ignore >>> when copying code
copybutton_prompt_text = r'>>> |\.\.\. '
copybutton_prompt_is_regexp = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_suffix = {
'.rst': 'restructuredtext',
'.md': 'markdown',
}
# The master toctree document.
master_doc = 'index'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# html_theme = 'sphinx_rtd_theme'
html_theme = 'pytorch_sphinx_theme'
html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
html_theme_options = {
'logo_url':
'https://mmsegmentation.readthedocs.io/en/latest/',
'menu': [
{
'name':
'Tutorial',
'url':
'https://github.com/open-mmlab/mmsegmentation/blob/master/'
'demo/MMSegmentation_Tutorial.ipynb'
},
{
'name': 'GitHub',
'url': 'https://github.com/open-mmlab/mmsegmentation'
},
{
'name':
'Upstream',
'children': [
{
'name': 'MMCV',
'url': 'https://github.com/open-mmlab/mmcv',
'description': 'Foundational library for computer vision'
},
]
},
],
# Specify the language of shared menu
'menu_lang':
'en'
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_css_files = ['css/readthedocs.css']
# Enable ::: for myst
myst_enable_extensions = ['colon_fence']
myst_heading_anchors = 3
language = 'en'
def builder_inited_handler(app):
    subprocess.run(['./stat.py'])


def setup(app):
    app.connect('builder-inited', builder_inited_handler)
<!-- File: mmsegmentation-master/docs/en/dataset_prepare.md -->
<!-- #region -->
## Prepare datasets
It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
```none
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│ ├── cityscapes
│ │ ├── leftImg8bit
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── gtFine
│ │ │ ├── train
│ │ │ ├── val
│ ├── VOCdevkit
│ │ ├── VOC2012
│ │ │ ├── JPEGImages
│ │ │ ├── SegmentationClass
│ │ │ ├── ImageSets
│ │ │ │ ├── Segmentation
│ │ ├── VOC2010
│ │ │ ├── JPEGImages
│ │ │ ├── SegmentationClassContext
│ │ │ ├── ImageSets
│ │ │ │ ├── SegmentationContext
│ │ │ │ │ ├── train.txt
│ │ │ │ │ ├── val.txt
│ │ │ ├── trainval_merged.json
│ │ ├── VOCaug
│ │ │ ├── dataset
│ │ │ │ ├── cls
│ ├── ade
│ │ ├── ADEChallengeData2016
│ │ │ ├── annotations
│ │ │ │ ├── training
│ │ │ │ ├── validation
│ │ │ ├── images
│ │ │ │ ├── training
│ │ │ │ ├── validation
│ ├── coco_stuff10k
│ │ ├── images
│ │ │ ├── train2014
│ │ │ ├── test2014
│ │ ├── annotations
│ │ │ ├── train2014
│ │ │ ├── test2014
│ │ ├── imagesLists
│ │ │ ├── train.txt
│ │ │ ├── test.txt
│ │ │ ├── all.txt
│ ├── coco_stuff164k
│ │ ├── images
│ │ │ ├── train2017
│ │ │ ├── val2017
│ │ ├── annotations
│ │ │ ├── train2017
│ │ │ ├── val2017
│ ├── CHASE_DB1
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── DRIVE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── HRF
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── STARE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
| ├── dark_zurich
| │ ├── gps
| │ │ ├── val
| │ │ └── val_ref
| │ ├── gt
| │ │ └── val
| │ ├── LICENSE.txt
| │ ├── lists_file_names
| │ │ ├── val_filenames.txt
| │ │ └── val_ref_filenames.txt
| │ ├── README.md
| │ └── rgb_anon
| │ | ├── val
| │ | └── val_ref
| ├── NighttimeDrivingTest
| | ├── gtCoarse_daytime_trainvaltest
| | │ └── test
| | │ └── night
| | └── leftImg8bit
| | | └── test
| | | └── night
│ ├── loveDA
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ │ ├── test
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── potsdam
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── vaihingen
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── iSAID
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ │ ├── test
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── occlusion-aware-face-dataset
│ │ ├── train.txt
│ │ ├── NatOcc_hand_sot
│ │ │ ├── img
│ │ │ ├── mask
│ │ ├── NatOcc_object
│ │ │ ├── img
│ │ │ ├── mask
│ │ ├── RandOcc
│ │ │ ├── img
│ │ │ ├── mask
│ │ ├── RealOcc
│ │ │ ├── img
│ │ │ ├── mask
│ │ │ ├── split
│ ├── ImageNetS
│ │ ├── ImageNetS919
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS300
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS50
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
```
### Cityscapes
The data can be found [here](https://www.cityscapes-dataset.com/downloads/) after registration.
By convention, `**labelTrainIds.png` are used for Cityscapes training.
We provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py) based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts)
to generate `**labelTrainIds.png`.
```shell
# --nproc means 8 processes for conversion; it can be omitted.
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
```
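Once the `**labelTrainIds.png` files exist, a dataset config only needs to point at the layout above; a minimal sketch (the empty pipeline must be replaced for real training):

```python
# Minimal sketch: wire CityscapesDataset to the directory tree above. The
# dataset class already assumes the '_leftImg8bit.png' image suffix and the
# '_gtFine_labelTrainIds.png' label suffix.
dataset_type = 'CityscapesDataset'
data_root = 'data/cityscapes/'
data = dict(
    train=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='leftImg8bit/train',
        ann_dir='gtFine/train',
        pipeline=[]))  # replace with a real training pipeline
```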
### Pascal VOC
Pascal VOC 2012 can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
Besides, most recent works on the Pascal VOC dataset usually exploit extra augmentation data, which can be found [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).
If you would like to use the augmented VOC dataset, please run the following command to convert the augmentation annotations into the proper format.
```shell
# --nproc means 8 processes for conversion; it can be omitted.
python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
```
Please refer to [concat dataset](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/tutorials/customize_datasets.md#concatenate-dataset) for details about how to concatenate them and train them together.
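As a reference, the stock configs combine the original and augmented annotations by passing lists for `ann_dir` and `split` on a single dataset; a minimal sketch (see the linked tutorial for the other concatenation styles):

```python
# Minimal sketch: train on VOC2012 train + VOCaug by listing both
# annotation directories and both split files on one dataset.
data = dict(
    train=dict(
        type='PascalVOCDataset',
        data_root='data/VOCdevkit/VOC2012',
        img_dir='JPEGImages',
        ann_dir=['SegmentationClass', 'SegmentationClassAug'],
        split=[
            'ImageSets/Segmentation/train.txt',
            'ImageSets/Segmentation/aug.txt'
        ],
        pipeline=[]))  # replace with a real training pipeline
```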
### ADE20K
The training and validation set of ADE20K can be downloaded from this [link](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip).
You may also download the test set from [here](http://data.csail.mit.edu/places/ADEchallenge/release_test.zip).
### Pascal Context
The training and validation set of Pascal Context can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar). You may also download the test set from [here](http://host.robots.ox.ac.uk:8080/eval/downloads/VOC2010test.tar) after registration.
To split the training and validation set from original dataset, you may download trainval_merged.json from [here](https://codalabuser.blob.core.windows.net/public/trainval_merged.json).
If you would like to use the Pascal Context dataset, please install [Detail](https://github.com/zhanghang1989/detail-api) and then run the following command to convert the annotations into the proper format.
```shell
python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VOC2010/trainval_merged.json
```
### COCO Stuff 10k
The data can be downloaded [here](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/cocostuff-10k-v1.1.zip) with wget.
For COCO Stuff 10k dataset, please run the following commands to download and convert the dataset.
```shell
# download
mkdir coco_stuff10k && cd coco_stuff10k
wget http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/cocostuff-10k-v1.1.zip
# unzip
unzip cocostuff-10k-v1.1.zip
# --nproc 8 means using 8 processes for conversion; it can be omitted.
python tools/convert_datasets/coco_stuff10k.py /path/to/coco_stuff10k --nproc 8
```
By convention, mask labels in `/path/to/coco_stuff164k/annotations/*2014/*_labelTrainIds.png` are used for COCO Stuff 10k training and testing.
### COCO Stuff 164k
For COCO Stuff 164k dataset, please run the following commands to download and convert the augmented dataset.
```shell
# download
mkdir coco_stuff164k && cd coco_stuff164k
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip
# unzip
unzip train2017.zip -d images/
unzip val2017.zip -d images/
unzip stuffthingmaps_trainval2017.zip -d annotations/
# --nproc 8 means using 8 processes for conversion; it can be omitted.
python tools/convert_datasets/coco_stuff164k.py /path/to/coco_stuff164k --nproc 8
```
By convention, mask labels in `/path/to/coco_stuff164k/annotations/*2017/*_labelTrainIds.png` are used for COCO Stuff 164k training and testing.
The details of this dataset can be found [here](https://github.com/nightrome/cocostuff#downloads).
### CHASE DB1
The training and validation set of CHASE DB1 can be downloaded from [here](https://staffnet.kingston.ac.uk/~ku15565/CHASE_DB1/assets/CHASEDB1.zip).
To convert CHASE DB1 dataset to MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip
```
The script will make directory structure automatically.
### DRIVE
The training and validation set of DRIVE can be downloaded from [here](https://drive.grand-challenge.org/). Before that, you should register an account. Currently '1st_manual' is not provided officially.
To convert DRIVE dataset to MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```
The script will make directory structure automatically.
### HRF
First, download [healthy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy.zip), [glaucoma.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma.zip), [diabetic_retinopathy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy.zip), [healthy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy_manualsegm.zip), [glaucoma_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma_manualsegm.zip) and [diabetic_retinopathy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy_manualsegm.zip).
To convert HRF dataset to MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/hrf.py /path/to/healthy.zip /path/to/healthy_manualsegm.zip /path/to/glaucoma.zip /path/to/glaucoma_manualsegm.zip /path/to/diabetic_retinopathy.zip /path/to/diabetic_retinopathy_manualsegm.zip
```
The script will make directory structure automatically.
### STARE
First, download [stare-images.tar](http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar), [labels-ah.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar) and [labels-vk.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-vk.tar).
To convert STARE dataset to MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels-ah.tar /path/to/labels-vk.tar
```
The script will make directory structure automatically.
### Dark Zurich
Since we only support testing models on this dataset, you only need to download [the validation set](https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_val_anon.zip).
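For example (the target directory below assumes the `data/dark_zurich` layout shown in the directory structure above):
```shell
wget https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_val_anon.zip
unzip Dark_Zurich_val_anon.zip -d data/dark_zurich/
```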
### Nighttime Driving
Since we only support testing models on this dataset, you only need to download [the test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).
### LoveDA
The data can be downloaded from Google Drive [here](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing).
Alternatively, it can be downloaded from [zenodo](https://zenodo.org/record/5706578#.YZvN7SYRXdF) with the following commands:
```shell
# Download Train.zip
wget https://zenodo.org/record/5706578/files/Train.zip
# Download Val.zip
wget https://zenodo.org/record/5706578/files/Val.zip
# Download Test.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```
For LoveDA dataset, please run the following command to download and re-organize the dataset.
```shell
python tools/convert_datasets/loveda.py /path/to/loveDA
```
Instructions for using a trained model to predict the LoveDA test set and submitting the results to the evaluation server can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/inference.md).
More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).
### ISPRS Potsdam
The [Potsdam](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-potsdam/)
dataset is for urban semantic segmentation used in the 2D Semantic Labeling Contest - Potsdam.
The dataset can be requested at the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/).
The '2_Ortho_RGB.zip' and '5_Labels_all_noBoundary.zip' are required.
For Potsdam dataset, please run the following command to download and re-organize the dataset.
```shell
python tools/convert_datasets/potsdam.py /path/to/potsdam
```
In our default setting, it will generate 3456 images for training and 2016 images for validation.
### ISPRS Vaihingen
The [Vaihingen](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-vaihingen/)
dataset is for urban semantic segmentation used in the 2D Semantic Labeling Contest - Vaihingen.
The dataset can be requested at the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/).
The 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' are required.
For Vaihingen dataset, please run the following command to download and re-organize the dataset.
```shell
python tools/convert_datasets/vaihingen.py /path/to/vaihingen
```
In our default setting (`clip_size`=512, `stride_size`=256), it will generate 344 images for training and 398 images for validation.
### iSAID
The images can be downloaded from [DOTA-v1.0](https://captain-whu.github.io/DOTA/dataset.html) (train/val/test).
The annotations can be downloaded from [iSAID](https://captain-whu.github.io/iSAID/dataset.html) (train/val).
iSAID is a large-scale dataset for instance segmentation (it also provides semantic segmentation annotations) in aerial images.
After downloading the iSAID dataset, you may need to follow the structure below for dataset preparation.
```
│ ├── iSAID
│ │ ├── train
│ │ │ ├── images
│ │ │ │ ├── part1.zip
│ │ │ │ ├── part2.zip
│ │ │ │ ├── part3.zip
│ │ │ ├── Semantic_masks
│ │ │ │ ├── images.zip
│ │ ├── val
│ │ │ ├── images
│ │ │ │ ├── part1.zip
│ │ │ ├── Semantic_masks
│ │ │ │ ├── images.zip
│ │ ├── test
│ │ │ ├── images
│ │ │ │ ├── part1.zip
│ │ │ │ ├── part2.zip
```
Then run the conversion script:
```shell
python tools/convert_datasets/isaid.py /path/to/iSAID
```
In our default setting (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate 33978 images for training and 11644 images for validation.
### Delving into High-Quality Synthetic Face Occlusion Segmentation Datasets
The dataset is generated by two techniques: naturalistic occlusion generation and random occlusion generation. You must install the face-occlusion-generation toolkit and prepare the source datasets; see the guide at https://github.com/kennyvoo/face-occlusion-generation.git for more details.
#### Dataset Preparation
Step 1: create a folder for data generation materials in the mmsegmentation folder.
```shell
mkdir data_materials
```
Step 2: please download the masks (`11k-hands_mask.7z`, `CelebAMask-HQ-masks_corrected.7z`) from this [drive](https://drive.google.com/drive/folders/15nZETWlGMdcKY6aHbchRsWkUI42KTNs5?usp=sharing).
Please download the images from [CelebAMask-HQ](https://github.com/switchablenorms/CelebAMask-HQ), [11k Hands.zip](https://sites.google.com/view/11khands) and [dtd-r1.0.1.tar.gz](https://www.robots.ox.ac.uk/~vgg/data/dtd/).
Step 3: download the upsampled COCO object images and masks (`coco_object.7z`) from this [drive](https://drive.google.com/drive/folders/15nZETWlGMdcKY6aHbchRsWkUI42KTNs5?usp=sharing).
Also download the CelebAMask-HQ and 11k Hands image split txt files (`11k_hands_sample.txt`, `CelebAMask-HQ-WO-train.txt`) from the same [drive](https://drive.google.com/drive/folders/15nZETWlGMdcKY6aHbchRsWkUI42KTNs5?usp=sharing).
Download the files to `./data_materials`:
```none
CelebAMask-HQ.zip
CelebAMask-HQ-masks_corrected.7z
CelebAMask-HQ-WO-train.txt
RealOcc.7z
RealOcc-Wild.7z
11k-hands_mask.7z
11k Hands.zip
11k_hands_sample.txt
coco_object.7z
dtd-r1.0.1.tar.gz
```
______________________________________________________________________
```bash
apt-get install p7zip-full
cd data_materials
#make occlusion-aware-face-dataset folder
mkdir path-to-mmsegmentation/data/occlusion-aware-face-dataset
#extract celebAMask-HQ and split by train-set
unzip CelebAMask-HQ.zip
7za x CelebAMask-HQ-masks_corrected.7z -o./CelebAMask-HQ
#copy training data to train-image-folder
rsync -a ./CelebAMask-HQ/CelebA-HQ-img/ --files-from=./CelebAMask-HQ-WO-train.txt ./CelebAMask-HQ-WO-Train_img
#create a file-name txt file for copying mask
basename -s .jpg ./CelebAMask-HQ-WO-Train_img/* > train.txt
#add .png to file-name txt file
xargs -n 1 -i echo {}.png < train.txt > mask_train.txt
#copy training data to train-mask-folder
rsync -a ./CelebAMask-HQ/CelebAMask-HQ-masks_corrected/ --files-from=./mask_train.txt ./CelebAMask-HQ-WO-Train_mask
mv train.txt ../data/occlusion-aware-face-dataset
#extract DTD
tar -zxvf dtd-r1.0.1.tar.gz
mv dtd DTD
#extract hands dataset and split by 200 samples
7za x 11k-hands_mask.7z -o.
unzip '11k Hands.zip'
rsync -a ./Hands/ --files-from=./11k_hands_sample.txt ./11k-hands_img
#extract upscaled coco object
7za x coco_object.7z -o.
mv coco_object/* .
#extract validation set
7za x RealOcc.7z -o../data/occlusion-aware-face-dataset
```
**Dataset material Organization:**
```none
├── data_materials
│ ├── CelebAMask-HQ-WO-Train_img
│ │ ├── {image}.jpg
│ ├── CelebAMask-HQ-WO-Train_mask
│ │ ├── {mask}.png
│ ├── DTD
│ │ ├── images
│ │ │ ├── {classA}
│ │ │ │ ├── {image}.jpg
│ │ │ ├── {classB}
│ │ │ │ ├── {image}.jpg
│ ├── 11k-hands_img
│ │ ├── {image}.jpg
│ ├── 11k-hands_mask
│ │ ├── {mask}.png
│ ├── object_image_sr
│ │ ├── {image}.jpg
│ ├── object_mask_x4
│ │ ├── {mask}.png
```
#### Data Generation
```bash
git clone https://github.com/kennyvoo/face-occlusion-generation.git
cd face-occlusion-generation
```
Example script to generate NatOcc hand dataset
```bash
CUDA_VISIBLE_DEVICES=0 NUM_WORKERS=4 python main.py \
--config ./configs/natocc_hand.yaml \
--opts OUTPUT_PATH "path/to/mmsegmentation/data/occlusion-aware-face-dataset/NatOcc_hand_sot"\
AUGMENTATION.SOT True \
SOURCE_DATASET.IMG_DIR "path/to/data_materials/CelebAMask-HQ-WO-Train_img" \
SOURCE_DATASET.MASK_DIR "path/to/mmsegmentation/data_materials/CelebAMask-HQ-WO-Train_mask" \
OCCLUDER_DATASET.IMG_DIR "path/to/mmsegmentation/data_materials/11k-hands_img" \
OCCLUDER_DATASET.MASK_DIR "path/to/mmsegmentation/data_materials/11k-hands_mask"
```
Example script to generate NatOcc object dataset
```bash
CUDA_VISIBLE_DEVICES=0 NUM_WORKERS=4 python main.py \
--config ./configs/natocc_objects.yaml \
--opts OUTPUT_PATH "path/to/mmsegmentation/data/occlusion-aware-face-dataset/NatOcc_object" \
SOURCE_DATASET.IMG_DIR "path/to/mmsegmentation/data_materials/CelebAMask-HQ-WO-Train_img" \
SOURCE_DATASET.MASK_DIR "path/to/mmsegmentation/data_materials/CelebAMask-HQ-WO-Train_mask" \
OCCLUDER_DATASET.IMG_DIR "path/to/mmsegmentation/data_materials/object_image_sr" \
OCCLUDER_DATASET.MASK_DIR "path/to/mmsegmentation/data_materials/object_mask_x4"
```
Example script to generate RandOcc dataset
```bash
CUDA_VISIBLE_DEVICES=0 NUM_WORKERS=4 python main.py \
--config ./configs/randocc.yaml \
--opts OUTPUT_PATH "path/to/mmsegmentation/data/occlusion-aware-face-dataset/RandOcc" \
SOURCE_DATASET.IMG_DIR "path/to/mmsegmentation/data_materials/CelebAMask-HQ-WO-Train_img/" \
SOURCE_DATASET.MASK_DIR "path/to/mmsegmentation/data_materials/CelebAMask-HQ-WO-Train_mask" \
OCCLUDER_DATASET.IMG_DIR "path/to/mmsegmentation/data_materials/DTD/images"
```
**Dataset Organization:**
```none
├── data
│ ├── occlusion-aware-face-dataset
│ │ ├── train.txt
│ │ ├── NatOcc_hand_sot
│ │ │ ├── img
│ │ │ │ ├── {image}.jpg
│ │ │ ├── mask
│ │ │ │ ├── {mask}.png
│ │ ├── NatOcc_object
│ │ │ ├── img
│ │ │ │ ├── {image}.jpg
│ │ │ ├── mask
│ │ │ │ ├── {mask}.png
│ │ ├── RandOcc
│ │ │ ├── img
│ │ │ │ ├── {image}.jpg
│ │ │ ├── mask
│ │ │ │ ├── {mask}.png
│ │ ├── RealOcc
│ │ │ ├── img
│ │ │ │ ├── {image}.jpg
│ │ │ ├── mask
│ │ │ │ ├── {mask}.png
│ │ │ ├── split
│ │ │ │ ├── val.txt
```
### ImageNetS
The ImageNet-S dataset is for [Large-scale unsupervised/semi-supervised semantic segmentation](https://arxiv.org/abs/2106.03149).
The images and annotations are available on [ImageNet-S](https://github.com/LUSSeg/ImageNet-S#imagenet-s-dataset-preparation).
```
│ ├── ImageNetS
│ │ ├── ImageNetS919
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS300
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS50
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
```
| 21,617 | 33.314286 | 698 | md |
mmsegmentation | mmsegmentation-master/docs/en/faq.md | # Frequently Asked Questions (FAQ)
We list some common troubles faced by many users and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them. If the contents here do not cover your issue, please create an issue using the [provided templates](https://github.com/open-mmlab/mmsegmentation/blob/master/.github/ISSUE_TEMPLATE/error-report.md/) and make sure you fill in all required information in the template.
## Installation
The compatible MMSegmentation and MMCV versions are as below. Please install the correct version of MMCV to avoid installation issues.
| MMSegmentation version | MMCV version | MMClassification version |
| :--------------------: | :-------------------------: | :----------------------: |
| master | mmcv-full>=1.5.0, \<1.8.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.30.0 | mmcv-full>=1.5.0, \<1.8.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.29.1 | mmcv-full>=1.5.0, \<1.8.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.29.0 | mmcv-full>=1.5.0, \<1.7.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.28.0 | mmcv-full>=1.5.0, \<1.7.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.27.0 | mmcv-full>=1.5.0, \<1.7.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.26.0 | mmcv-full>=1.5.0, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.25.0 | mmcv-full>=1.5.0, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.24.1 | mmcv-full>=1.4.4, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.23.0 | mmcv-full>=1.4.4, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.22.0 | mmcv-full>=1.4.4, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.21.1 | mmcv-full>=1.4.4, \<=1.6.0 | Not required |
| 0.20.2 | mmcv-full>=1.3.13, \<=1.6.0 | Not required |
| 0.19.0 | mmcv-full>=1.3.13, \<1.3.17 | Not required |
| 0.18.0 | mmcv-full>=1.3.13, \<1.3.17 | Not required |
| 0.17.0 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.16.0 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.15.0 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.14.1 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.14.0 | mmcv-full>=1.3.1, \<1.3.2 | Not required |
| 0.13.0 | mmcv-full>=1.3.1, \<1.3.2 | Not required |
| 0.12.0 | mmcv-full>=1.1.4, \<1.3.2 | Not required |
| 0.11.0 | mmcv-full>=1.1.4, \<1.3.0 | Not required |
| 0.10.0 | mmcv-full>=1.1.4, \<1.3.0 | Not required |
| 0.9.0 | mmcv-full>=1.1.4, \<1.3.0 | Not required |
| 0.8.0 | mmcv-full>=1.1.4, \<1.2.0 | Not required |
| 0.7.0 | mmcv-full>=1.1.2, \<1.2.0 | Not required |
| 0.6.0 | mmcv-full>=1.1.2, \<1.2.0 | Not required |
You need to run `pip uninstall mmcv` first if you have mmcv installed.
If mmcv and mmcv-full are both installed, there will be `ModuleNotFoundError`.
- "No module named 'mmcv.ops'"; "No module named 'mmcv.\_ext'".
1. Uninstall existing mmcv in the environment using `pip uninstall mmcv`.
2. Install mmcv-full following the [installation instruction](get_started#best-practices).
## How to know the number of GPUs needed to train the model
- Infer from the name of the config file of the model. You can refer to the `Config Name Style` part of [Learn about Configs](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/tutorials/config.md). For example, for the config file named `segformer_mit-b0_8x1_1024x1024_160k_cityscapes.py`, `8x1` means the corresponding model is trained with 8 GPUs, and the batch size of each GPU is 1.
- Infer from the log file. Open the log file of the model and search for `nGPU` in the file, as shown in the example below. The GPU ids following `nGPU` indicate the GPUs used to train the model. For instance, a record of `nGPU 0,1,2,3,4,5,6,7` indicates that eight GPUs were used to train the model.
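A quick way to pull that record out of a training log (the log path below is hypothetical):
```shell
grep -m 1 "nGPU" work_dirs/your_experiment/20220101_000000.log
```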
## What does the auxiliary head mean
Briefly, it is a deep supervision trick to improve the accuracy. In the training phase, `decode_head` is for decoding the semantic segmentation output, while `auxiliary_head` just adds an auxiliary loss; the segmentation result produced by it has no impact on your model's final result and it only works during training. You may read this [paper](https://arxiv.org/pdf/1612.01105.pdf) for more information.
## Why is the log file not created
In the train script, we call `get_root_logger` at Line 167, and `get_root_logger` in mmseg calls `get_logger` in mmcv; mmcv will return the same logger which has been initialized in 'mmsegmentation/tools/train.py' with the parameter `log_file`. There is only one logger (initialized with `log_file`) during training.
Ref: [https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16](https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16)
If you find the log file has not been created, you might check whether `mmcv.utils.get_logger` is called elsewhere.
## How to output the image for painting the segmentation mask when running the test script
In the test script, we provide the `--show-dir` argument to control whether to output the painted images. Users might run the following command:
```shell
python tools/test.py {config} {checkpoint} --show-dir {/path/to/save/image} --opacity 1
```
## How to handle binary segmentation task
MMSegmentation uses `num_classes` and `out_channels` to control output of last layer `self.conv_seg`. More details could be found [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/decode_head.py).
`num_classes` should be the same as the number of label types. In a binary segmentation task, the dataset only has two types of labels: foreground and background, so `num_classes=2`. `out_channels` controls the output channels of the last layer of the model, and it usually equals `num_classes`.
But in binary segmentation task, there are two solutions:
- Set `out_channels=2`, using Cross Entropy Loss in training, using `F.softmax()` and `argmax()` to get prediction of each pixel in inference.
- Set `out_channels=1`, using Binary Cross Entropy Loss in training, using `F.sigmoid()` and `threshold` to get the prediction of each pixel in inference. `threshold` is set to 0.3 by default. A minimal inference sketch of both solutions is given below.
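For reference, here is a minimal PyTorch sketch of the two decoding strategies at inference time; `seg_logits` stands for a hypothetical `(N, out_channels, H, W)` tensor of raw model outputs:
```python
import torch
import torch.nn.functional as F

# Solution (1): out_channels=2 -- softmax over channels, then argmax.
seg_logits = torch.randn(1, 2, 64, 64)             # dummy logits, shape (N, 2, H, W)
pred = F.softmax(seg_logits, dim=1).argmax(dim=1)  # (N, H, W), values in {0, 1}

# Solution (2): out_channels=1 -- sigmoid, then threshold (0.3 by default).
seg_logits = torch.randn(1, 1, 64, 64)             # dummy logits, shape (N, 1, H, W)
pred = (torch.sigmoid(seg_logits).squeeze(1) > 0.3).long()
```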
In summary, to implement binary segmentation methods users should modify below parameters in the `decode_head` and `auxiliary_head` configs. Here is a modification example of [pspnet_unet_s5-d16.py](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/_base_/models/pspnet_unet_s5-d16.py):
- (1) `num_classes=2`, `out_channels=2` and `use_sigmoid=False` in `CrossEntropyLoss`.
```python
decode_head=dict(
type='PSPHead',
in_channels=64,
in_index=4,
num_classes=2,
out_channels=2,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
num_classes=2,
out_channels=2,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
```
- (2) `num_classes=2`, `out_channels=1` and `use_sigmoid=True` in `CrossEntropyLoss`.
```python
decode_head=dict(
type='PSPHead',
in_channels=64,
in_index=4,
num_classes=2,
out_channels=1,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
num_classes=2,
out_channels=1,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
```
## What does `reduce_zero_label` work for?
When [loading annotation](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/datasets/pipelines/loading.py#L91) in MMSegmentation, `reduce_zero_label (bool)` is provided to determine whether to reduce all label values by 1:
```python
if self.reduce_zero_label:
# avoid using underflow conversion
gt_semantic_seg[gt_semantic_seg == 0] = 255
gt_semantic_seg = gt_semantic_seg - 1
gt_semantic_seg[gt_semantic_seg == 254] = 255
```
**Note:** Please pay attention to the label numbers of the dataset when using `reduce_zero_label`. If the dataset only has two types of labels (i.e., labels 0 and 1), `reduce_zero_label` needs to be disabled, i.e., set `reduce_zero_label=False`.
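For example, `reduce_zero_label` is switched on in the loading pipeline of a dataset config, as is done for datasets such as ADE20K whose label 0 means 'unlabeled' (the pipeline below is a minimal sketch):
```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    # shift all labels by 1 so that the original label 0 becomes the ignore index 255
    dict(type='LoadAnnotations', reduce_zero_label=True),
    # ... remaining transforms
]
```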
| 8,826 | 62.05 | 459 | md |
mmsegmentation | mmsegmentation-master/docs/en/get_started.md | # Prerequisites
In this section we demonstrate how to prepare an environment with PyTorch.
MMSegmentation works on Linux, Windows and macOS. It requires Python 3.6+, CUDA 9.2+ and PyTorch 1.3+.
```{note}
If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next section](#installation). Otherwise, you can follow these steps for the preparation.
```
**Step 0.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).
**Step 1.** Create a conda environment and activate it.
```shell
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
```
**Step 2.** Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/), e.g.
On GPU platforms:
```shell
conda install pytorch torchvision -c pytorch
```
On CPU platforms:
```shell
conda install pytorch torchvision cpuonly -c pytorch
```
# Installation
We recommend that users follow our best practices to install MMSegmentation. However, the whole process is highly customizable. See [Customize Installation](#customize-installation) section for more information.
## Best Practices
**Step 0.** Install [MMCV](https://github.com/open-mmlab/mmcv) using [MIM](https://github.com/open-mmlab/mim).
```shell
pip install -U openmim
mim install mmcv-full
```
**Step 1.** Install MMSegmentation.
Case a: If you develop and run mmseg directly, install it from source:
```shell
git clone https://github.com/open-mmlab/mmsegmentation.git
cd mmsegmentation
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without reinstallation.
```
Case b: If you use mmsegmentation as a dependency or third-party package, install it with pip:
```shell
pip install mmsegmentation
```
**Note:**
If you would like to use albumentations, we suggest using `pip install -U albumentations --no-binary qudida,albumentations`. If you simply use `pip install albumentations>=0.3.2`, it will install `opencv-python-headless` simultaneously (even though you have already installed `opencv-python`). We recommend checking the environment after installing albumentations to ensure that `opencv-python` and `opencv-python-headless` are not installed at the same time, because it might cause unexpected issues if they are both installed. Please refer to the [official documentation](https://albumentations.ai/docs/getting_started/installation/#note-on-opencv-dependencies) for more details.
## Verify the installation
To verify whether MMSegmentation is installed correctly, we provide some sample codes to run an inference demo.
**Step 1.** We need to download config and checkpoint files.
```shell
mim download mmsegmentation --config pspnet_r50-d8_512x1024_40k_cityscapes --dest .
```
The downloading will take several seconds or more, depending on your network environment. When it is done, you will find two files `pspnet_r50-d8_512x1024_40k_cityscapes.py` and `pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth` in your current folder.
**Step 2.** Verify the inference demo.
Option (a). If you install mmsegmentation from source, just run the following command.
```shell
python demo/image_demo.py demo/demo.png configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth --device cuda:0 --out-file result.jpg
```
You will see a new image `result.jpg` on your current folder, where segmentation masks are covered on all objects.
Option (b). If you install mmsegmentation with pip, open your Python interpreter and copy & paste the following code.
```python
from mmseg.apis import inference_segmentor, init_segmentor
import mmcv
config_file = 'pspnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint_file = 'pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'
# build the model from a config file and a checkpoint file
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
# test a single image and show the results
img = 'test.jpg' # or img = mmcv.imread(img), which will only load it once
result = inference_segmentor(model, img)
# visualize the results in a new window
model.show_result(img, result, show=True)
# or save the visualization results to image files
# you can change the opacity of the painted segmentation map in (0, 1].
model.show_result(img, result, out_file='result.jpg', opacity=0.5)
# test a video and show the results
video = mmcv.VideoReader('video.mp4')
for frame in video:
result = inference_segmentor(model, frame)
model.show_result(frame, result, wait_time=1)
```
You can modify the code above to test a single image or a video; either option can verify that the installation was successful.
## Customize Installation
### CUDA versions
When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:
- For Ampere-based NVIDIA GPUs, such as GeForce 30 series and NVIDIA A100, CUDA 11 is a must.
- For older NVIDIA GPUs, CUDA 11 is backward compatible, but CUDA 10.2 offers better compatibility and is more lightweight.
Please make sure the GPU driver satisfies the minimum version requirements. See [this table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions) for more information.
```{note}
Installing CUDA runtime libraries is enough if you follow our best practices, because no CUDA code will be compiled locally. However if you hope to compile MMCV from source or develop other CUDA operators, you need to install the complete CUDA toolkit from NVIDIA's [website](https://developer.nvidia.com/cuda-downloads), and its version should match the CUDA version of PyTorch. i.e., the specified version of cudatoolkit in `conda install` command.
```
### Install MMCV without MIM
MMCV contains C++ and CUDA extensions, thus depending on PyTorch in a complex way. MIM solves such dependencies automatically and makes the installation easier. However, it is not a must.
To install MMCV with pip instead of MIM, please follow [MMCV installation guides](https://mmcv.readthedocs.io/en/latest/get_started/installation.html). This requires manually specifying a find-url based on PyTorch version and its CUDA version.
For example, the following command installs mmcv-full built for PyTorch 1.10.x and CUDA 11.3.
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
```
### Install on CPU-only platforms
MMSegmentation can be built for a CPU-only environment. In CPU mode you can train (requires MMCV version >= 1.4.4), test, or run inference with a model.
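For example, to force CPU mode on a machine that has GPUs, disable them before launching the usual scripts:
```shell
export CUDA_VISIBLE_DEVICES=-1
python tools/train.py ${CONFIG_FILE}
```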
### Install on Google Colab
[Google Colab](https://research.google.com/) usually has PyTorch installed,
thus we only need to install MMCV and MMSegmentation with the following commands.
**Step 1.** Install [MMCV](https://github.com/open-mmlab/mmcv) using [MIM](https://github.com/open-mmlab/mim).
```shell
!pip3 install openmim
!mim install mmcv-full
```
**Step 2.** Install MMSegmentation from the source.
```shell
!git clone https://github.com/open-mmlab/mmsegmentation.git
%cd mmsegmentation
!pip install -e .
```
**Step 3.** Verification.
```python
import mmseg
print(mmseg.__version__)
# Example output: 0.24.1
```
```{note}
Within Jupyter, the exclamation mark `!` is used to call external executables and `%cd` is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-cd) to change the current working directory of Python.
```
### Using MMSegmentation with Docker
We provide a [Dockerfile](https://github.com/open-mmlab/mmsegmentation/blob/master/docker/Dockerfile) to build an image. Ensure that your [docker version](https://docs.docker.com/engine/install/) >=19.03.
```shell
# build an image with PyTorch 1.11, CUDA 11.3
# If you prefer other versions, just modify the Dockerfile
docker build -t mmsegmentation docker/
```
Run it with
```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmsegmentation/data mmsegmentation
```
## Troubleshooting
If you have some issues during the installation, please first view the [FAQ](faq.md) page.
You may [open an issue](https://github.com/open-mmlab/mmsegmentation/issues/new/choose) on GitHub if no solution is found.
| 8,433 | 40.343137 | 663 | md |
mmsegmentation | mmsegmentation-master/docs/en/inference.md | ## Inference with pretrained models
We provide testing scripts to evaluate a whole dataset (Cityscapes, PASCAL VOC, ADE20k, etc.),
and also some high-level apis for easier integration to other projects.
### Test a dataset
- single GPU
- CPU
- single node multiple GPU
- multiple node
You can use the following commands to test a dataset.
```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
# CPU: If GPU unavailable, directly run the single-gpu testing command above
# CPU: If GPU available, disable GPUs and run the single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
```
Optional arguments:
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file. (After mmseg v0.17, the output results become pre-evaluation results or format result paths)
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset, e.g., `mIoU` is available for all dataset. Cityscapes could be evaluated by `cityscapes` as well as standard `mIoU` metrics.
- `--show`: If specified, segmentation results will be plotted on the images and shown in a new window. It is only applicable to single GPU testing and used for debugging and visualization. Please make sure that GUI is available in your environment, otherwise you may encounter the error like `cannot connect to X server`.
- `--show-dir`: If specified, segmentation results will be plotted on the images and saved to the specified directory. It is only applicable to single GPU testing and used for debugging and visualization. You do NOT need a GUI available in your environment for using this option.
- `--eval-options`: Optional parameters for `dataset.format_results` and `dataset.evaluate` during evaluation. When `efficient_test=True`, it will save intermediate results to local files to save CPU memory. Make sure that you have enough local storage space (more than 20GB). (The `efficient_test` argument has no effect after mmseg v0.17; we use a progressive mode to evaluate and format results which can largely save memory cost and evaluation time.)
Examples:
Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.
1. Test PSPNet and visualize the results. Press any key for the next image.
```shell
python tools/test.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
--show
```
2. Test PSPNet and save the painted images for latter visualization.
```shell
python tools/test.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
--show-dir psp_r50_512x1024_40ki_cityscapes_results
```
3. Test PSPNet on PASCAL VOC (without saving the test results) and evaluate the mIoU.
```shell
python tools/test.py configs/pspnet/pspnet_r50-d8_512x512_20k_voc12aug.py \
checkpoints/pspnet_r50-d8_512x512_20k_voc12aug_20200617_101958-ed5dfbd9.pth \
--eval mIoU
```
4. Test PSPNet with 4 GPUs, and evaluate the standard mIoU and cityscapes metric.
```shell
./tools/dist_test.sh configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
4 --out results.pkl --eval mIoU cityscapes
```
:::{note}
There is some gap (~0.1%) between cityscapes mIoU and our mIoU. The reason is that cityscapes averages each class by class size by default.
We use the simple version without averaging for all datasets.
:::
5. Test PSPNet on cityscapes test split with 4 GPUs, and generate the png files to be submit to the official evaluation server.
First, add the following to the config file `configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py`,
```python
data = dict(
test=dict(
img_dir='leftImg8bit/test',
ann_dir='gtFine/test'))
```
Then run test.
```shell
./tools/dist_test.sh configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
4 --format-only --eval-options "imgfile_prefix=./pspnet_test_results"
```
You will get png files under `./pspnet_test_results` directory.
You may run `zip -r results.zip pspnet_test_results/` and submit the zip file to [evaluation server](https://www.cityscapes-dataset.com/submit/).
6. CPU memory efficient test DeeplabV3+ on Cityscapes (without saving the test results) and evaluate the mIoU.
```shell
python tools/test.py \
configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py \
deeplabv3plus_r18-d8_512x1024_80k_cityscapes_20201226_080942-cff257fe.pth \
--eval-options efficient_test=True \
--eval mIoU
```
Using `pmap` to view the CPU memory footprint, it used 2.25GB CPU memory with `efficient_test=True` and 11.06GB CPU memory with `efficient_test=False`. This optional parameter can save a lot of memory. (After mmseg v0.17, `efficient_test` has no effect and we use a progressive mode to evaluate and format results efficiently by default.)
7. Test PSPNet on LoveDA test split with 1 GPU, and generate the png files to be submit to the official evaluation server.
First, add the following to the config file `configs/pspnet/pspnet_r50-d8_512x512_80k_loveda.py`,
```python
data = dict(
test=dict(
img_dir='img_dir/test',
ann_dir='ann_dir/test'))
```
Then run test.
```shell
python ./tools/test.py configs/pspnet/pspnet_r50-d8_512x512_80k_loveda.py \
checkpoints/pspnet_r50-d8_512x512_80k_loveda_20211104_155728-88610f9f.pth \
--format-only --eval-options "imgfile_prefix=./pspnet_test_results"
```
You will get png files under `./pspnet_test_results` directory.
You may run `zip -r -j Results.zip pspnet_test_results/` and submit the zip file to [evaluation server](https://codalab.lisn.upsaclay.fr/competitions/421).
| 6,384 | 47.371212 | 459 | md |
mmsegmentation | mmsegmentation-master/docs/en/model_zoo.md | # Benchmark and Model Zoo
## Common settings
- We use distributed training with 4 GPUs by default.
- All pytorch-style pretrained backbones on ImageNet are trained by ourselves, with the same procedure in the [paper](https://arxiv.org/pdf/1812.01187.pdf).
  Our ResNet-style backbones are based on the ResNetV1c variant, where the 7x7 conv in the input stem is replaced with three 3x3 convs.
- For the consistency across different hardwares, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` for all 4 GPUs with `torch.backends.cudnn.benchmark=False`.
Note that this value is usually less than what `nvidia-smi` shows.
- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time.
Results are obtained with the script `tools/benchmark.py` which computes the average time on 200 images with `torch.backends.cudnn.benchmark=False`.
- There are two inference modes in this framework.
- `slide` mode: The `test_cfg` will be like `dict(mode='slide', crop_size=(769, 769), stride=(513, 513))`.
In this mode, multiple patches will be cropped from the input image and passed into the network individually.
The crop size and stride between patches are specified by `crop_size` and `stride`.
Overlapping areas are merged by averaging (see the sketch after this list).
- `whole` mode: The `test_cfg` will be like `dict(mode='whole')`.
In this mode, the whole image will be passed into the network directly.
By default, we use `slide` inference for models trained with 769x769 inputs, and `whole` inference for the rest.
- For an input size of 8x+1 (e.g. 769), `align_corners=True` is adopted as a traditional practice.
Otherwise, for an input size of 8x (e.g. 512, 1024), `align_corners=False` is adopted.
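To make the merging behavior of `slide` mode concrete, here is a simplified stand-alone sketch (an illustration only, not MMSegmentation's actual implementation; `model` is assumed to be any callable returning per-pixel logits at the crop's resolution):

```python
import torch

def slide_inference(model, img, crop_size=(769, 769), stride=(513, 513), num_classes=19):
    """Crop overlapping patches, forward each one, and average logits where they overlap."""
    h_crop, w_crop = crop_size
    h_stride, w_stride = stride
    _, _, h, w = img.shape
    preds = img.new_zeros(1, num_classes, h, w)
    count = img.new_zeros(1, 1, h, w)
    for y in range(0, max(h - h_crop, 0) + h_stride, h_stride):
        for x in range(0, max(w - w_crop, 0) + w_stride, w_stride):
            y1 = min(y, max(h - h_crop, 0))  # clamp so the last crop ends at the border
            x1 = min(x, max(w - w_crop, 0))
            crop = img[:, :, y1:y1 + h_crop, x1:x1 + w_crop]
            preds[:, :, y1:y1 + h_crop, x1:x1 + w_crop] += model(crop)
            count[:, :, y1:y1 + h_crop, x1:x1 + w_crop] += 1
    return preds / count  # averaged logits; take argmax over dim 1 for the final mask

# Dummy usage: a fake "model" that returns random logits of the right shape.
logits = slide_inference(lambda c: torch.randn(1, 19, *c.shape[-2:]),
                         torch.randn(1, 3, 1024, 2048))
```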
## Baselines
### FCN
Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn) for details.
### PSPNet
Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet) for details.
### DeepLabV3
Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3) for details.
### PSANet
Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet) for details.
### DeepLabV3+
Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus) for details.
### UPerNet
Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet) for details.
### NonLocal Net
Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net) for details.
### EncNet
Please refer to [EncNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/encnet) for details.
### CCNet
Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet) for details.
### DANet
Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet) for details.
### APCNet
Please refer to [APCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet) for details.
### HRNet
Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet) for details.
### GCNet
Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/gcnet) for details.
### DMNet
Please refer to [DMNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet) for details.
### ANN
Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ann) for details.
### OCRNet
Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet) for details.
### Fast-SCNN
Please refer to [Fast-SCNN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fastscnn) for details.
### ResNeSt
Please refer to [ResNeSt](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/resnest) for details.
### Semantic FPN
Please refer to [Semantic FPN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/sem_fpn) for details.
### PointRend
Please refer to [PointRend](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/point_rend) for details.
### MobileNetV2
Please refer to [MobileNetV2](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2) for details.
### MobileNetV3
Please refer to [MobileNetV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3) for details.
### EMANet
Please refer to [EMANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/emanet) for details.
### DNLNet
Please refer to [DNLNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dnlnet) for details.
### CGNet
Please refer to [CGNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/cgnet) for details.
### Mixed Precision (FP16) Training
Please refer [Mixed Precision (FP16) Training on BiSeNetV2](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/bisenetv2/bisenetv2_fcn_fp16_4x4_1024x1024_160k_cityscapes.py) for details.
### U-Net
Please refer to [U-Net](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/README.md) for details.
### ViT
Please refer to [ViT](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/vit/README.md) for details.
### Swin
Please refer to [Swin](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/swin/README.md) for details.
### SETR
Please refer to [SETR](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/setr/README.md) for details.
## Speed benchmark
### Hardware
- 8 NVIDIA Tesla V100 (32G) GPUs
- Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
### Software environment
- Python 3.7
- PyTorch 1.5
- CUDA 10.1
- CUDNN 7.6.03
- NCCL 2.4.08
### Training speed
For fair comparison, we benchmark all implementations with ResNet-101V1c.
The input size is fixed to 1024x512 with batch size 2.
The training speed is reported as followed, in terms of second per iter (s/iter). The lower, the better.
| Implementation | PSPNet (s/iter) | DeepLabV3+ (s/iter) |
| --------------------------------------------------------------------------- | --------------- | ------------------- |
| [MMSegmentation](https://github.com/open-mmlab/mmsegmentation) | **0.83** | **0.85** |
| [SegmenTron](https://github.com/LikeLy-Journey/SegmenTron) | 0.84 | 0.85 |
| [CASILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch) | 1.15 | N/A |
| [vedaseg](https://github.com/Media-Smart/vedaseg) | 0.95 | 1.25 |
:::{note}
The output stride of DeepLabV3+ is 8.
:::
| 6,935 | 36.090909 | 200 | md |
mmsegmentation | mmsegmentation-master/docs/en/stat.py | #!/usr/bin/env python
# Copyright (c) OpenMMLab. All rights reserved.
import functools as func
import glob
import os.path as osp
import re
import numpy as np
url_prefix = 'https://github.com/open-mmlab/mmsegmentation/blob/master/'
files = sorted(glob.glob('../../configs/*/README.md'))
stats = []
titles = []
num_ckpts = 0
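# Scan every model README: extract the paper title, its type tag and the set of released checkpoints.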
for f in files:
url = osp.dirname(f.replace('../../', url_prefix))
with open(f, 'r') as content_file:
content = content_file.read()
title = content.split('\n')[0].replace('#', '').strip()
ckpts = set(x.lower().strip()
for x in re.findall(r'https?://download.*\.pth', content)
if 'mmsegmentation' in x)
if len(ckpts) == 0:
continue
_papertype = [
x for x in re.findall(r'<!--\s*\[([A-Z]*?)\]\s*-->', content)
]
assert len(_papertype) > 0
papertype = _papertype[0]
paper = set([(papertype, title)])
titles.append(title)
num_ckpts += len(ckpts)
statsmsg = f"""
\t* [{papertype}] [{title}]({url}) ({len(ckpts)} ckpts)
"""
stats.append((paper, ckpts, statsmsg))
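# Aggregate (paper type, title) pairs across all configs and count papers per type.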
allpapers = func.reduce(lambda a, b: a.union(b), [p for p, _, _ in stats])
msglist = '\n'.join(x for _, _, x in stats)
papertypes, papercounts = np.unique([t for t, _ in allpapers],
return_counts=True)
countstr = '\n'.join(
[f' - {t}: {c}' for t, c in zip(papertypes, papercounts)])
modelzoo = f"""
# Model Zoo Statistics
* Number of papers: {len(set(titles))}
{countstr}
* Number of checkpoints: {num_ckpts}
{msglist}
"""
with open('modelzoo_statistics.md', 'w') as f:
f.write(modelzoo)
| 1,642 | 23.893939 | 74 | py |
mmsegmentation | mmsegmentation-master/docs/en/switch_language.md | ## <a href='https://mmsegmentation.readthedocs.io/en/latest/'>English</a>
## <a href='https://mmsegmentation.readthedocs.io/zh_CN/latest/'>简体中文</a>
| 149 | 36.5 | 73 | md |
mmsegmentation | mmsegmentation-master/docs/en/train.md | ## Train a model
MMSegmentation implements distributed training and non-distributed training,
which uses `MMDistributedDataParallel` and `MMDataParallel` respectively.
All outputs (log files and checkpoints) will be saved to the working directory,
which is specified by `work_dir` in the config file.
By default we evaluate the model on the validation set after some iterations, you can change the evaluation interval by adding the interval argument in the training config.
```python
evaluation = dict(interval=4000) # This evaluate the model per 4000 iterations.
```
**\*Important\***: The default learning rate in config files is for 4 GPUs and 2 img/gpu (batch size = 4x2 = 8).
Equivalently, you may also use 8 GPUs and 1 imgs/gpu since all models using cross-GPU SyncBN.
To trade training speed for lower GPU memory usage, you may pass in `--cfg-options model.backbone.with_cp=True` to enable checkpointing in the backbone.
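For example (reusing an existing config for illustration):
```shell
sh tools/dist_train.sh configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py 4 --cfg-options model.backbone.with_cp=True
```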
### Train on a single machine
#### Train with a single GPU
official support:
```shell
sh tools/dist_train.sh ${CONFIG_FILE} 1 [optional arguments]
```
experimental support (Convert SyncBN to BN):
```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
If you want to specify the working directory in the command, you can add an argument `--work-dir ${YOUR_WORK_DIR}`.
#### Train with CPU
The process of training on the CPU is consistent with single GPU training if the machine does not have a GPU. If it has GPUs but you do not want to use them, you just need to disable the GPUs before the training process.
```shell
export CUDA_VISIBLE_DEVICES=-1
```
And then run the script [above](#train-with-a-single-gpu).
```{warning}
The process of training on the CPU is consistent with single GPU training. We just need to disable GPUs before the training process.
```
#### Train with multiple GPUs
```shell
sh tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
Optional arguments are:
- `--no-validate` (**not suggested**): By default, the codebase will perform evaluation at every k iterations during the training. To disable this behavior, use `--no-validate`.
- `--work-dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume-from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file (to continue the training process).
- `--load-from ${CHECKPOINT_FILE}`: Load weights from a checkpoint file (to start finetuning for another task).
- `--deterministic`: Switch on "deterministic" mode which slows down training but the results are reproducible.
Difference between `resume-from` and `load-from`:
- `resume-from` loads both the model weights and optimizer state including the iteration number.
- `load-from` loads only the model weights, starts the training from iteration 0.
An example:
```shell
# checkpoints and logs saved in WORK_DIR=work_dirs/pspnet_r50-d8_512x512_80k_ade20k/
# If work_dir is not set, it will be generated automatically.
sh tools/dist_train.sh configs/pspnet/pspnet_r50-d8_512x512_80k_ade20k.py 8 --work-dir work_dirs/pspnet_r50-d8_512x512_80k_ade20k/ --deterministic
```
**Note**: During training, checkpoints and logs are saved in the same folder structure as the config file under `work_dirs/`. Custom work directory is not recommended since evaluation scripts infer work directories from the config file name. If you want to save your weights somewhere else, please use symlink, for example:
```shell
ln -s ${YOUR_WORK_DIRS} ${MMSEG}/work_dirs
```
#### Launch multiple jobs on a single machine
If you launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs, you need to specify different ports (29500 by default) for each job to avoid communication conflicts. Otherwise, there will be an error message saying `RuntimeError: Address already in use`.
If you use `dist_train.sh` to launch training jobs, you can set the port in commands with environment variable `PORT`.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 sh tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 sh tools/dist_train.sh ${CONFIG_FILE} 4
```
### Train with multiple machines
If you launch with multiple machines simply connected with ethernet, you can run the following commands:
On the first machine:
```shell
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
On the second machine:
```shell
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
Usually it is slow if you do not have high speed networking like InfiniBand.
### Manage jobs with Slurm
Slurm is a good job scheduling system for computing clusters. On a cluster managed by Slurm, you can use slurm_train.sh to spawn training jobs. It supports both single-node and multi-node training.
Train with multiple machines:
```shell
[GPUS=${GPUS}] sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} --work-dir ${WORK_DIR}
```
Here is an example of using 16 GPUs to train PSPNet on the dev partition.
```shell
GPUS=16 sh tools/slurm_train.sh dev pspr50 configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py work_dirs/pspnet_r50-d8_512x1024_40k_cityscapes/
```
When using 'slurm_train.sh' to start multiple tasks on a node, different ports need to be specified. Three settings are provided:
Option 1:
In `config1.py`:
```python
dist_params = dict(backend='nccl', port=29500)
```
In `config2.py`:
```python
dist_params = dict(backend='nccl', port=29501)
```
Then you can launch two jobs with config1.py and config2.py.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2
```
Option 2:
You can set different communication ports without the need to modify the configuration file, but have to set the `cfg-options` to overwrite the default port in configuration file.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1 --cfg-options dist_params.port=29500
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2 --cfg-options dist_params.port=29501
```
Option 3:
You can set the port in the command using the environment variable 'MASTER_PORT':
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 MASTER_PORT=29500 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 MASTER_PORT=29501 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2
```
| 6,671 | 38.247059 | 323 | md |
mmsegmentation | mmsegmentation-master/docs/en/useful_tools.md | ## Useful tools
Apart from training/testing scripts, we provide lots of useful tools under the
`tools/` directory.
### Get the FLOPs and params (experimental)
We provide a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.
```shell
python tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
You will get the result like this.
```none
==============================
Input shape: (3, 2048, 1024)
Flops: 1429.68 GMac
Params: 48.98 M
==============================
```
:::{note}
This tool is still experimental and we do not guarantee that the number is correct. You may well use the result for simple comparisons, but double check it before you adopt it in technical reports or papers.
:::
(1) FLOPs are related to the input shape while parameters are not. The default input shape is (3, 2048, 1024).
(2) Some operators, such as GN and custom operators, are not counted in FLOPs.
### Publish a model
Before you upload a model to AWS, you may want to
(1) convert model weights to CPU tensors, (2) delete the optimizer states and
(3) compute the hash of the checkpoint file and append the hash id to the filename.
```shell
python tools/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```
E.g.,
```shell
python tools/publish_model.py work_dirs/pspnet/latest.pth psp_r50_hszhao_200ep.pth
```
The final output filename will be `psp_r50_hszhao_200ep-{hash id}.pth`.
### Convert to ONNX (experimental)
We provide a script to convert model to [ONNX](https://github.com/onnx/onnx) format. The converted model could be visualized by tools like [Netron](https://github.com/lutzroeder/netron). Besides, we also support comparing the output results between PyTorch and ONNX model.
```bash
python tools/pytorch2onnx.py \
${CONFIG_FILE} \
--checkpoint ${CHECKPOINT_FILE} \
--output-file ${ONNX_FILE} \
--input-img ${INPUT_IMG} \
--shape ${INPUT_SHAPE} \
--rescale-shape ${RESCALE_SHAPE} \
--show \
--verify \
--dynamic-export \
--cfg-options \
model.test_cfg.mode="whole"
```
Description of arguments:
- `config` : The path of a model config file.
- `--checkpoint` : The path of a model checkpoint file.
- `--output-file`: The path of output ONNX model. If not specified, it will be set to `tmp.onnx`.
- `--input-img` : The path of an input image for conversion and visualize.
- `--shape`: The height and width of input tensor to the model. If not specified, it will be set to img_scale of test_pipeline.
- `--rescale-shape`: rescale shape of output, set this value to avoid OOM, only work on `slide` mode.
- `--show`: Determines whether to print the architecture of the exported model. If not specified, it will be set to `False`.
- `--verify`: Determines whether to verify the correctness of an exported model. If not specified, it will be set to `False`.
- `--dynamic-export`: Determines whether to export ONNX model with dynamic input and output shapes. If not specified, it will be set to `False`.
- `--cfg-options`: Update config options.
:::{note}
This tool is still experimental. Some customized operators are not supported for now.
:::
### Evaluate ONNX model
We provide `tools/deploy_test.py` to evaluate ONNX model with different backend.
#### Prerequisite
- Install onnx and onnxruntime-gpu
```shell
pip install onnx onnxruntime-gpu
```
- Install TensorRT following [how-to-build-tensorrt-plugins-in-mmcv](https://mmcv.readthedocs.io/en/latest/tensorrt_plugin.html#how-to-build-tensorrt-plugins-in-mmcv)(optional)
#### Usage
```bash
python tools/deploy_test.py \
${CONFIG_FILE} \
${MODEL_FILE} \
${BACKEND} \
--out ${OUTPUT_FILE} \
--eval ${EVALUATION_METRICS} \
--show \
--show-dir ${SHOW_DIRECTORY} \
--cfg-options ${CFG_OPTIONS} \
--eval-options ${EVALUATION_OPTIONS} \
--opacity ${OPACITY} \
```
Description of all arguments
- `config`: The path of a model config file.
- `model`: The path of a converted model file.
- `backend`: Backend of the inference, options: `onnxruntime`, `tensorrt`.
- `--out`: The path of output result file in pickle format.
- `--format-only` : Format the output results without performing evaluation. It is useful when you want to format the result to a specific format and submit it to the test server. If not specified, it will be set to `False`. Note that this argument is **mutually exclusive** with `--eval`.
- `--eval`: Evaluation metrics, which depends on the dataset, e.g., "mIoU" for generic datasets, and "cityscapes" for Cityscapes. Note that this argument is **mutually exclusive** with `--format-only`.
- `--show`: Show results flag.
- `--show-dir`: Directory where painted images will be saved.
- `--cfg-options`: Override some settings in the used config file; the key-value pair in `xxx=yyy` format will be merged into the config file.
- `--eval-options`: Custom options for evaluation; the key-value pair in `xxx=yyy` format will be kwargs for the `dataset.evaluate()` function.
- `--opacity`: Opacity of painted segmentation map. In (0, 1\] range.
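For example, assuming the ONNX file produced by the conversion step above, evaluating it with the ONNXRuntime backend might look like this:

```bash
python tools/deploy_test.py \
    configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    pspnet_r50-d8_512x1024_40k_cityscapes.onnx \
    onnxruntime \
    --eval mIoU
```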
#### Results and Models
| Model | Config | Dataset | Metric | PyTorch | ONNXRuntime | TensorRT-fp32 | TensorRT-fp16 |
| :--------: | :---------------------------------------------: | :--------: | :----: | :-----: | :---------: | :-----------: | :-----------: |
| FCN | fcn_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 72.2 | 72.2 | 72.2 | 72.2 |
| PSPNet | pspnet_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 77.8 | 77.8 | 77.8 | 77.8 |
| deeplabv3 | deeplabv3_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 79.0 | 79.0 | 79.0 | 79.0 |
| deeplabv3+ | deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 79.6 | 79.5 | 79.5 | 79.5 |
| PSPNet | pspnet_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.2 | 78.1 | | |
| deeplabv3 | deeplabv3_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.5 | 78.3 | | |
| deeplabv3+ | deeplabv3plus_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.9 | 78.7 | | |
:::{note}
TensorRT is only available for configs with `whole` mode.
:::
### Convert to TorchScript (experimental)
We also provide a script to convert a model to [TorchScript](https://pytorch.org/docs/stable/jit.html) format. You can use the PyTorch C++ API [LibTorch](https://pytorch.org/docs/stable/cpp_index.html) to run inference with the trained model. The converted model can be visualized by tools like [Netron](https://github.com/lutzroeder/netron). Besides, we also support comparing the output results between the PyTorch and TorchScript models.
```shell
python tools/pytorch2torchscript.py \
${CONFIG_FILE} \
--checkpoint ${CHECKPOINT_FILE} \
    --output-file ${OUTPUT_FILE} \
    --shape ${INPUT_SHAPE} \
--verify \
--show
```
Description of arguments:
- `config` : The path of a pytorch model config file.
- `--checkpoint` : The path of a pytorch model checkpoint file.
- `--output-file`: The path of output TorchScript model. If not specified, it will be set to `tmp.pt`.
- `--input-img` : The path of an input image for conversion and visualization.
- `--shape`: The height and width of input tensor to the model. If not specified, it will be set to `512 512`.
- `--show`: Determines whether to print the traced graph of the exported model. If not specified, it will be set to `False`.
- `--verify`: Determines whether to verify the correctness of an exported model. If not specified, it will be set to `False`.
:::{note}
Only PyTorch>=1.8.0 is supported for now.
:::
:::{note}
This tool is still experimental. Some customized operators are not supported for now.
:::
Examples:
- Convert the cityscapes PSPNet pytorch model.
```shell
python tools/pytorch2torchscript.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
--checkpoint checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
--output-file checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pt \
--shape 512 1024
```
### Convert to TensorRT (experimental)
A script to convert [ONNX](https://github.com/onnx/onnx) model to [TensorRT](https://developer.nvidia.com/tensorrt) format.
Prerequisite
- Install `mmcv-full` with ONNXRuntime custom ops and TensorRT plugins following [ONNXRuntime in mmcv](https://mmcv.readthedocs.io/en/latest/deployment/onnxruntime_op.html) and [TensorRT plugin in mmcv](https://github.com/open-mmlab/mmcv/blob/master/docs/en/deployment/tensorrt_plugin.md).
- Use [pytorch2onnx](#convert-to-onnx-experimental) to convert the model from PyTorch to ONNX.
Usage
```bash
python ${MMSEG_PATH}/tools/onnx2tensorrt.py \
${CFG_PATH} \
${ONNX_PATH} \
--trt-file ${OUTPUT_TRT_PATH} \
--min-shape ${MIN_SHAPE} \
--max-shape ${MAX_SHAPE} \
--input-img ${INPUT_IMG} \
--show \
--verify
```
Description of all arguments:
- `config` : Config file of the model.
- `model` : Path to the input ONNX model.
- `--trt-file` : Path to the output TensorRT engine.
- `--max-shape` : Maximum shape of model input.
- `--min-shape` : Minimum shape of model input.
- `--fp16` : Enable fp16 model conversion.
- `--workspace-size` : Max workspace size in GiB.
- `--input-img` : Image for visualization.
- `--show` : Enable result visualization.
- `--dataset` : Palette provider, `CityscapesDataset` as default.
- `--verify` : Verify the outputs of ONNXRuntime and TensorRT.
- `--verbose` : Whether to output verbose logging messages while creating the TensorRT engine. Defaults to False.
:::{note}
Only tested in `whole` mode.
:::
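As an illustrative sketch (the file names are assumptions, and the exact format of the shape arguments should be checked against the script's `--help`):

```bash
python tools/onnx2tensorrt.py \
    configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    pspnet_r50-d8_512x1024_40k_cityscapes.onnx \
    --trt-file pspnet_r50-d8_512x1024_40k_cityscapes.trt \
    --min-shape 1 3 512 1024 \
    --max-shape 1 3 512 1024 \
    --verify
```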
## Miscellaneous
### Print the entire config
`tools/print_config.py` prints the whole config verbatim, expanding all its
imports.
```shell
python tools/print_config.py \
${CONFIG} \
--graph \
    --cfg-options ${OPTIONS [OPTIONS...]}
```
Description of arguments:
- `config` : The path of a pytorch model config file.
- `--graph` : Determines whether to print the model's graph.
- `--cfg-options`: Custom options to override settings in the config file.
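For example, to expand and inspect the PSPNet config used throughout this document:

```shell
python tools/print_config.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
```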
### Plot training logs
`tools/analyze_logs.py` plots loss/mIoU curves given a training log file. Run `pip install seaborn` first to install the dependency.
```shell
python tools/analyze_logs.py xxx.log.json [--keys ${KEYS}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```
Examples:
- Plot the mIoU, mAcc, aAcc metrics.
```shell
python tools/analyze_logs.py log.json --keys mIoU mAcc aAcc --legend mIoU mAcc aAcc
```
- Plot loss metric.
```shell
python tools/analyze_logs.py log.json --keys loss --legend loss
```
### Model conversion
`tools/model_converters/` provides several scripts to convert pretrained models released by other repos to MMSegmentation style.
#### ViT, Swin and MiT Transformer Models
- ViT
`tools/model_converters/vit2mmseg.py` converts keys in timm pretrained ViT models to MMSegmentation style.
```shell
python tools/model_converters/vit2mmseg.py ${SRC} ${DST}
```
- Swin
`tools/model_converters/swin2mmseg.py` converts keys in official pretrained Swin models to MMSegmentation style.
```shell
python tools/model_converters/swin2mmseg.py ${SRC} ${DST}
```
- SegFormer
`tools/model_converters/mit2mmseg.py` converts keys in official pretrained MiT models to MMSegmentation style.
```shell
python tools/model_converters/mit2mmseg.py ${SRC} ${DST}
```
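For example, converting a downloaded timm ViT checkpoint might look like the following (both file names are illustrative):

```shell
python tools/model_converters/vit2mmseg.py \
    pretrain/jx_vit_base_p16_224-80ecf9dd.pth \
    pretrain/vit_base_p16_mmseg.pth
```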
## Model Serving
In order to serve an `MMSegmentation` model with [`TorchServe`](https://pytorch.org/serve/), you can follow the steps:
### 1. Convert model from MMSegmentation to TorchServe
```shell
python tools/torchserve/mmseg2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}
```
:::{note}
${MODEL_STORE} needs to be an absolute path to a folder.
:::
### 2. Build `mmseg-serve` docker image
```shell
docker build -t mmseg-serve:latest docker/serve/
```
### 3. Run `mmseg-serve`
Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).
In order to run on GPU, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). You can omit the `--gpus` argument in order to run on CPU.
Example:
```shell
docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmseg-serve:latest
```
[Read the docs](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md) about the Inference (8080), Management (8081) and Metrics (8082) APIs.
### 4. Test deployment
```shell
curl -O https://raw.githubusercontent.com/open-mmlab/mmsegmentation/master/resources/3dogs.jpg
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg -o 3dogs_mask.png
```
The response will be a ".png" mask.
You can visualize the output as follows:
```python
import matplotlib.pyplot as plt
import mmcv
plt.imshow(mmcv.imread("3dogs_mask.png", "grayscale"))
plt.show()
```
You should see something similar to:

You can also use `tools/torchserve/test_torchserve.py` to compare the results of TorchServe and PyTorch, and visualize them.
```shell
python tools/torchserve/test_torchserve.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--result-image ${RESULT_IMAGE}] [--device ${DEVICE}]
```
Example:
```shell
python tools/torchserve/test_torchserve.py \
demo/demo.png \
configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
fcn
```
## Confusion Matrix
In order to generate and plot an `n x n` confusion matrix where `n` is the number of classes, you can follow the steps:
### 1. Generate a prediction result in pkl format using `test.py`
```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${PATH_TO_RESULT_FILE}]
```
Note that `--eval` should be left unset so that the result file contains raw prediction results in numpy format. The usage for distributed testing is just the same.
Example:
```shell
python tools/test.py \
configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
--out result/pred_result.pkl
```
### 2. Use `confusion_matrix.py` to generate and plot a confusion matrix
```shell
python tools/confusion_matrix.py ${CONFIG_FILE} ${PATH_TO_RESULT_FILE} ${SAVE_DIR} --show
```
Description of arguments:
- `config`: Path to the test config file.
- `prediction_path`: Path to the prediction .pkl result.
- `save_dir`: Directory where confusion matrix will be saved.
- `--show`: Enable result visualization.
- `--color-theme`: Theme of the matrix color map.
- `--cfg-options`: Custom options to override settings in the config file.
Example:
```shell
python tools/confusion_matrix.py \
configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \
result/pred_result.pkl \
result/confusion_matrix \
--show
```
## Model ensemble
To ensemble the prediction probabilities of multiple models, we provide `tools/model_ensemble.py`.
### Usage
```bash
python tools/model_ensemble.py \
--config ${CONFIG_FILE1} ${CONFIG_FILE2} ... \
  --checkpoint ${CHECKPOINT_FILE1} ${CHECKPOINT_FILE2} ... \
  --aug-test \
  --out ${OUTPUT_DIR} \
  --gpus ${GPU_USED}
```
### Description of all arguments
- `--config`: Path to the config file for the ensemble model
- `--checkpoint`: Path to the checkpoint file for the ensemble model
- `--aug-test`: Whether to use flip and multi-scale test
- `--out`: Save folder for model ensemble results
- `--gpus`: ID of the GPU used for model ensemble
### Result of model ensemble
- The model ensemble generates an unrendered segmentation mask for each input. For an input image of shape `[H, W]`, the segmentation mask has shape `[H, W]`, and each pixel value in the mask represents the segmented category at that position.
- The model ensemble result keeps the same filename as the ground truth: if the ground-truth file is named `1.png`, the ensemble result will also be named `1.png` and placed in the folder specified by `--out`.
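For example, ensembling the PSPNet and FCN configs used elsewhere in this document might look like this (the checkpoint paths and output folder are illustrative):

```bash
python tools/model_ensemble.py \
  --config configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \
  --checkpoint checkpoints/pspnet_r50-d8.pth checkpoints/fcn_r50-d8.pth \
  --aug-test \
  --out work_dirs/ensemble_results \
  --gpus 0
```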
| 16,685 | 35.592105 | 423 | md |
mmsegmentation | mmsegmentation-master/docs/en/_static/css/readthedocs.css | .header-logo {
background-image: url("../images/mmsegmentation.png");
background-size: 201px 40px;
height: 40px;
width: 201px;
}
| 145 | 19.857143 | 58 | css |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/config.md | # Tutorial 1: Learn about Configs
We incorporate modular and inheritance design into our config system, which makes it convenient to conduct various experiments.
If you wish to inspect the config file, you may run `python tools/print_config.py /PATH/TO/CONFIG` to see the complete config.
You may also pass `--cfg-options xxx.yyy=zzz` to see updated config.
## Config File Structure
There are 4 basic component types under `config/_base_`, dataset, model, schedule, default_runtime.
Many methods could be easily constructed with one of each like DeepLabV3, PSPNet.
The configs that are composed of components from `_base_` are called _primitive_.
For all configs under the same folder, it is recommended to have only **one** _primitive_ config. All other configs should inherit from the _primitive_ config. In this way, the maximum inheritance level is 3.
For easy understanding, we recommend contributors to inherit from existing methods.
For example, if some modification is made based on DeepLabV3, users may first inherit the basic DeepLabV3 structure by specifying `_base_ = ../deeplabv3/deeplabv3_r50_512x1024_40ki_cityscapes.py`, then modify the necessary fields in the config files.
If you are building an entirely new method that does not share the structure with any of the existing methods, you may create a folder `xxxnet` under `configs`.
Please refer to [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html) for detailed documentation.
## Config Name Style
We follow the below style to name config files. Contributors are advised to follow the same style.
```
{model}_{backbone}_[misc]_[gpu x batch_per_gpu]_{resolution}_{iterations}_{dataset}
```
`{xxx}` is required field and `[yyy]` is optional.
- `{model}`: model type like `psp`, `deeplabv3`, etc.
- `{backbone}`: backbone type like `r50` (ResNet-50), `x101` (ResNeXt-101).
- `[misc]`: miscellaneous setting/plugins of model, e.g. `dconv`, `gcb`, `attention`, `mstrain`.
- `[gpu x batch_per_gpu]`: GPUs and samples per GPU, `8x2` is used by default.
- `{resolution}`: the crop size during training like `512x1024`, `512x512`.
- `{iterations}`: number of training iterations like `160k`.
- `{dataset}`: dataset like `cityscapes`, `voc12aug`, `ade`.

For example, `pspnet_r50-d8_512x1024_40k_cityscapes.py` denotes a PSPNet with a ResNet-50-D8 backbone, trained with a 512x1024 crop for 40k iterations on Cityscapes, using the default `8x2` GPU and batch setting.
## An Example of PSPNet
To help the users have a basic idea of a complete config and the modules in a modern semantic segmentation system,
we make brief comments on the config of PSPNet using ResNet50V1c as follows.
For more detailed usage and the corresponding alternative for each module, please refer to the API documentation.
```python
norm_cfg = dict(type='SyncBN', requires_grad=True) # Segmentation usually uses SyncBN
model = dict(
type='EncoderDecoder', # Name of segmentor
pretrained='open-mmlab://resnet50_v1c', # The ImageNet pretrained backbone to be loaded
backbone=dict(
type='ResNetV1c', # The type of backbone. Please refer to mmseg/models/backbones/resnet.py for details.
depth=50, # Depth of backbone. Normally 50, 101 are used.
num_stages=4, # Number of stages of backbone.
out_indices=(0, 1, 2, 3), # The index of output feature maps produced in each stages.
dilations=(1, 1, 2, 4), # The dilation rate of each layer.
strides=(1, 2, 1, 1), # The stride of each layer.
norm_cfg=dict( # The configuration of norm layer.
type='SyncBN', # Type of norm layer. Usually it is SyncBN.
requires_grad=True), # Whether to train the gamma and beta in norm
norm_eval=False, # Whether to freeze the statistics in BN
style='pytorch', # The style of backbone, 'pytorch' means that stride 2 layers are in 3x3 conv, 'caffe' means stride 2 layers are in 1x1 convs.
        contract_dilation=True),  # When dilation > 1, whether to contract the first layer of dilation.
decode_head=dict(
type='PSPHead', # Type of decode head. Please refer to mmseg/models/decode_heads for available options.
in_channels=2048, # Input channel of decode head.
in_index=3, # The index of feature map to select.
channels=512, # The intermediate channels of decode head.
pool_scales=(1, 2, 3, 6), # The avg pooling scales of PSPHead. Please refer to paper for details.
dropout_ratio=0.1, # The dropout ratio before final classification layer.
num_classes=19, # Number of segmentation class. Usually 19 for cityscapes, 21 for VOC, 150 for ADE20k.
norm_cfg=dict(type='SyncBN', requires_grad=True), # The configuration of norm layer.
align_corners=False, # The align_corners argument for resize in decoding.
loss_decode=dict( # Config of loss function for the decode_head.
type='CrossEntropyLoss', # Type of loss used for segmentation.
use_sigmoid=False, # Whether use sigmoid activation for segmentation.
loss_weight=1.0)), # Loss weight of decode head.
auxiliary_head=dict(
type='FCNHead', # Type of auxiliary head. Please refer to mmseg/models/decode_heads for available options.
in_channels=1024, # Input channel of auxiliary head.
in_index=2, # The index of feature map to select.
channels=256, # The intermediate channels of decode head.
num_convs=1, # Number of convs in FCNHead. It is usually 1 in auxiliary head.
concat_input=False, # Whether concat output of convs with input before classification layer.
dropout_ratio=0.1, # The dropout ratio before final classification layer.
num_classes=19, # Number of segmentation class. Usually 19 for cityscapes, 21 for VOC, 150 for ADE20k.
norm_cfg=dict(type='SyncBN', requires_grad=True), # The configuration of norm layer.
align_corners=False, # The align_corners argument for resize in decoding.
loss_decode=dict( # Config of loss function for the decode_head.
type='CrossEntropyLoss', # Type of loss used for segmentation.
use_sigmoid=False, # Whether use sigmoid activation for segmentation.
loss_weight=0.4))) # Loss weight of auxiliary head, which is usually 0.4 of decode head.
train_cfg = dict()  # train_cfg is just a placeholder for now.
test_cfg = dict(mode='whole') # The test mode, options are 'whole' and 'sliding'. 'whole': whole image fully-convolutional test. 'sliding': sliding crop window on the image.
dataset_type = 'CityscapesDataset' # Dataset type, this will be used to define the dataset.
data_root = 'data/cityscapes/' # Root path of data.
img_norm_cfg = dict( # Image normalization config to normalize the input images.
    mean=[123.675, 116.28, 103.53], # Mean values used to pre-train the backbone models.
    std=[58.395, 57.12, 57.375], # Standard deviation used to pre-train the backbone models.
    to_rgb=True) # The channel order of the images used to pre-train the backbone models.
crop_size = (512, 1024) # The crop size during training.
train_pipeline = [ # Training pipeline.
dict(type='LoadImageFromFile'), # First pipeline to load images from file path.
dict(type='LoadAnnotations'), # Second pipeline to load annotations for current image.
dict(type='Resize', # Augmentation pipeline that resize the images and their annotations.
img_scale=(2048, 1024), # The largest scale of image.
ratio_range=(0.5, 2.0)), # The augmented scale range as ratio.
dict(type='RandomCrop', # Augmentation pipeline that randomly crop a patch from current image.
crop_size=(512, 1024), # The crop size of patch.
cat_max_ratio=0.75), # The max area ratio that could be occupied by single category.
dict(
type='RandomFlip', # Augmentation pipeline that flip the images and their annotations
flip_ratio=0.5), # The ratio or probability to flip
dict(type='PhotoMetricDistortion'), # Augmentation pipeline that distort current image with several photo metric methods.
dict(
type='Normalize', # Augmentation pipeline that normalize the input images
mean=[123.675, 116.28, 103.53], # These keys are the same of img_norm_cfg since the
std=[58.395, 57.12, 57.375], # keys of img_norm_cfg are used here as arguments
to_rgb=True),
dict(type='Pad', # Augmentation pipeline that pad the image to specified size.
size=(512, 1024), # The output size of padding.
pad_val=0, # The padding value for image.
seg_pad_val=255), # The padding value of 'gt_semantic_seg'.
dict(type='DefaultFormatBundle'), # Default format bundle to gather data in the pipeline
dict(type='Collect', # Pipeline that decides which keys in the data should be passed to the segmentor
keys=['img', 'gt_semantic_seg'])
]
test_pipeline = [
dict(type='LoadImageFromFile'), # First pipeline to load images from file path
dict(
type='MultiScaleFlipAug', # An encapsulation that encapsulates the test time augmentations
img_scale=(2048, 1024), # Decides the largest scale for testing, used for the Resize pipeline
flip=False, # Whether to flip images during testing
transforms=[
dict(type='Resize', # Use resize augmentation
keep_ratio=True), # Whether to keep the ratio between height and width, the img_scale set here will be suppressed by the img_scale set above.
            dict(type='RandomFlip'), # Though RandomFlip is added in the pipeline, it is not used when flip=False
dict(
type='Normalize', # Normalization config, the values are from img_norm_cfg
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', # Convert image to tensor
keys=['img']),
dict(type='Collect', # Collect pipeline that collect necessary keys for testing.
keys=['img'])
])
]
data = dict(
samples_per_gpu=2, # Batch size of a single GPU
workers_per_gpu=2, # Worker to pre-fetch data for each single GPU
train=dict( # Train dataset config
type='CityscapesDataset', # Type of dataset, refer to mmseg/datasets/ for details.
data_root='data/cityscapes/', # The root of dataset.
img_dir='leftImg8bit/train', # The image directory of dataset.
ann_dir='gtFine/train', # The annotation directory of dataset.
pipeline=[ # pipeline, this is passed by the train_pipeline created before.
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
dict(type='RandomCrop', crop_size=(512, 1024), cat_max_ratio=0.75),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='PhotoMetricDistortion'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size=(512, 1024), pad_val=0, seg_pad_val=255),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg'])
]),
val=dict( # Validation dataset config
type='CityscapesDataset',
data_root='data/cityscapes/',
img_dir='leftImg8bit/val',
ann_dir='gtFine/val',
pipeline=[ # Pipeline is passed by test_pipeline created before
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
test=dict(
type='CityscapesDataset',
data_root='data/cityscapes/',
img_dir='leftImg8bit/val',
ann_dir='gtFine/val',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]))
log_config = dict( # config to register logger hook
interval=50, # Interval to print the log
hooks=[
dict(type='TextLoggerHook', by_epoch=False),
dict(type='TensorboardLoggerHook', by_epoch=False),
dict(type='MMSegWandbHook', by_epoch=False, # The Wandb logger is also supported, It requires `wandb` to be installed.
init_kwargs={'entity': "OpenMMLab", # The entity used to log on Wandb
'project': "MMSeg", # Project name in WandB
'config': cfg_dict}), # Check https://docs.wandb.ai/ref/python/init for more init arguments.
# MMSegWandbHook is mmseg implementation of WandbLoggerHook. ClearMLLoggerHook, DvcliveLoggerHook, MlflowLoggerHook, NeptuneLoggerHook, PaviLoggerHook, SegmindLoggerHook are also supported based on MMCV implementation.
])
dist_params = dict(backend='nccl') # Parameters to setup distributed training, the port can also be set.
log_level = 'INFO' # The level of logging.
load_from = None # load models as a pre-trained model from a given path. This will not resume training.
resume_from = None # Resume checkpoints from a given path, the training will be resumed from the iteration when the checkpoint was saved.
workflow = [('train', 1)] # Workflow for runner. [('train', 1)] means there is only one workflow and the workflow named 'train' is executed once. The workflow trains the model by 40000 iterations according to the `runner.max_iters`.
cudnn_benchmark = True # Whether use cudnn_benchmark to speed up, which is fast for fixed input size.
optimizer = dict( # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
type='SGD', # Type of optimizers, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/optimizer/default_constructor.py#L13 for more details
lr=0.01, # Learning rate of optimizers, see detail usages of the parameters in the documentation of PyTorch
momentum=0.9, # Momentum
weight_decay=0.0005) # Weight decay of SGD
optimizer_config = dict() # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details.
lr_config = dict(
policy='poly', # The policy of scheduler, also support Step, CosineAnnealing, Cyclic, etc. Refer to details of supported LrUpdater from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py#L9.
power=0.9, # The power of polynomial decay.
min_lr=0.0001, # The minimum learning rate to stable the training.
by_epoch=False) # Whether count by epoch or not.
runner = dict(
type='IterBasedRunner', # Type of runner to use (i.e. IterBasedRunner or EpochBasedRunner)
max_iters=40000) # Total number of iterations. For EpochBasedRunner use `max_epochs`
checkpoint_config = dict( # Config to set the checkpoint hook, Refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
by_epoch=False, # Whether count by epoch or not.
interval=4000) # The save interval.
evaluation = dict( # The config to build the evaluation hook. Please refer to mmseg/core/evaluation/eval_hook.py for details.
interval=4000, # The interval of evaluation.
metric='mIoU') # The evaluation metric.
```
## FAQ
### Ignore some fields in the base configs
Sometimes, you may set `_delete_=True` to ignore some of the fields in base configs.
You may refer to [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html#inherit-from-base-config-with-ignored-fields) for simple illustration.
In MMSegmentation, for example, suppose you want to change the backbone of PSPNet, whose config is the following.
```python
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
pretrained='torchvision://resnet50',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(...),
auxiliary_head=dict(...))
```
`ResNet` and `HRNet` use different keyword arguments for construction, so the old keys cannot simply be overridden.
```python
_base_ = '../pspnet/psp_r50_512x1024_40ki_cityscapes.py'
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w32',
backbone=dict(
_delete_=True,
type='HRNet',
norm_cfg=norm_cfg,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
decode_head=dict(...),
auxiliary_head=dict(...))
```
The `_delete_=True` would replace all old keys in `backbone` field with new keys.
### Use intermediate variables in configs
Some intermediate variables are used in the config files, like `train_pipeline`/`test_pipeline` in datasets.
It's worth noting that when modifying intermediate variables in the children configs, users need to pass the intermediate variables into the corresponding fields again.
For example, we would like to change the multi-scale strategy to train/test a PSPNet. `train_pipeline`/`test_pipeline` are the intermediate variables we would like to modify.
```python
_base_ = '../pspnet/psp_r50_512x1024_40ki_cityscapes.py'
crop_size = (512, 1024)
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2048, 1024), ratio_range=(1.0, 2.0)), # change to [1., 2.]
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], # change to multi scale testing
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
train=dict(pipeline=train_pipeline),
val=dict(pipeline=test_pipeline),
test=dict(pipeline=test_pipeline))
```
We first define the new `train_pipeline`/`test_pipeline` and pass them into `data`.
Similarly, if we would like to switch from `SyncBN` to `BN` or `MMSyncBN`, we need to substitute every `norm_cfg` in the config.
```python
_base_ = '../pspnet/psp_r50_512x1024_40ki_cityscapes.py'
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
backbone=dict(norm_cfg=norm_cfg),
decode_head=dict(norm_cfg=norm_cfg),
auxiliary_head=dict(norm_cfg=norm_cfg))
```
| 20,875 | 52.804124 | 248 | md |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/customize_datasets.md | # Tutorial 2: Customize Datasets
## Data configuration
The `data` variable in the config file holds the data configuration; it defines the arguments that are used in datasets and dataloaders.
Here is an example of data configuration:
```python
data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
train=dict(
type='ADE20KDataset',
data_root='data/ade/ADEChallengeData2016',
img_dir='images/training',
ann_dir='annotations/training',
pipeline=train_pipeline),
val=dict(
type='ADE20KDataset',
data_root='data/ade/ADEChallengeData2016',
img_dir='images/validation',
ann_dir='annotations/validation',
pipeline=test_pipeline),
test=dict(
type='ADE20KDataset',
data_root='data/ade/ADEChallengeData2016',
img_dir='images/validation',
ann_dir='annotations/validation',
pipeline=test_pipeline))
```
- `train`, `val` and `test`: The [`config`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/config.md)s to build dataset instances for model training, validation and testing by
  using the [`build and registry`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/registry.md) mechanism.
- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel training and `samples_per_gpu=4`, the `batch_size` is `8*4=32`.
If you would like to define `batch_size` for testing and validation, please use `test_dataloader` and
`val_dataloader` with mmseg >=0.24.1.
- `workers_per_gpu`: How many subprocesses per gpu to use for data loading. `0` means that the data will be loaded in the main process.
**Note:** `samples_per_gpu` only works for model training; the default `samples_per_gpu` is 1 in mmseg for model testing and validation (batch inference is not supported yet).
**Note:** before v0.24.1, except `train`, `val`, `test`, `samples_per_gpu` and `workers_per_gpu`, the other keys in `data` must be
input keyword arguments for `dataloader` in PyTorch, and the dataloaders used for model training, validation and testing have the same input arguments.
Since v0.24.1, mmseg supports `train_dataloader`, `test_dataloader` and `val_dataloader` to specify different keyword arguments; the overall arguments definition is still supported, but a specific dataloader setting has higher priority.
Here is an example for specific dataloader:
```python
data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
shuffle=True,
train=dict(type='xxx', ...),
val=dict(type='xxx', ...),
test=dict(type='xxx', ...),
# Use different batch size during validation and testing.
val_dataloader=dict(samples_per_gpu=1, workers_per_gpu=4, shuffle=False),
test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=4, shuffle=False))
```
Assume only one GPU is used for model training and testing. As the specific dataloader settings have higher priority than the overall definition, the batch size for training is `4` and the dataset will be shuffled, while the batch size for testing and validation is `1` and the dataset will not be shuffled.
To make the data configuration clearer, we recommend using the specific dataloader settings instead of the overall dataloader setting after v0.24.1, like:
```python
data = dict(
train=dict(type='xxx', ...),
val=dict(type='xxx', ...),
test=dict(type='xxx', ...),
# Use specific dataloader setting
train_dataloader=dict(samples_per_gpu=4, workers_per_gpu=4, shuffle=True),
val_dataloader=dict(samples_per_gpu=1, workers_per_gpu=4, shuffle=False),
test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=4, shuffle=False))
```
**Note:** in model training, the mmseg defaults for the dataloader are `shuffle=True` and `drop_last=True`;
in model validation and testing, the defaults are `shuffle=False` and `drop_last=False`.
## Customize datasets by reorganizing data
The simplest way is to convert your dataset so that your data is organized into the folder structure below.
An example of the file structure is as follows.
```none
├── data
│ ├── my_dataset
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ │ ├── xxx{img_suffix}
│ │ │ │ ├── yyy{img_suffix}
│ │ │ │ ├── zzz{img_suffix}
│ │ │ ├── val
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ │ ├── xxx{seg_map_suffix}
│ │ │ │ ├── yyy{seg_map_suffix}
│ │ │ │ ├── zzz{seg_map_suffix}
│ │ │ ├── val
```
A training pair consists of the files in `img_dir`/`ann_dir` that share the same filename prefix.
If `split` argument is given, only part of the files in img_dir/ann_dir will be loaded.
We may specify the prefix of files we would like to be included in the split txt.
More specifically, for a split txt like following,
```none
xxx
zzz
```
Only
`data/my_dataset/img_dir/train/xxx{img_suffix}`,
`data/my_dataset/img_dir/train/zzz{img_suffix}`,
`data/my_dataset/ann_dir/train/xxx{seg_map_suffix}`,
`data/my_dataset/ann_dir/train/zzz{seg_map_suffix}` will be loaded.
:::{note}
The annotations are images of shape (H, W), and the pixel values should fall in the range `[0, num_classes - 1]`.
You may use `'P'` mode of [pillow](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#palette) to create your annotation image with color.
:::
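As a minimal sketch, the generic `CustomDataset` can consume such a folder structure directly. All paths, suffixes, class names and palette colors below are illustrative assumptions for the hypothetical `my_dataset` above:

```python
# Illustrative only: adapt paths, suffixes, classes and palette to your data.
data = dict(
    train=dict(
        type='CustomDataset',
        data_root='data/my_dataset',
        img_dir='img_dir/train',
        ann_dir='ann_dir/train',
        img_suffix='.jpg',
        seg_map_suffix='.png',
        classes=('background', 'foreground'),
        palette=[[0, 0, 0], [255, 255, 255]],
        pipeline=train_pipeline))
```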
## Customize datasets by mixing dataset
MMSegmentation also supports mixing datasets for training.
Currently it supports concatenating, repeating and multi-image mixing of datasets.
### Repeat dataset
We use `RepeatDataset` as a wrapper to repeat the dataset.
For example, suppose the original dataset is `Dataset_A`; to repeat it, the config looks like the following.
```python
dataset_A_train = dict(
type='RepeatDataset',
times=N,
dataset=dict( # This is the original config of Dataset_A
type='Dataset_A',
...
pipeline=train_pipeline
)
)
```
### Concatenate dataset
There are 2 ways to concatenate datasets.
1. If the datasets you want to concatenate are of the same type with different annotation files,
   you can concatenate the dataset configs like the following.
1. You may concatenate two `ann_dir`.
```python
dataset_A_train = dict(
type='Dataset_A',
img_dir = 'img_dir',
ann_dir = ['anno_dir_1', 'anno_dir_2'],
pipeline=train_pipeline
)
```
2. You may concatenate two `split`.
```python
dataset_A_train = dict(
type='Dataset_A',
img_dir = 'img_dir',
ann_dir = 'anno_dir',
split = ['split_1.txt', 'split_2.txt'],
pipeline=train_pipeline
)
```
3. You may concatenate two `ann_dir` and `split` simultaneously.
```python
dataset_A_train = dict(
type='Dataset_A',
img_dir = 'img_dir',
ann_dir = ['anno_dir_1', 'anno_dir_2'],
split = ['split_1.txt', 'split_2.txt'],
pipeline=train_pipeline
)
```
   In this case, `anno_dir_1` and `anno_dir_2` correspond to `split_1.txt` and `split_2.txt`, respectively.
2. In case the datasets you want to concatenate are different, you can concatenate the dataset configs like the following.
```python
dataset_A_train = dict()
dataset_B_train = dict()
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train = [
dataset_A_train,
dataset_B_train
],
val = dataset_A_val,
test = dataset_A_test
)
```
A more complex example that repeats `Dataset_A` and `Dataset_B` by N and M times, respectively, and then concatenates the repeated datasets is as follows.
```python
dataset_A_train = dict(
type='RepeatDataset',
times=N,
dataset=dict(
type='Dataset_A',
...
pipeline=train_pipeline
)
)
dataset_A_val = dict(
...
pipeline=test_pipeline
)
dataset_A_test = dict(
...
pipeline=test_pipeline
)
dataset_B_train = dict(
type='RepeatDataset',
times=M,
dataset=dict(
type='Dataset_B',
...
pipeline=train_pipeline
)
)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train = [
dataset_A_train,
dataset_B_train
],
val = dataset_A_val,
test = dataset_A_test
)
```
### Multi-image Mix Dataset
We use `MultiImageMixDataset` as a wrapper to mix images from multiple datasets.
`MultiImageMixDataset` can be used with multi-image mixed data augmentations
like mosaic and mixup.
An example of using `MultiImageMixDataset` with `Mosaic` data augmentation:
```python
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='RandomMosaic', prob=1),
dict(type='Resize', img_scale=(1024, 512), keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(type='Normalize', **img_norm_cfg),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
train_dataset = dict(
type='MultiImageMixDataset',
dataset=dict(
classes=classes,
palette=palette,
type=dataset_type,
reduce_zero_label=False,
img_dir=data_root + "images/train",
ann_dir=data_root + "annotations/train",
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
]
),
pipeline=train_pipeline
)
```
| 9,519 | 31.491468 | 288 | md |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/customize_models.md | # Tutorial 4: Customize Models
## Customize optimizer
Assume you want to add an optimizer named `MyOptimizer`, which has arguments `a`, `b`, and `c`.
You need to first implement the new optimizer in a file, e.g., in `mmseg/core/optimizer/my_optimizer.py`:
```python
from mmcv.runner import OPTIMIZERS
from torch.optim import Optimizer
@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):

    def __init__(self, a, b, c):
        # Implement the constructor (and the step() method) here.
        pass
```
Then add this module in `mmseg/core/optimizer/__init__.py` thus the registry will
find the new module and add it:
```python
from .my_optimizer import MyOptimizer
```
Then you can use `MyOptimizer` in `optimizer` field of config files.
In the configs, the optimizers are defined by the field `optimizer` like the following:
```python
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
```
To use your own optimizer, the field can be changed as
```python
optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value)
```
We already support all the optimizers implemented by PyTorch, and the only modification is to change the `optimizer` field of config files.
For example, if you want to use `Adam` (though the performance may drop a lot), the modification could be as the following.
```python
optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001)
```
The users can directly set arguments following the [API doc](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim) of PyTorch.
## Customize optimizer constructor
Some models may have parameter-specific settings for optimization, e.g. weight decay for BatchNorm layers.
Users can do such fine-grained parameter tuning by customizing the optimizer constructor.
```python
from mmcv.utils import build_from_cfg
from mmcv.runner import OPTIMIZER_BUILDERS

from .cocktail_optimizer import CocktailOptimizer


@OPTIMIZER_BUILDERS.register_module()
class CocktailOptimizerConstructor(object):

    def __init__(self, optimizer_cfg, paramwise_cfg=None):
        self.optimizer_cfg = optimizer_cfg
        self.paramwise_cfg = paramwise_cfg

    def __call__(self, model):
        # Build and return the optimizer with per-parameter options here.
        return my_optimizer
```
## Develop new components
There are mainly 2 types of components in MMSegmentation.
- backbone: usually stacks of convolutional network to extract feature maps, e.g., ResNet, HRNet.
- head: the component for semantic segmentation map decoding.
### Add new backbones
Here we show how to develop new components with an example of MobileNet.
1. Create a new file `mmseg/models/backbones/mobilenet.py`.
```python
import torch.nn as nn
from ..builder import BACKBONES
@BACKBONES.register_module()
class MobileNet(nn.Module):
def __init__(self, arg1, arg2):
pass
def forward(self, x): # should return a tuple
pass
def init_weights(self, pretrained=None):
pass
```
2. Import the module in `mmseg/models/backbones/__init__.py`.
```python
from .mobilenet import MobileNet
```
3. Use it in your config file.
```python
model = dict(
...
backbone=dict(
type='MobileNet',
arg1=xxx,
arg2=xxx),
...
```
### Add new heads
In MMSegmentation, we provide a base [BaseDecodeHead](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/decode_head.py) for all segmentation head.
All newly implemented decode heads should be derived from it.
Here we show how to develop a new head with the example of [PSPNet](https://arxiv.org/abs/1612.01105) as the following.
First, add a new decode head in `mmseg/models/decode_heads/psp_head.py`.
PSPNet implements a decode head for segmentation decode.
To implement a decode head, basically we need to implement three functions of the new module as the following.
```python
@HEADS.register_module()
class PSPHead(BaseDecodeHead):
def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs):
super(PSPHead, self).__init__(**kwargs)
    def init_weights(self):
        pass

    def forward(self, inputs):
        pass
```
Next, users need to add the module in `mmseg/models/decode_heads/__init__.py` so that the corresponding registry can find and load it.
The config file of PSPNet is as follows.
```python
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='pretrain_model/resnet50_v1c_trick-2cccc1ad.pth',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='PSPHead',
in_channels=2048,
in_index=3,
channels=512,
pool_scales=(1, 2, 3, 6),
dropout_ratio=0.1,
num_classes=19,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)))
```
### Add new loss
Assume you want to add a new loss `MyLoss` for segmentation decoding.
To add a new loss function, users need to implement it in `mmseg/models/losses/my_loss.py`.
The decorator `weighted_loss` enables the loss to be weighted for each element.
```python
import torch
import torch.nn as nn
from ..builder import LOSSES
from .utils import weighted_loss
@weighted_loss
def my_loss(pred, target):
assert pred.size() == target.size() and target.numel() > 0
loss = torch.abs(pred - target)
return loss
@LOSSES.register_module()
class MyLoss(nn.Module):
def __init__(self, reduction='mean', loss_weight=1.0):
super(MyLoss, self).__init__()
self.reduction = reduction
self.loss_weight = loss_weight
def forward(self,
pred,
target,
weight=None,
avg_factor=None,
reduction_override=None):
assert reduction_override in (None, 'none', 'mean', 'sum')
reduction = (
reduction_override if reduction_override else self.reduction)
loss = self.loss_weight * my_loss(
pred, target, weight, reduction=reduction, avg_factor=avg_factor)
return loss
```
Then the users need to add it in the `mmseg/models/losses/__init__.py`.
```python
from .my_loss import MyLoss, my_loss
```
To use it, modify the `loss_decode` field in the head.
`loss_weight` could be used to balance multiple losses.
```python
loss_decode=dict(type='MyLoss', loss_weight=1.0))
```
| 6,521 | 26.753191 | 179 | md |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/customize_runtime.md | # Tutorial 6: Customize Runtime Settings
## Customize optimization settings
### Customize optimizer supported by Pytorch
We already support to use all the optimizers implemented by PyTorch, and the only modification is to change the `optimizer` field of config files.
For example, if you want to use `Adam` (note that the performance could drop a lot), the modification could be as follows.
```python
optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001)
```
To modify the learning rate of the model, the users only need to modify the `lr` in the config of optimizer. The users can directly set arguments following the [API doc](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim) of PyTorch.
### Customize self-implemented optimizer
#### 1. Define a new optimizer
A customized optimizer could be defined as following.
Assume you want to add an optimizer named `MyOptimizer`, which has arguments `a`, `b`, and `c`.
You need to create a new directory named `mmseg/core/optimizer`.
And then implement the new optimizer in a file, e.g., in `mmseg/core/optimizer/my_optimizer.py`:
```python
from .registry import OPTIMIZERS
from torch.optim import Optimizer
@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):
    def __init__(self, a, b, c):
        # Implement the constructor (and the step() method) here.
        pass
```
#### 2. Add the optimizer to registry
To find the module defined above, it should be imported into the main namespace first. There are two options to achieve this.
- Modify `mmseg/core/optimizer/__init__.py` to import it.
The newly defined module should be imported in `mmseg/core/optimizer/__init__.py` so that the registry will
find the new module and add it:
```python
from .my_optimizer import MyOptimizer
```
- Use `custom_imports` in the config to manually import it
```python
custom_imports = dict(imports=['mmseg.core.optimizer.my_optimizer'], allow_failed_imports=False)
```
The module `mmseg.core.optimizer.my_optimizer` will be imported at the beginning of the program and the class `MyOptimizer` is then automatically registered.
Note that only the package containing the class `MyOptimizer` should be imported.
`mmseg.core.optimizer.my_optimizer.MyOptimizer` **cannot** be imported directly.
Actually users can use a totally different file directory structure using this importing method, as long as the module root can be located in `PYTHONPATH`.
#### 3. Specify the optimizer in the config file
Then you can use `MyOptimizer` in `optimizer` field of config files.
In the configs, the optimizers are defined by the field `optimizer` like the following:
```python
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
```
To use your own optimizer, the field can be changed to
```python
optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value)
```
### Customize optimizer constructor
Some models may have parameter-specific settings for optimization, e.g. weight decay for BatchNorm layers.
Users can do such fine-grained parameter tuning by customizing the optimizer constructor.
```python
from mmcv.utils import build_from_cfg
from mmcv.runner.optimizer import OPTIMIZER_BUILDERS, OPTIMIZERS
from mmseg.utils import get_root_logger
from .my_optimizer import MyOptimizer
@OPTIMIZER_BUILDERS.register_module()
class MyOptimizerConstructor(object):
    def __init__(self, optimizer_cfg, paramwise_cfg=None):
        self.optimizer_cfg = optimizer_cfg
        self.paramwise_cfg = paramwise_cfg

    def __call__(self, model):
        # Build and return the optimizer with per-parameter options here.
        return my_optimizer
```
The default optimizer constructor is implemented [here](https://github.com/open-mmlab/mmcv/blob/9ecd6b0d5ff9d2172c49a182eaa669e9f27bb8e7/mmcv/runner/optimizer/default_constructor.py#L11), which could also serve as a template for new optimizer constructor.
### Additional settings
Tricks not implemented by the optimizer should be implemented through the optimizer constructor (e.g., setting parameter-wise learning rates) or hooks. We list some common settings that could stabilize or accelerate the training. Feel free to create a PR or an issue for more settings.
- __Use gradient clip to stabilize training__:
Some models need gradient clip to clip the gradients to stabilize the training process. An example is as below:
```python
optimizer_config = dict(
_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
```
If your config inherits the base config which already sets the `optimizer_config`, you might need `_delete_=True` to override the unnecessary settings. See the [config documentation](https://mmsegmentation.readthedocs.io/en/latest/config.html) for more details.
- __Use momentum schedule to accelerate model convergence__:
We support momentum scheduler to modify model's momentum according to learning rate, which could make the model converge in a faster way.
Momentum scheduler is usually used with LR scheduler, for example, the following config is used in 3D detection to accelerate convergence.
For more details, please refer to the implementation of [CyclicLrUpdater](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/lr_updater.py#L327) and [CyclicMomentumUpdater](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/momentum_updater.py#L130).
```python
lr_config = dict(
policy='cyclic',
target_ratio=(10, 1e-4),
cyclic_times=1,
step_ratio_up=0.4,
)
momentum_config = dict(
policy='cyclic',
target_ratio=(0.85 / 0.95, 1),
cyclic_times=1,
step_ratio_up=0.4,
)
```
## Customize training schedules
By default we use the poly learning rate policy with a 40k/80k schedule, which calls [`PolyLrUpdaterHook`](https://github.com/open-mmlab/mmcv/blob/826d3a7b68596c824fa1e2cb89b6ac274f52179c/mmcv/runner/hooks/lr_updater.py#L196) in MMCV.
We support many other learning rate schedules [here](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py), such as the `CosineAnnealing` and `Poly` schedules. Here are some examples:
- Step schedule:
```python
lr_config = dict(policy='step', step=[9, 10])
```
- CosineAnnealing schedule:
```python
lr_config = dict(
policy='CosineAnnealing',
warmup='linear',
warmup_iters=1000,
warmup_ratio=1.0 / 10,
min_lr_ratio=1e-5)
```
## Customize workflow
Workflow is a list of (phase, epochs) to specify the running order and epochs.
By default it is set to be
```python
workflow = [('train', 1)]
```
which means running 1 epoch for training.
Sometimes users may want to check some metrics (e.g. loss, accuracy) of the model on the validation set.
In such a case, we can set the workflow as
```python
[('train', 1), ('val', 1)]
```
so that 1 epoch for training and 1 epoch for validation will be run iteratively.
:::{note}
1. The parameters of the model will not be updated during a val epoch.
2. Keyword `total_epochs` in the config only controls the number of training epochs and will not affect the validation workflow.
3. Workflows `[('train', 1), ('val', 1)]` and `[('train', 1)]` will not change the behavior of `EvalHook` because `EvalHook` is called by `after_train_epoch` and validation workflow only affect hooks that are called through `after_val_epoch`. Therefore, the only difference between `[('train', 1), ('val', 1)]` and `[('train', 1)]` is that the runner will calculate losses on validation set after each training epoch.
:::
## Customize hooks
### Use hooks implemented in MMCV
If the hook is already implemented in MMCV, you can directly modify the config to use the hook as below
```python
custom_hooks = [
dict(type='MyHook', a=a_value, b=b_value, priority='NORMAL')
]
```
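If `MyHook` is a hook you implement yourself rather than one shipped with MMCV, a minimal sketch (the class name, arguments and hook point are illustrative) derived from MMCV's `Hook` base class could look like this; it must be registered and imported the same way as the custom optimizer above:

```python
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class MyHook(Hook):
    """An illustrative hook; `a` and `b` are placeholder arguments."""

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def after_train_iter(self, runner):
        # Custom logic executed after every training iteration.
        pass
```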
### Modify default runtime hooks
There are some common hooks that are not registered through `custom_hooks`; they are:
- log_config
- checkpoint_config
- evaluation
- lr_config
- optimizer_config
- momentum_config
Among those hooks, only the logger hook has `VERY_LOW` priority; the priority of the others is `NORMAL`.
The above-mentioned tutorials already cover how to modify `optimizer_config`, `momentum_config`, and `lr_config`.
Here we show what we can do with `log_config`, `checkpoint_config`, and `evaluation`.
#### Checkpoint config
The MMCV runner will use `checkpoint_config` to initialize [`CheckpointHook`](https://github.com/open-mmlab/mmcv/blob/9ecd6b0d5ff9d2172c49a182eaa669e9f27bb8e7/mmcv/runner/hooks/checkpoint.py#L9).
```python
checkpoint_config = dict(interval=1)
```
Users could set `max_keep_ckpts` to save only a small number of checkpoints, or decide whether to store the state dict of the optimizer by `save_optimizer`. More details of the arguments are [here](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.CheckpointHook)
#### Log config
The `log_config` wraps multiple logger hooks and enables setting intervals. Now MMCV supports `WandbLoggerHook`, `MlflowLoggerHook`, and `TensorboardLoggerHook`.
The detail usages can be found in the [doc](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.LoggerHook).
```python
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')
])
```
#### Evaluation config
The config of `evaluation` will be used to initialize the [`EvalHook`](https://github.com/open-mmlab/mmsegmentation/blob/e3f6f655d69b777341aec2fe8829871cc0beadcb/mmseg/core/evaluation/eval_hooks.py#L7).
Except for the key `interval`, other arguments such as `metric` will be passed to `dataset.evaluate()`.
```python
evaluation = dict(interval=1, metric='mIoU')
```
| 9,571 | 37.910569 | 417 | md |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/data_pipeline.md | # Tutorial 3: Customize Data Pipelines
## Design of Data pipelines
Following typical conventions, we use `Dataset` and `DataLoader` for data loading
with multiple workers. `Dataset` returns a dict of data items corresponding to
the arguments of the model's forward method.
Since the data in semantic segmentation may not be the same size,
we introduce a new `DataContainer` type in MMCV to help collect and distribute
data of different size.
See [here](https://github.com/open-mmlab/mmcv/blob/master/mmcv/parallel/data_container.py) for more details.
The data preparation pipeline and the dataset are decomposed. Usually a dataset
defines how to process the annotations and a data pipeline defines all the steps to prepare a data dict.
A pipeline consists of a sequence of operations. Each operation takes a dict as input and also outputs a dict for the next transform.
The operations are categorized into data loading, pre-processing, formatting and test-time augmentation.
Here is a pipeline example for PSPNet.
```python
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 1024)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
# img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
```
For each operation, we list the related dict fields that are added/updated/removed.
### Data loading
`LoadImageFromFile`
- add: img, img_shape, ori_shape
`LoadAnnotations`
- add: gt_semantic_seg, seg_fields
### Pre-processing
`Resize`
- add: scale, scale_idx, pad_shape, scale_factor, keep_ratio
- update: img, img_shape, \*seg_fields
`RandomFlip`
- add: flip
- update: img, \*seg_fields
`Pad`
- add: pad_fixed_size, pad_size_divisor
- update: img, pad_shape, \*seg_fields
`RandomCrop`
- update: img, pad_shape, \*seg_fields
`Normalize`
- add: img_norm_cfg
- update: img
`SegRescale`
- update: gt_semantic_seg
`PhotoMetricDistortion`
- update: img
### Formatting
`ToTensor`
- update: specified by `keys`.
`ImageToTensor`
- update: specified by `keys`.
`Transpose`
- update: specified by `keys`.
`ToDataContainer`
- update: specified by `fields`.
`DefaultFormatBundle`
- update: img, gt_semantic_seg
`Collect`
- add: img_meta (the keys of img_meta is specified by `meta_keys`)
- remove: all other keys except for those specified by `keys`
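For instance, `meta_keys` can be narrowed when the model only needs a few meta fields; a sketch (the chosen keys are illustrative):
```python
dict(type='Collect', keys=['img', 'gt_semantic_seg'],
     meta_keys=('filename', 'ori_shape', 'img_shape', 'flip'))
```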
### Test time augmentation
`MultiScaleFlipAug`
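`MultiScaleFlipAug` wraps the transforms that are repeated for each scale/flip combination. Enabling the ratios commented out in the `test_pipeline` above gives multi-scale, flipped test-time augmentation; a sketch:
```python
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=True,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```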
## Extend and use custom pipelines
1. Write a new pipeline in any file, e.g., `my_pipeline.py`. It takes a dict as input and returns a dict.
```python
from mmseg.datasets import PIPELINES
@PIPELINES.register_module()
class MyTransform:
def __call__(self, results):
results['dummy'] = True
return results
```
2. Import the new class.
```python
from .my_pipeline import MyTransform
```
3. Use it in config files.
```python
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 1024)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
dict(type='MyTransform'),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
```
| 4,496 | 25.145349 | 132 | md |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/training_tricks.md | # Tutorial 5: Training Tricks
MMSegmentation supports the following training tricks out of the box.
## Different Learning Rate (LR) for Backbone and Heads
In semantic segmentation, some methods make the LR of the heads larger than that of the backbone to achieve better performance or faster convergence.
In MMSegmentation, you may add the following lines to the config to make the LR of the heads 10 times that of the backbone.
optimizer=dict(
paramwise_cfg = dict(
custom_keys={
'head': dict(lr_mult=10.)}))
```
With this modification, the LR of any parameter group with `'head'` in its name will be multiplied by 10.
You may refer to the [MMCV doc](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.DefaultOptimizerConstructor) for further details.
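Note that the snippet above omits the base optimizer fields for brevity; a complete sketch, assuming the usual SGD settings found in the Cityscapes configs:
```python
optimizer = dict(
    type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005,
    # parameter groups whose names contain 'head' get a 10x LR multiplier
    paramwise_cfg=dict(custom_keys={'head': dict(lr_mult=10.)}))
```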
## Online Hard Example Mining (OHEM)
We implement pixel sampler [here](https://github.com/open-mmlab/mmsegmentation/tree/master/mmseg/core/seg/sampler) for training sampling.
Here is an example config of training PSPNet with OHEM enabled.
```python
_base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py'
model=dict(
decode_head=dict(
        sampler=dict(type='OHEMPixelSampler', thresh=0.7, min_kept=100000)))
```
In this way, only pixels with a confidence score under 0.7 are used for training, and we keep at least 100000 pixels during training. If `thresh` is not specified, the pixels with the top `min_kept` losses will be selected.
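For example, to rely purely on the top-loss selection described above, `thresh` can simply be dropped:
```python
_base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py'
model = dict(
    decode_head=dict(
        # no `thresh`: keep the `min_kept` pixels with the largest loss
        sampler=dict(type='OHEMPixelSampler', min_kept=100000)))
```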
## Class Balanced Loss
For datasets with an unbalanced class distribution, you may change the loss weight of each class.
Here is an example for the Cityscapes dataset.
```python
_base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py'
model=dict(
decode_head=dict(
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0,
# DeepLab used this class weight for cityscapes
class_weight=[0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969, 0.9754,
1.0489, 0.8786, 1.0023, 0.9539, 0.9843, 1.1116, 0.9037,
1.0865, 1.0955, 1.0865, 1.1529, 1.0507])))
```
`class_weight` will be passed into `CrossEntropyLoss` as the `weight` argument. Please refer to the [PyTorch docs](https://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss) for details.
## Multiple Losses
For loss calculation, we support training with multiple losses concurrently. Here is an example config of training `unet` on the `DRIVE` dataset, whose loss function is a `1:3` weighted sum of `CrossEntropyLoss` and `DiceLoss`:
```python
_base_ = './fcn_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)]),
auxiliary_head=dict(loss_decode=[dict(type='CrossEntropyLoss', loss_name='loss_ce',loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)]),
)
```
In this way, `loss_weight` and `loss_name` will be the weight and name of the corresponding loss in the training log, respectively.
Note: if you want a loss item to be included in the backward graph, its name must start with the prefix `loss_`.
## Ignore specified label index in loss calculation
By default, `avg_non_ignore=False`, which means each pixel counts for the loss calculation even if some pixels belong to ignore-index labels.
For loss calculation, we support ignoring certain label indices via `avg_non_ignore` and `ignore_index`. In this way, the average loss is only calculated over non-ignored labels, which may achieve better performance; here is the [reference](https://github.com/open-mmlab/mmsegmentation/pull/1409). Here is an example config of training `unet` on the `Cityscapes` dataset: in loss calculation it would ignore label 0, which is background, and the loss average is only calculated over non-ignored labels:
```python
_base_ = './fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py'
model = dict(
decode_head=dict(
ignore_index=0,
loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0, avg_non_ignore=True)),
    auxiliary_head=dict(
        ignore_index=0,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0, avg_non_ignore=True)),
)
```
| 4,298 | 46.241758 | 494 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/conf.py | # Copyright (c) OpenMMLab. All rights reserved.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import subprocess
import sys
import pytorch_sphinx_theme
sys.path.insert(0, os.path.abspath('../../'))
# -- Project information -----------------------------------------------------
project = 'MMSegmentation'
copyright = '2020-2021, OpenMMLab'
author = 'MMSegmentation Authors'
version_file = '../../mmseg/version.py'
def get_version():
with open(version_file, 'r') as f:
exec(compile(f.read(), version_file, 'exec'))
return locals()['__version__']
# The full version, including alpha/beta/rc tags
release = get_version()
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
]
autodoc_mock_imports = [
'matplotlib', 'pycocotools', 'mmseg.version', 'mmcv.ops'
]
# Ignore >>> when copying code
copybutton_prompt_text = r'>>> |\.\.\. '
copybutton_prompt_is_regexp = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_suffix = {
'.rst': 'restructuredtext',
'.md': 'markdown',
}
# The master toctree document.
master_doc = 'index'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# html_theme = 'sphinx_rtd_theme'
html_theme = 'pytorch_sphinx_theme'
html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
html_theme_options = {
'logo_url':
'https://mmsegmentation.readthedocs.io/zh-CN/latest/',
'menu': [
{
'name':
'教程',
'url':
'https://github.com/open-mmlab/mmsegmentation/blob/master/'
'demo/MMSegmentation_Tutorial.ipynb'
},
{
'name': 'GitHub',
'url': 'https://github.com/open-mmlab/mmsegmentation'
},
{
'name':
'上游库',
'children': [
{
'name': 'MMCV',
'url': 'https://github.com/open-mmlab/mmcv',
'description': '基础视觉库'
},
]
},
],
# Specify the language of shared menu
'menu_lang':
'cn',
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_css_files = ['css/readthedocs.css']
# Enable ::: for my_st
myst_enable_extensions = ['colon_fence']
myst_heading_anchors = 3
language = 'zh-CN'
def builder_inited_handler(app):
subprocess.run(['./stat.py'])
def setup(app):
app.connect('builder-inited', builder_inited_handler)
| 3,981 | 28.496296 | 79 | py |
mmsegmentation | mmsegmentation-master/docs/zh_cn/dataset_prepare.md | ## Prepare datasets
It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`. If your folder structure is different, you may need to change the corresponding paths in the config files.
```none
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│ ├── cityscapes
│ │ ├── leftImg8bit
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── gtFine
│ │ │ ├── train
│ │ │ ├── val
│ ├── VOCdevkit
│ │ ├── VOC2012
│ │ │ ├── JPEGImages
│ │ │ ├── SegmentationClass
│ │ │ ├── ImageSets
│ │ │ │ ├── Segmentation
│ │ ├── VOC2010
│ │ │ ├── JPEGImages
│ │ │ ├── SegmentationClassContext
│ │ │ ├── ImageSets
│ │ │ │ ├── SegmentationContext
│ │ │ │ │ ├── train.txt
│ │ │ │ │ ├── val.txt
│ │ │ ├── trainval_merged.json
│ │ ├── VOCaug
│ │ │ ├── dataset
│ │ │ │ ├── cls
│ ├── ade
│ │ ├── ADEChallengeData2016
│ │ │ ├── annotations
│ │ │ │ ├── training
│ │ │ │ ├── validation
│ │ │ ├── images
│ │ │ │ ├── training
│ │ │ │ ├── validation
│ ├── CHASE_DB1
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── DRIVE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── HRF
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── STARE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
| ├── dark_zurich
| │ ├── gps
| │ │ ├── val
| │ │ └── val_ref
| │ ├── gt
| │ │ └── val
| │ ├── LICENSE.txt
| │ ├── lists_file_names
| │ │ ├── val_filenames.txt
| │ │ └── val_ref_filenames.txt
| │ ├── README.md
| │ └── rgb_anon
| │ | ├── val
| │ | └── val_ref
| ├── NighttimeDrivingTest
| | ├── gtCoarse_daytime_trainvaltest
| | │ └── test
| | │ └── night
| | └── leftImg8bit
| | | └── test
| | | └── night
│ ├── loveDA
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ │ ├── test
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── potsdam
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── vaihingen
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── iSAID
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ │ ├── test
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── ImageNetS
│ │ ├── ImageNetS919
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS300
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS50
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
```
### Cityscapes
The dataset can be downloaded [here](https://www.cityscapes-dataset.com/downloads/) after registration.
Normally, `**labelTrainIds.png` are used to train Cityscapes.
Based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts),
we provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py)
to generate `**labelTrainIds.png`.
```shell
# --nproc 8 means 8 processes are used for conversion; it may be omitted.
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
```
### Pascal VOC
Pascal VOC 2012 can be downloaded [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
In addition, most recent works on the Pascal VOC dataset usually exploit extra augmentation data, which can be found [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).
If you would like to use the augmented VOC dataset, please run the following command to convert the augmentation annotations into the proper format.
```shell
# --nproc 8 means 8 processes are used for conversion; it may be omitted.
python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
```
Please refer to [concatenate datasets](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/tutorials/customize_datasets.md#%E6%8B%BC%E6%8E%A5%E6%95%B0%E6%8D%AE%E9%9B%86) for details about how to concatenate them and train them together.
### ADE20K
The training and validation set of ADE20K can be downloaded from this [link](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip).
You may also download the test set from [here](http://data.csail.mit.edu/places/ADEchallenge/release_test.zip).
### Pascal Context
The training and validation set of Pascal Context can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar).
You may also download the test set from [here](http://host.robots.ox.ac.uk:8080/eval/downloads/VOC2010test.tar) after registration.
To split the training and validation set from the original dataset, you may download trainval_merged.json from [here](https://codalabuser.blob.core.windows.net/public/trainval_merged.json).
If you would like to use the Pascal Context dataset,
please install the [Detail API](https://github.com/zhanghang1989/detail-api) and then run the following command to convert the annotations into the proper format.
```shell
python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VOC2010/trainval_merged.json
```
### CHASE DB1
The training and validation set of CHASE DB1 can be downloaded from [here](https://staffnet.kingston.ac.uk/~ku15565/CHASE_DB1/assets/CHASEDB1.zip).
To convert the CHASE DB1 dataset to the MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip
```
The script will make the directory structure automatically.
### DRIVE
The training and validation set of DRIVE can be downloaded from [here](https://drive.grand-challenge.org/).
Before that, you should register an account. Currently '1st_manual' is not provided officially, so you need to obtain it elsewhere.
To convert the DRIVE dataset to the MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```
The script will make the directory structure automatically.
### HRF
First, download [healthy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy.zip), [glaucoma.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma.zip), [diabetic_retinopathy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy.zip), [healthy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy_manualsegm.zip), [glaucoma_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma_manualsegm.zip) and [diabetic_retinopathy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy_manualsegm.zip).
To convert the HRF dataset to the MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/hrf.py /path/to/healthy.zip /path/to/healthy_manualsegm.zip /path/to/glaucoma.zip /path/to/glaucoma_manualsegm.zip /path/to/diabetic_retinopathy.zip /path/to/diabetic_retinopathy_manualsegm.zip
```
The script will make the directory structure automatically.
### STARE
First, download [stare-images.tar](http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar), [labels-ah.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar) and [labels-vk.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-vk.tar).
To convert the STARE dataset to the MMSegmentation format, you should run the following command:
```shell
python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels-ah.tar /path/to/labels-vk.tar
```
The script will make the directory structure automatically.
### Dark Zurich
Since we only support testing models on this dataset, you only need to download the [validation set](https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_val_anon.zip).
### Nighttime Driving
Since we only support testing models on this dataset, you only need to download the [test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).
### LoveDA
The [LoveDA dataset](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing) can be downloaded from Google Drive.
It can also be downloaded from [zenodo](https://zenodo.org/record/5706578#.YZvN7SYRXdF), for which you should run the following commands:
```shell
# Download Train.zip
wget https://zenodo.org/record/5706578/files/Train.zip
# Download Val.zip
wget https://zenodo.org/record/5706578/files/Val.zip
# Download Test.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```
For the LoveDA dataset, please run the following command to download and re-organize the dataset:
```shell
python tools/convert_datasets/loveda.py /path/to/loveDA
```
Please refer to [here](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/inference.md) for using a trained model to predict the LoveDA test set and submitting the results to the official server.
More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).
### ISPRS Potsdam
The [Potsdam](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-potsdam/)
dataset is an urban remote sensing dataset with 2D semantic segmentation annotations.
The dataset can be requested from the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/).
The '2_Ortho_RGB.zip' and '5_Labels_all_noBoundary.zip' files are required.
For the Potsdam dataset, please run the following command to download and re-organize the dataset:
```shell
python tools/convert_datasets/potsdam.py /path/to/potsdam
```
With our default settings, it will generate a training set of 3456 images and a validation set of 2016 images.
### ISPRS Vaihingen
The [Vaihingen](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-vaihingen/)
dataset is an urban remote sensing dataset with 2D semantic segmentation annotations.
The dataset can be requested from the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/).
The 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' files are required.
For the Vaihingen dataset, please run the following command to download and re-organize the dataset:
```shell
python tools/convert_datasets/vaihingen.py /path/to/vaihingen
```
With our default settings (`clip_size`=512, `stride_size`=256), it will generate a training set of 344 images and a validation set of 398 images.
### iSAID
The images of the iSAID dataset (train/val/test) can be downloaded from [DOTA-v1.0](https://captain-whu.github.io/DOTA/dataset.html).
The annotations of the iSAID dataset (train/val) can be downloaded from [iSAID](https://captain-whu.github.io/iSAID/dataset.html).
It is a large-scale remote sensing dataset for instance segmentation, which can also be used for semantic segmentation.
After downloading, and before converting the dataset, you need to organize the dataset folders as follows.
```
│ ├── iSAID
│ │ ├── train
│ │ │ ├── images
│ │ │ │ ├── part1.zip
│ │ │ │ ├── part2.zip
│ │ │ │ ├── part3.zip
│ │ │ ├── Semantic_masks
│ │ │ │ ├── images.zip
│ │ ├── val
│ │ │ ├── images
│ │ │ │ ├── part1.zip
│ │ │ ├── Semantic_masks
│ │ │ │ ├── images.zip
│ │ ├── test
│ │ │ ├── images
│ │ │ │ ├── part1.zip
│ │ │ │ ├── part2.zip
```
```shell
python tools/convert_datasets/isaid.py /path/to/iSAID
```
With our default settings (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate a training set of 33978 images and a validation set of 11644 images.
### ImageNetS
ImageNet-S is a dataset for the [Large-scale Unsupervised/Semi-supervised Semantic Segmentation](https://arxiv.org/abs/2106.03149) task.
The ImageNet-S dataset is available at [ImageNet-S](https://github.com/LUSSeg/ImageNet-S#imagenet-s-dataset-preparation).
```
│ ├── ImageNetS
│ │ ├── ImageNetS919
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS300
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
│ │ ├── ImageNetS50
│ │ │ ├── train-semi
│ │ │ ├── train-semi-segmentation
│ │ │ ├── validation
│ │ │ ├── validation-segmentation
│ │ │ ├── test
```
| 10,853 | 28.574932 | 687 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/faq.md | # Frequently Asked Questions (FAQ)
We list some common problems encountered by users here, together with their corresponding solutions. Feel free to enrich this list with a PR if you find any frequent issues missing. If you cannot get help here, please create an issue using the [issue template](https://github.com/open-mmlab/mmsegmentation/blob/master/.github/ISSUE_TEMPLATE/error-report.md/), but please fill in all the required information in the template, which helps us locate the problem faster.
## Installation
The compatible MMSegmentation and MMCV versions are listed below. Please install the correct version of MMCV to avoid installation issues.
| MMSegmentation version | MMCV version | MMClassification version |
| :--------------------: | :-------------------------: | :----------------------: |
| master | mmcv-full>=1.5.0, \<1.8.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.30.0 | mmcv-full>=1.5.0, \<1.8.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.29.1 | mmcv-full>=1.5.0, \<1.8.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.29.0 | mmcv-full>=1.5.0, \<1.7.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.28.0 | mmcv-full>=1.5.0, \<1.7.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.27.0 | mmcv-full>=1.5.0, \<1.7.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.26.0 | mmcv-full>=1.5.0, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.25.0 | mmcv-full>=1.5.0, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.24.1 | mmcv-full>=1.4.4, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.23.0 | mmcv-full>=1.4.4, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.22.0 | mmcv-full>=1.4.4, \<=1.6.0 | mmcls>=0.20.1, \<=1.0.0 |
| 0.21.1 | mmcv-full>=1.4.4, \<=1.6.0 | Not required |
| 0.20.2 | mmcv-full>=1.3.13, \<=1.6.0 | Not required |
| 0.19.0 | mmcv-full>=1.3.13, \<1.3.17 | Not required |
| 0.18.0 | mmcv-full>=1.3.13, \<1.3.17 | Not required |
| 0.17.0 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.16.0 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.15.0 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.14.1 | mmcv-full>=1.3.7, \<1.3.17 | Not required |
| 0.14.0 | mmcv-full>=1.3.1, \<1.3.2 | Not required |
| 0.13.0 | mmcv-full>=1.3.1, \<1.3.2 | Not required |
| 0.12.0 | mmcv-full>=1.1.4, \<1.3.2 | Not required |
| 0.11.0 | mmcv-full>=1.1.4, \<1.3.0 | Not required |
| 0.10.0 | mmcv-full>=1.1.4, \<1.3.0 | Not required |
| 0.9.0 | mmcv-full>=1.1.4, \<1.3.0 | Not required |
| 0.8.0 | mmcv-full>=1.1.4, \<1.2.0 | Not required |
| 0.7.0 | mmcv-full>=1.1.2, \<1.2.0 | Not required |
| 0.6.0 | mmcv-full>=1.1.2, \<1.2.0 | Not required |
If you have installed mmcv, you need to run `pip uninstall mmcv` first.
If both mmcv and mmcv-full are installed, a "ModuleNotFoundError" will be raised.
- "No module named 'mmcv.ops'"; "No module named 'mmcv.\_ext'".
  1. Uninstall the existing mmcv in your environment with `pip uninstall mmcv`.
  2. Install mmcv-full following the [installation instructions](get_started#best-practices).
## How to know the number of GPUs needed to train a model
- Check the name of the model's config file. You may refer to the `Config Name Style` part of [Learn about Configs](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/tutorials/config.md). For example, for the config file named `segformer_mit-b0_8x1_1024x1024_160k_cityscapes.py`, `8x1` means 8 GPUs are needed to train the corresponding model, with a batch size of 1 per GPU.
- Check the model's log file. Open the log file and search for `nGPU` in it; the number of GPU ids following `nGPU` is the number of GPUs used for training. For instance, searching for `nGPU` and finding the record `nGPU 0,1,2,3,4,5,6,7` means 8 GPUs were used for training.
## What does the auxiliary head mean
Briefly, it is a deep-supervision trick to improve accuracy. In the training phase, `decode_head` outputs the semantic segmentation results while `auxiliary_head` only adds an auxiliary loss; the segmentation results it produces have no impact on your model's outputs and it only takes effect during training. You may read this [paper](https://arxiv.org/pdf/1612.01105.pdf) for more information.
## Why is the log file not created
In the training script, we call `get_root_logger` at line 167, and `get_root_logger` in mmseg calls `get_logger` in mmcv, so mmcv returns the same logger that was initialized in 'mmsegmentation/tools/train.py' with the parameter `log_file`. Only one logger, initialized with `log_file`, exists during training.
Reference: [https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16](https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16)
If you find the log file not being created, you might check whether `mmcv.utils.get_logger` is called elsewhere.
## How to output images with painted segmentation masks when running the test script
In the test script, we provide the `show-dir` argument to control whether to output the painted images. Users can run the following command:
```shell
python tools/test.py {config} {checkpoint} --show-dir {/path/to/save/image} --opacity 1
```
## How to handle binary segmentation tasks
MMSegmentation uses `num_classes` and `out_channels` to control the output of the model's last layer `self.conv_seg`. More details can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/decode_head.py).
`num_classes` should be the same as the number of classes in the dataset itself. For binary segmentation, the dataset has only two classes, foreground and background, so `num_classes` is 2. `out_channels` controls the number of output channels of the model's last layer; it is usually equal to `num_classes`, but for binary segmentation there are two options:
- Set `out_channels=2`, use Cross Entropy Loss during training, and at inference time use `F.softmax()` to normalize the logits and `argmax()` to get the prediction for each pixel.
- Set `out_channels=1`, use Binary Cross Entropy Loss during training, and at inference time use `F.sigmoid()` and a `threshold` (0.3 by default) to get the prediction.
To implement the two ways of computing binary segmentation above, you need to modify the `decode_head` and `auxiliary_head` configs. Below are the corresponding modifications to the example [pspnet_unet_s5-d16.py](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/_base_/models/pspnet_unet_s5-d16.py).
- (1) Set `num_classes=2`, `out_channels=2`, and `use_sigmoid=False` in `CrossEntropyLoss`.
```python
decode_head=dict(
type='PSPHead',
in_channels=64,
in_index=4,
num_classes=2,
out_channels=2,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
num_classes=2,
out_channels=2,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
```
- (2) Set `num_classes=2`, `out_channels=1`, and `use_sigmoid=True` in `CrossEntropyLoss`.
```python
decode_head=dict(
type='PSPHead',
in_channels=64,
in_index=4,
num_classes=2,
out_channels=1,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=128,
in_index=3,
num_classes=2,
out_channels=1,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
```
## What does `reduce_zero_label` do
The dataset parameter `reduce_zero_label` is of boolean type and defaults to False. Its purpose is to ignore label 0 in the dataset. Concretely, label 0 is changed to 255 and all remaining labels are decremented by 1; meanwhile, 255 is set as the ignore index in the decode head, i.e. it does not contribute to the loss.
Here is the implementation logic of `reduce_zero_label`:
```python
if self.reduce_zero_label:
# avoid using underflow conversion
gt_semantic_seg[gt_semantic_seg == 0] = 255
gt_semantic_seg = gt_semantic_seg - 1
gt_semantic_seg[gt_semantic_seg == 254] = 255
```
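In a config, the flag is usually set on the dataset and mirrored in `LoadAnnotations`; a minimal sketch, assuming a `CustomDataset`-style dataset (paths and the remaining pipeline steps are omitted):
```python
data = dict(
    train=dict(
        type='CustomDataset',
        reduce_zero_label=True,
        pipeline=[
            dict(type='LoadImageFromFile'),
            # shift labels here so the decode head sees 255 as ignore index
            dict(type='LoadAnnotations', reduce_zero_label=True),
            # ... remaining transforms
        ]))
```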
**Note:** when using `reduce_zero_label`, please check the number of original classes in the dataset. If there are only two classes, you should disable it, i.e. set `reduce_zero_label=False`.
| 6,735 | 47.114286 | 244 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/get_started.md | # Prerequisites
In this section we demonstrate how to prepare an environment with PyTorch.
MMSegmentation works on Linux, Windows and MacOS. It requires Python 3.6+, CUDA 9.2+ and PyTorch 1.3+.
```{note}
If you are experienced with PyTorch and have already installed it, just skip this part and jump to the next section. Otherwise, you can follow these steps for the preparation.
```
**Step 1.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).
**Step 2.** Create a conda environment and activate it.
```shell
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
```
**Step 3.** Install PyTorch following the [official instructions](https://pytorch.org/get-started/locally/).
On GPU platforms:
```shell
conda install pytorch torchvision -c pytorch
```
On CPU platforms:
```shell
conda install pytorch torchvision cpuonly -c pytorch
```
# Installation
We recommend that users follow our best practices to install MMSegmentation, while the whole process is highly customizable. See the [customize installation](#customize-installation) section for more information.
## Best Practices
**Step 1.** Install [MMCV](https://github.com/open-mmlab/mmcv) using [MIM](https://github.com/open-mmlab/mim).
```shell
pip install -U openmim
mim install mmcv-full
```
**Step 2.** Install MMSegmentation.
We support two installation modes depending on your needs:
- [Install from source (recommended)](#install-from-source): you want to develop your own tasks on top of the MMSegmentation framework and need to add new features, e.g. new models or datasets, or use the various tools we provide.
- [Install as a Python package](#install-as-a-python-package): you only want to call MMSegmentation's interfaces, or import MMSegmentation's modules in your project.
### Install from source
```shell
git clone https://github.com/open-mmlab/mmsegmentation.git
cd mmsegmentation
pip install -v -e .
# "-v "指详细说明,或更多的输出
# "-e" 表示在可编辑模式下安装项目,因此对代码所做的任何本地修改都会生效,从而无需重新安装。
```
### Install as a Python package
```shell
pip install mmsegmentation
```
**Note:**
If you would like to use albumentations, we suggest installing it with `pip install -U albumentations --no-binary qudida,albumentations`. If you simply run `pip install albumentations>=0.3.2`, it will install opencv-python-headless at the same time (even if you already have opencv-python installed). We recommend checking the environment after installing albumentations to make sure opencv-python and opencv-python-headless are not installed together, as having both may cause unexpected problems. Please refer to the [official documentation](https://albumentations.ai/docs/getting_started/installation/#note-on-opencv-dependencies) for more details.
## Verify the installation
To verify that MMSegmentation is installed correctly, we provide some sample code to run an inference demo.
**Step 1.** We need to download the config and checkpoint files.
```shell
mim download mmsegmentation --config pspnet_r50-d8_512x1024_40k_cityscapes --dest .
```
The download will take a few seconds or longer, depending on your network environment. When it is done, you will find two files, `pspnet_r50-d8_512x1024_40k_cityscapes.py` and `pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth`, in your current folder.
**Step 2.** Verify the inference demo.
If you installed MMSegmentation **from source**, run the following command directly to verify:
```shell
python demo/image_demo.py demo/demo.png pspnet_r50-d8_512x1024_40k_cityscapes.py pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth --device cpu --out-file result.jpg
```
You will see a new image `result.jpg` in your current folder, where segmentation masks are painted on all objects.
If you installed MMSegmentation **as a Python package**, open your Python interpreter and copy & paste the following code:
```python
from mmseg.apis import inference_segmentor, init_segmentor
import mmcv
config_file = 'pspnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint_file = 'pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'
# build the model from a config file and a checkpoint file
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
# test a single image and show the results
img = 'test.jpg' # or img = mmcv.imread(img), which will only load it once
result = inference_segmentor(model, img)
# visualize the results in a new window
model.show_result(img, result, show=True)
# or save the visualization results to an image file
# you can change the opacity of the painted segmentation map in (0, 1]
model.show_result(img, result, out_file='result.jpg', opacity=0.5)
# test a video and show the results
video = mmcv.VideoReader('video.mp4')
for frame in video:
result = inference_segmentor(model, frame)
model.show_result(frame, result, wait_time=1)
```
You can modify the code above to test a single image or a video; either way verifies a successful installation.
## Customize installation
### CUDA versions
When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:
- For Ampere-based NVIDIA GPUs, such as the GeForce 30 series and NVIDIA A100, CUDA 11 is a must.
- For older NVIDIA GPUs, CUDA 11 is backward compatible, but CUDA 10.2 offers better compatibility and is more lightweight.
Please make sure the GPU driver satisfies the minimum version requirements; see [this table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions).
```{note}
Installing CUDA runtime libraries is enough if you follow our best practices, because no CUDA code will be compiled locally.
However, if you hope to compile MMCV from source or develop other CUDA operators, you need to install the complete CUDA toolkit from the
[NVIDIA website](https://developer.nvidia.com/cuda-downloads), and its version should match the CUDA version of PyTorch,
i.e. the specified version of cudatoolkit in the `conda install` command.
```
### Install MMCV without MIM
MMCV contains C++ and CUDA extensions, so it depends on PyTorch in a complex way. MIM solves such
dependencies automatically and makes the installation easier; however, it is not a must.
To install MMCV with pip instead of MIM, please follow the [MMCV installation guide](https://mmcv.readthedocs.io/zh_CN/latest/get_started/installation.html).
This requires manually specifying a find-url based on the PyTorch version and its CUDA version.
For example, the following command installs mmcv-full built for PyTorch 1.10.x and CUDA 11.3.
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
```
### Install on CPU-only platforms
MMSegmentation can be installed for a CPU-only environment. In CPU mode you can train (requires MMCV version >= 1.4.4), test and run inference.
### Install on Google Colab
[Google Colab](https://colab.research.google.com/) usually has PyTorch installed, so we only need to install MMCV and MMSegmentation with the following commands.
**Step 1.** Install [MMCV](https://github.com/open-mmlab/mmcv) using [MIM](https://github.com/open-mmlab/mim).
```shell
!pip3 install openmim
!mim install mmcv-full
```
**Step 2.** Install MMSegmentation from source.
```shell
!git clone https://github.com/open-mmlab/mmsegmentation.git
%cd mmsegmentation
!pip install -e .
```
**Step 3.** Verification.
```python
import mmseg
print(mmseg.__version__)
# Expected output: 0.24.1 or another version number
```
```{note}
Within Jupyter, the exclamation mark `!` is used to call external executables, and `%cd` is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-cd) to change the current working directory of Python.
```
### Using MMSegmentation with Docker
We provide a [Dockerfile](https://github.com/open-mmlab/mmsegmentation/blob/master/docker/Dockerfile) to build an image. Ensure that your [docker version](https://docs.docker.com/engine/install/) >= 19.03.
```shell
# build an image with PyTorch 1.11, CUDA 11.3
# If you prefer other versions, just modified the Dockerfile
docker build -t mmsegmentation docker/
```
Run the Docker image with:
```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmsegmentation/data mmsegmentation
```
## Trouble shooting
If you have some issues during the installation, please first view the [FAQ](faq.md) page.
You may [open an issue](https://github.com/open-mmlab/mmsegmentation/issues/new/choose) on GitHub if no solution is found.
| 5,921 | 26.802817 | 405 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/inference.md | ## Inference with pretrained models
We provide testing scripts to evaluate results on whole datasets (Cityscapes, PASCAL VOC, ADE20k, etc.), and also some high-level APIs to make integration with other projects easier.
### Test a dataset
- single GPU
- CPU
- single node multiple GPUs
- multiple nodes
You can use the following commands to test a dataset.
```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
# CPU: if the machine has no GPU, the command is the same as single-gpu testing above
# CPU: if the machine has GPUs, disable them before running the single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1 # disable GPUs
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
```
Optional arguments:
- `RESULT_FILE`: filename of the output results in pickle format. If not specified, the results will not be saved to a file. (After MMseg v0.17, args.out will only save the intermediate results of evaluation or the save path of the segmentation maps.)
- `EVAL_METRICS`: items to be evaluated on the results. This mainly depends on the dataset; `mIoU` is available for all datasets, and a dataset like Cityscapes can additionally be evaluated with the `cityscapes` option as well as the standard `mIoU`.
- `--show`: if specified, segmentation results will be painted on the images and shown in a new window. It is only intended for debugging and visualization, and only applicable to single-GPU testing. Please make sure a GUI is available in your environment, otherwise you may encounter the error `cannot connect to X server`.
- `--show-dir`: if specified, segmentation results will be painted on the images and saved to the specified directory. It is only intended for debugging and visualization, and only applicable to single-GPU testing. A GUI is not required for this argument.
- `--eval-options`: optional arguments for evaluation. When `efficient_test=True` is set, intermediate results are saved to local files to save CPU memory. Make sure you have enough local storage space (more than 20GB). (After MMseg v0.17, `efficient_test` no longer takes effect; we refactored the test api and use a progressive mode to evaluate and save results efficiently instead.)
Examples:
Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.
1. Test PSPNet and visualize the results. Press any key for the next image.
python tools/test.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
--show
```
2. Test PSPNet and save the painted images for later visualization.
```shell
python tools/test.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
--show-dir psp_r50_512x1024_40ki_cityscapes_results
```
3. Test PSPNet on PASCAL VOC (without saving the test results) and evaluate the mIoU.
```shell
python tools/test.py configs/pspnet/pspnet_r50-d8_512x1024_20k_voc12aug.py \
checkpoints/pspnet_r50-d8_512x1024_20k_voc12aug_20200605_003338-c57ef100.pth \
    --eval mIoU
```
4. Test PSPNet with 4 GPUs, and evaluate with the standard mIoU and cityscapes metrics.
```shell
./tools/dist_test.sh configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
4 --out results.pkl --eval mIoU cityscapes
```
Note: there is some gap (~0.1%) between cityscapes mIoU and our mIoU. The reason is that cityscapes by default averages over classes weighted by class sample numbers, while for all datasets we take the plain average to obtain mIoU.
5. Test PSPNet on the cityscapes test split with 4 GPUs, and generate png files to be submitted to the official evaluation server.
   First, add the following to the config file `configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py`:
```python
data = dict(
test=dict(
img_dir='leftImg8bit/test',
ann_dir='gtFine/test'))
```
Then run the test.
```shell
./tools/dist_test.sh configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
4 --format-only --eval-options "imgfile_prefix=./pspnet_test_results"
```
You will get png files under the `./pspnet_test_results` directory.
You may run `zip -r results.zip pspnet_test_results/` and submit the zip file to the [evaluation server](https://www.cityscapes-dataset.com/submit/).
6. Test DeeplabV3+ on Cityscapes (without saving the test results) and evaluate the `mIoU` with the CPU-memory-efficient option.
```shell
python tools/test.py \
configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py \
deeplabv3plus_r18-d8_512x1024_80k_cityscapes_20201226_080942-cff257fe.pth \
--eval-options efficient_test=True \
--eval mIoU
```
`pmap` can be used to view CPU memory usage: `efficient_test=True` uses about 2.25GB of CPU memory and `efficient_test=False` uses about 11.06GB, so this optional argument can save a lot of CPU memory. (After MMseg v0.17, `efficient_test` no longer takes effect; we use a progressive mode to evaluate and save results more efficiently instead.)
7. Test PSPNet on the LoveDA test split with 1 GPU, and generate png files to be submitted to the official evaluation server.
   First, add the following to the config file `configs/pspnet/pspnet_r50-d8_512x512_80k_loveda.py`:
```python
data = dict(
test=dict(
img_dir='img_dir/test',
ann_dir='ann_dir/test'))
```
Then run the test.
```shell
python ./tools/test.py configs/pspnet/pspnet_r50-d8_512x512_80k_loveda.py \
checkpoints/pspnet_r50-d8_512x512_80k_loveda_20211104_155728-88610f9f.pth \
--format-only --eval-options "imgfile_prefix=./pspnet_test_results"
```
You will get png files under the `./pspnet_test_results` directory.
You may run `zip -r -j Results.zip pspnet_test_results/` and submit the zip file to the [evaluation server](https://codalab.lisn.upsaclay.fr/competitions/421).
| 4,424 | 33.570313 | 210 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/model_zoo.md | # Benchmark and Model Zoo
## Common settings
- We use distributed training with 4 GPUs by default.
- All PyTorch-style ImageNet pretrained backbones are trained by ourselves, consistent with the [paper](https://arxiv.org/pdf/1812.01187.pdf).
  Our ResNet backbone is based on the ResNetV1c variant, where the 7x7 conv in the input stem is replaced with three 3x3 convs.
- For consistency across different hardware, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` with `torch.backends.cudnn.benchmark=False`.
  Note that this is usually less than what `nvidia-smi` shows.
- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script `tools/benchmark.py`, which computes the average inference time over 200 images with `torch.backends.cudnn.benchmark=False`.
- There are two inference modes in the framework (see the config sketch after this list):
  - `slide` mode: the `test_cfg` field will be like `dict(mode='slide', crop_size=(769, 769), stride=(513, 513))`.
    In this mode, multiple patches cropped from the original image are fed into the network individually. The patch size and the stride between patches are set by `crop_size` and `stride`, and overlapping regions are averaged.
  - `whole` mode: the `test_cfg` field will be like `dict(mode='whole')`. In this mode, the whole image is fed directly into the network.
    For models trained at 769x769, we use `slide` inference by default; for the rest, `whole` inference is used.
- For input sizes of 8x+1 (e.g. 769), `align_corners=True` is adopted. Otherwise, for input sizes of 8x (e.g. 512, 1024), `align_corners=False` is adopted.
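In config terms, the two inference modes differ only in `test_cfg`; the values below are the ones quoted above:
```python
# slide mode: crop 769x769 patches with stride 513 and average the overlaps
test_cfg = dict(mode='slide', crop_size=(769, 769), stride=(513, 513))
# whole mode: feed the full image into the network
test_cfg = dict(mode='whole')
```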
## Baselines
### FCN
Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn) for details.
### PSPNet
Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet) for details.
### DeepLabV3
Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3) for details.
### PSANet
Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet) for details.
### DeepLabV3+
Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus) for details.
### UPerNet
Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet) for details.
### NonLocal Net
Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net) for details.
### EncNet
Please refer to [EncNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/encnet) for details.
### CCNet
Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet) for details.
### DANet
Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet) for details.
### APCNet
Please refer to [APCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet) for details.
### HRNet
Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet) for details.
### GCNet
Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/gcnet) for details.
### DMNet
Please refer to [DMNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet) for details.
### ANN
Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ann) for details.
### OCRNet
Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet) for details.
### Fast-SCNN
Please refer to [Fast-SCNN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fastscnn) for details.
### ResNeSt
Please refer to [ResNeSt](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/resnest) for details.
### Semantic FPN
Please refer to [Semantic FPN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/sem_fpn) for details.
### PointRend
Please refer to [PointRend](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/point_rend) for details.
### MobileNetV2
Please refer to [MobileNetV2](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2) for details.
### MobileNetV3
Please refer to [MobileNetV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3) for details.
### EMANet
Please refer to [EMANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/emanet) for details.
### DNLNet
Please refer to [DNLNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dnlnet) for details.
### CGNet
Please refer to [CGNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/cgnet) for details.
### Mixed Precision (FP16) Training
Please refer to the [example of training BiSeNetV2 with mixed precision (FP16)](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/bisenetv2/bisenetv2_fcn_fp16_4x4_1024x1024_160k_cityscapes.py) for details.
## Speed benchmark
### Hardware
- 8 NVIDIA Tesla V100 (32G) GPUs
- Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
### Software environment
- Python 3.7
- PyTorch 1.5
- CUDA 10.1
- CUDNN 7.6.03
- NCCL 2.4.08
### Training speed
For fair comparison, we benchmark all implementations with ResNet-101V1c. The input size is fixed to 1024x512 with a batch size of 2.
The training speed is reported below, in terms of seconds per iteration; lower is faster.
| Implementation | PSPNet (s/iter) | DeepLabV3+ (s/iter) |
| --------------------------------------------------------------------------- | --------------- | ------------------- |
| [MMSegmentation](https://github.com/open-mmlab/mmsegmentation) | **0.83** | **0.85** |
| [SegmenTron](https://github.com/LikeLy-Journey/SegmenTron) | 0.84 | 0.85 |
| [CASILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch) | 1.15 | N/A |
| [vedaseg](https://github.com/Media-Smart/vedaseg) | 0.95 | 1.25 |
Note: the output stride of DeepLabV3+ is 8.
| 4,939 | 31.287582 | 191 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/stat.py | #!/usr/bin/env python
# Copyright (c) OpenMMLab. All rights reserved.
import functools as func
import glob
import os.path as osp
import re
import numpy as np
url_prefix = 'https://github.com/open-mmlab/mmsegmentation/blob/master/'
files = sorted(glob.glob('../../configs/*/README.md'))
stats = []
titles = []
num_ckpts = 0
# Walk every config README, count the released checkpoints it links,
# and record one (paper type, title) entry per algorithm page.
for f in files:
url = osp.dirname(f.replace('../../', url_prefix))
with open(f, 'r') as content_file:
content = content_file.read()
title = content.split('\n')[0].replace('#', '').strip()
ckpts = set(x.lower().strip()
for x in re.findall(r'https?://download.*\.pth', content)
if 'mmsegmentation' in x)
if len(ckpts) == 0:
continue
_papertype = [
x for x in re.findall(r'<!--\s*\[([A-Z]*?)\]\s*-->', content)
]
assert len(_papertype) > 0
papertype = _papertype[0]
paper = set([(papertype, title)])
titles.append(title)
num_ckpts += len(ckpts)
statsmsg = f"""
\t* [{papertype}] [{title}]({url}) ({len(ckpts)} ckpts)
"""
stats.append((paper, ckpts, statsmsg))
allpapers = func.reduce(lambda a, b: a.union(b), [p for p, _, _ in stats])
msglist = '\n'.join(x for _, _, x in stats)
papertypes, papercounts = np.unique([t for t, _ in allpapers],
return_counts=True)
countstr = '\n'.join(
[f' - {t}: {c}' for t, c in zip(papertypes, papercounts)])
modelzoo = f"""
# Model Zoo Statistics
* Number of papers: {len(set(titles))}
{countstr}
* Number of checkpoints: {num_ckpts}
{msglist}
"""
with open('modelzoo_statistics.md', 'w') as f:
f.write(modelzoo)
| 1,600 | 23.257576 | 74 | py |
mmsegmentation | mmsegmentation-master/docs/zh_cn/switch_language.md | ## <a href='https://mmsegmentation.readthedocs.io/en/latest/'>English</a>
## <a href='https://mmsegmentation.readthedocs.io/zh_CN/latest/'>简体中文</a>
| 149 | 36.5 | 73 | md |
mmsegmentation | mmsegmentation-master/docs/zh_cn/train.md | ## Train a model
MMSegmentation implements distributed training and non-distributed training, which use `MMDistributedDataParallel` and `MMDataParallel` respectively.
All outputs (log files and checkpoints) will be saved to the working directory, which is specified by `work_dir` in the config file.
By default, we evaluate the model on the validation set after some iterations; you can change the evaluation interval by adding the interval argument in the training config.
```python
evaluation = dict(interval=4000) # evaluate the model every 4000 iterations
```
**\*Important\***: the default learning rate in the config files is for 4 GPUs and 2 imgs/gpu (batch size = 4x2 = 8).
Equivalently, you may also use 8 GPUs and 1 img/gpu, since all models use cross-GPU SyncBN.
To trade training speed for GPU memory when the model or batch size is large, you may pass `--cfg-options model.backbone.with_cp=True` to enable `with_cp`. It saves GPU memory but is slower, because with `with_cp` not all intermediate activations are stored; they are recomputed layer by layer during back-propagation (BP).
### Train with a single machine
#### Train with a single GPU
```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
If you want to specify the working directory in the command, you can add the argument `--work-dir ${YOUR_WORK_DIR}`.
#### Train with CPU
If the machine has no GPU, the process of training with the CPU is the same as single-GPU training. If the machine has GPUs but you want to use the CPU, we just need to disable the GPUs before the training process starts:
```shell
export CUDA_VISIBLE_DEVICES=-1
```
Then run the single-GPU training script above.
```{warning}
We do not recommend users to use CPU for training because it is too slow. We support this feature to allow users to debug on machines without GPU for convenience.
```
#### Train with multiple GPUs
```shell
sh tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]
```
Optional arguments are:
- `--no-validate` (**not suggested**): by default, the codebase performs evaluation on the validation set every k iterations during training; to disable this behavior, use `--no-validate`.
- `--work-dir ${WORK_DIR}`: override the working directory specified in the config file.
- `--resume-from ${CHECKPOINT_FILE}`: resume from a previous checkpoint file (to continue the training process).
- `--load-from ${CHECKPOINT_FILE}`: load weights from a checkpoint file (to start fine-tuning for another task).
- `--deterministic`: switch to deterministic mode; this slows training down but makes the results reproducible.
Difference between `resume-from` and `load-from` (see the sketch after this list):
- `resume-from` loads both the model weights and the optimizer state, including the iteration number.
- `load-from` loads only the model weights and the training starts from iteration 0.
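A minimal sketch of the two flags (all paths are placeholders):
```shell
# resume an interrupted run: weights + optimizer state + iteration count
sh tools/dist_train.sh ${CONFIG_FILE} 4 --resume-from work_dirs/${EXP_NAME}/latest.pth
# fine-tune from pretrained weights only: training restarts from iteration 0
sh tools/dist_train.sh ${CONFIG_FILE} 4 --load-from checkpoints/${PRETRAINED_CKPT}.pth
```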
A full example:
```shell
# checkpoints and logs will be saved under: WORK_DIR=work_dirs/pspnet_r50-d8_512x512_80k_ade20k/
# If work_dir is not set, it will be generated automatically.
sh tools/dist_train.sh configs/pspnet/pspnet_r50-d8_512x512_80k_ade20k.py 8 --work_dir work_dirs/pspnet_r50-d8_512x512_80k_ade20k/ --deterministic
```
**Note**: during training, checkpoints and logs are saved under `work_dirs/` in the same folder structure as the config file. A custom `work_dirs/` is not recommended, since the validation scripts infer the work directory from the config file name. If you want to save your weights somewhere else, please use a symlink, for example:
```shell
ln -s ${YOUR_WORK_DIRS} ${MMSEG}/work_dirs
```
#### Launch multiple jobs on a single machine
If you launch multiple jobs on a single machine, e.g. 2 jobs of 4-GPU training on a machine with 8 GPUs, you need to specify different ports (29500 by default) for each job to avoid communication conflicts. Otherwise, there will be the error message `RuntimeError: Address already in use`.
If you use `dist_train.sh` to launch training jobs, you can set the port in the command with the environment variable `PORT`:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 sh tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 sh tools/dist_train.sh ${CONFIG_FILE} 4
```
### Train with multiple machines
If you want to use multiple machines connected with ethernet, you can run the following commands:
On the first machine:
```shell
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
On the second machine:
```shell
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
However, training will be very slow if these machines are not connected with a high-speed network.
### Manage jobs with Slurm
Slurm is a good job scheduling system for computing clusters. On a cluster managed by Slurm, you can use `slurm_train.sh` to spawn training jobs. It supports both single-node and multi-node training.
Train with multiple machines:
```shell
[GPUS=${GPUS}] sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} --work-dir ${WORK_DIR}
```
Here is an example of using 16 GPUs to train PSPNet on the dev partition:
```shell
GPUS=16 sh tools/slurm_train.sh dev pspr50 configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py work_dirs/pspnet_r50-d8_512x1024_40k_cityscapes/
```
When using `slurm_train.sh` to start multiple jobs on one node, different ports need to be specified. Three settings are provided:
Option 1:
In `config1.py`, set:
```python
dist_params = dict(backend='nccl', port=29500)
```
In `config2.py`, set:
```python
dist_params = dict(backend='nccl', port=29501)
```
Then you can launch two jobs with `config1.py` and `config2.py`:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2
```
Option 2:
You can set different communication ports without modifying the config files, but you have to set `--cfg-options` to overwrite the default port in the config file.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1 --cfg-options dist_params.port=29500
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2 --cfg-options dist_params.port=29501
```
Option 3:
You can set the port in the command using the environment variable `MASTER_PORT`:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 MASTER_PORT=29500 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 MASTER_PORT=29501 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2
```
| 4,285 | 25.7875 | 181 | md |