Dataset columns:
- repo: string (length 2-152)
- file: string (length 15-239)
- code: string (length 0-58.4M)
- file_length: int64 (0-58.4M)
- avg_line_length: float64 (0-1.81M)
- max_line_length: int64 (0-12.7M)
- extension_type: string (364 classes)
mmsegmentation
mmsegmentation-master/docs/zh_cn/useful_tools.md
## 常用工具 除了训练和测试的脚本,我们在 `tools/` 文件夹路径下还提供许多有用的工具。 ### 计算参数量(params)和计算量( FLOPs) (试验性) 我们基于 [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) 提供了一个用于计算给定模型参数量和计算量的脚本。 ```shell python tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}] ``` 您将得到如下的结果: ```none ============================== Input shape: (3, 2048, 1024) Flops: 1429.68 GMac Params: 48.98 M ============================== ``` **注意**: 这个工具仍然是试验性的,我们无法保证数字是正确的。您可以拿这些结果做简单的实验的对照,在写技术文档报告或者论文前您需要再次确认一下。 (1) 计算量与输入的形状有关,而参数量与输入的形状无关,默认的输入形状是 (1, 3, 1280, 800); (2) 一些运算操作,如 GN 和其他定制的运算操作没有加入到计算量的计算中。 ### 发布模型 在您上传一个模型到云服务器之前,您需要做以下几步: (1) 将模型权重转成 CPU 张量; (2) 删除记录优化器状态 (optimizer states)的相关信息; (3) 计算检查点文件 (checkpoint file) 的哈希编码(hash id)并且将哈希编码加到文件名中。 ```shell python tools/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME} ``` 例如, ```shell python tools/publish_model.py work_dirs/pspnet/latest.pth psp_r50_hszhao_200ep.pth ``` 最终输出文件将是 `psp_r50_512x1024_40ki_cityscapes-{hash id}.pth`。 ### 导出 ONNX (试验性) 我们提供了一个脚本来导出模型到 [ONNX](https://github.com/onnx/onnx) 格式。被转换的模型可以通过工具 [Netron](https://github.com/lutzroeder/netron) 来可视化。除此以外,我们同样支持对 PyTorch 和 ONNX 模型的输出结果做对比。 ```bash python tools/pytorch2onnx.py \ ${CONFIG_FILE} \ --checkpoint ${CHECKPOINT_FILE} \ --output-file ${ONNX_FILE} \ --input-img ${INPUT_IMG} \ --shape ${INPUT_SHAPE} \ --rescale-shape ${RESCALE_SHAPE} \ --show \ --verify \ --dynamic-export \ --cfg-options \ model.test_cfg.mode="whole" ``` 各个参数的描述: - `config` : 模型配置文件的路径 - `--checkpoint` : 模型检查点文件的路径 - `--output-file`: 输出的 ONNX 模型的路径。如果没有专门指定,它默认是 `tmp.onnx` - `--input-img` : 用来转换和可视化的一张输入图像的路径 - `--shape`: 模型的输入张量的高和宽。如果没有专门指定,它将被设置成 `test_pipeline` 的 `img_scale` - `--rescale-shape`: 改变输出的形状。设置这个值来避免 OOM,它仅在 `slide` 模式下可以用 - `--show`: 是否打印输出模型的结构。如果没有被专门指定,它将被设置成 `False` - `--verify`: 是否验证一个输出模型的正确性 (correctness)。如果没有被专门指定,它将被设置成 `False` - `--dynamic-export`: 是否导出形状变化的输入与输出的 ONNX 模型。如果没有被专门指定,它将被设置成 `False` - `--cfg-options`: 更新配置选项 **注意**: 这个工具仍然是试验性的,目前一些自定义操作还没有被支持 ### 评估 ONNX 模型 我们提供 `tools/deploy_test.py` 去评估不同后端的 ONNX 模型。 #### 先决条件 - 安装 onnx 和 onnxruntime-gpu ```shell pip install onnx onnxruntime-gpu ``` - 参考 [如何在 MMCV 里构建 tensorrt 插件](https://mmcv.readthedocs.io/en/latest/tensorrt_plugin.html#how-to-build-tensorrt-plugins-in-mmcv) 安装TensorRT (可选) #### 使用方法 ```bash python tools/deploy_test.py \ ${CONFIG_FILE} \ ${MODEL_FILE} \ ${BACKEND} \ --out ${OUTPUT_FILE} \ --eval ${EVALUATION_METRICS} \ --show \ --show-dir ${SHOW_DIRECTORY} \ --cfg-options ${CFG_OPTIONS} \ --eval-options ${EVALUATION_OPTIONS} \ --opacity ${OPACITY} \ ``` 各个参数的描述: - `config`: 模型配置文件的路径 - `model`: 被转换的模型文件的路径 - `backend`: 推理的后端,可选项:`onnxruntime`, `tensorrt` - `--out`: 输出结果成 pickle 格式文件的路径 - `--format-only` : 不评估直接给输出结果的格式。通常用在当您想把结果输出成一些测试服务器需要的特定格式时。如果没有被专门指定,它将被设置成 `False`。 注意这个参数是用 `--eval` 来 **手动添加** - `--eval`: 评估指标,取决于每个数据集的要求,例如 "mIoU" 是大多数据集的指标而 "cityscapes" 仅针对 Cityscapes 数据集。注意这个参数是用 `--format-only` 来 **手动添加** - `--show`: 是否展示结果 - `--show-dir`: 涂上结果的图像被保存的文件夹的路径 - `--cfg-options`: 重写配置文件里的一些设置,`xxx=yyy` 格式的键值对将被覆盖到配置文件里 - `--eval-options`: 自定义的评估的选项, `xxx=yyy` 格式的键值对将成为 `dataset.evaluate()` 函数的参数变量 - `--opacity`: 涂上结果的分割图的透明度,范围在 (0, 1\] 之间 #### 结果和模型 | 模型 | 配置文件 | 数据集 | 评价指标 | PyTorch | ONNXRuntime | TensorRT-fp32 | TensorRT-fp16 | | :--------: | :---------------------------------------------: | :--------: | :------: | :-----: | :---------: | :-----------: | :-----------: | | FCN | fcn_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 72.2 | 72.2 | 72.2 | 72.2 | | PSPNet | 
pspnet_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 77.8 | 77.8 | 77.8 | 77.8 | | deeplabv3 | deeplabv3_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 79.0 | 79.0 | 79.0 | 79.0 | | deeplabv3+ | deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 79.6 | 79.5 | 79.5 | 79.5 | | PSPNet | pspnet_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.2 | 78.1 | | | | deeplabv3 | deeplabv3_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.5 | 78.3 | | | | deeplabv3+ | deeplabv3plus_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.9 | 78.7 | | | **注意**: TensorRT 仅在使用 `whole mode` 测试模式时的配置文件里可用。 ### 导出 TorchScript (试验性) 我们同样提供一个脚本去把模型导出成 [TorchScript](https://pytorch.org/docs/stable/jit.html) 格式。您可以使用 pytorch C++ API [LibTorch](https://pytorch.org/docs/stable/cpp_index.html) 去推理训练好的模型。 被转换的模型能被像 [Netron](https://github.com/lutzroeder/netron) 的工具来可视化。此外,我们还支持 PyTorch 和 TorchScript 模型的输出结果的比较。 ```shell python tools/pytorch2torchscript.py \ ${CONFIG_FILE} \ --checkpoint ${CHECKPOINT_FILE} \ --output-file ${ONNX_FILE} --shape ${INPUT_SHAPE} --verify \ --show ``` 各个参数的描述: - `config` : pytorch 模型的配置文件的路径 - `--checkpoint` : pytorch 模型的检查点文件的路径 - `--output-file`: TorchScript 模型输出的路径,如果没有被专门指定,它将被设置成 `tmp.pt` - `--input-img` : 用来转换和可视化的输入图像的路径 - `--shape`: 模型的输入张量的宽和高。如果没有被专门指定,它将被设置成 `512 512` - `--show`: 是否打印输出模型的追踪图 (traced graph),如果没有被专门指定,它将被设置成 `False` - `--verify`: 是否验证一个输出模型的正确性 (correctness),如果没有被专门指定,它将被设置成 `False` **注意**: 目前仅支持 PyTorch>=1.8.0 版本 **注意**: 这个工具仍然是试验性的,一些自定义操作符目前还不被支持 例子: - 导出 PSPNet 在 cityscapes 数据集上的 pytorch 模型 ```shell python tools/pytorch2torchscript.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \ --checkpoint checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \ --output-file checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pt \ --shape 512 1024 ``` ### 导出 TensorRT (试验性) 一个导出 [ONNX](https://github.com/onnx/onnx) 模型成 [TensorRT](https://developer.nvidia.com/tensorrt) 格式的脚本 先决条件 - 按照 [ONNXRuntime in mmcv](https://mmcv.readthedocs.io/en/latest/deployment/onnxruntime_op.html) 和 [TensorRT plugin in mmcv](https://github.com/open-mmlab/mmcv/blob/master/docs/en/deployment/tensorrt_plugin.md) ,用 ONNXRuntime 自定义运算 (custom ops) 和 TensorRT 插件安装 `mmcv-full` - 使用 [pytorch2onnx](#convert-to-onnx-experimental) 将模型从 PyTorch 转成 ONNX 使用方法 ```bash python ${MMSEG_PATH}/tools/onnx2tensorrt.py \ ${CFG_PATH} \ ${ONNX_PATH} \ --trt-file ${OUTPUT_TRT_PATH} \ --min-shape ${MIN_SHAPE} \ --max-shape ${MAX_SHAPE} \ --input-img ${INPUT_IMG} \ --show \ --verify ``` 各个参数的描述: - `config` : 模型的配置文件 - `model` : 输入的 ONNX 模型的路径 - `--trt-file` : 输出的 TensorRT 引擎的路径 - `--max-shape` : 模型的输入的最大形状 - `--min-shape` : 模型的输入的最小形状 - `--fp16` : 做 fp16 模型转换 - `--workspace-size` : 在 GiB 里的最大工作空间大小 (Max workspace size) - `--input-img` : 用来可视化的图像 - `--show` : 做结果的可视化 - `--dataset` : Palette provider, 默认为 `CityscapesDataset` - `--verify` : 验证 ONNXRuntime 和 TensorRT 的输出 - `--verbose` : 当创建 TensorRT 引擎时,是否详细做信息日志。默认为 False **注意**: 仅在全图测试模式 (whole mode) 下测试过 ## 其他内容 ### 打印完整的配置文件 `tools/print_config.py` 会逐字逐句的打印整个配置文件,展开所有的导入。 ```shell python tools/print_config.py \ ${CONFIG} \ --graph \ --cfg-options ${OPTIONS [OPTIONS...]} \ ``` 各个参数的描述: - `config` : pytorch 模型的配置文件的路径 - `--graph` : 是否打印模型的图 (models graph) - `--cfg-options`: 自定义替换配置文件的选项 ### 对训练日志 (training logs) 画图 `tools/analyze_logs.py` 会画出给定的训练日志文件的 loss/mIoU 曲线,首先需要 `pip install seaborn` 安装依赖包。 ```shell python tools/analyze_logs.py xxx.log.json [--keys 
${KEYS}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}] ``` 示例: - 对 mIoU, mAcc, aAcc 指标画图 ```shell python tools/analyze_logs.py log.json --keys mIoU mAcc aAcc --legend mIoU mAcc aAcc ``` - 对 loss 指标画图 ```shell python tools/analyze_logs.py log.json --keys loss --legend loss ``` ### 转换其他仓库的权重 `tools/model_converters/` 提供了若干个预训练权重转换脚本,支持将其他仓库的预训练权重的 key 转换为与 MMSegmentation 相匹配的 key。 #### ViT Swin MiT Transformer 模型 - ViT `tools/model_converters/vit2mmseg.py` 将 timm 预训练模型转换到 MMSegmentation。 ```shell python tools/model_converters/vit2mmseg.py ${SRC} ${DST} ``` - Swin `tools/model_converters/swin2mmseg.py` 将官方预训练模型转换到 MMSegmentation。 ```shell python tools/model_converters/swin2mmseg.py ${SRC} ${DST} ``` - SegFormer `tools/model_converters/mit2mmseg.py` 将官方预训练模型转换到 MMSegmentation。 ```shell python tools/model_converters/mit2mmseg.py ${SRC} ${DST} ``` ## 模型服务 为了用 [`TorchServe`](https://pytorch.org/serve/) 服务 `MMSegmentation` 的模型 , 您可以遵循如下流程: ### 1. 将 model 从 MMSegmentation 转换到 TorchServe ```shell python tools/mmseg2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \ --output-folder ${MODEL_STORE} \ --model-name ${MODEL_NAME} ``` **注意**: ${MODEL_STORE} 需要设置为某个文件夹的绝对路径 ### 2. 构建 `mmseg-serve` 容器镜像 (docker image) ```shell docker build -t mmseg-serve:latest docker/serve/ ``` ### 3. 运行 `mmseg-serve` 请查阅官方文档: [使用容器运行 TorchServe](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment) 为了在 GPU 环境下使用, 您需要安装 [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). 若在 CPU 环境下使用,您可以忽略添加 `--gpus` 参数。 示例: ```shell docker run --rm \ --cpus 8 \ --gpus device=0 \ -p8080:8080 -p8081:8081 -p8082:8082 \ --mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \ mmseg-serve:latest ``` 阅读关于推理 (8080), 管理 (8081) 和指标 (8082) APIs 的 [文档](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md) 。 ### 4. 测试部署 ```shell curl -O https://raw.githubusercontent.com/open-mmlab/mmsegmentation/master/resources/3dogs.jpg curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg -o 3dogs_mask.png ``` 得到的响应将是一个 ".png" 的分割掩码. 您可以按照如下方法可视化输出: ```python import matplotlib.pyplot as plt import mmcv plt.imshow(mmcv.imread("3dogs_mask.png", "grayscale")) plt.show() ``` 看到的东西将会和下图类似: ![3dogs_mask](../../resources/3dogs_mask.png) 然后您可以使用 `test_torchserve.py` 比较 torchserve 和 pytorch 的结果,并将它们可视化。 ```shell python tools/torchserve/test_torchserve.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME} [--inference-addr ${INFERENCE_ADDR}] [--result-image ${RESULT_IMAGE}] [--device ${DEVICE}] ``` 示例: ```shell python tools/torchserve/test_torchserve.py \ demo/demo.png \ configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \ checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \ fcn ``` ## 模型集成 我们提供了`tools/model_ensemble.py` 完成对多个模型的预测概率进行集成的脚本 ### 使用方法 ```bash python tools/model_ensemble.py \ --config ${CONFIG_FILE1} ${CONFIG_FILE2} ... \ --checkpoint ${CHECKPOINT_FILE1} ${CHECKPOINT_FILE2} ...\ --aug-test \ --out ${OUTPUT_DIR}\ --gpus ${GPU_USED}\ ``` ### 各个参数的描述: - `--config`: 集成模型的配置文件的路径 - `--checkpoint`: 集成模型的权重文件的路径 - `--aug-test`: 是否使用翻转和多尺度预测 - `--out`: 模型集成结果的保存文件夹路径 - `--gpus`: 模型集成使用的gpu-id ### 模型集成结果 - 模型集成会对每一张输入,形状为`[H, W]`,产生一张未渲染的分割掩膜文件(segmentation mask),形状为`[H, W]`,分割掩膜中的每个像素点的值代表该位置分割后的像素类别. - 模型集成结果的文件名会采用和`Ground Truth`一致的文件命名,如`Ground Truth`文件名称为`1.png`,则模型集成结果文件也会被命名为`1.png`,并放置在`--out`指定的文件夹中.
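下面给出一个最小的可视化示意(非 MMSegmentation 自带脚本,输出路径与调色板均为假设):读取 `--out` 文件夹中的一张未渲染掩膜,并按调色板上色后保存。

```python
import mmcv
import numpy as np

# 假设:掩膜为单通道图像,像素值即类别索引;实际使用时请换成所用数据集的调色板
mask = mmcv.imread('ensemble_out/1.png', flag='unchanged')  # [H, W]
palette = np.random.randint(0, 256, size=(int(mask.max()) + 1, 3), dtype=np.uint8)
color = palette[mask]  # [H, W, 3],渲染后的彩色分割图
mmcv.imwrite(color, '1_color.png')
```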
11,291
27.443325
272
md
mmsegmentation
mmsegmentation-master/docs/zh_cn/_static/css/readthedocs.css
.header-logo {
  background-image: url("../images/mmsegmentation.png");
  background-size: 201px 40px;
  height: 40px;
  width: 201px;
}
145
19.857143
58
css
mmsegmentation
mmsegmentation-master/docs/zh_cn/tutorials/config.md
# 教程 1: 学习配置文件 我们整合了模块和继承设计到我们的配置里,这便于做很多实验。如果您想查看配置文件,您可以运行 `python tools/print_config.py /PATH/TO/CONFIG` 去查看完整的配置文件。您还可以传递参数 `--cfg-options xxx.yyy=zzz` 去查看更新的配置。 ## 配置文件的结构 在 `config/_base_` 文件夹下面有4种基本组件类型: 数据集(dataset),模型(model),训练策略(schedule)和运行时的默认设置(default runtime)。许多方法都可以方便地通过组合这些组件进行实现。 这样,像 DeepLabV3, PSPNet 这样的模型可以容易地被构造。被来自 `_base_` 下的组件来构建的配置叫做 _原始配置 (primitive)_。 对于所有在同一个文件夹下的配置文件,推荐**只有一个**对应的**原始配置**文件。所有其他的配置文件都应该继承自这个**原始配置**文件。这样就能保证配置文件的最大继承深度为 3。 为了便于理解,我们推荐社区贡献者继承已有的方法配置文件。 例如,如果一些修改是基于 DeepLabV3,使用者首先应该通过指定 `_base_ = ../deeplabv3/deeplabv3_r50_512x1024_40ki_cityscapes.py`来继承基础 DeepLabV3 结构,再去修改配置文件里其他内容以完成继承。 如果您正在构建一个完整的新模型,它完全没有和已有的方法共享一些结构,您可能需要在 `configs` 下面创建一个文件夹 `xxxnet`。 更详细的文档,请参照 [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html) 。 ## 配置文件命名风格 我们按照下面的风格去命名配置文件,社区贡献者被建议使用同样的风格。 ``` {model}_{backbone}_[misc]_[gpu x batch_per_gpu]_{resolution}_{iterations}_{dataset} ``` `{xxx}` 是被要求的文件 `[yyy]` 是可选的。 - `{model}`: 模型种类,例如 `psp`, `deeplabv3` 等等 - `{backbone}`: 主干网络种类,例如 `r50` (ResNet-50), `x101` (ResNeXt-101) - `[misc]`: 模型中各式各样的设置/插件,例如 `dconv`, `gcb`, `attention`, `mstrain` - `[gpu x batch_per_gpu]`: GPU数目 和每个 GPU 的样本数, 默认为 `8x2` - `{iterations}`: 训练迭代轮数,如`160k` - `{dataset}`: 数据集,如 `cityscapes`, `voc12aug`, `ade` ## PSPNet 的一个例子 为了帮助使用者熟悉这个流行的语义分割框架的完整配置文件和模块,我们在下面对使用 ResNet50V1c 的 PSPNet 的配置文件做了详细的注释说明。 更多的详细使用和其他模块的替代项请参考 API 文档。 ```python norm_cfg = dict(type='SyncBN', requires_grad=True) # 分割框架通常使用 SyncBN model = dict( type='EncoderDecoder', # 分割器(segmentor)的名字 pretrained='open-mmlab://resnet50_v1c', # 将被加载的 ImageNet 预训练主干网络 backbone=dict( type='ResNetV1c', # 主干网络的类别。 可用选项请参考 mmseg/models/backbones/resnet.py depth=50, # 主干网络的深度。通常为 50 和 101。 num_stages=4, # 主干网络状态(stages)的数目,这些状态产生的特征图作为后续的 head 的输入。 out_indices=(0, 1, 2, 3), # 每个状态产生的特征图输出的索引。 dilations=(1, 1, 2, 4), # 每一层(layer)的空心率(dilation rate)。 strides=(1, 2, 1, 1), # 每一层(layer)的步长(stride)。 norm_cfg=dict( # 归一化层(norm layer)的配置项。 type='SyncBN', # 归一化层的类别。通常是 SyncBN。 requires_grad=True), # 是否训练归一化里的 gamma 和 beta。 norm_eval=False, # 是否冻结 BN 里的统计项。 style='pytorch', # 主干网络的风格,'pytorch' 意思是步长为2的层为 3x3 卷积, 'caffe' 意思是步长为2的层为 1x1 卷积。 contract_dilation=True), # 当空洞 > 1, 是否压缩第一个空洞层。 decode_head=dict( type='PSPHead', # 解码头(decode head)的类别。 可用选项请参考 mmseg/models/decode_heads。 in_channels=2048, # 解码头的输入通道数。 in_index=3, # 被选择的特征图(feature map)的索引。 channels=512, # 解码头中间态(intermediate)的通道数。 pool_scales=(1, 2, 3, 6), # PSPHead 平均池化(avg pooling)的规模(scales)。 细节请参考文章内容。 dropout_ratio=0.1, # 进入最后分类层(classification layer)之前的 dropout 比例。 num_classes=19, # 分割前景的种类数目。 通常情况下,cityscapes 为19,VOC为21,ADE20k 为150。 norm_cfg=dict(type='SyncBN', requires_grad=True), # 归一化层的配置项。 align_corners=False, # 解码里调整大小(resize)的 align_corners 参数。 loss_decode=dict( # 解码头(decode_head)里的损失函数的配置项。 type='CrossEntropyLoss', # 在分割里使用的损失函数的类别。 use_sigmoid=False, # 在分割里是否使用 sigmoid 激活。 loss_weight=1.0)), # 解码头里损失的权重。 auxiliary_head=dict( type='FCNHead', # 辅助头(auxiliary head)的种类。可用选项请参考 mmseg/models/decode_heads。 in_channels=1024, # 辅助头的输入通道数。 in_index=2, # 被选择的特征图(feature map)的索引。 channels=256, # 辅助头中间态(intermediate)的通道数。 num_convs=1, # FCNHead 里卷积(convs)的数目. 
辅助头里通常为1。 concat_input=False, # 在分类层(classification layer)之前是否连接(concat)输入和卷积的输出。 dropout_ratio=0.1, # 进入最后分类层(classification layer)之前的 dropout 比例。 num_classes=19, # 分割前景的种类数目。 通常情况下,cityscapes 为19,VOC为21,ADE20k 为150。 norm_cfg=dict(type='SyncBN', requires_grad=True), # 归一化层的配置项。 align_corners=False, # 解码里调整大小(resize)的 align_corners 参数。 loss_decode=dict( # 辅助头(auxiliary head)里的损失函数的配置项。 type='CrossEntropyLoss', # 在分割里使用的损失函数的类别。 use_sigmoid=False, # 在分割里是否使用 sigmoid 激活。 loss_weight=0.4))) # 辅助头里损失的权重。默认设置为0.4。 train_cfg = dict() # train_cfg 当前仅是一个占位符。 test_cfg = dict(mode='whole') # 测试模式, 选项是 'whole' 和 'sliding'. 'whole': 整张图像全卷积(fully-convolutional)测试。 'sliding': 图像上做滑动裁剪窗口(sliding crop window)。 dataset_type = 'CityscapesDataset' # 数据集类型,这将被用来定义数据集。 data_root = 'data/cityscapes/' # 数据的根路径。 img_norm_cfg = dict( # 图像归一化配置,用来归一化输入的图像。 mean=[123.675, 116.28, 103.53], # 预训练里用于预训练主干网络模型的平均值。 std=[58.395, 57.12, 57.375], # 预训练里用于预训练主干网络模型的标准差。 to_rgb=True) # 预训练里用于预训练主干网络的图像的通道顺序。 crop_size = (512, 1024) # 训练时的裁剪大小 train_pipeline = [ #训练流程 dict(type='LoadImageFromFile'), # 第1个流程,从文件路径里加载图像。 dict(type='LoadAnnotations'), # 第2个流程,对于当前图像,加载它的注释信息。 dict(type='Resize', # 变化图像和其注释大小的数据增广的流程。 img_scale=(2048, 1024), # 图像的最大规模。 ratio_range=(0.5, 2.0)), # 数据增广的比例范围。 dict(type='RandomCrop', # 随机裁剪当前图像和其注释大小的数据增广的流程。 crop_size=(512, 1024), # 随机裁剪图像生成 patch 的大小。 cat_max_ratio=0.75), # 单个类别可以填充的最大区域的比例。 dict( type='RandomFlip', # 翻转图像和其注释大小的数据增广的流程。 flip_ratio=0.5), # 翻转图像的概率 dict(type='PhotoMetricDistortion'), # 光学上使用一些方法扭曲当前图像和其注释的数据增广的流程。 dict( type='Normalize', # 归一化当前图像的数据增广的流程。 mean=[123.675, 116.28, 103.53], # 这些键与 img_norm_cfg 一致,因为 img_norm_cfg 被 std=[58.395, 57.12, 57.375], # 用作参数。 to_rgb=True), dict(type='Pad', # 填充当前图像到指定大小的数据增广的流程。 size=(512, 1024), # 填充的图像大小。 pad_val=0, # 图像的填充值。 seg_pad_val=255), # 'gt_semantic_seg'的填充值。 dict(type='DefaultFormatBundle'), # 流程里收集数据的默认格式捆。 dict(type='Collect', # 决定数据里哪些键被传递到分割器里的流程。 keys=['img', 'gt_semantic_seg']) ] test_pipeline = [ dict(type='LoadImageFromFile'), # 第1个流程,从文件路径里加载图像。 dict( type='MultiScaleFlipAug', # 封装测试时数据增广(test time augmentations)。 img_scale=(2048, 1024), # 决定测试时可改变图像的最大规模。用于改变图像大小的流程。 flip=False, # 测试时是否翻转图像。 transforms=[ dict(type='Resize', # 使用改变图像大小的数据增广。 keep_ratio=True), # 是否保持宽和高的比例,这里的图像比例设置将覆盖上面的图像规模大小的设置。 dict(type='RandomFlip'), # 考虑到 RandomFlip 已经被添加到流程里,当 flip=False 时它将不被使用。 dict( type='Normalize', # 归一化配置项,值来自 img_norm_cfg。 mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='ImageToTensor', # 将图像转为张量 keys=['img']), dict(type='Collect', # 收集测试时必须的键的收集流程。 keys=['img']) ]) ] data = dict( samples_per_gpu=2, # 单个 GPU 的 Batch size workers_per_gpu=2, # 单个 GPU 分配的数据加载线程数 train=dict( # 训练数据集配置 type='CityscapesDataset', # 数据集的类别, 细节参考自 mmseg/datasets/。 data_root='data/cityscapes/', # 数据集的根目录。 img_dir='leftImg8bit/train', # 数据集图像的文件夹。 ann_dir='gtFine/train', # 数据集注释的文件夹。 pipeline=[ # 流程, 由之前创建的 train_pipeline 传递进来。 dict(type='LoadImageFromFile'), dict(type='LoadAnnotations'), dict( type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), dict(type='RandomCrop', crop_size=(512, 1024), cat_max_ratio=0.75), dict(type='RandomFlip', flip_ratio=0.5), dict(type='PhotoMetricDistortion'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size=(512, 1024), pad_val=0, seg_pad_val=255), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_semantic_seg']) ]), val=dict( # 验证数据集的配置 type='CityscapesDataset', 
data_root='data/cityscapes/', img_dir='leftImg8bit/val', ann_dir='gtFine/val', pipeline=[ # 由之前创建的 test_pipeline 传递的流程。 dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(2048, 1024), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]), test=dict( type='CityscapesDataset', data_root='data/cityscapes/', img_dir='leftImg8bit/val', ann_dir='gtFine/val', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(2048, 1024), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ])) log_config = dict( # 注册日志钩 (register logger hook) 的配置文件。 interval=50, # 打印日志的间隔 hooks=[ # 训练期间执行的钩子 dict(type='TextLoggerHook', by_epoch=False), dict(type='TensorboardLoggerHook', by_epoch=False), dict(type='MMSegWandbHook', by_epoch=False, # 还支持 Wandb 记录器,它需要安装 `wandb`。 init_kwargs={'entity': "OpenMMLab", # 用于登录wandb的实体 'project': "mmseg", # WandB中的项目名称 'config': cfg_dict}), # 检查 https://docs.wandb.ai/ref/python/init 以获取更多初始化参数 ]) dist_params = dict(backend='nccl') # 用于设置分布式训练的参数,端口也同样可被设置。 log_level = 'INFO' # 日志的级别。 load_from = None # 从一个给定路径里加载模型作为预训练模型,它并不会消耗训练时间。 resume_from = None # 从给定路径里恢复检查点(checkpoints),训练模式将从检查点保存的轮次开始恢复训练。 workflow = [('train', 1)] # runner 的工作流程。 [('train', 1)] 意思是只有一个工作流程而且工作流程 'train' 仅执行一次。根据 `runner.max_iters` 工作流程训练模型的迭代轮数为40000次。 cudnn_benchmark = True # 是否是使用 cudnn_benchmark 去加速,它对于固定输入大小的可以提高训练速度。 optimizer = dict( # 用于构建优化器的配置文件。支持 PyTorch 中的所有优化器,同时它们的参数与PyTorch里的优化器参数一致。 type='SGD', # 优化器种类,更多细节可参考 https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/optimizer/default_constructor.py#L13。 lr=0.01, # 优化器的学习率,参数的使用细节请参照对应的 PyTorch 文档。 momentum=0.9, # 动量 (Momentum) weight_decay=0.0005) # SGD 的衰减权重 (weight decay)。 optimizer_config = dict() # 用于构建优化器钩 (optimizer hook) 的配置文件,执行细节请参考 https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8。 lr_config = dict( policy='poly', # 调度流程的策略,同样支持 Step, CosineAnnealing, Cyclic 等. 
请从 https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py#L9 参考 LrUpdater 的细节。 power=0.9, # 多项式衰减 (polynomial decay) 的幂。 min_lr=0.0001, # 用来稳定训练的最小学习率。 by_epoch=False) # 是否按照每个 epoch 去算学习率。 runner = dict( type='IterBasedRunner', # 将使用的 runner 的类别 (例如 IterBasedRunner 或 EpochBasedRunner)。 max_iters=40000) # 全部迭代轮数大小,对于 EpochBasedRunner 使用 `max_epochs` 。 checkpoint_config = dict( # 设置检查点钩子 (checkpoint hook) 的配置文件。执行时请参考 https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py。 by_epoch=False, # 是否按照每个 epoch 去算 runner。 interval=4000) # 保存的间隔 evaluation = dict( # 构建评估钩 (evaluation hook) 的配置文件。细节请参考 mmseg/core/evaluation/eval_hook.py。 interval=4000, # 评估的间歇点 metric='mIoU') # 评估的指标 ``` ## FAQ ### 忽略基础配置文件里的一些域内容。 有时,您也许会设置 `_delete_=True` 去忽略基础配置文件里的一些域内容。 您也许可以参照 [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html#inherit-from-base-config-with-ignored-fields) 来获得一些简单的指导。 在 MMSegmentation 里,例如为了改变 PSPNet 的主干网络的某些内容: ```python norm_cfg = dict(type='SyncBN', requires_grad=True) model = dict( type='MaskRCNN', pretrained='torchvision://resnet50', backbone=dict( type='ResNetV1c', depth=50, num_stages=4, out_indices=(0, 1, 2, 3), dilations=(1, 1, 2, 4), strides=(1, 2, 1, 1), norm_cfg=norm_cfg, norm_eval=False, style='pytorch', contract_dilation=True), decode_head=dict(...), auxiliary_head=dict(...)) ``` `ResNet` 和 `HRNet` 使用不同的关键词去构建。 ```python _base_ = '../pspnet/psp_r50_512x1024_40ki_cityscpaes.py' norm_cfg = dict(type='SyncBN', requires_grad=True) model = dict( pretrained='open-mmlab://msra/hrnetv2_w32', backbone=dict( _delete_=True, type='HRNet', norm_cfg=norm_cfg, extra=dict( stage1=dict( num_modules=1, num_branches=1, block='BOTTLENECK', num_blocks=(4, ), num_channels=(64, )), stage2=dict( num_modules=1, num_branches=2, block='BASIC', num_blocks=(4, 4), num_channels=(32, 64)), stage3=dict( num_modules=4, num_branches=3, block='BASIC', num_blocks=(4, 4, 4), num_channels=(32, 64, 128)), stage4=dict( num_modules=3, num_branches=4, block='BASIC', num_blocks=(4, 4, 4, 4), num_channels=(32, 64, 128, 256)))), decode_head=dict(...), auxiliary_head=dict(...)) ``` `_delete_=True` 将用新的键去替换 `backbone` 域内所有老的键。 ### 使用配置文件里的中间变量 配置文件里会使用一些中间变量,例如数据集里的 `train_pipeline`/`test_pipeline`。 需要注意的是,在子配置文件里修改中间变量时,使用者需要再次传递这些变量给对应的域。 例如,我们想改变在训练或测试时,PSPNet 的多尺度策略 (multi scale strategy),`train_pipeline`/`test_pipeline` 是我们想要修改的中间变量。 ```python _base_ = '../pspnet/psp_r50_512x1024_40ki_cityscapes.py' crop_size = (512, 1024) img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations'), dict(type='Resize', img_scale=(2048, 1024), ratio_range=(1.0, 2.0)), # 改成 [1., 2.] 
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), dict(type='RandomFlip', flip_ratio=0.5), dict(type='PhotoMetricDistortion'), dict(type='Normalize', **img_norm_cfg), dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_semantic_seg']), ] test_pipeline = [ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(2048, 1024), img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], # 改成多尺度测试 (multi scale testing)。 flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict(type='Normalize', **img_norm_cfg), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']), ]) ] data = dict( train=dict(pipeline=train_pipeline), val=dict(pipeline=test_pipeline), test=dict(pipeline=test_pipeline)) ``` 我们首先定义新的 `train_pipeline`/`test_pipeline` 然后传递到 `data` 里。 同样的,如果我们想从 `SyncBN` 切换到 `BN` 或者 `MMSyncBN`,我们需要配置文件里的每一个 `norm_cfg`。 ```python _base_ = '../pspnet/psp_r50_512x1024_40ki_cityscpaes.py' norm_cfg = dict(type='BN', requires_grad=True) model = dict( backbone=dict(norm_cfg=norm_cfg), decode_head=dict(norm_cfg=norm_cfg), auxiliary_head=dict(norm_cfg=norm_cfg)) ```
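除了 `tools/print_config.py`,您也可以直接在 Python 里加载配置并检查中间变量是否生效。下面是一个最小示意(路径仅为示例),其中 `merge_from_dict` 的键值对与命令行的 `--cfg-options xxx.yyy=zzz` 作用相同:

```python
from mmcv import Config

cfg = Config.fromfile('configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')
# 等价于 --cfg-options model.backbone.norm_cfg.type=BN
cfg.merge_from_dict({'model.backbone.norm_cfg.type': 'BN'})
print(cfg.model.backbone.norm_cfg)  # 期望输出 {'type': 'BN', 'requires_grad': True}
print(cfg.pretty_text[:300])        # 查看展开后的完整配置文本
```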
15,785
40.21671
170
md
mmsegmentation
mmsegmentation-master/docs/zh_cn/tutorials/customize_datasets.md
# 教程 2: 自定义数据集 ## 通过重新组织数据来定制数据集 最简单的方法是将您的数据集进行转化,并组织成文件夹的形式。 如下的文件结构就是一个例子。 ```none ├── data │ ├── my_dataset │ │ ├── img_dir │ │ │ ├── train │ │ │ │ ├── xxx{img_suffix} │ │ │ │ ├── yyy{img_suffix} │ │ │ │ ├── zzz{img_suffix} │ │ │ ├── val │ │ ├── ann_dir │ │ │ ├── train │ │ │ │ ├── xxx{seg_map_suffix} │ │ │ │ ├── yyy{seg_map_suffix} │ │ │ │ ├── zzz{seg_map_suffix} │ │ │ ├── val ``` 一个训练对将由 img_dir/ann_dir 里同样首缀的文件组成。 如果给定 `split` 参数,只有部分在 img_dir/ann_dir 里的文件会被加载。 我们可以对被包括在 split 文本里的文件指定前缀。 除此以外,一个 split 文本如下所示: ```none xxx zzz ``` 只有 `data/my_dataset/img_dir/train/xxx{img_suffix}`, `data/my_dataset/img_dir/train/zzz{img_suffix}`, `data/my_dataset/ann_dir/train/xxx{seg_map_suffix}`, `data/my_dataset/ann_dir/train/zzz{seg_map_suffix}` 将被加载。 注意:标注是跟图像同样的形状 (H, W),其中的像素值的范围是 `[0, num_classes - 1]`。 您也可以使用 [pillow](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#palette) 的 `'P'` 模式去创建包含颜色的标注。 ## 通过混合数据去定制数据集 MMSegmentation 同样支持混合数据集去训练。 当前它支持拼接 (concat), 重复 (repeat) 和多图混合 (multi-image mix)数据集。 ### 重复数据集 我们使用 `RepeatDataset` 作为包装 (wrapper) 去重复数据集。 例如,假设原始数据集是 `Dataset_A`,为了重复它,配置文件如下: ```python dataset_A_train = dict( type='RepeatDataset', times=N, dataset=dict( # 这是 Dataset_A 数据集的原始配置 type='Dataset_A', ... pipeline=train_pipeline ) ) ``` ### 拼接数据集 有2种方式去拼接数据集。 1. 如果您想拼接的数据集是同样的类型,但有不同的标注文件, 您可以按如下操作去拼接数据集的配置文件: 1. 您也许可以拼接两个标注文件夹 `ann_dir` ```python dataset_A_train = dict( type='Dataset_A', img_dir = 'img_dir', ann_dir = ['anno_dir_1', 'anno_dir_2'], pipeline=train_pipeline ) ``` 2. 您也可以去拼接两个 `split` 文件列表 ```python dataset_A_train = dict( type='Dataset_A', img_dir = 'img_dir', ann_dir = 'anno_dir', split = ['split_1.txt', 'split_2.txt'], pipeline=train_pipeline ) ``` 3. 您也可以同时拼接 `ann_dir` 文件夹和 `split` 文件列表 ```python dataset_A_train = dict( type='Dataset_A', img_dir = 'img_dir', ann_dir = ['anno_dir_1', 'anno_dir_2'], split = ['split_1.txt', 'split_2.txt'], pipeline=train_pipeline ) ``` 在这样的情况下, `ann_dir_1` 和 `ann_dir_2` 分别对应于 `split_1.txt` 和 `split_2.txt` 2. 如果您想拼接不同的数据集,您可以如下去拼接数据集的配置文件: ```python dataset_A_train = dict() dataset_B_train = dict() data = dict( imgs_per_gpu=2, workers_per_gpu=2, train = [ dataset_A_train, dataset_B_train ], val = dataset_A_val, test = dataset_A_test ) ``` 一个更复杂的例子如下:分别重复 `Dataset_A` 和 `Dataset_B` N 次和 M 次,然后再去拼接重复后的数据集 ```python dataset_A_train = dict( type='RepeatDataset', times=N, dataset=dict( type='Dataset_A', ... pipeline=train_pipeline ) ) dataset_A_val = dict( ... pipeline=test_pipeline ) dataset_A_test = dict( ... pipeline=test_pipeline ) dataset_B_train = dict( type='RepeatDataset', times=M, dataset=dict( type='Dataset_B', ... 
pipeline=train_pipeline ) ) data = dict( imgs_per_gpu=2, workers_per_gpu=2, train = [ dataset_A_train, dataset_B_train ], val = dataset_A_val, test = dataset_A_test ) ``` ### 多图混合集 我们使用 `MultiImageMixDataset` 作为包装(wrapper)去混合多个数据集的图片。 `MultiImageMixDataset`可以被类似mosaic和mixup的多图混合数据増广使用。 `MultiImageMixDataset`与`Mosaic`数据増广一起使用的例子: ```python train_pipeline = [ dict(type='RandomMosaic', prob=1), dict(type='Resize', img_scale=(1024, 512), keep_ratio=True), dict(type='RandomFlip', prob=0.5), dict(type='Normalize', **img_norm_cfg), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_semantic_seg']), ] train_dataset = dict( type='MultiImageMixDataset', dataset=dict( classes=classes, palette=palette, type=dataset_type, reduce_zero_label=False, img_dir=data_root + "images/train", ann_dir=data_root + "annotations/train", pipeline=[ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations'), ] ), pipeline=train_pipeline ) ```
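为了便于理解 `RepeatDataset` 的行为,下面用一个虚构的小数据集做一个最小示意(`DummyDataset` 仅为演示而写,并非 MMSegmentation 自带的类):

```python
from mmseg.datasets import RepeatDataset


class DummyDataset:
    """仅用于演示的假数据集:共 10 个样本。"""
    CLASSES = ('foo', )
    PALETTE = [[0, 0, 0]]

    def __len__(self):
        return 10

    def __getitem__(self, idx):
        return dict(idx=idx)


ds = RepeatDataset(DummyDataset(), times=3)
print(len(ds))  # 30,即 times * 原数据集长度
print(ds[25])   # 等价于原数据集里的第 25 % 10 = 5 个样本
```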
4,353
19.733333
109
md
mmsegmentation
mmsegmentation-master/docs/zh_cn/tutorials/customize_models.md
# 教程 4: 自定义模型 ## 自定义优化器 (optimizer) 假设您想增加一个新的叫 `MyOptimizer` 的优化器,它的参数分别为 `a`, `b`, 和 `c`。 您首先需要在一个文件里实现这个新的优化器,例如在 `mmseg/core/optimizer/my_optimizer.py` 里面: ```python from mmcv.runner import OPTIMIZERS from torch.optim import Optimizer @OPTIMIZERS.register_module class MyOptimizer(Optimizer): def __init__(self, a, b, c) ``` 然后增加这个模块到 `mmseg/core/optimizer/__init__.py` 里面,这样注册器 (registry) 将会发现这个新的模块并添加它: ```python from .my_optimizer import MyOptimizer ``` 之后您可以在配置文件的 `optimizer` 域里使用 `MyOptimizer`, 如下所示,在配置文件里,优化器被 `optimizer` 域所定义: ```python optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) ``` 为了使用您自己的优化器,域可以被修改为: ```python optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value) ``` 我们已经支持了 PyTorch 自带的全部优化器,唯一修改的地方是在配置文件里的 `optimizer` 域。例如,如果您想使用 `ADAM`,尽管数值表现会掉点,还是可以如下修改: ```python optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001) ``` 使用者可以直接按照 PyTorch [文档教程](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim) 去设置参数。 ## 定制优化器的构造器 (optimizer constructor) 对于优化,一些模型可能会有一些特别定义的参数,例如批归一化 (BatchNorm) 层里面的权重衰减 (weight decay)。 使用者可以通过定制优化器的构造器来微调这些细粒度的优化器参数。 ```python from mmcv.utils import build_from_cfg from mmcv.runner import OPTIMIZER_BUILDERS from .cocktail_optimizer import CocktailOptimizer @OPTIMIZER_BUILDERS.register_module class CocktailOptimizerConstructor(object): def __init__(self, optimizer_cfg, paramwise_cfg=None): def __call__(self, model): return my_optimizer ``` ## 开发和增加新的组件(Module) MMSegmentation 里主要有2种组件: - 主干网络 (backbone): 通常是卷积网络的堆叠,来做特征提取,例如 ResNet, HRNet - 解码头 (decoder head): 用于语义分割图的解码的组件(得到分割结果) ### 添加新的主干网络 这里我们以 MobileNet 为例,展示如何增加新的主干组件: 1. 创建一个新的文件 `mmseg/models/backbones/mobilenet.py` ```python import torch.nn as nn from ..builder import BACKBONES @BACKBONES.register_module class MobileNet(nn.Module): def __init__(self, arg1, arg2): pass def forward(self, x): # should return a tuple pass def init_weights(self, pretrained=None): pass ``` 2. 在 `mmseg/models/backbones/__init__.py` 里面导入模块 ```python from .mobilenet import MobileNet ``` 3. 在您的配置文件里使用它 ```python model = dict( ... backbone=dict( type='MobileNet', arg1=xxx, arg2=xxx), ... 
``` ### 增加新的解码头 (decoder head)组件 在 MMSegmentation 里面,对于所有的分割头,我们提供一个基类解码头 [BaseDecodeHead](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/decode_head.py) 。 所有新建的解码头都应该继承它。这里我们以 [PSPNet](https://arxiv.org/abs/1612.01105) 为例, 展示如何开发和增加一个新的解码头组件: 首先,在 `mmseg/models/decode_heads/psp_head.py` 里添加一个新的解码头。 PSPNet 中实现了一个语义分割的解码头。为了实现一个解码头,我们只需要在新构造的解码头中实现如下的3个函数: ```python @HEADS.register_module() class PSPHead(BaseDecodeHead): def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): super(PSPHead, self).__init__(**kwargs) def init_weights(self): def forward(self, inputs): ``` 接着,使用者需要在 `mmseg/models/decode_heads/__init__.py` 里面添加这个模块,这样对应的注册器 (registry) 可以查找并加载它们。 PSPNet的配置文件如下所示: ```python norm_cfg = dict(type='SyncBN', requires_grad=True) model = dict( type='EncoderDecoder', pretrained='pretrain_model/resnet50_v1c_trick-2cccc1ad.pth', backbone=dict( type='ResNetV1c', depth=50, num_stages=4, out_indices=(0, 1, 2, 3), dilations=(1, 1, 2, 4), strides=(1, 2, 1, 1), norm_cfg=norm_cfg, norm_eval=False, style='pytorch', contract_dilation=True), decode_head=dict( type='PSPHead', in_channels=2048, in_index=3, channels=512, pool_scales=(1, 2, 3, 6), dropout_ratio=0.1, num_classes=19, norm_cfg=norm_cfg, align_corners=False, loss_decode=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))) ``` ### 增加新的损失函数 假设您想添加一个新的损失函数 `MyLoss` 到语义分割解码器里。 为了添加一个新的损失函数,使用者需要在 `mmseg/models/losses/my_loss.py` 里面去实现它。 `weighted_loss` 可以对计算损失时的每个样本做加权。 ```python import torch import torch.nn as nn from ..builder import LOSSES from .utils import weighted_loss @weighted_loss def my_loss(pred, target): assert pred.size() == target.size() and target.numel() > 0 loss = torch.abs(pred - target) return loss @LOSSES.register_module class MyLoss(nn.Module): def __init__(self, reduction='mean', loss_weight=1.0): super(MyLoss, self).__init__() self.reduction = reduction self.loss_weight = loss_weight def forward(self, pred, target, weight=None, avg_factor=None, reduction_override=None): assert reduction_override in (None, 'none', 'mean', 'sum') reduction = ( reduction_override if reduction_override else self.reduction) loss = self.loss_weight * my_loss( pred, target, weight, reduction=reduction, avg_factor=avg_factor) return loss ``` 然后使用者需要在 `mmseg/models/losses/__init__.py` 里面添加它: ```python from .my_loss import MyLoss, my_loss ``` 为了使用它,修改 `loss_xxx` 域。之后您需要在解码头组件里修改 `loss_decode` 域。 `loss_weight` 可以被用来对不同的损失函数做加权。 ```python loss_decode=dict(type='MyLoss', loss_weight=1.0)) ```
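假设 `MyLoss` 已按上文实现并在 `mmseg/models/losses/__init__.py` 里导入,您可以通过注册器直接构建它并做一次前向检查(下面仅为示意):

```python
import torch

from mmseg.models import build_loss

criterion = build_loss(dict(type='MyLoss', reduction='mean', loss_weight=1.0))
pred = torch.rand(2, 19, 32, 32)
target = torch.rand(2, 19, 32, 32)  # my_loss 要求 pred 与 target 形状一致
print(criterion(pred, target))      # 标量损失
```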
5,245
21.709957
158
md
mmsegmentation
mmsegmentation-master/docs/zh_cn/tutorials/customize_runtime.md
# 教程 6: 自定义运行设定 ## 自定义优化设定 ### 自定义 PyTorch 支持的优化器 我们已经支持 PyTorch 自带的所有优化器,唯一需要修改的地方是在配置文件里的 `optimizer` 域里面。 例如,如果您想使用 `ADAM` (注意如下操作可能会让模型表现下降),可以使用如下修改: ```python optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001) ``` 为了修改模型的学习率,使用者仅需要修改配置文件里 optimizer 的 `lr` 即可。 使用者可以参照 PyTorch 的 [API 文档](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim) 直接设置参数。 ### 自定义自己实现的优化器 #### 1. 定义一个新的优化器 一个自定义的优化器可以按照如下去定义: 假如您想增加一个叫做 `MyOptimizer` 的优化器,它的参数分别有 `a`, `b`, 和 `c`。 您需要创建一个叫 `mmseg/core/optimizer` 的新文件夹。 然后再在文件,即 `mmseg/core/optimizer/my_optimizer.py` 里面去实现这个新优化器: ```python from .registry import OPTIMIZERS from torch.optim import Optimizer @OPTIMIZERS.register_module() class MyOptimizer(Optimizer): def __init__(self, a, b, c) ``` #### 2. 增加优化器到注册表 (registry) 为了让上述定义的模块被框架发现,首先这个模块应该被导入到主命名空间 (main namespace) 里。 有两种方式可以实现它。 - 修改 `mmseg/core/optimizer/__init__.py` 来导入它 新的被定义的模块应该被导入到 `mmseg/core/optimizer/__init__.py` 这样注册表将会发现新的模块并添加它 ```python from .my_optimizer import MyOptimizer ``` - 在配置文件里使用 `custom_imports` 去手动导入它 ```python custom_imports = dict(imports=['mmseg.core.optimizer.my_optimizer'], allow_failed_imports=False) ``` `mmseg.core.optimizer.my_optimizer` 模块将会在程序运行的开始被导入,并且 `MyOptimizer` 类将会自动注册。 需要注意只有包含 `MyOptimizer` 类的包 (package) 应当被导入。 而 `mmseg.core.optimizer.my_optimizer.MyOptimizer` **不能** 被直接导入。 事实上,使用者完全可以用另一个按这样导入方法的文件夹结构,只要模块的根路径已经被添加到 `PYTHONPATH` 里面。 #### 3. 在配置文件里定义优化器 之后您可以在配置文件的 `optimizer` 域里面使用 `MyOptimizer` 在配置文件里,优化器被定义在 `optimizer` 域里,如下所示: ```python optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) ``` 为了使用您自己的优化器,这个域可以被改成: ```python optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value) ``` ### 自定义优化器的构造器 (constructor) 有些模型可能需要在优化器里有一些特别参数的设置,例如 批归一化层 (BatchNorm layers) 的 权重衰减 (weight decay)。 使用者可以通过自定义优化器的构造器去微调这些细粒度参数。 ```python from mmcv.utils import build_from_cfg from mmcv.runner.optimizer import OPTIMIZER_BUILDERS, OPTIMIZERS from mmseg.utils import get_root_logger from .my_optimizer import MyOptimizer @OPTIMIZER_BUILDERS.register_module() class MyOptimizerConstructor(object): def __init__(self, optimizer_cfg, paramwise_cfg=None): def __call__(self, model): return my_optimizer ``` 默认的优化器构造器的实现可以参照 [这里](https://github.com/open-mmlab/mmcv/blob/9ecd6b0d5ff9d2172c49a182eaa669e9f27bb8e7/mmcv/runner/optimizer/default_constructor.py#L11) ,它也可以被用作新的优化器构造器的模板。 ### 额外的设置 优化器没有实现的一些技巧应该通过优化器构造器 (optimizer constructor) 或者钩子 (hook) 去实现,如设置基于参数的学习率 (parameter-wise learning rates)。我们列出一些常见的设置,它们可以稳定或加速模型的训练。 如果您有更多的设置,欢迎在 PR 和 issue 里面提交。 - __使用梯度截断 (gradient clip) 去稳定训练__: 一些模型需要梯度截断去稳定训练过程,如下所示 ```python optimizer_config = dict( _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) ``` 如果您的配置继承自已经设置了 `optimizer_config` 的基础配置 (base config),您可能需要 `_delete_=True` 来重写那些不需要的设置。更多细节请参照 [配置文件文档](https://mmsegmentation.readthedocs.io/en/latest/config.html) 。 - __使用动量计划表 (momentum schedule) 去加速模型收敛__: 我们支持动量计划表去让模型基于学习率修改动量,这样可能让模型收敛地更快。 动量计划表经常和学习率计划表 (LR scheduler) 一起使用,例如如下配置文件就在 3D 检测里经常使用以加速收敛。 更多细节请参考 [CyclicLrUpdater](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/lr_updater.py#L327) 和 [CyclicMomentumUpdater](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/momentum_updater.py#L130) 的实现。 ```python lr_config = dict( policy='cyclic', target_ratio=(10, 1e-4), cyclic_times=1, step_ratio_up=0.4, ) momentum_config = dict( policy='cyclic', target_ratio=(0.85 / 0.95, 1), cyclic_times=1, step_ratio_up=0.4, ) 
``` ## 自定义训练计划表 我们根据默认的训练迭代步数 40k/80k 来设置学习率,这在 MMCV 里叫做 [`PolyLrUpdaterHook`](https://github.com/open-mmlab/mmcv/blob/826d3a7b68596c824fa1e2cb89b6ac274f52179c/mmcv/runner/hooks/lr_updater.py#L196) 。 我们也支持许多其他的学习率计划表:[这里](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py) ,例如 `CosineAnnealing` 和 `Poly` 计划表。下面是一些例子: - 步计划表 Step schedule: ```python lr_config = dict(policy='step', step=[9, 10]) ``` - 余弦退火计划表 ConsineAnnealing schedule: ```python lr_config = dict( policy='CosineAnnealing', warmup='linear', warmup_iters=1000, warmup_ratio=1.0 / 10, min_lr_ratio=1e-5) ``` ## 自定义工作流 (workflow) 工作流是一个专门定义运行顺序和轮数 (running order and epochs) 的列表 (phase, epochs)。 默认情况下它设置成: ```python workflow = [('train', 1)] ``` 意思是训练是跑 1 个 epoch。有时候使用者可能想检查模型在验证集上的一些指标(如 损失 loss,精确性 accuracy),我们可以这样设置工作流: ```python [('train', 1), ('val', 1)] ``` 于是 1 个 epoch 训练,1 个 epoch 验证将交替运行。 **注意**: 1. 模型的参数在验证的阶段不会被自动更新 2. 配置文件里的关键词 `total_epochs` 仅控制训练的 epochs 数目,而不会影响验证时的工作流 3. 工作流 `[('train', 1), ('val', 1)]` 和 `[('train', 1)]` 将不会改变 `EvalHook` 的行为,因为 `EvalHook` 被 `after_train_epoch` 调用而且验证的工作流仅仅影响通过调用 `after_val_epoch` 的钩子 (hooks)。因此, `[('train', 1), ('val', 1)]` 和 `[('train', 1)]` 的区别仅在于 runner 将在每次训练 epoch 结束后计算在验证集上的损失 ## 自定义钩 (hooks) ### 使用 MMCV 实现的钩子 (hooks) 如果钩子已经在 MMCV 里被实现,如下所示,您可以直接修改配置文件来使用钩子: ```python custom_hooks = [ dict(type='MyHook', a=a_value, b=b_value, priority='NORMAL') ] ``` ### 修改默认的运行时间钩子 (runtime hooks) 以下的常用的钩子没有被 `custom_hooks` 注册: - log_config - checkpoint_config - evaluation - lr_config - optimizer_config - momentum_config 在这些钩子里,只有 logger hook 有 `VERY_LOW` 优先级,其他的优先级都是 `NORMAL`。 上述提及的教程已经包括了如何修改 `optimizer_config`,`momentum_config` 和 `lr_config`。 这里我们展示我们如何处理 `log_config`, `checkpoint_config` 和 `evaluation`。 #### 检查点配置文件 (Checkpoint config) MMCV runner 将使用 `checkpoint_config` 去初始化 [`CheckpointHook`](https://github.com/open-mmlab/mmcv/blob/9ecd6b0d5ff9d2172c49a182eaa669e9f27bb8e7/mmcv/runner/hooks/checkpoint.py#L9). ```python checkpoint_config = dict(interval=1) ``` 使用者可以设置 `max_keep_ckpts` 来仅保存一小部分检查点或者通过 `save_optimizer` 来决定是否保存优化器的状态字典 (state dict of optimizer)。 更多使用参数的细节请参考 [这里](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.CheckpointHook) 。 #### 日志配置文件 (Log config) `log_config` 包裹了许多日志钩 (logger hooks) 而且能去设置间隔 (intervals)。现在 MMCV 支持 `WandbLoggerHook`, `MlflowLoggerHook` 和 `TensorboardLoggerHook`。 详细的使用请参照 [文档](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.LoggerHook) 。 ```python log_config = dict( interval=50, hooks=[ dict(type='TextLoggerHook'), dict(type='TensorboardLoggerHook') ]) ``` #### 评估配置文件 (Evaluation config) `evaluation` 的配置文件将被用来初始化 [`EvalHook`](https://github.com/open-mmlab/mmsegmentation/blob/e3f6f655d69b777341aec2fe8829871cc0beadcb/mmseg/core/evaluation/eval_hooks.py#L7) 。 除了 `interval` 键,其他的像 `metric` 这样的参数将被传递给 `dataset.evaluate()` 。 ```python evaluation = dict(interval=1, metric='mIoU') ```
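上面 `custom_hooks` 里的 `MyHook` 需要先被实现并注册。下面是一个最小的实现示意(类名与参数沿用上文示例,具体逻辑请按需求编写),实现文件同样需要被导入(例如通过 `custom_imports`)注册才会生效:

```python
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class MyHook(Hook):
    """示例钩子:参数 a、b 沿用上文配置里的写法。"""

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def before_run(self, runner):
        runner.logger.info(f'MyHook: a={self.a}, b={self.b}')

    def after_train_iter(self, runner):
        # 在每个训练迭代之后执行自定义逻辑
        pass
```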
6,734
26.048193
302
md
mmsegmentation
mmsegmentation-master/docs/zh_cn/tutorials/data_pipeline.md
# 教程 3: 自定义数据流程 ## 数据流程的设计 按照通常的惯例,我们使用 `Dataset` 和 `DataLoader` 做多线程的数据加载。`Dataset` 返回一个数据内容的字典,里面对应于模型前传方法的各个参数。 因为在语义分割中,输入的图像数据具有不同的大小,我们在 MMCV 里引入一个新的 `DataContainer` 类别去帮助收集和分发不同大小的输入数据。 更多细节,请查看[这里](https://github.com/open-mmlab/mmcv/blob/master/mmcv/parallel/data_container.py) 。 数据的准备流程和数据集是解耦的。通常一个数据集定义了如何处理标注数据(annotations)信息,而一个数据流程定义了准备一个数据字典的所有步骤。一个流程包括了一系列操作,每个操作里都把一个字典作为输入,然后再输出一个新的字典给下一个变换操作。 这些操作可分为数据加载 (data loading),预处理 (pre-processing),格式变化 (formatting) 和测试时数据增强 (test-time augmentation)。 下面的例子就是 PSPNet 的一个流程: ```python img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) crop_size = (512, 1024) train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations'), dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), dict(type='RandomFlip', flip_ratio=0.5), dict(type='PhotoMetricDistortion'), dict(type='Normalize', **img_norm_cfg), dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_semantic_seg']), ] test_pipeline = [ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(2048, 1024), # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict(type='Normalize', **img_norm_cfg), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']), ]) ] ``` 对于每个操作,我们列出它添加、更新、移除的相关字典域 (dict fields): ### 数据加载 Data loading `LoadImageFromFile` - 增加: img, img_shape, ori_shape `LoadAnnotations` - 增加: gt_semantic_seg, seg_fields ### 预处理 Pre-processing `Resize` - 增加: scale, scale_idx, pad_shape, scale_factor, keep_ratio - 更新: img, img_shape, \*seg_fields `RandomFlip` - 增加: flip - 更新: img, \*seg_fields `Pad` - 增加: pad_fixed_size, pad_size_divisor - 更新: img, pad_shape, \*seg_fields `RandomCrop` - 更新: img, pad_shape, \*seg_fields `Normalize` - 增加: img_norm_cfg - 更新: img `SegRescale` - 更新: gt_semantic_seg `PhotoMetricDistortion` - 更新: img ### 格式 Formatting `ToTensor` - 更新: 由 `keys` 指定 `ImageToTensor` - 更新: 由 `keys` 指定 `Transpose` - 更新: 由 `keys` 指定 `ToDataContainer` - 更新: 由 `keys` 指定 `DefaultFormatBundle` - 更新: img, gt_semantic_seg `Collect` - 增加: img_meta (the keys of img_meta is specified by `meta_keys`) - 移除: all other keys except for those specified by `keys` ### 测试时数据增强 Test time augmentation `MultiScaleFlipAug` ## 拓展和使用自定义的流程 1. 在任何一个文件里写一个新的流程,例如 `my_pipeline.py`,它以一个字典作为输入并且输出一个字典 ```python from mmseg.datasets import PIPELINES @PIPELINES.register_module() class MyTransform: def __call__(self, results): results['dummy'] = True return results ``` 2. 导入一个新类 ```python from .my_pipeline import MyTransform ``` 3. 在配置文件里使用它 ```python img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) crop_size = (512, 1024) train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations'), dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), dict(type='RandomFlip', flip_ratio=0.5), dict(type='PhotoMetricDistortion'), dict(type='Normalize', **img_norm_cfg), dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), dict(type='MyTransform'), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_semantic_seg']), ] ```
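为了确认自定义流程已注册成功,您也可以单独构建并运行它。下面是一个最小示意(假设 `MyTransform` 已按上文注册并导入):

```python
from mmseg.datasets.pipelines import Compose

pipeline = Compose([dict(type='MyTransform')])
results = dict(filename='demo.png')  # 任意字典即可,MyTransform 只负责添加新键
print(pipeline(results))             # 期望输出里包含 'dummy': True
```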
3,811
21.826347
123
md
mmsegmentation
mmsegmentation-master/docs/zh_cn/tutorials/training_tricks.md
# 教程 5: 训练技巧 MMSegmentation 支持如下训练技巧: ## 主干网络和解码头组件使用不同的学习率 (Learning Rate, LR) 在语义分割里,一些方法会让解码头组件的学习率大于主干网络的学习率,这样可以获得更好的表现或更快的收敛。 在 MMSegmentation 里面,您也可以在配置文件里添加如下行来让解码头组件的学习率是主干组件的10倍。 ```python optimizer=dict( paramwise_cfg = dict( custom_keys={ 'head': dict(lr_mult=10.)})) ``` 通过这种修改,任何被分组到 `'head'` 的参数的学习率都将乘以10。您也可以参照 [MMCV 文档](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.DefaultOptimizerConstructor) 获取更详细的信息。 ## 在线难样本挖掘 (Online Hard Example Mining, OHEM) 对于训练时采样,我们在 [这里](https://github.com/open-mmlab/mmsegmentation/tree/master/mmseg/core/seg/sampler) 做了像素采样器。 如下例子是使用 PSPNet 训练并采用 OHEM 策略的配置: ```python _base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py' model=dict( decode_head=dict( sampler=dict(type='OHEMPixelSampler', thresh=0.7, min_kept=100000)) ) ``` 通过这种方式,只有置信分数在0.7以下的像素值点会被拿来训练。在训练时我们至少要保留100000个像素值点。如果 `thresh` 并未被指定,前 `min_kept` 个损失的像素值点才会被选择。 ## 类别平衡损失 (Class Balanced Loss) 对于不平衡类别分布的数据集,您也许可以改变每个类别的损失权重。这里以 cityscapes 数据集为例: ```python _base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py' model=dict( decode_head=dict( loss_decode=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0, # DeepLab 对 cityscapes 使用这种权重 class_weight=[0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969, 0.9754, 1.0489, 0.8786, 1.0023, 0.9539, 0.9843, 1.1116, 0.9037, 1.0865, 1.0955, 1.0865, 1.1529, 1.0507]))) ``` `class_weight` 将被作为 `weight` 参数,传递给 `CrossEntropyLoss`。详细信息请参照 [PyTorch 文档](https://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss) 。 ## 同时使用多种损失函数 (Multiple Losses) 对于训练时损失函数的计算,我们目前支持多个损失函数同时使用。 以 `unet` 使用 `DRIVE` 数据集训练为例, 使用 `CrossEntropyLoss` 和 `DiceLoss` 的 `1:3` 的加权和作为损失函数。配置文件写为: ```python _base_ = './fcn_unet_s5-d16_64x64_40k_drive.py' model = dict( decode_head=dict(loss_decode=[dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0), dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)]), auxiliary_head=dict(loss_decode=[dict(type='CrossEntropyLoss', loss_name='loss_ce',loss_weight=1.0), dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)]), ) ``` 通过这种方式,确定训练过程中损失函数的权重 `loss_weight` 和在训练日志里的名字 `loss_name`。 注意: `loss_name` 的名字必须带有 `loss_` 前缀,这样它才能被包括在反传的图里。 ## 在损失函数中忽略特定的 label 类别 默认设置 `avg_non_ignore=False`, 即每个像素都用来计算损失函数。尽管其中的一些像素属于需要被忽略的类别。 对于训练时损失函数的计算,我们目前支持使用 `avg_non_ignore` 和 `ignore_index` 来忽略 label 特定的类别。 这样损失函数将只在非忽略类别像素中求平均值,会获得更好的表现。这里是[相关 PR](https://github.com/open-mmlab/mmsegmentation/pull/1409)。以 `unet` 使用 `Cityscapes` 数据集训练为例, 在计算损失函数时,忽略 label 为0的背景,并且仅在不被忽略的像素上计算均值。配置文件写为: ```python _base_ = './fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py' model = dict( decode_head=dict( ignore_index=0, loss_decode=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0, avg_non_ignore=True), auxiliary_head=dict( ignore_index=0, loss_decode=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0, avg_non_ignore=True)), )) ``` 通过这种方式,确定训练过程中损失函数的权重 `loss_weight` 和在训练日志里的名字 `loss_name`。 注意: `loss_name` 的名字必须带有 `loss_` 前缀,这样它才能被包括在反传的图里。
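下面用一个最小示意验证 `class_weight` 是如何被传入 `CrossEntropyLoss` 的(张量尺寸与权重数值仅作演示):

```python
import torch

from mmseg.models import build_loss

criterion = build_loss(
    dict(
        type='CrossEntropyLoss',
        use_sigmoid=False,
        loss_weight=1.0,
        class_weight=[0.8, 1.2]))  # 两个类别的示例权重
logits = torch.randn(4, 2, 16, 16)         # [N, C, H, W]
labels = torch.randint(0, 2, (4, 16, 16))  # [N, H, W]
print(criterion(logits, labels))
```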
3,274
33.114583
204
md
mmsegmentation
mmsegmentation-master/mmseg/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
import warnings

import mmcv
from packaging.version import parse

from .version import __version__, version_info

MMCV_MIN = '1.3.13'
MMCV_MAX = '1.8.0'


def digit_version(version_str: str, length: int = 4):
    """Convert a version string into a tuple of integers.

    This method is usually used for comparing two versions. For pre-release
    versions: alpha < beta < rc.

    Args:
        version_str (str): The version string.
        length (int): The maximum number of version levels. Default: 4.

    Returns:
        tuple[int]: The version info in digits (integers).
    """
    version = parse(version_str)
    assert version.release, f'failed to parse version {version_str}'
    release = list(version.release)
    release = release[:length]
    if len(release) < length:
        release = release + [0] * (length - len(release))
    if version.is_prerelease:
        mapping = {'a': -3, 'b': -2, 'rc': -1}
        val = -4
        # version.pre can be None
        if version.pre:
            if version.pre[0] not in mapping:
                warnings.warn(f'unknown prerelease version {version.pre[0]}, '
                              'version checking may go wrong')
            else:
                val = mapping[version.pre[0]]
            release.extend([val, version.pre[-1]])
        else:
            release.extend([val, 0])
    elif version.is_postrelease:
        release.extend([1, version.post])
    else:
        release.extend([0, 0])
    return tuple(release)


mmcv_min_version = digit_version(MMCV_MIN)
mmcv_max_version = digit_version(MMCV_MAX)
mmcv_version = digit_version(mmcv.__version__)


assert (mmcv_min_version <= mmcv_version < mmcv_max_version), \
    f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    f'Please install mmcv>={mmcv_min_version}, <{mmcv_max_version}.'

__all__ = ['__version__', 'version_info', 'digit_version']
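A quick usage sketch of `digit_version` (illustrative; the values shown are what the function returns):

```python
from mmseg import digit_version

# pre-releases sort before the corresponding final release (alpha < beta < rc < final)
assert digit_version('1.3.13') < digit_version('1.4.0rc1') < digit_version('1.4.0')
print(digit_version('1.4.0rc1'))  # (1, 4, 0, 0, -1, 1)
```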
1,933
29.698413
78
py
mmsegmentation
mmsegmentation-master/mmseg/version.py
# Copyright (c) Open-MMLab. All rights reserved.

__version__ = '0.30.0'


def parse_version_info(version_str):
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            version_info.append(int(patch_version[0]))
            version_info.append(f'rc{patch_version[1]}')
    return tuple(version_info)


version_info = parse_version_info(__version__)
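A quick usage sketch of `parse_version_info` (illustrative):

```python
from mmseg.version import parse_version_info

print(parse_version_info('0.30.0'))    # (0, 30, 0)
print(parse_version_info('1.0.0rc1'))  # (1, 0, 0, 'rc1')
```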
502
25.473684
56
py
mmsegmentation
mmsegmentation-master/mmseg/apis/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .inference import inference_segmentor, init_segmentor, show_result_pyplot
from .test import multi_gpu_test, single_gpu_test
from .train import (get_root_logger, init_random_seed, set_random_seed,
                    train_segmentor)

__all__ = [
    'get_root_logger', 'set_random_seed', 'train_segmentor', 'init_segmentor',
    'inference_segmentor', 'multi_gpu_test', 'single_gpu_test',
    'show_result_pyplot', 'init_random_seed'
]
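A minimal usage sketch of the training helpers exported here (illustrative; assumes a default, non-distributed setup):

```python
from mmseg.apis import get_root_logger, init_random_seed, set_random_seed

seed = init_random_seed()  # picks a random seed (synced across ranks when distributed)
set_random_seed(seed, deterministic=False)
logger = get_root_logger()
logger.info(f'Set random seed to {seed}')
```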
489
39.833333
78
py
mmsegmentation
mmsegmentation-master/mmseg/apis/inference.py
# Copyright (c) OpenMMLab. All rights reserved. import matplotlib.pyplot as plt import mmcv import torch from mmcv.parallel import collate, scatter from mmcv.runner import load_checkpoint from mmseg.datasets.pipelines import Compose from mmseg.models import build_segmentor def init_segmentor(config, checkpoint=None, device='cuda:0'): """Initialize a segmentor from config file. Args: config (str or :obj:`mmcv.Config`): Config file path or the config object. checkpoint (str, optional): Checkpoint path. If left as None, the model will not load any weights. device (str, optional) CPU/CUDA device option. Default 'cuda:0'. Use 'cpu' for loading model on CPU. Returns: nn.Module: The constructed segmentor. """ if isinstance(config, str): config = mmcv.Config.fromfile(config) elif not isinstance(config, mmcv.Config): raise TypeError('config must be a filename or Config object, ' 'but got {}'.format(type(config))) config.model.pretrained = None config.model.train_cfg = None model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) if checkpoint is not None: checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') model.CLASSES = checkpoint['meta']['CLASSES'] model.PALETTE = checkpoint['meta']['PALETTE'] model.cfg = config # save the config in the model for convenience model.to(device) model.eval() return model class LoadImage: """A simple pipeline to load image.""" def __call__(self, results): """Call function to load images into results. Args: results (dict): A result dict contains the file name of the image to be read. Returns: dict: ``results`` will be returned containing loaded image. """ if isinstance(results['img'], str): results['filename'] = results['img'] results['ori_filename'] = results['img'] else: results['filename'] = None results['ori_filename'] = None img = mmcv.imread(results['img']) results['img'] = img results['img_shape'] = img.shape results['ori_shape'] = img.shape return results def inference_segmentor(model, imgs): """Inference image(s) with the segmentor. Args: model (nn.Module): The loaded segmentor. imgs (str/ndarray or list[str/ndarray]): Either image files or loaded images. Returns: (list[Tensor]): The segmentation result. """ cfg = model.cfg device = next(model.parameters()).device # model device # build the data pipeline test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] test_pipeline = Compose(test_pipeline) # prepare data data = [] imgs = imgs if isinstance(imgs, list) else [imgs] for img in imgs: img_data = dict(img=img) img_data = test_pipeline(img_data) data.append(img_data) data = collate(data, samples_per_gpu=len(imgs)) if next(model.parameters()).is_cuda: # scatter to specified GPU data = scatter(data, [device])[0] else: data['img_metas'] = [i.data[0] for i in data['img_metas']] # forward the model with torch.no_grad(): result = model(return_loss=False, rescale=True, **data) return result def show_result_pyplot(model, img, result, palette=None, fig_size=(15, 10), opacity=0.5, title='', block=True, out_file=None): """Visualize the segmentation results on the image. Args: model (nn.Module): The loaded segmentor. img (str or np.ndarray): Image filename or loaded image. result (list): The segmentation result. palette (list[list[int]]] | None): The palette of segmentation map. If None is given, random palette will be generated. Default: None fig_size (tuple): Figure size of the pyplot figure. opacity(float): Opacity of painted segmentation map. Default 0.5. Must be in (0, 1] range. title (str): The title of pyplot figure. 
Default is ''. block (bool): Whether to block the pyplot figure. Default is True. out_file (str or None): The path to write the image. Default: None. """ if hasattr(model, 'module'): model = model.module img = model.show_result( img, result, palette=palette, show=False, opacity=opacity) plt.figure(figsize=fig_size) plt.imshow(mmcv.bgr2rgb(img)) plt.title(title) plt.tight_layout() plt.show(block=block) if out_file is not None: mmcv.imwrite(img, out_file)
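A minimal end-to-end usage sketch of the three APIs defined above (illustrative; config and checkpoint paths are placeholders):

```python
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot

config_file = 'configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint_file = 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
result = inference_segmentor(model, 'demo/demo.png')
show_result_pyplot(model, 'demo/demo.png', result, opacity=0.5)
```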
4,967
33.027397
79
py
mmsegmentation
mmsegmentation-master/mmseg/apis/test.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import tempfile import warnings import mmcv import numpy as np import torch from mmcv.engine import collect_results_cpu, collect_results_gpu from mmcv.image import tensor2imgs from mmcv.runner import get_dist_info def np2tmp(array, temp_file_name=None, tmpdir=None): """Save ndarray to local numpy file. Args: array (ndarray): Ndarray to save. temp_file_name (str): Numpy file name. If 'temp_file_name=None', this function will generate a file name with tempfile.NamedTemporaryFile to save ndarray. Default: None. tmpdir (str): Temporary directory to save Ndarray files. Default: None. Returns: str: The numpy file name. """ if temp_file_name is None: temp_file_name = tempfile.NamedTemporaryFile( suffix='.npy', delete=False, dir=tmpdir).name np.save(temp_file_name, array) return temp_file_name def single_gpu_test(model, data_loader, show=False, out_dir=None, efficient_test=False, opacity=0.5, pre_eval=False, format_only=False, format_args={}): """Test with single GPU by progressive mode. Args: model (nn.Module): Model to be tested. data_loader (utils.data.Dataloader): Pytorch data loader. show (bool): Whether show results during inference. Default: False. out_dir (str, optional): If specified, the results will be dumped into the directory to save output results. efficient_test (bool): Whether save the results as local numpy files to save CPU memory during evaluation. Mutually exclusive with pre_eval and format_results. Default: False. opacity(float): Opacity of painted segmentation map. Default 0.5. Must be in (0, 1] range. pre_eval (bool): Use dataset.pre_eval() function to generate pre_results for metric evaluation. Mutually exclusive with efficient_test and format_results. Default: False. format_only (bool): Only format result for results commit. Mutually exclusive with pre_eval and efficient_test. Default: False. format_args (dict): The args for format_results. Default: {}. Returns: list: list of evaluation pre-results or list of save file names. """ if efficient_test: warnings.warn( 'DeprecationWarning: ``efficient_test`` will be deprecated, the ' 'evaluation is CPU memory friendly with pre_eval=True') mmcv.mkdir_or_exist('.efficient_test') # when none of them is set true, return segmentation results as # a list of np.array. assert [efficient_test, pre_eval, format_only].count(True) <= 1, \ '``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \ 'exclusive, only one of them could be true .' model.eval() results = [] dataset = data_loader.dataset prog_bar = mmcv.ProgressBar(len(dataset)) # The pipeline about how the data_loader retrieval samples from dataset: # sampler -> batch_sampler -> indices # The indices are passed to dataset_fetcher to get data from dataset. 
# data_fetcher -> collate_fn(dataset[index]) -> data_sample # we use batch_sampler to get correct data idx loader_indices = data_loader.batch_sampler for batch_indices, data in zip(loader_indices, data_loader): with torch.no_grad(): result = model(return_loss=False, **data) if show or out_dir: img_tensor = data['img'][0] img_metas = data['img_metas'][0].data[0] imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) assert len(imgs) == len(img_metas) for img, img_meta in zip(imgs, img_metas): h, w, _ = img_meta['img_shape'] img_show = img[:h, :w, :] ori_h, ori_w = img_meta['ori_shape'][:-1] img_show = mmcv.imresize(img_show, (ori_w, ori_h)) if out_dir: out_file = osp.join(out_dir, img_meta['ori_filename']) else: out_file = None model.module.show_result( img_show, result, palette=dataset.PALETTE, show=show, out_file=out_file, opacity=opacity) if efficient_test: result = [np2tmp(_, tmpdir='.efficient_test') for _ in result] if format_only: result = dataset.format_results( result, indices=batch_indices, **format_args) if pre_eval: # TODO: adapt samples_per_gpu > 1. # only samples_per_gpu=1 valid now result = dataset.pre_eval(result, indices=batch_indices) results.extend(result) else: results.extend(result) batch_size = len(result) for _ in range(batch_size): prog_bar.update() return results def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False, efficient_test=False, pre_eval=False, format_only=False, format_args={}): """Test model with multiple gpus by progressive mode. This method tests model with multiple gpus and collects the results under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' it encodes results to gpu tensors and use gpu communication for results collection. On cpu mode it saves the results on different gpus to 'tmpdir' and collects them by the rank 0 worker. Args: model (nn.Module): Model to be tested. data_loader (utils.data.Dataloader): Pytorch data loader. tmpdir (str): Path of directory to save the temporary results from different gpus under cpu mode. The same path is used for efficient test. Default: None. gpu_collect (bool): Option to use either gpu or cpu to collect results. Default: False. efficient_test (bool): Whether save the results as local numpy files to save CPU memory during evaluation. Mutually exclusive with pre_eval and format_results. Default: False. pre_eval (bool): Use dataset.pre_eval() function to generate pre_results for metric evaluation. Mutually exclusive with efficient_test and format_results. Default: False. format_only (bool): Only format result for results commit. Mutually exclusive with pre_eval and efficient_test. Default: False. format_args (dict): The args for format_results. Default: {}. Returns: list: list of evaluation pre-results or list of save file names. """ if efficient_test: warnings.warn( 'DeprecationWarning: ``efficient_test`` will be deprecated, the ' 'evaluation is CPU memory friendly with pre_eval=True') mmcv.mkdir_or_exist('.efficient_test') # when none of them is set true, return segmentation results as # a list of np.array. assert [efficient_test, pre_eval, format_only].count(True) <= 1, \ '``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \ 'exclusive, only one of them could be true .' model.eval() results = [] dataset = data_loader.dataset # The pipeline about how the data_loader retrieval samples from dataset: # sampler -> batch_sampler -> indices # The indices are passed to dataset_fetcher to get data from dataset. 
# data_fetcher -> collate_fn(dataset[index]) -> data_sample # we use batch_sampler to get correct data idx # batch_sampler based on DistributedSampler, the indices only point to data # samples of related machine. loader_indices = data_loader.batch_sampler rank, world_size = get_dist_info() if rank == 0: prog_bar = mmcv.ProgressBar(len(dataset)) for batch_indices, data in zip(loader_indices, data_loader): with torch.no_grad(): result = model(return_loss=False, rescale=True, **data) if efficient_test: result = [np2tmp(_, tmpdir='.efficient_test') for _ in result] if format_only: result = dataset.format_results( result, indices=batch_indices, **format_args) if pre_eval: # TODO: adapt samples_per_gpu > 1. # only samples_per_gpu=1 valid now result = dataset.pre_eval(result, indices=batch_indices) results.extend(result) if rank == 0: batch_size = len(result) * world_size for _ in range(batch_size): prog_bar.update() # collect results from all ranks if gpu_collect: results = collect_results_gpu(results, len(dataset)) else: results = collect_results_cpu(results, len(dataset), tmpdir) return results
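For orientation, here is a minimal single-GPU usage sketch of the test API above. The config and checkpoint paths are placeholders, and `pre_eval=True` follows the memory-friendly path recommended by the deprecation warning in the code:

```python
# Hedged sketch: the config/checkpoint paths below are placeholders, not shipped files.
from mmcv import Config
from mmcv.parallel import MMDataParallel
from mmcv.runner import load_checkpoint

from mmseg.apis import single_gpu_test
from mmseg.datasets import build_dataloader, build_dataset
from mmseg.models import build_segmentor

cfg = Config.fromfile('configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')
dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
    dataset, samples_per_gpu=1, workers_per_gpu=2, dist=False, shuffle=False)

model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg'))
load_checkpoint(model, 'checkpoints/pspnet_r50_cityscapes.pth', map_location='cpu')
model = MMDataParallel(model.cuda(), device_ids=[0])

# pre_eval=True collects per-image intersection/union statistics instead of full maps.
results = single_gpu_test(model, data_loader, pre_eval=True)
print(dataset.evaluate(results, metric='mIoU'))
```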
file_length: 9,246 | avg_line_length: 38.517094 | max_line_length: 79 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/apis/train.py
# Copyright (c) OpenMMLab. All rights reserved. import os import random import warnings import mmcv import numpy as np import torch import torch.distributed as dist from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner, build_runner, get_dist_info) from mmcv.utils import build_from_cfg from mmseg import digit_version from mmseg.core import DistEvalHook, EvalHook, build_optimizer from mmseg.datasets import build_dataloader, build_dataset from mmseg.utils import (build_ddp, build_dp, find_latest_checkpoint, get_root_logger) def init_random_seed(seed=None, device='cuda'): """Initialize random seed. If the seed is not set, the seed will be automatically randomized, and then broadcast to all processes to prevent some potential bugs. Args: seed (int, Optional): The seed. Default to None. device (str): The device where the seed will be put on. Default to 'cuda'. Returns: int: Seed to be used. """ if seed is not None: return seed # Make sure all ranks share the same random seed to prevent # some potential bugs. Please refer to # https://github.com/open-mmlab/mmdetection/issues/6339 rank, world_size = get_dist_info() seed = np.random.randint(2**31) if world_size == 1: return seed if rank == 0: random_num = torch.tensor(seed, dtype=torch.int32, device=device) else: random_num = torch.tensor(0, dtype=torch.int32, device=device) dist.broadcast(random_num, src=0) return random_num.item() def set_random_seed(seed, deterministic=False): """Set random seed. Args: seed (int): Seed to be used. deterministic (bool): Whether to set the deterministic option for CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` to True and `torch.backends.cudnn.benchmark` to False. Default: False. """ random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) if deterministic: torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False def train_segmentor(model, dataset, cfg, distributed=False, validate=False, timestamp=None, meta=None): """Launch segmentor training.""" logger = get_root_logger(cfg.log_level) # prepare data loaders dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] # The default loader config loader_cfg = dict( # cfg.gpus will be ignored if distributed num_gpus=len(cfg.gpu_ids), dist=distributed, seed=cfg.seed, drop_last=True) # The overall dataloader settings loader_cfg.update({ k: v for k, v in cfg.data.items() if k not in [ 'train', 'val', 'test', 'train_dataloader', 'val_dataloader', 'test_dataloader' ] }) # The specific dataloader settings train_loader_cfg = {**loader_cfg, **cfg.data.get('train_dataloader', {})} data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset] # put model on devices if distributed: find_unused_parameters = cfg.get('find_unused_parameters', False) # Sets the `find_unused_parameters` parameter in # DDP wrapper model = build_ddp( model, cfg.device, device_ids=[int(os.environ['LOCAL_RANK'])], broadcast_buffers=False, find_unused_parameters=find_unused_parameters) else: if not torch.cuda.is_available(): assert digit_version(mmcv.__version__) >= digit_version('1.4.4'), \ 'Please use MMCV >= 1.4.4 for CPU training!' 
model = build_dp(model, cfg.device, device_ids=cfg.gpu_ids) # build runner optimizer = build_optimizer(model, cfg.optimizer) if cfg.get('runner') is None: cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} warnings.warn( 'config is now expected to have a `runner` section, ' 'please set `runner` in your config.', UserWarning) runner = build_runner( cfg.runner, default_args=dict( model=model, batch_processor=None, optimizer=optimizer, work_dir=cfg.work_dir, logger=logger, meta=meta)) if cfg.device == 'npu': optimiter_config = dict(type='Fp16OptimizerHook', loss_scale='dynamic') cfg.optimizer_config = optimiter_config if \ not cfg.optimizer_config else cfg.optimizer_config # register hooks runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, cfg.checkpoint_config, cfg.log_config, cfg.get('momentum_config', None)) if distributed: # when distributed training by epoch, using`DistSamplerSeedHook` to set # the different seed to distributed sampler for each epoch, it will # shuffle dataset at each epoch and avoid overfitting. if isinstance(runner, EpochBasedRunner): runner.register_hook(DistSamplerSeedHook()) # an ugly walkaround to make the .log and .log.json filenames the same runner.timestamp = timestamp # register eval hooks if validate: val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) # The specific dataloader settings val_loader_cfg = { **loader_cfg, 'samples_per_gpu': 1, 'shuffle': False, # Not shuffle by default **cfg.data.get('val_dataloader', {}), } val_dataloader = build_dataloader(val_dataset, **val_loader_cfg) eval_cfg = cfg.get('evaluation', {}) eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' eval_hook = DistEvalHook if distributed else EvalHook # In this PR (https://github.com/open-mmlab/mmcv/pull/1193), the # priority of IterTimerHook has been modified from 'NORMAL' to 'LOW'. runner.register_hook( eval_hook(val_dataloader, **eval_cfg), priority='LOW') # user-defined hooks if cfg.get('custom_hooks', None): custom_hooks = cfg.custom_hooks assert isinstance(custom_hooks, list), \ f'custom_hooks expect list type, but got {type(custom_hooks)}' for hook_cfg in cfg.custom_hooks: assert isinstance(hook_cfg, dict), \ 'Each item in custom_hooks expects dict type, but got ' \ f'{type(hook_cfg)}' hook_cfg = hook_cfg.copy() priority = hook_cfg.pop('priority', 'NORMAL') hook = build_from_cfg(hook_cfg, HOOKS) runner.register_hook(hook, priority=priority) if cfg.resume_from is None and cfg.get('auto_resume'): resume_from = find_latest_checkpoint(cfg.work_dir) if resume_from is not None: cfg.resume_from = resume_from if cfg.resume_from: if cfg.device == 'npu': runner.resume(cfg.resume_from, map_location='npu') else: runner.resume(cfg.resume_from) elif cfg.load_from: runner.load_checkpoint(cfg.load_from) runner.run(data_loaders, cfg.workflow)
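A condensed sketch of how the training entry points above are typically driven, mirroring the flow of `tools/train.py`. The config path, work dir, and device handling are illustrative assumptions; everything else (optimizer, runner, dataloaders) comes from the loaded config:

```python
# Hedged sketch: config path and work_dir are placeholders.
import torch
from mmcv import Config

from mmseg.apis import init_random_seed, set_random_seed, train_segmentor
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor

cfg = Config.fromfile('configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py')
cfg.work_dir = './work_dirs/fcn_demo'
cfg.gpu_ids = [0]
cfg.device = 'cuda' if torch.cuda.is_available() else 'cpu'

# init_random_seed broadcasts one seed to all ranks in distributed runs.
seed = init_random_seed(None, device=cfg.device)
set_random_seed(seed, deterministic=False)
cfg.seed = seed

datasets = [build_dataset(cfg.data.train)]
model = build_segmentor(
    cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
train_segmentor(model, datasets, cfg, distributed=False, validate=True)
```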
file_length: 7,455 | avg_line_length: 35.54902 | max_line_length: 79 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .builder import (OPTIMIZER_BUILDERS, build_optimizer,
                      build_optimizer_constructor)
from .evaluation import *  # noqa: F401, F403
from .hook import *  # noqa: F401, F403
from .optimizers import *  # noqa: F401, F403
from .seg import *  # noqa: F401, F403
from .utils import *  # noqa: F401, F403

__all__ = [
    'OPTIMIZER_BUILDERS', 'build_optimizer', 'build_optimizer_constructor'
]
file_length: 460 | avg_line_length: 34.461538 | max_line_length: 74 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/builder.py
# Copyright (c) OpenMMLab. All rights reserved.
import copy

from mmcv.runner.optimizer import OPTIMIZER_BUILDERS as MMCV_OPTIMIZER_BUILDERS
from mmcv.utils import Registry, build_from_cfg

OPTIMIZER_BUILDERS = Registry(
    'optimizer builder', parent=MMCV_OPTIMIZER_BUILDERS)


def build_optimizer_constructor(cfg):
    constructor_type = cfg.get('type')
    if constructor_type in OPTIMIZER_BUILDERS:
        return build_from_cfg(cfg, OPTIMIZER_BUILDERS)
    elif constructor_type in MMCV_OPTIMIZER_BUILDERS:
        return build_from_cfg(cfg, MMCV_OPTIMIZER_BUILDERS)
    else:
        raise KeyError(f'{constructor_type} is not registered '
                       'in the optimizer builder registry.')


def build_optimizer(model, cfg):
    optimizer_cfg = copy.deepcopy(cfg)
    constructor_type = optimizer_cfg.pop('constructor',
                                         'DefaultOptimizerConstructor')
    paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None)
    optim_constructor = build_optimizer_constructor(
        dict(
            type=constructor_type,
            optimizer_cfg=optimizer_cfg,
            paramwise_cfg=paramwise_cfg))
    optimizer = optim_constructor(model)
    return optimizer
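As a quick illustration, `build_optimizer` can be exercised on any `nn.Module`; the SGD hyper-parameters below are arbitrary example values:

```python
# Hedged example: the optimizer hyper-parameters are arbitrary.
import torch.nn as nn

from mmseg.core import build_optimizer

toy_model = nn.Conv2d(3, 8, kernel_size=3)
optimizer = build_optimizer(
    toy_model, dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005))
print(type(optimizer).__name__)  # SGD
```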
file_length: 1,218 | avg_line_length: 34.852941 | max_line_length: 79 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/evaluation/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .class_names import get_classes, get_palette
from .eval_hooks import DistEvalHook, EvalHook
from .metrics import (eval_metrics, intersect_and_union, mean_dice,
                      mean_fscore, mean_iou, pre_eval_to_metrics)

__all__ = [
    'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore',
    'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics',
    'intersect_and_union'
]
file_length: 465 | avg_line_length: 37.833333 | max_line_length: 72 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/evaluation/class_names.py
# Copyright (c) OpenMMLab. All rights reserved. import mmcv def cityscapes_classes(): """Cityscapes class names for external use.""" return [ 'road', 'sidewalk', 'building', 'wall', 'fence', 'pole', 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', 'bicycle' ] def ade_classes(): """ADE20K class names for external use.""" return [ 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', 'clock', 'flag' ] def voc_classes(): """Pascal VOC class names for external use.""" return [ 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' ] def cocostuff_classes(): """CocoStuff class names for external use.""" return [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner', 'blanket', 'branch', 'bridge', 'building-other', 'bush', 'cabinet', 'cage', 'cardboard', 'carpet', 'ceiling-other', 'ceiling-tile', 'cloth', 'clothes', 'clouds', 'counter', 'cupboard', 'curtain', 'desk-stuff', 'dirt', 'door-stuff', 'fence', 'floor-marble', 'floor-other', 'floor-stone', 'floor-tile', 'floor-wood', 'flower', 'fog', 
'food-other', 'fruit', 'furniture-other', 'grass', 'gravel', 'ground-other', 'hill', 'house', 'leaves', 'light', 'mat', 'metal', 'mirror-stuff', 'moss', 'mountain', 'mud', 'napkin', 'net', 'paper', 'pavement', 'pillow', 'plant-other', 'plastic', 'platform', 'playingfield', 'railing', 'railroad', 'river', 'road', 'rock', 'roof', 'rug', 'salad', 'sand', 'sea', 'shelf', 'sky-other', 'skyscraper', 'snow', 'solid-other', 'stairs', 'stone', 'straw', 'structural-other', 'table', 'tent', 'textile-other', 'towel', 'tree', 'vegetable', 'wall-brick', 'wall-concrete', 'wall-other', 'wall-panel', 'wall-stone', 'wall-tile', 'wall-wood', 'water-other', 'waterdrops', 'window-blind', 'window-other', 'wood' ] def loveda_classes(): """LoveDA class names for external use.""" return [ 'background', 'building', 'road', 'water', 'barren', 'forest', 'agricultural' ] def potsdam_classes(): """Potsdam class names for external use.""" return [ 'impervious_surface', 'building', 'low_vegetation', 'tree', 'car', 'clutter' ] def vaihingen_classes(): """Vaihingen class names for external use.""" return [ 'impervious_surface', 'building', 'low_vegetation', 'tree', 'car', 'clutter' ] def isaid_classes(): """iSAID class names for external use.""" return [ 'background', 'ship', 'store_tank', 'baseball_diamond', 'tennis_court', 'basketball_court', 'Ground_Track_Field', 'Bridge', 'Large_Vehicle', 'Small_Vehicle', 'Helicopter', 'Swimming_pool', 'Roundabout', 'Soccer_ball_field', 'plane', 'Harbor' ] def stare_classes(): """stare class names for external use.""" return ['background', 'vessel'] def occludedface_classes(): """occludedface class names for external use.""" return ['background', 'face'] def cityscapes_palette(): """Cityscapes palette for external use.""" return [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100], [0, 0, 230], [119, 11, 32]] def ade_palette(): """ADE20K palette for external use.""" return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], [0, 255, 163], 
[255, 153, 0], [0, 255, 10], [255, 112, 0], [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], [102, 255, 0], [92, 0, 255]] def voc_palette(): """Pascal VOC palette for external use.""" return [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] def cocostuff_palette(): """CocoStuff palette for external use.""" return [[0, 192, 64], [0, 192, 64], [0, 64, 96], [128, 192, 192], [0, 64, 64], [0, 192, 224], [0, 192, 192], [128, 192, 64], [0, 192, 96], [128, 192, 64], [128, 32, 192], [0, 0, 224], [0, 0, 64], [0, 160, 192], [128, 0, 96], [128, 0, 192], [0, 32, 192], [128, 128, 224], [0, 0, 192], [128, 160, 192], [128, 128, 0], [128, 0, 32], [128, 32, 0], [128, 0, 128], [64, 128, 32], [0, 160, 0], [0, 0, 0], [192, 128, 160], [0, 32, 0], [0, 128, 128], [64, 128, 160], [128, 160, 0], [0, 128, 0], [192, 128, 32], [128, 96, 128], [0, 0, 128], [64, 0, 32], [0, 224, 128], [128, 0, 0], [192, 0, 160], [0, 96, 128], [128, 128, 128], [64, 0, 160], [128, 224, 128], [128, 128, 64], [192, 0, 32], [128, 96, 0], [128, 0, 192], [0, 128, 32], [64, 224, 0], [0, 0, 64], [128, 128, 160], [64, 96, 0], [0, 128, 192], [0, 128, 160], [192, 224, 0], [0, 128, 64], [128, 128, 32], [192, 32, 128], [0, 64, 192], [0, 0, 32], [64, 160, 128], [128, 64, 64], [128, 0, 160], [64, 32, 128], [128, 192, 192], [0, 0, 160], [192, 160, 128], [128, 192, 0], [128, 0, 96], [192, 32, 0], [128, 64, 128], [64, 128, 96], [64, 160, 0], [0, 64, 0], [192, 128, 224], [64, 32, 0], [0, 192, 128], [64, 128, 224], [192, 160, 0], [0, 192, 0], [192, 128, 96], [192, 96, 128], [0, 64, 128], [64, 0, 96], [64, 224, 128], [128, 64, 0], [192, 0, 224], [64, 96, 128], [128, 192, 128], [64, 0, 224], [192, 224, 128], [128, 192, 64], [192, 0, 96], [192, 96, 0], [128, 64, 192], [0, 128, 96], [0, 224, 0], [64, 64, 64], [128, 128, 224], [0, 96, 0], [64, 192, 192], [0, 128, 224], [128, 224, 0], [64, 192, 64], [128, 128, 96], [128, 32, 128], [64, 0, 192], [0, 64, 96], [0, 160, 128], [192, 0, 64], [128, 64, 224], [0, 32, 128], [192, 128, 192], [0, 64, 224], [128, 160, 128], [192, 128, 0], [128, 64, 32], [128, 32, 64], [192, 0, 128], [64, 192, 32], [0, 160, 64], [64, 0, 0], [192, 192, 160], [0, 32, 64], [64, 128, 128], [64, 192, 160], [128, 160, 64], [64, 128, 0], [192, 192, 32], [128, 96, 192], [64, 0, 128], [64, 64, 32], [0, 224, 192], [192, 0, 0], [192, 64, 160], [0, 96, 192], [192, 128, 128], [64, 64, 160], [128, 224, 192], [192, 128, 64], [192, 64, 32], [128, 96, 64], [192, 0, 192], [0, 192, 32], [64, 224, 64], [64, 0, 64], [128, 192, 160], [64, 96, 64], [64, 128, 192], [0, 192, 160], [192, 224, 64], [64, 128, 64], [128, 192, 32], [192, 32, 192], [64, 64, 192], [0, 64, 32], 
[64, 160, 192], [192, 64, 64], [128, 64, 160], [64, 32, 192], [192, 192, 192], [0, 64, 160], [192, 160, 192], [192, 192, 0], [128, 64, 96], [192, 32, 64], [192, 64, 128], [64, 192, 96], [64, 160, 64], [64, 64, 0]] def loveda_palette(): """LoveDA palette for external use.""" return [[255, 255, 255], [255, 0, 0], [255, 255, 0], [0, 0, 255], [159, 129, 183], [0, 255, 0], [255, 195, 128]] def potsdam_palette(): """Potsdam palette for external use.""" return [[255, 255, 255], [0, 0, 255], [0, 255, 255], [0, 255, 0], [255, 255, 0], [255, 0, 0]] def vaihingen_palette(): """Vaihingen palette for external use.""" return [[255, 255, 255], [0, 0, 255], [0, 255, 255], [0, 255, 0], [255, 255, 0], [255, 0, 0]] def isaid_palette(): """iSAID palette for external use.""" return [[0, 0, 0], [0, 0, 63], [0, 63, 63], [0, 63, 0], [0, 63, 127], [0, 63, 191], [0, 63, 255], [0, 127, 63], [0, 127, 127], [0, 0, 127], [0, 0, 191], [0, 0, 255], [0, 191, 127], [0, 127, 191], [0, 127, 255], [0, 100, 155]] def stare_palette(): """STARE palette for external use.""" return [[120, 120, 120], [6, 230, 230]] def occludedface_palette(): """occludedface palette for external use.""" return [[0, 0, 0], [128, 0, 0]] dataset_aliases = { 'cityscapes': ['cityscapes'], 'ade': ['ade', 'ade20k'], 'voc': ['voc', 'pascal_voc', 'voc12', 'voc12aug'], 'loveda': ['loveda'], 'potsdam': ['potsdam'], 'vaihingen': ['vaihingen'], 'cocostuff': [ 'cocostuff', 'cocostuff10k', 'cocostuff164k', 'coco-stuff', 'coco-stuff10k', 'coco-stuff164k', 'coco_stuff', 'coco_stuff10k', 'coco_stuff164k' ], 'isaid': ['isaid', 'iSAID'], 'stare': ['stare', 'STARE'], 'occludedface': ['occludedface'] } def get_classes(dataset): """Get class names of a dataset.""" alias2name = {} for name, aliases in dataset_aliases.items(): for alias in aliases: alias2name[alias] = name if mmcv.is_str(dataset): if dataset in alias2name: labels = eval(alias2name[dataset] + '_classes()') else: raise ValueError(f'Unrecognized dataset: {dataset}') else: raise TypeError(f'dataset must a str, but got {type(dataset)}') return labels def get_palette(dataset): """Get class palette (RGB) of a dataset.""" alias2name = {} for name, aliases in dataset_aliases.items(): for alias in aliases: alias2name[alias] = name if mmcv.is_str(dataset): if dataset in alias2name: labels = eval(alias2name[dataset] + '_palette()') else: raise ValueError(f'Unrecognized dataset: {dataset}') else: raise TypeError(f'dataset must a str, but got {type(dataset)}') return labels
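A small sanity check of the lookup helpers defined above; the expected lengths follow directly from the class lists in this file:

```python
from mmseg.core.evaluation import get_classes, get_palette

classes = get_classes('cityscapes')  # 19 names, starting with 'road'
palette = get_palette('cityscapes')  # 19 RGB triples, e.g. [128, 64, 128]
assert len(classes) == len(palette) == 19

# Aliases from `dataset_aliases` are accepted too, e.g. 'ade20k' maps to the ADE20K lists.
assert len(get_classes('ade20k')) == 150
```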
file_length: 15,375 | avg_line_length: 45.878049 | max_line_length: 79 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/evaluation/eval_hooks.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import warnings import torch.distributed as dist from mmcv.runner import DistEvalHook as _DistEvalHook from mmcv.runner import EvalHook as _EvalHook from torch.nn.modules.batchnorm import _BatchNorm class EvalHook(_EvalHook): """Single GPU EvalHook, with efficient test support. Args: by_epoch (bool): Determine perform evaluation by epoch or by iteration. If set to True, it will perform by epoch. Otherwise, by iteration. Default: False. efficient_test (bool): Whether save the results as local numpy files to save CPU memory during evaluation. Default: False. pre_eval (bool): Whether to use progressive mode to evaluate model. Default: False. Returns: list: The prediction results. """ greater_keys = ['mIoU', 'mAcc', 'aAcc'] def __init__(self, *args, by_epoch=False, efficient_test=False, pre_eval=False, **kwargs): super().__init__(*args, by_epoch=by_epoch, **kwargs) self.pre_eval = pre_eval self.latest_results = None if efficient_test: warnings.warn( 'DeprecationWarning: ``efficient_test`` for evaluation hook ' 'is deprecated, the evaluation hook is CPU memory friendly ' 'with ``pre_eval=True`` as argument for ``single_gpu_test()`` ' 'function') def _do_evaluate(self, runner): """perform evaluation and save ckpt.""" if not self._should_evaluate(runner): return from mmseg.apis import single_gpu_test results = single_gpu_test( runner.model, self.dataloader, show=False, pre_eval=self.pre_eval) self.latest_results = results runner.log_buffer.clear() runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) key_score = self.evaluate(runner, results) if self.save_best: self._save_ckpt(runner, key_score) class DistEvalHook(_DistEvalHook): """Distributed EvalHook, with efficient test support. Args: by_epoch (bool): Determine perform evaluation by epoch or by iteration. If set to True, it will perform by epoch. Otherwise, by iteration. Default: False. efficient_test (bool): Whether save the results as local numpy files to save CPU memory during evaluation. Default: False. pre_eval (bool): Whether to use progressive mode to evaluate model. Default: False. Returns: list: The prediction results. """ greater_keys = ['mIoU', 'mAcc', 'aAcc'] def __init__(self, *args, by_epoch=False, efficient_test=False, pre_eval=False, **kwargs): super().__init__(*args, by_epoch=by_epoch, **kwargs) self.pre_eval = pre_eval self.latest_results = None if efficient_test: warnings.warn( 'DeprecationWarning: ``efficient_test`` for evaluation hook ' 'is deprecated, the evaluation hook is CPU memory friendly ' 'with ``pre_eval=True`` as argument for ``multi_gpu_test()`` ' 'function') def _do_evaluate(self, runner): """perform evaluation and save ckpt.""" # Synchronization of BatchNorm's buffer (running_mean # and running_var) is not supported in the DDP of pytorch, # which may cause the inconsistent performance of models in # different ranks, so we broadcast BatchNorm's buffers # of rank 0 to other ranks to avoid this. 
if self.broadcast_bn_buffer: model = runner.model for name, module in model.named_modules(): if isinstance(module, _BatchNorm) and module.track_running_stats: dist.broadcast(module.running_var, 0) dist.broadcast(module.running_mean, 0) if not self._should_evaluate(runner): return tmpdir = self.tmpdir if tmpdir is None: tmpdir = osp.join(runner.work_dir, '.eval_hook') from mmseg.apis import multi_gpu_test results = multi_gpu_test( runner.model, self.dataloader, tmpdir=tmpdir, gpu_collect=self.gpu_collect, pre_eval=self.pre_eval) self.latest_results = results runner.log_buffer.clear() if runner.rank == 0: print('\n') runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) key_score = self.evaluate(runner, results) if self.save_best: self._save_ckpt(runner, key_score)
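These hooks are normally instantiated by `train_segmentor` from the `evaluation` field of a config rather than constructed by hand; a minimal illustrative snippet (interval, metric, and save_best are example values):

```python
# Illustrative config fragment; interval/metric/save_best are example values.
evaluation = dict(interval=4000, metric='mIoU', pre_eval=True, save_best='mIoU')
```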
file_length: 4,900 | avg_line_length: 35.849624 | max_line_length: 79 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/evaluation/metrics.py
# Copyright (c) OpenMMLab. All rights reserved. from collections import OrderedDict import mmcv import numpy as np import torch def f_score(precision, recall, beta=1): """calculate the f-score value. Args: precision (float | torch.Tensor): The precision value. recall (float | torch.Tensor): The recall value. beta (int): Determines the weight of recall in the combined score. Default: False. Returns: [torch.tensor]: The f-score value. """ score = (1 + beta**2) * (precision * recall) / ( (beta**2 * precision) + recall) return score def intersect_and_union(pred_label, label, num_classes, ignore_index, label_map=dict(), reduce_zero_label=False): """Calculate intersection and Union. Args: pred_label (ndarray | str): Prediction segmentation map or predict result filename. label (ndarray | str): Ground truth segmentation map or label filename. num_classes (int): Number of categories. ignore_index (int): Index that will be ignored in evaluation. label_map (dict): Mapping old labels to new labels. The parameter will work only when label is str. Default: dict(). reduce_zero_label (bool): Whether ignore zero label. The parameter will work only when label is str. Default: False. Returns: torch.Tensor: The intersection of prediction and ground truth histogram on all classes. torch.Tensor: The union of prediction and ground truth histogram on all classes. torch.Tensor: The prediction histogram on all classes. torch.Tensor: The ground truth histogram on all classes. """ if isinstance(pred_label, str): pred_label = torch.from_numpy(np.load(pred_label)) else: pred_label = torch.from_numpy((pred_label)) if isinstance(label, str): label = torch.from_numpy( mmcv.imread(label, flag='unchanged', backend='pillow')) else: label = torch.from_numpy(label) if reduce_zero_label: label[label == 0] = 255 label = label - 1 label[label == 254] = 255 if label_map is not None: label_copy = label.clone() for old_id, new_id in label_map.items(): label[label_copy == old_id] = new_id mask = (label != ignore_index) pred_label = pred_label[mask] label = label[mask] intersect = pred_label[pred_label == label] area_intersect = torch.histc( intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) area_pred_label = torch.histc( pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) area_label = torch.histc( label.float(), bins=(num_classes), min=0, max=num_classes - 1) area_union = area_pred_label + area_label - area_intersect return area_intersect, area_union, area_pred_label, area_label def total_intersect_and_union(results, gt_seg_maps, num_classes, ignore_index, label_map=dict(), reduce_zero_label=False): """Calculate Total Intersection and Union. Args: results (list[ndarray] | list[str]): List of prediction segmentation maps or list of prediction result filenames. gt_seg_maps (list[ndarray] | list[str] | Iterables): list of ground truth segmentation maps or list of label filenames. num_classes (int): Number of categories. ignore_index (int): Index that will be ignored in evaluation. label_map (dict): Mapping old labels to new labels. Default: dict(). reduce_zero_label (bool): Whether ignore zero label. Default: False. Returns: ndarray: The intersection of prediction and ground truth histogram on all classes. ndarray: The union of prediction and ground truth histogram on all classes. ndarray: The prediction histogram on all classes. ndarray: The ground truth histogram on all classes. 
""" total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) for result, gt_seg_map in zip(results, gt_seg_maps): area_intersect, area_union, area_pred_label, area_label = \ intersect_and_union( result, gt_seg_map, num_classes, ignore_index, label_map, reduce_zero_label) total_area_intersect += area_intersect total_area_union += area_union total_area_pred_label += area_pred_label total_area_label += area_label return total_area_intersect, total_area_union, total_area_pred_label, \ total_area_label def mean_iou(results, gt_seg_maps, num_classes, ignore_index, nan_to_num=None, label_map=dict(), reduce_zero_label=False): """Calculate Mean Intersection and Union (mIoU) Args: results (list[ndarray] | list[str]): List of prediction segmentation maps or list of prediction result filenames. gt_seg_maps (list[ndarray] | list[str]): list of ground truth segmentation maps or list of label filenames. num_classes (int): Number of categories. ignore_index (int): Index that will be ignored in evaluation. nan_to_num (int, optional): If specified, NaN values will be replaced by the numbers defined by the user. Default: None. label_map (dict): Mapping old labels to new labels. Default: dict(). reduce_zero_label (bool): Whether ignore zero label. Default: False. Returns: dict[str, float | ndarray]: <aAcc> float: Overall accuracy on all images. <Acc> ndarray: Per category accuracy, shape (num_classes, ). <IoU> ndarray: Per category IoU, shape (num_classes, ). """ iou_result = eval_metrics( results=results, gt_seg_maps=gt_seg_maps, num_classes=num_classes, ignore_index=ignore_index, metrics=['mIoU'], nan_to_num=nan_to_num, label_map=label_map, reduce_zero_label=reduce_zero_label) return iou_result def mean_dice(results, gt_seg_maps, num_classes, ignore_index, nan_to_num=None, label_map=dict(), reduce_zero_label=False): """Calculate Mean Dice (mDice) Args: results (list[ndarray] | list[str]): List of prediction segmentation maps or list of prediction result filenames. gt_seg_maps (list[ndarray] | list[str]): list of ground truth segmentation maps or list of label filenames. num_classes (int): Number of categories. ignore_index (int): Index that will be ignored in evaluation. nan_to_num (int, optional): If specified, NaN values will be replaced by the numbers defined by the user. Default: None. label_map (dict): Mapping old labels to new labels. Default: dict(). reduce_zero_label (bool): Whether ignore zero label. Default: False. Returns: dict[str, float | ndarray]: Default metrics. <aAcc> float: Overall accuracy on all images. <Acc> ndarray: Per category accuracy, shape (num_classes, ). <Dice> ndarray: Per category dice, shape (num_classes, ). """ dice_result = eval_metrics( results=results, gt_seg_maps=gt_seg_maps, num_classes=num_classes, ignore_index=ignore_index, metrics=['mDice'], nan_to_num=nan_to_num, label_map=label_map, reduce_zero_label=reduce_zero_label) return dice_result def mean_fscore(results, gt_seg_maps, num_classes, ignore_index, nan_to_num=None, label_map=dict(), reduce_zero_label=False, beta=1): """Calculate Mean F-Score (mFscore) Args: results (list[ndarray] | list[str]): List of prediction segmentation maps or list of prediction result filenames. gt_seg_maps (list[ndarray] | list[str]): list of ground truth segmentation maps or list of label filenames. 
num_classes (int): Number of categories. ignore_index (int): Index that will be ignored in evaluation. nan_to_num (int, optional): If specified, NaN values will be replaced by the numbers defined by the user. Default: None. label_map (dict): Mapping old labels to new labels. Default: dict(). reduce_zero_label (bool): Whether ignore zero label. Default: False. beta (int): Determines the weight of recall in the combined score. Default: False. Returns: dict[str, float | ndarray]: Default metrics. <aAcc> float: Overall accuracy on all images. <Fscore> ndarray: Per category recall, shape (num_classes, ). <Precision> ndarray: Per category precision, shape (num_classes, ). <Recall> ndarray: Per category f-score, shape (num_classes, ). """ fscore_result = eval_metrics( results=results, gt_seg_maps=gt_seg_maps, num_classes=num_classes, ignore_index=ignore_index, metrics=['mFscore'], nan_to_num=nan_to_num, label_map=label_map, reduce_zero_label=reduce_zero_label, beta=beta) return fscore_result def eval_metrics(results, gt_seg_maps, num_classes, ignore_index, metrics=['mIoU'], nan_to_num=None, label_map=dict(), reduce_zero_label=False, beta=1): """Calculate evaluation metrics Args: results (list[ndarray] | list[str]): List of prediction segmentation maps or list of prediction result filenames. gt_seg_maps (list[ndarray] | list[str] | Iterables): list of ground truth segmentation maps or list of label filenames. num_classes (int): Number of categories. ignore_index (int): Index that will be ignored in evaluation. metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. nan_to_num (int, optional): If specified, NaN values will be replaced by the numbers defined by the user. Default: None. label_map (dict): Mapping old labels to new labels. Default: dict(). reduce_zero_label (bool): Whether ignore zero label. Default: False. Returns: float: Overall accuracy on all images. ndarray: Per category accuracy, shape (num_classes, ). ndarray: Per category evaluation metrics, shape (num_classes, ). """ total_area_intersect, total_area_union, total_area_pred_label, \ total_area_label = total_intersect_and_union( results, gt_seg_maps, num_classes, ignore_index, label_map, reduce_zero_label) ret_metrics = total_area_to_metrics(total_area_intersect, total_area_union, total_area_pred_label, total_area_label, metrics, nan_to_num, beta) return ret_metrics def pre_eval_to_metrics(pre_eval_results, metrics=['mIoU'], nan_to_num=None, beta=1): """Convert pre-eval results to metrics. Args: pre_eval_results (list[tuple[torch.Tensor]]): per image eval results for computing evaluation metric metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. nan_to_num (int, optional): If specified, NaN values will be replaced by the numbers defined by the user. Default: None. Returns: float: Overall accuracy on all images. ndarray: Per category accuracy, shape (num_classes, ). ndarray: Per category evaluation metrics, shape (num_classes, ). """ # convert list of tuples to tuple of lists, e.g. 
# [(A_1, B_1, C_1, D_1), ..., (A_n, B_n, C_n, D_n)] to # ([A_1, ..., A_n], ..., [D_1, ..., D_n]) pre_eval_results = tuple(zip(*pre_eval_results)) assert len(pre_eval_results) == 4 total_area_intersect = sum(pre_eval_results[0]) total_area_union = sum(pre_eval_results[1]) total_area_pred_label = sum(pre_eval_results[2]) total_area_label = sum(pre_eval_results[3]) ret_metrics = total_area_to_metrics(total_area_intersect, total_area_union, total_area_pred_label, total_area_label, metrics, nan_to_num, beta) return ret_metrics def total_area_to_metrics(total_area_intersect, total_area_union, total_area_pred_label, total_area_label, metrics=['mIoU'], nan_to_num=None, beta=1): """Calculate evaluation metrics Args: total_area_intersect (ndarray): The intersection of prediction and ground truth histogram on all classes. total_area_union (ndarray): The union of prediction and ground truth histogram on all classes. total_area_pred_label (ndarray): The prediction histogram on all classes. total_area_label (ndarray): The ground truth histogram on all classes. metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. nan_to_num (int, optional): If specified, NaN values will be replaced by the numbers defined by the user. Default: None. Returns: float: Overall accuracy on all images. ndarray: Per category accuracy, shape (num_classes, ). ndarray: Per category evaluation metrics, shape (num_classes, ). """ if isinstance(metrics, str): metrics = [metrics] allowed_metrics = ['mIoU', 'mDice', 'mFscore'] if not set(metrics).issubset(set(allowed_metrics)): raise KeyError('metrics {} is not supported'.format(metrics)) all_acc = total_area_intersect.sum() / total_area_label.sum() ret_metrics = OrderedDict({'aAcc': all_acc}) for metric in metrics: if metric == 'mIoU': iou = total_area_intersect / total_area_union acc = total_area_intersect / total_area_label ret_metrics['IoU'] = iou ret_metrics['Acc'] = acc elif metric == 'mDice': dice = 2 * total_area_intersect / ( total_area_pred_label + total_area_label) acc = total_area_intersect / total_area_label ret_metrics['Dice'] = dice ret_metrics['Acc'] = acc elif metric == 'mFscore': precision = total_area_intersect / total_area_pred_label recall = total_area_intersect / total_area_label f_value = torch.tensor( [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) ret_metrics['Fscore'] = f_value ret_metrics['Precision'] = precision ret_metrics['Recall'] = recall ret_metrics = { metric: value.numpy() for metric, value in ret_metrics.items() } if nan_to_num is not None: ret_metrics = OrderedDict({ metric: np.nan_to_num(metric_value, nan=nan_to_num) for metric, metric_value in ret_metrics.items() }) return ret_metrics
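A tiny worked example of `eval_metrics` on hand-made 2x2 maps, to make the bookkeeping concrete (the arrays are synthetic):

```python
import numpy as np

from mmseg.core.evaluation import eval_metrics

# Two classes; exactly one of the four pixels is predicted wrongly.
pred = [np.array([[0, 1], [1, 1]], dtype=np.uint8)]
gt = [np.array([[0, 1], [0, 1]], dtype=np.uint8)]

res = eval_metrics(pred, gt, num_classes=2, ignore_index=255, metrics=['mIoU'])
# aAcc = 3/4, IoU = [1/2, 2/3], Acc = [1/2, 1].
print(res['aAcc'], res['IoU'], res['Acc'])
```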
file_length: 16,105 | avg_line_length: 39.56927 | max_line_length: 79 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/hook/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .wandblogger_hook import MMSegWandbHook

__all__ = ['MMSegWandbHook']
file_length: 123 | avg_line_length: 23.8 | max_line_length: 47 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/hook/wandblogger_hook.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import mmcv import numpy as np from mmcv.runner import HOOKS from mmcv.runner.dist_utils import master_only from mmcv.runner.hooks.checkpoint import CheckpointHook from mmcv.runner.hooks.logger.wandb import WandbLoggerHook from mmseg.core import DistEvalHook, EvalHook @HOOKS.register_module() class MMSegWandbHook(WandbLoggerHook): """Enhanced Wandb logger hook for MMSegmentation. Comparing with the :cls:`mmcv.runner.WandbLoggerHook`, this hook can not only automatically log all the metrics but also log the following extra information - saves model checkpoints as W&B Artifact, and logs model prediction as interactive W&B Tables. - Metrics: The MMSegWandbHook will automatically log training and validation metrics along with system metrics (CPU/GPU). - Checkpointing: If `log_checkpoint` is True, the checkpoint saved at every checkpoint interval will be saved as W&B Artifacts. This depends on the : class:`mmcv.runner.CheckpointHook` whose priority is higher than this hook. Please refer to https://docs.wandb.ai/guides/artifacts/model-versioning to learn more about model versioning with W&B Artifacts. - Checkpoint Metadata: If evaluation results are available for a given checkpoint artifact, it will have a metadata associated with it. The metadata contains the evaluation metrics computed on validation data with that checkpoint along with the current epoch. It depends on `EvalHook` whose priority is more than MMSegWandbHook. - Evaluation: At every evaluation interval, the `MMSegWandbHook` logs the model prediction as interactive W&B Tables. The number of samples logged is given by `num_eval_images`. Currently, the `MMSegWandbHook` logs the predicted segmentation masks along with the ground truth at every evaluation interval. This depends on the `EvalHook` whose priority is more than `MMSegWandbHook`. Also note that the data is just logged once and subsequent evaluation tables uses reference to the logged data to save memory usage. Please refer to https://docs.wandb.ai/guides/data-vis to learn more about W&B Tables. ``` Example: log_config = dict( ... hooks=[ ..., dict(type='MMSegWandbHook', init_kwargs={ 'entity': "YOUR_ENTITY", 'project': "YOUR_PROJECT_NAME" }, interval=50, log_checkpoint=True, log_checkpoint_metadata=True, num_eval_images=100, bbox_score_thr=0.3) ]) ``` Args: init_kwargs (dict): A dict passed to wandb.init to initialize a W&B run. Please refer to https://docs.wandb.ai/ref/python/init for possible key-value pairs. interval (int): Logging interval (every k iterations). Default 10. log_checkpoint (bool): Save the checkpoint at every checkpoint interval as W&B Artifacts. Use this for model versioning where each version is a checkpoint. Default: False log_checkpoint_metadata (bool): Log the evaluation metrics computed on the validation data with the checkpoint, along with current epoch as a metadata to that checkpoint. Default: True num_eval_images (int): Number of validation images to be logged. 
Default: 100 """ def __init__(self, init_kwargs=None, interval=50, log_checkpoint=False, log_checkpoint_metadata=False, num_eval_images=100, **kwargs): super(MMSegWandbHook, self).__init__(init_kwargs, interval, **kwargs) self.log_checkpoint = log_checkpoint self.log_checkpoint_metadata = ( log_checkpoint and log_checkpoint_metadata) self.num_eval_images = num_eval_images self.log_evaluation = (num_eval_images > 0) self.ckpt_hook: CheckpointHook = None self.eval_hook: EvalHook = None self.test_fn = None @master_only def before_run(self, runner): super(MMSegWandbHook, self).before_run(runner) # Check if EvalHook and CheckpointHook are available. for hook in runner.hooks: if isinstance(hook, CheckpointHook): self.ckpt_hook = hook if isinstance(hook, EvalHook): from mmseg.apis import single_gpu_test self.eval_hook = hook self.test_fn = single_gpu_test if isinstance(hook, DistEvalHook): from mmseg.apis import multi_gpu_test self.eval_hook = hook self.test_fn = multi_gpu_test # Check conditions to log checkpoint if self.log_checkpoint: if self.ckpt_hook is None: self.log_checkpoint = False self.log_checkpoint_metadata = False runner.logger.warning( 'To log checkpoint in MMSegWandbHook, `CheckpointHook` is' 'required, please check hooks in the runner.') else: self.ckpt_interval = self.ckpt_hook.interval # Check conditions to log evaluation if self.log_evaluation or self.log_checkpoint_metadata: if self.eval_hook is None: self.log_evaluation = False self.log_checkpoint_metadata = False runner.logger.warning( 'To log evaluation or checkpoint metadata in ' 'MMSegWandbHook, `EvalHook` or `DistEvalHook` in mmseg ' 'is required, please check whether the validation ' 'is enabled.') else: self.eval_interval = self.eval_hook.interval self.val_dataset = self.eval_hook.dataloader.dataset # Determine the number of samples to be logged. if self.num_eval_images > len(self.val_dataset): self.num_eval_images = len(self.val_dataset) runner.logger.warning( f'The num_eval_images ({self.num_eval_images}) is ' 'greater than the total number of validation samples ' f'({len(self.val_dataset)}). The complete validation ' 'dataset will be logged.') # Check conditions to log checkpoint metadata if self.log_checkpoint_metadata: assert self.ckpt_interval % self.eval_interval == 0, \ 'To log checkpoint metadata in MMSegWandbHook, the interval ' \ f'of checkpoint saving ({self.ckpt_interval}) should be ' \ 'divisible by the interval of evaluation ' \ f'({self.eval_interval}).' # Initialize evaluation table if self.log_evaluation: # Initialize data table self._init_data_table() # Add data to the data table self._add_ground_truth(runner) # Log ground truth data self._log_data_table() # for the reason of this double-layered structure, refer to # https://github.com/open-mmlab/mmdetection/issues/8145#issuecomment-1345343076 def after_train_iter(self, runner): if self.get_mode(runner) == 'train': # An ugly patch. The iter-based eval hook will call the # `after_train_iter` method of all logger hooks before evaluation. # Use this trick to skip that call. 
# Don't call super method at first, it will clear the log_buffer return super(MMSegWandbHook, self).after_train_iter(runner) else: super(MMSegWandbHook, self).after_train_iter(runner) self._after_train_iter(runner) @master_only def _after_train_iter(self, runner): if self.by_epoch: return # Save checkpoint and metadata if (self.log_checkpoint and self.every_n_iters(runner, self.ckpt_interval) or (self.ckpt_hook.save_last and self.is_last_iter(runner))): if self.log_checkpoint_metadata and self.eval_hook: metadata = { 'iter': runner.iter + 1, **self._get_eval_results() } else: metadata = None aliases = [f'iter_{runner.iter+1}', 'latest'] model_path = osp.join(self.ckpt_hook.out_dir, f'iter_{runner.iter+1}.pth') self._log_ckpt_as_artifact(model_path, aliases, metadata) # Save prediction table if self.log_evaluation and self.eval_hook._should_evaluate(runner): # Currently the results of eval_hook is not reused by wandb, so # wandb will run evaluation again internally. We will consider # refactoring this function afterwards results = self.test_fn(runner.model, self.eval_hook.dataloader) # Initialize evaluation table self._init_pred_table() # Log predictions self._log_predictions(results, runner) # Log the table self._log_eval_table(runner.iter + 1) @master_only def after_run(self, runner): self.wandb.finish() def _log_ckpt_as_artifact(self, model_path, aliases, metadata=None): """Log model checkpoint as W&B Artifact. Args: model_path (str): Path of the checkpoint to log. aliases (list): List of the aliases associated with this artifact. metadata (dict, optional): Metadata associated with this artifact. """ model_artifact = self.wandb.Artifact( f'run_{self.wandb.run.id}_model', type='model', metadata=metadata) model_artifact.add_file(model_path) self.wandb.log_artifact(model_artifact, aliases=aliases) def _get_eval_results(self): """Get model evaluation results.""" results = self.eval_hook.latest_results eval_results = self.val_dataset.evaluate( results, logger='silent', **self.eval_hook.eval_kwargs) return eval_results def _init_data_table(self): """Initialize the W&B Tables for validation data.""" columns = ['image_name', 'image'] self.data_table = self.wandb.Table(columns=columns) def _init_pred_table(self): """Initialize the W&B Tables for model evaluation.""" columns = ['image_name', 'ground_truth', 'prediction'] self.eval_table = self.wandb.Table(columns=columns) def _add_ground_truth(self, runner): # Get image loading pipeline from mmseg.datasets.pipelines import LoadImageFromFile img_loader = None for t in self.val_dataset.pipeline.transforms: if isinstance(t, LoadImageFromFile): img_loader = t if img_loader is None: self.log_evaluation = False runner.logger.warning( 'LoadImageFromFile is required to add images ' 'to W&B Tables.') return # Select the images to be logged. self.eval_image_indexs = np.arange(len(self.val_dataset)) # Set seed so that same validation set is logged each time. 
np.random.seed(42) np.random.shuffle(self.eval_image_indexs) self.eval_image_indexs = self.eval_image_indexs[:self.num_eval_images] classes = self.val_dataset.CLASSES self.class_id_to_label = {id: name for id, name in enumerate(classes)} self.class_set = self.wandb.Classes([{ 'id': id, 'name': name } for id, name in self.class_id_to_label.items()]) for idx in self.eval_image_indexs: img_info = self.val_dataset.img_infos[idx] image_name = img_info['filename'] # Get image and convert from BGR to RGB img_meta = img_loader( dict(img_info=img_info, img_prefix=self.val_dataset.img_dir)) image = mmcv.bgr2rgb(img_meta['img']) # Get segmentation mask seg_mask = self.val_dataset.get_gt_seg_map_by_idx(idx) # Dict of masks to be logged. wandb_masks = None if seg_mask.ndim == 2: wandb_masks = { 'ground_truth': { 'mask_data': seg_mask, 'class_labels': self.class_id_to_label } } # Log a row to the data table. self.data_table.add_data( image_name, self.wandb.Image( image, masks=wandb_masks, classes=self.class_set)) else: runner.logger.warning( f'The segmentation mask is {seg_mask.ndim}D which ' 'is not supported by W&B.') self.log_evaluation = False return def _log_predictions(self, results, runner): table_idxs = self.data_table_ref.get_index() assert len(table_idxs) == len(self.eval_image_indexs) assert len(results) == len(self.val_dataset) for ndx, eval_image_index in enumerate(self.eval_image_indexs): # Get the result pred_mask = results[eval_image_index] if pred_mask.ndim == 2: wandb_masks = { 'prediction': { 'mask_data': pred_mask, 'class_labels': self.class_id_to_label } } # Log a row to the data table. self.eval_table.add_data( self.data_table_ref.data[ndx][0], self.data_table_ref.data[ndx][1], self.wandb.Image( self.data_table_ref.data[ndx][1], masks=wandb_masks, classes=self.class_set)) else: runner.logger.warning( 'The predictio segmentation mask is ' f'{pred_mask.ndim}D which is not supported by W&B.') self.log_evaluation = False return def _log_data_table(self): """Log the W&B Tables for validation data as artifact and calls `use_artifact` on it so that the evaluation table can use the reference of already uploaded images. This allows the data to be uploaded just once. """ data_artifact = self.wandb.Artifact('val', type='dataset') data_artifact.add(self.data_table, 'val_data') self.wandb.run.use_artifact(data_artifact) data_artifact.wait() self.data_table_ref = data_artifact.get('val_data') def _log_eval_table(self, iter): """Log the W&B Tables for model evaluation. The table will be logged multiple times creating new version. Use this to compare models at different intervals interactively. """ pred_artifact = self.wandb.Artifact( f'run_{self.wandb.run.id}_pred', type='evaluation') pred_artifact.add(self.eval_table, 'eval_data') self.wandb.run.log_artifact(pred_artifact)
file_length: 15,592 | avg_line_length: 41.02965 | max_line_length: 83 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/optimizers/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .layer_decay_optimizer_constructor import (
    LayerDecayOptimizerConstructor, LearningRateDecayOptimizerConstructor)

__all__ = [
    'LearningRateDecayOptimizerConstructor', 'LayerDecayOptimizerConstructor'
]
file_length: 265 | avg_line_length: 32.25 | max_line_length: 77 | extension_type: py

repo: mmsegmentation
file: mmsegmentation-master/mmseg/core/optimizers/layer_decay_optimizer_constructor.py
# Copyright (c) OpenMMLab. All rights reserved. import json import warnings from mmcv.runner import DefaultOptimizerConstructor, get_dist_info from mmseg.utils import get_root_logger from ..builder import OPTIMIZER_BUILDERS def get_layer_id_for_convnext(var_name, max_layer_id): """Get the layer id to set the different learning rates in ``layer_wise`` decay_type. Args: var_name (str): The key of the model. max_layer_id (int): Maximum number of backbone layers. Returns: int: The id number corresponding to different learning rate in ``LearningRateDecayOptimizerConstructor``. """ if var_name in ('backbone.cls_token', 'backbone.mask_token', 'backbone.pos_embed'): return 0 elif var_name.startswith('backbone.downsample_layers'): stage_id = int(var_name.split('.')[2]) if stage_id == 0: layer_id = 0 elif stage_id == 1: layer_id = 2 elif stage_id == 2: layer_id = 3 elif stage_id == 3: layer_id = max_layer_id return layer_id elif var_name.startswith('backbone.stages'): stage_id = int(var_name.split('.')[2]) block_id = int(var_name.split('.')[3]) if stage_id == 0: layer_id = 1 elif stage_id == 1: layer_id = 2 elif stage_id == 2: layer_id = 3 + block_id // 3 elif stage_id == 3: layer_id = max_layer_id return layer_id else: return max_layer_id + 1 def get_stage_id_for_convnext(var_name, max_stage_id): """Get the stage id to set the different learning rates in ``stage_wise`` decay_type. Args: var_name (str): The key of the model. max_stage_id (int): Maximum number of backbone layers. Returns: int: The id number corresponding to different learning rate in ``LearningRateDecayOptimizerConstructor``. """ if var_name in ('backbone.cls_token', 'backbone.mask_token', 'backbone.pos_embed'): return 0 elif var_name.startswith('backbone.downsample_layers'): return 0 elif var_name.startswith('backbone.stages'): stage_id = int(var_name.split('.')[2]) return stage_id + 1 else: return max_stage_id - 1 def get_layer_id_for_vit(var_name, max_layer_id): """Get the layer id to set the different learning rates. Args: var_name (str): The key of the model. num_max_layer (int): Maximum number of backbone layers. Returns: int: Returns the layer id of the key. """ if var_name in ('backbone.cls_token', 'backbone.mask_token', 'backbone.pos_embed'): return 0 elif var_name.startswith('backbone.patch_embed'): return 0 elif var_name.startswith('backbone.layers'): layer_id = int(var_name.split('.')[2]) return layer_id + 1 else: return max_layer_id - 1 @OPTIMIZER_BUILDERS.register_module() class LearningRateDecayOptimizerConstructor(DefaultOptimizerConstructor): """Different learning rates are set for different layers of backbone. Note: Currently, this optimizer constructor is built for ConvNeXt, BEiT and MAE. """ def add_params(self, params, module, **kwargs): """Add all parameters of module to the params list. The parameters of the given module will be added to the list of param groups, with specific rules defined by paramwise_cfg. Args: params (list[dict]): A list of param groups, it will be modified in place. module (nn.Module): The module to be added. 
""" logger = get_root_logger() parameter_groups = {} logger.info(f'self.paramwise_cfg is {self.paramwise_cfg}') num_layers = self.paramwise_cfg.get('num_layers') + 2 decay_rate = self.paramwise_cfg.get('decay_rate') decay_type = self.paramwise_cfg.get('decay_type', 'layer_wise') logger.info('Build LearningRateDecayOptimizerConstructor ' f'{decay_type} {decay_rate} - {num_layers}') weight_decay = self.base_wd for name, param in module.named_parameters(): if not param.requires_grad: continue # frozen weights if len(param.shape) == 1 or name.endswith('.bias') or name in ( 'pos_embed', 'cls_token'): group_name = 'no_decay' this_weight_decay = 0. else: group_name = 'decay' this_weight_decay = weight_decay if 'layer_wise' in decay_type: if 'ConvNeXt' in module.backbone.__class__.__name__: layer_id = get_layer_id_for_convnext( name, self.paramwise_cfg.get('num_layers')) logger.info(f'set param {name} as id {layer_id}') elif 'BEiT' in module.backbone.__class__.__name__ or \ 'MAE' in module.backbone.__class__.__name__ or \ 'VisionTransformer' in module.backbone.__class__.__name__: layer_id = get_layer_id_for_vit(name, num_layers) logger.info(f'set param {name} as id {layer_id}') else: raise NotImplementedError() elif decay_type == 'stage_wise': if 'ConvNeXt' in module.backbone.__class__.__name__: layer_id = get_stage_id_for_convnext(name, num_layers) logger.info(f'set param {name} as id {layer_id}') else: raise NotImplementedError() group_name = f'layer_{layer_id}_{group_name}' if group_name not in parameter_groups: scale = decay_rate**(num_layers - layer_id - 1) parameter_groups[group_name] = { 'weight_decay': this_weight_decay, 'params': [], 'param_names': [], 'lr_scale': scale, 'group_name': group_name, 'lr': scale * self.base_lr, } parameter_groups[group_name]['params'].append(param) parameter_groups[group_name]['param_names'].append(name) rank, _ = get_dist_info() if rank == 0: to_display = {} for key in parameter_groups: to_display[key] = { 'param_names': parameter_groups[key]['param_names'], 'lr_scale': parameter_groups[key]['lr_scale'], 'lr': parameter_groups[key]['lr'], 'weight_decay': parameter_groups[key]['weight_decay'], } logger.info(f'Param groups = {json.dumps(to_display, indent=2)}') params.extend(parameter_groups.values()) @OPTIMIZER_BUILDERS.register_module() class LayerDecayOptimizerConstructor(LearningRateDecayOptimizerConstructor): """Different learning rates are set for different layers of backbone. Note: Currently, this optimizer constructor is built for BEiT, and it will be deprecated. Please use ``LearningRateDecayOptimizerConstructor`` instead. """ def __init__(self, optimizer_cfg, paramwise_cfg): warnings.warn('DeprecationWarning: Original ' 'LayerDecayOptimizerConstructor of BEiT ' 'will be deprecated. Please use ' 'LearningRateDecayOptimizerConstructor instead, ' 'and set decay_type = layer_wise_vit in paramwise_cfg.') paramwise_cfg.update({'decay_type': 'layer_wise_vit'}) warnings.warn('DeprecationWarning: Layer_decay_rate will ' 'be deleted, please use decay_rate instead.') paramwise_cfg['decay_rate'] = paramwise_cfg.pop('layer_decay_rate') super(LayerDecayOptimizerConstructor, self).__init__(optimizer_cfg, paramwise_cfg)
8,082
37.490476
79
py
mmsegmentation
mmsegmentation-master/mmseg/core/seg/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import build_pixel_sampler from .sampler import BasePixelSampler, OHEMPixelSampler __all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler']
220
35.833333
73
py
mmsegmentation
mmsegmentation-master/mmseg/core/seg/builder.py
# Copyright (c) OpenMMLab. All rights reserved. from mmcv.utils import Registry, build_from_cfg PIXEL_SAMPLERS = Registry('pixel sampler') def build_pixel_sampler(cfg, **default_args): """Build pixel sampler for segmentation map.""" return build_from_cfg(cfg, PIXEL_SAMPLERS, default_args)
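# Illustrative usage sketch, not part of the module above: a sampler is
# described by a config dict and built through the registry; extra keyword
# arguments (typically the owning decode head, passed as ``context``) become
# default args of the sampler. `decode_head` is a placeholder name here.
#
#   sampler_cfg = dict(type='OHEMPixelSampler', thresh=0.7, min_kept=100000)
#   sampler = build_pixel_sampler(sampler_cfg, context=decode_head)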
301
29.2
60
py
mmsegmentation
mmsegmentation-master/mmseg/core/seg/sampler/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .base_pixel_sampler import BasePixelSampler from .ohem_pixel_sampler import OHEMPixelSampler __all__ = ['BasePixelSampler', 'OHEMPixelSampler']
198
32.166667
50
py
mmsegmentation
mmsegmentation-master/mmseg/core/seg/sampler/base_pixel_sampler.py
# Copyright (c) OpenMMLab. All rights reserved. from abc import ABCMeta, abstractmethod class BasePixelSampler(metaclass=ABCMeta): """Base class of pixel sampler.""" def __init__(self, **kwargs): pass @abstractmethod def sample(self, seg_logit, seg_label): """Placeholder for sample function."""
332
22.785714
47
py
mmsegmentation
mmsegmentation-master/mmseg/core/seg/sampler/ohem_pixel_sampler.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn import torch.nn.functional as F from ..builder import PIXEL_SAMPLERS from .base_pixel_sampler import BasePixelSampler @PIXEL_SAMPLERS.register_module() class OHEMPixelSampler(BasePixelSampler): """Online Hard Example Mining Sampler for segmentation. Args: context (nn.Module): The context of sampler, subclass of :obj:`BaseDecodeHead`. thresh (float, optional): The threshold for hard example selection. Below which, are prediction with low confidence. If not specified, the hard examples will be pixels of top ``min_kept`` loss. Default: None. min_kept (int, optional): The minimum number of predictions to keep. Default: 100000. """ def __init__(self, context, thresh=None, min_kept=100000): super(OHEMPixelSampler, self).__init__() self.context = context assert min_kept > 1 self.thresh = thresh self.min_kept = min_kept def sample(self, seg_logit, seg_label): """Sample pixels that have high loss or with low prediction confidence. Args: seg_logit (torch.Tensor): segmentation logits, shape (N, C, H, W) seg_label (torch.Tensor): segmentation label, shape (N, 1, H, W) Returns: torch.Tensor: segmentation weight, shape (N, H, W) """ with torch.no_grad(): assert seg_logit.shape[2:] == seg_label.shape[2:] assert seg_label.shape[1] == 1 seg_label = seg_label.squeeze(1).long() batch_kept = self.min_kept * seg_label.size(0) valid_mask = seg_label != self.context.ignore_index seg_weight = seg_logit.new_zeros(size=seg_label.size()) valid_seg_weight = seg_weight[valid_mask] if self.thresh is not None: seg_prob = F.softmax(seg_logit, dim=1) tmp_seg_label = seg_label.clone().unsqueeze(1) tmp_seg_label[tmp_seg_label == self.context.ignore_index] = 0 seg_prob = seg_prob.gather(1, tmp_seg_label).squeeze(1) sort_prob, sort_indices = seg_prob[valid_mask].sort() if sort_prob.numel() > 0: min_threshold = sort_prob[min(batch_kept, sort_prob.numel() - 1)] else: min_threshold = 0.0 threshold = max(min_threshold, self.thresh) valid_seg_weight[seg_prob[valid_mask] < threshold] = 1. else: if not isinstance(self.context.loss_decode, nn.ModuleList): losses_decode = [self.context.loss_decode] else: losses_decode = self.context.loss_decode losses = 0.0 for loss_module in losses_decode: losses += loss_module( seg_logit, seg_label, weight=None, ignore_index=self.context.ignore_index, reduction_override='none') # faster than topk according to https://github.com/pytorch/pytorch/issues/22812 # noqa _, sort_indices = losses[valid_mask].sort(descending=True) valid_seg_weight[sort_indices[:batch_kept]] = 1. seg_weight[valid_mask] = valid_seg_weight return seg_weight
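# Illustrative usage sketch, not part of the module above: OHEM is usually
# enabled from the model config by attaching a sampler to the decode head.
# Only the relevant keys are shown; a real head config also needs the
# remaining arguments (in_channels, channels, loss_decode, ...).
decode_head = dict(
    type='FCNHead',
    num_classes=19,
    sampler=dict(type='OHEMPixelSampler', thresh=0.7, min_kept=100000))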
3,539
40.162791
103
py
mmsegmentation
mmsegmentation-master/mmseg/core/utils/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .dist_util import check_dist_init, sync_random_seed from .misc import add_prefix __all__ = ['add_prefix', 'check_dist_init', 'sync_random_seed']
199
32.333333
63
py
mmsegmentation
mmsegmentation-master/mmseg/core/utils/dist_util.py
# Copyright (c) OpenMMLab. All rights reserved. import numpy as np import torch import torch.distributed as dist from mmcv.runner import get_dist_info def check_dist_init(): return dist.is_available() and dist.is_initialized() def sync_random_seed(seed=None, device='cuda'): """Make sure different ranks share the same seed. All workers must call this function, otherwise it will deadlock. This method is generally used in `DistributedSampler`, because the seed should be identical across all processes in the distributed group. In distributed sampling, different ranks should sample non-overlapped data in the dataset. Therefore, this function is used to make sure that each rank shuffles the data indices in the same order based on the same seed. Then different ranks could use different indices to select non-overlapped data from the same data list. Args: seed (int, Optional): The seed. Default to None. device (str): The device where the seed will be put on. Default to 'cuda'. Returns: int: Seed to be used. """ if seed is None: seed = np.random.randint(2**31) assert isinstance(seed, int) rank, world_size = get_dist_info() if world_size == 1: return seed if rank == 0: random_num = torch.tensor(seed, dtype=torch.int32, device=device) else: random_num = torch.tensor(0, dtype=torch.int32, device=device) dist.broadcast(random_num, src=0) return random_num.item()
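# Illustrative usage sketch, not part of the module above: a typical caller is
# a distributed sampler that needs every rank to shuffle in the same order.
# `epoch` and `num_samples` are placeholders for the sampler's own state.
#
#   seed = sync_random_seed()                 # identical value on every rank
#   g = torch.Generator()
#   g.manual_seed(seed + epoch)               # epoch-varying, rank-consistent
#   indices = torch.randperm(num_samples, generator=g).tolist()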
1,533
31.638298
79
py
mmsegmentation
mmsegmentation-master/mmseg/core/utils/misc.py
# Copyright (c) OpenMMLab. All rights reserved. def add_prefix(inputs, prefix): """Add prefix for dict. Args: inputs (dict): The input dict with str keys. prefix (str): The prefix to add. Returns: dict: The dict with keys updated with ``prefix``. """ outputs = dict() for name, value in inputs.items(): outputs[f'{prefix}.{name}'] = value return outputs
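# Illustrative usage sketch, not part of the module above: prefixes keep the
# losses of different heads apart when they are merged into one log dict.
#
#   add_prefix({'loss_ce': 0.32, 'acc_seg': 0.91}, 'decode')
#   # -> {'decode.loss_ce': 0.32, 'decode.acc_seg': 0.91}
#   add_prefix({'loss_ce': 0.45}, 'aux_0')
#   # -> {'aux_0.loss_ce': 0.45}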
419
21.105263
57
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .ade import ADE20KDataset from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset from .chase_db1 import ChaseDB1Dataset from .cityscapes import CityscapesDataset from .coco_stuff import COCOStuffDataset from .custom import CustomDataset from .dark_zurich import DarkZurichDataset from .dataset_wrappers import (ConcatDataset, MultiImageMixDataset, RepeatDataset) from .drive import DRIVEDataset from .face import FaceOccludedDataset from .hrf import HRFDataset from .imagenets import (ImageNetSDataset, LoadImageNetSAnnotations, LoadImageNetSImageFromFile) from .isaid import iSAIDDataset from .isprs import ISPRSDataset from .loveda import LoveDADataset from .night_driving import NightDrivingDataset from .pascal_context import PascalContextDataset, PascalContextDataset59 from .potsdam import PotsdamDataset from .stare import STAREDataset from .voc import PascalVOCDataset __all__ = [ 'CustomDataset', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', 'DATASETS', 'build_dataset', 'PIPELINES', 'CityscapesDataset', 'PascalVOCDataset', 'ADE20KDataset', 'PascalContextDataset', 'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset', 'STAREDataset', 'DarkZurichDataset', 'NightDrivingDataset', 'COCOStuffDataset', 'LoveDADataset', 'MultiImageMixDataset', 'iSAIDDataset', 'ISPRSDataset', 'PotsdamDataset', 'FaceOccludedDataset', 'ImageNetSDataset', 'LoadImageNetSAnnotations', 'LoadImageNetSImageFromFile' ]
1,596
43.361111
78
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/ade.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import mmcv import numpy as np from PIL import Image from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class ADE20KDataset(CustomDataset): """ADE20K dataset. In segmentation map annotation for ADE20K, 0 stands for background, which is not included in 150 categories. ``reduce_zero_label`` is fixed to True. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to '.png'. """ CLASSES = ( 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', 'clock', 'flag') PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], [0, 255, 163], [255, 153, 0], [0, 
255, 10], [255, 112, 0], [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], [102, 255, 0], [92, 0, 255]] def __init__(self, **kwargs): super(ADE20KDataset, self).__init__( img_suffix='.jpg', seg_map_suffix='.png', reduce_zero_label=True, **kwargs) def results2img(self, results, imgfile_prefix, to_label_id, indices=None): """Write the segmentation results to images. Args: results (list[ndarray]): Testing results of the dataset. imgfile_prefix (str): The filename prefix of the png files. If the prefix is "somepath/xxx", the png files will be named "somepath/xxx.png". to_label_id (bool): whether convert output to label_id for submission. indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: list[str: str]: result txt files which contains corresponding semantic segmentation images. """ if indices is None: indices = list(range(len(self))) mmcv.mkdir_or_exist(imgfile_prefix) result_files = [] for result, idx in zip(results, indices): filename = self.img_infos[idx]['filename'] basename = osp.splitext(osp.basename(filename))[0] png_filename = osp.join(imgfile_prefix, f'{basename}.png') # The index range of official requirement is from 0 to 150. # But the index range of output is from 0 to 149. # That is because we set reduce_zero_label=True. result = result + 1 output = Image.fromarray(result.astype(np.uint8)) output.save(png_filename) result_files.append(png_filename) return result_files def format_results(self, results, imgfile_prefix, to_label_id=True, indices=None): """Format the results into dir (standard format for ade20k evaluation). Args: results (list): Testing results of the dataset. imgfile_prefix (str | None): The prefix of images files. It includes the file path and the prefix of filename, e.g., "a/b/prefix". to_label_id (bool): whether convert output to label_id for submission. Default: False indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: tuple: (result_files, tmp_dir), result_files is a list containing the image paths, tmp_dir is the temporal directory created for saving json/png files when img_prefix is not specified. """ if indices is None: indices = list(range(len(self))) assert isinstance(results, list), 'results must be a list.' assert isinstance(indices, list), 'indices must be a list.' result_files = self.results2img(results, imgfile_prefix, to_label_id, indices) return result_files
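# Illustrative usage sketch, not part of the module above: writing predictions
# in the ADE20K submission layout. `results` stands for the list of per-image
# label maps returned by the test script; paths and `test_pipeline` are
# placeholders.
#
#   dataset = ADE20KDataset(
#       pipeline=test_pipeline,
#       img_dir='data/ade/ADEChallengeData2016/images/validation',
#       ann_dir='data/ade/ADEChallengeData2016/annotations/validation')
#   dataset.format_results(results, imgfile_prefix='work_dirs/ade_submit')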
8,358
48.755952
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/builder.py
# Copyright (c) OpenMMLab. All rights reserved. import copy import platform import random from functools import partial import numpy as np import torch from mmcv.parallel import collate from mmcv.runner import get_dist_info from mmcv.utils import Registry, build_from_cfg, digit_version from torch.utils.data import DataLoader, IterableDataset from .samplers import DistributedSampler if platform.system() != 'Windows': # https://github.com/pytorch/pytorch/issues/973 import resource rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) base_soft_limit = rlimit[0] hard_limit = rlimit[1] soft_limit = min(max(4096, base_soft_limit), hard_limit) resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) DATASETS = Registry('dataset') PIPELINES = Registry('pipeline') def _concat_dataset(cfg, default_args=None): """Build :obj:`ConcatDataset by.""" from .dataset_wrappers import ConcatDataset img_dir = cfg['img_dir'] ann_dir = cfg.get('ann_dir', None) split = cfg.get('split', None) # pop 'separate_eval' since it is not a valid key for common datasets. separate_eval = cfg.pop('separate_eval', True) num_img_dir = len(img_dir) if isinstance(img_dir, (list, tuple)) else 1 if ann_dir is not None: num_ann_dir = len(ann_dir) if isinstance(ann_dir, (list, tuple)) else 1 else: num_ann_dir = 0 if split is not None: num_split = len(split) if isinstance(split, (list, tuple)) else 1 else: num_split = 0 if num_img_dir > 1: assert num_img_dir == num_ann_dir or num_ann_dir == 0 assert num_img_dir == num_split or num_split == 0 else: assert num_split == num_ann_dir or num_ann_dir <= 1 num_dset = max(num_split, num_img_dir) datasets = [] for i in range(num_dset): data_cfg = copy.deepcopy(cfg) if isinstance(img_dir, (list, tuple)): data_cfg['img_dir'] = img_dir[i] if isinstance(ann_dir, (list, tuple)): data_cfg['ann_dir'] = ann_dir[i] if isinstance(split, (list, tuple)): data_cfg['split'] = split[i] datasets.append(build_dataset(data_cfg, default_args)) return ConcatDataset(datasets, separate_eval) def build_dataset(cfg, default_args=None): """Build datasets.""" from .dataset_wrappers import (ConcatDataset, MultiImageMixDataset, RepeatDataset) if isinstance(cfg, (list, tuple)): dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) elif cfg['type'] == 'RepeatDataset': dataset = RepeatDataset( build_dataset(cfg['dataset'], default_args), cfg['times']) elif cfg['type'] == 'MultiImageMixDataset': cp_cfg = copy.deepcopy(cfg) cp_cfg['dataset'] = build_dataset(cp_cfg['dataset']) cp_cfg.pop('type') dataset = MultiImageMixDataset(**cp_cfg) elif isinstance(cfg.get('img_dir'), (list, tuple)) or isinstance( cfg.get('split', None), (list, tuple)): dataset = _concat_dataset(cfg, default_args) else: dataset = build_from_cfg(cfg, DATASETS, default_args) return dataset def build_dataloader(dataset, samples_per_gpu, workers_per_gpu, num_gpus=1, dist=True, shuffle=True, seed=None, drop_last=False, pin_memory=True, persistent_workers=True, **kwargs): """Build PyTorch DataLoader. In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs. Args: dataset (Dataset): A PyTorch dataset. samples_per_gpu (int): Number of training samples on each GPU, i.e., batch size of each GPU. workers_per_gpu (int): How many subprocesses to use for data loading for each GPU. num_gpus (int): Number of GPUs. Only used in non-distributed training. dist (bool): Distributed training/test or not. Default: True. shuffle (bool): Whether to shuffle the data at every epoch. 
Default: True. seed (int | None): Seed to be used. Default: None. drop_last (bool): Whether to drop the last incomplete batch in epoch. Default: False pin_memory (bool): Whether to use pin_memory in DataLoader. Default: True persistent_workers (bool): If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. This allows to maintain the workers Dataset instances alive. The argument also has effect in PyTorch>=1.7.0. Default: True kwargs: any keyword argument to be used to initialize DataLoader Returns: DataLoader: A PyTorch dataloader. """ rank, world_size = get_dist_info() if dist and not isinstance(dataset, IterableDataset): sampler = DistributedSampler( dataset, world_size, rank, shuffle=shuffle, seed=seed) shuffle = False batch_size = samples_per_gpu num_workers = workers_per_gpu elif dist: sampler = None shuffle = False batch_size = samples_per_gpu num_workers = workers_per_gpu else: sampler = None batch_size = num_gpus * samples_per_gpu num_workers = num_gpus * workers_per_gpu init_fn = partial( worker_init_fn, num_workers=num_workers, rank=rank, seed=seed) if seed is not None else None if digit_version(torch.__version__) >= digit_version('1.8.0'): data_loader = DataLoader( dataset, batch_size=batch_size, sampler=sampler, num_workers=num_workers, collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), pin_memory=pin_memory, shuffle=shuffle, worker_init_fn=init_fn, drop_last=drop_last, persistent_workers=persistent_workers, **kwargs) else: data_loader = DataLoader( dataset, batch_size=batch_size, sampler=sampler, num_workers=num_workers, collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), pin_memory=pin_memory, shuffle=shuffle, worker_init_fn=init_fn, drop_last=drop_last, **kwargs) return data_loader def worker_init_fn(worker_id, num_workers, rank, seed): """Worker init func for dataloader. The seed of each worker equals to num_worker * rank + worker_id + user_seed Args: worker_id (int): Worker id. num_workers (int): Number of workers. rank (int): The rank of current process. seed (int): The random seed to use. """ worker_seed = num_workers * rank + worker_id + seed np.random.seed(worker_seed) random.seed(worker_seed) torch.manual_seed(worker_seed)
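# Illustrative usage sketch, not part of the module above: a dataset config is
# first turned into a Dataset and then wrapped into a DataLoader. All values
# are placeholders; `train_pipeline` is assumed to be defined in the config.
#
#   dataset = build_dataset(
#       dict(
#           type='ADE20KDataset',
#           data_root='data/ade/ADEChallengeData2016',
#           img_dir='images/training',
#           ann_dir='annotations/training',
#           pipeline=train_pipeline))
#   loader = build_dataloader(
#       dataset,
#       samples_per_gpu=4,
#       workers_per_gpu=4,
#       dist=False,
#       shuffle=True,
#       seed=0)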
7,135
35.22335
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/chase_db1.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class ChaseDB1Dataset(CustomDataset): """Chase_db1 dataset. In segmentation map annotation for Chase_db1, 0 stands for background, which is included in 2 categories. ``reduce_zero_label`` is fixed to False. The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to '_1stHO.png'. """ CLASSES = ('background', 'vessel') PALETTE = [[120, 120, 120], [6, 230, 230]] def __init__(self, **kwargs): super(ChaseDB1Dataset, self).__init__( img_suffix='.png', seg_map_suffix='_1stHO.png', reduce_zero_label=False, **kwargs) assert self.file_client.exists(self.img_dir)
820
28.321429
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/cityscapes.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import mmcv import numpy as np from mmcv.utils import print_log from PIL import Image from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class CityscapesDataset(CustomDataset): """Cityscapes dataset. The ``img_suffix`` is fixed to '_leftImg8bit.png' and ``seg_map_suffix`` is fixed to '_gtFine_labelTrainIds.png' for Cityscapes dataset. """ CLASSES = ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole', 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', 'bicycle') PALETTE = [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100], [0, 0, 230], [119, 11, 32]] def __init__(self, img_suffix='_leftImg8bit.png', seg_map_suffix='_gtFine_labelTrainIds.png', **kwargs): super(CityscapesDataset, self).__init__( img_suffix=img_suffix, seg_map_suffix=seg_map_suffix, **kwargs) @staticmethod def _convert_to_label_id(result): """Convert trainId to id for cityscapes.""" if isinstance(result, str): result = np.load(result) import cityscapesscripts.helpers.labels as CSLabels result_copy = result.copy() for trainId, label in CSLabels.trainId2label.items(): result_copy[result == trainId] = label.id return result_copy def results2img(self, results, imgfile_prefix, to_label_id, indices=None): """Write the segmentation results to images. Args: results (list[ndarray]): Testing results of the dataset. imgfile_prefix (str): The filename prefix of the png files. If the prefix is "somepath/xxx", the png files will be named "somepath/xxx.png". to_label_id (bool): whether convert output to label_id for submission. indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: list[str: str]: result txt files which contains corresponding semantic segmentation images. """ if indices is None: indices = list(range(len(self))) mmcv.mkdir_or_exist(imgfile_prefix) result_files = [] for result, idx in zip(results, indices): if to_label_id: result = self._convert_to_label_id(result) filename = self.img_infos[idx]['filename'] basename = osp.splitext(osp.basename(filename))[0] png_filename = osp.join(imgfile_prefix, f'{basename}.png') output = Image.fromarray(result.astype(np.uint8)).convert('P') import cityscapesscripts.helpers.labels as CSLabels palette = np.zeros((len(CSLabels.id2label), 3), dtype=np.uint8) for label_id, label in CSLabels.id2label.items(): palette[label_id] = label.color output.putpalette(palette) output.save(png_filename) result_files.append(png_filename) return result_files def format_results(self, results, imgfile_prefix, to_label_id=True, indices=None): """Format the results into dir (standard format for Cityscapes evaluation). Args: results (list): Testing results of the dataset. imgfile_prefix (str): The prefix of images files. It includes the file path and the prefix of filename, e.g., "a/b/prefix". to_label_id (bool): whether convert output to label_id for submission. Default: False indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. 
Returns: tuple: (result_files, tmp_dir), result_files is a list containing the image paths, tmp_dir is the temporal directory created for saving json/png files when img_prefix is not specified. """ if indices is None: indices = list(range(len(self))) assert isinstance(results, list), 'results must be a list.' assert isinstance(indices, list), 'indices must be a list.' result_files = self.results2img(results, imgfile_prefix, to_label_id, indices) return result_files def evaluate(self, results, metric='mIoU', logger=None, imgfile_prefix=None): """Evaluation in Cityscapes/default protocol. Args: results (list): Testing results of the dataset. metric (str | list[str]): Metrics to be evaluated. logger (logging.Logger | None | str): Logger used for printing related information during evaluation. Default: None. imgfile_prefix (str | None): The prefix of output image file, for cityscapes evaluation only. It includes the file path and the prefix of filename, e.g., "a/b/prefix". If results are evaluated with cityscapes protocol, it would be the prefix of output png files. The output files would be png images under folder "a/b/prefix/xxx.png", where "xxx" is the image name of cityscapes. If not specified, a temp file will be created for evaluation. Default: None. Returns: dict[str, float]: Cityscapes/default metrics. """ eval_results = dict() metrics = metric.copy() if isinstance(metric, list) else [metric] if 'cityscapes' in metrics: eval_results.update( self._evaluate_cityscapes(results, logger, imgfile_prefix)) metrics.remove('cityscapes') if len(metrics) > 0: eval_results.update( super(CityscapesDataset, self).evaluate(results, metrics, logger)) return eval_results def _evaluate_cityscapes(self, results, logger, imgfile_prefix): """Evaluation in Cityscapes protocol. Args: results (list): Testing results of the dataset. logger (logging.Logger | str | None): Logger used for printing related information during evaluation. Default: None. imgfile_prefix (str | None): The prefix of output image file Returns: dict[str: float]: Cityscapes evaluation results. """ try: import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as CSEval # noqa except ImportError: raise ImportError('Please run "pip install cityscapesscripts" to ' 'install cityscapesscripts first.') msg = 'Evaluating in Cityscapes style' if logger is None: msg = '\n' + msg print_log(msg, logger=logger) result_dir = imgfile_prefix eval_results = dict() print_log(f'Evaluating results under {result_dir} ...', logger=logger) CSEval.args.evalInstLevelScore = True CSEval.args.predictionPath = osp.abspath(result_dir) CSEval.args.evalPixelAccuracy = True CSEval.args.JSONOutput = False seg_map_list = [] pred_list = [] # when evaluating with official cityscapesscripts, # **_gtFine_labelIds.png is used for seg_map in mmcv.scandir( self.ann_dir, 'gtFine_labelIds.png', recursive=True): seg_map_list.append(osp.join(self.ann_dir, seg_map)) pred_list.append(CSEval.getPrediction(CSEval.args, seg_map)) eval_results.update( CSEval.evaluateImgLists(pred_list, seg_map_list, CSEval.args)) return eval_results
8,469
38.395349
96
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/coco_stuff.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class COCOStuffDataset(CustomDataset): """COCO-Stuff dataset. In segmentation map annotation for COCO-Stuff, Train-IDs of the 10k version are from 1 to 171, where 0 is the ignore index, and Train-ID of COCO Stuff 164k is from 0 to 170, where 255 is the ignore index. So, they are all 171 semantic categories. ``reduce_zero_label`` is set to True and False for the 10k and 164k versions, respectively. The ``img_suffix`` is fixed to '.jpg', and ``seg_map_suffix`` is fixed to '.png'. """ CLASSES = ( 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner', 'blanket', 'branch', 'bridge', 'building-other', 'bush', 'cabinet', 'cage', 'cardboard', 'carpet', 'ceiling-other', 'ceiling-tile', 'cloth', 'clothes', 'clouds', 'counter', 'cupboard', 'curtain', 'desk-stuff', 'dirt', 'door-stuff', 'fence', 'floor-marble', 'floor-other', 'floor-stone', 'floor-tile', 'floor-wood', 'flower', 'fog', 'food-other', 'fruit', 'furniture-other', 'grass', 'gravel', 'ground-other', 'hill', 'house', 'leaves', 'light', 'mat', 'metal', 'mirror-stuff', 'moss', 'mountain', 'mud', 'napkin', 'net', 'paper', 'pavement', 'pillow', 'plant-other', 'plastic', 'platform', 'playingfield', 'railing', 'railroad', 'river', 'road', 'rock', 'roof', 'rug', 'salad', 'sand', 'sea', 'shelf', 'sky-other', 'skyscraper', 'snow', 'solid-other', 'stairs', 'stone', 'straw', 'structural-other', 'table', 'tent', 'textile-other', 'towel', 'tree', 'vegetable', 'wall-brick', 'wall-concrete', 'wall-other', 'wall-panel', 'wall-stone', 'wall-tile', 'wall-wood', 'water-other', 'waterdrops', 'window-blind', 'window-other', 'wood') PALETTE = [[0, 192, 64], [0, 192, 64], [0, 64, 96], [128, 192, 192], [0, 64, 64], [0, 192, 224], [0, 192, 192], [128, 192, 64], [0, 192, 96], [128, 192, 64], [128, 32, 192], [0, 0, 224], [0, 0, 64], [0, 160, 192], [128, 0, 96], [128, 0, 192], [0, 32, 192], [128, 128, 224], [0, 0, 192], [128, 160, 192], [128, 128, 0], [128, 0, 32], [128, 32, 0], [128, 0, 128], [64, 128, 32], [0, 160, 0], [0, 0, 0], [192, 128, 160], [0, 32, 0], [0, 128, 128], [64, 128, 160], [128, 160, 0], [0, 128, 0], [192, 128, 32], [128, 96, 128], [0, 0, 128], [64, 0, 32], [0, 224, 128], [128, 0, 0], [192, 0, 160], [0, 96, 128], [128, 128, 128], [64, 0, 160], [128, 224, 128], [128, 128, 64], [192, 0, 32], [128, 96, 0], [128, 0, 192], [0, 128, 32], [64, 224, 0], [0, 0, 64], [128, 128, 160], [64, 96, 0], [0, 128, 192], [0, 128, 160], [192, 224, 0], [0, 128, 64], [128, 128, 32], [192, 32, 128], [0, 64, 192], [0, 0, 32], [64, 160, 128], [128, 64, 64], [128, 0, 160], [64, 32, 128], [128, 192, 192], [0, 0, 160], [192, 160, 128], [128, 
192, 0], [128, 0, 96], [192, 32, 0], [128, 64, 128], [64, 128, 96], [64, 160, 0], [0, 64, 0], [192, 128, 224], [64, 32, 0], [0, 192, 128], [64, 128, 224], [192, 160, 0], [0, 192, 0], [192, 128, 96], [192, 96, 128], [0, 64, 128], [64, 0, 96], [64, 224, 128], [128, 64, 0], [192, 0, 224], [64, 96, 128], [128, 192, 128], [64, 0, 224], [192, 224, 128], [128, 192, 64], [192, 0, 96], [192, 96, 0], [128, 64, 192], [0, 128, 96], [0, 224, 0], [64, 64, 64], [128, 128, 224], [0, 96, 0], [64, 192, 192], [0, 128, 224], [128, 224, 0], [64, 192, 64], [128, 128, 96], [128, 32, 128], [64, 0, 192], [0, 64, 96], [0, 160, 128], [192, 0, 64], [128, 64, 224], [0, 32, 128], [192, 128, 192], [0, 64, 224], [128, 160, 128], [192, 128, 0], [128, 64, 32], [128, 32, 64], [192, 0, 128], [64, 192, 32], [0, 160, 64], [64, 0, 0], [192, 192, 160], [0, 32, 64], [64, 128, 128], [64, 192, 160], [128, 160, 64], [64, 128, 0], [192, 192, 32], [128, 96, 192], [64, 0, 128], [64, 64, 32], [0, 224, 192], [192, 0, 0], [192, 64, 160], [0, 96, 192], [192, 128, 128], [64, 64, 160], [128, 224, 192], [192, 128, 64], [192, 64, 32], [128, 96, 64], [192, 0, 192], [0, 192, 32], [64, 224, 64], [64, 0, 64], [128, 192, 160], [64, 96, 64], [64, 128, 192], [0, 192, 160], [192, 224, 64], [64, 128, 64], [128, 192, 32], [192, 32, 192], [64, 64, 192], [0, 64, 32], [64, 160, 192], [192, 64, 64], [128, 64, 160], [64, 32, 192], [192, 192, 192], [0, 64, 160], [192, 160, 192], [192, 192, 0], [128, 64, 96], [192, 32, 64], [192, 64, 128], [64, 192, 96], [64, 160, 64], [64, 64, 0]] def __init__(self, **kwargs): super(COCOStuffDataset, self).__init__( img_suffix='.jpg', seg_map_suffix='_labelTrainIds.png', **kwargs)
6,158
63.831579
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/custom.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import warnings from collections import OrderedDict import mmcv import numpy as np from mmcv.utils import print_log from prettytable import PrettyTable from torch.utils.data import Dataset from mmseg.core import eval_metrics, intersect_and_union, pre_eval_to_metrics from mmseg.utils import get_root_logger from .builder import DATASETS from .pipelines import Compose, LoadAnnotations @DATASETS.register_module() class CustomDataset(Dataset): """Custom dataset for semantic segmentation. An example of file structure is as followed. .. code-block:: none ├── data │ ├── my_dataset │ │ ├── img_dir │ │ │ ├── train │ │ │ │ ├── xxx{img_suffix} │ │ │ │ ├── yyy{img_suffix} │ │ │ │ ├── zzz{img_suffix} │ │ │ ├── val │ │ ├── ann_dir │ │ │ ├── train │ │ │ │ ├── xxx{seg_map_suffix} │ │ │ │ ├── yyy{seg_map_suffix} │ │ │ │ ├── zzz{seg_map_suffix} │ │ │ ├── val The img/gt_semantic_seg pair of CustomDataset should be of the same except suffix. A valid img/gt_semantic_seg filename pair should be like ``xxx{img_suffix}`` and ``xxx{seg_map_suffix}`` (extension is also included in the suffix). If split is given, then ``xxx`` is specified in txt file. Otherwise, all files in ``img_dir/``and ``ann_dir`` will be loaded. Please refer to ``docs/en/tutorials/new_dataset.md`` for more details. Args: pipeline (list[dict]): Processing pipeline img_dir (str): Path to image directory img_suffix (str): Suffix of images. Default: '.jpg' ann_dir (str, optional): Path to annotation directory. Default: None seg_map_suffix (str): Suffix of segmentation maps. Default: '.png' split (str, optional): Split txt file. If split is specified, only file with suffix in the splits will be loaded. Otherwise, all images in img_dir/ann_dir will be loaded. Default: None data_root (str, optional): Data root for img_dir/ann_dir. Default: None. test_mode (bool): If test_mode=True, gt wouldn't be loaded. ignore_index (int): The label index to be ignored. Default: 255 reduce_zero_label (bool): Whether to mark label zero as ignored. Default: False classes (str | Sequence[str], optional): Specify classes to load. If is None, ``cls.CLASSES`` will be used. Default: None. palette (Sequence[Sequence[int]]] | np.ndarray | None): The palette of segmentation map. If None is given, and self.PALETTE is None, random palette will be generated. Default: None gt_seg_map_loader_cfg (dict): build LoadAnnotations to load gt for evaluation, load from disk by default. Default: ``dict()``. file_client_args (dict): Arguments to instantiate a FileClient. See :class:`mmcv.fileio.FileClient` for details. Defaults to ``dict(backend='disk')``. 
""" CLASSES = None PALETTE = None def __init__(self, pipeline, img_dir, img_suffix='.jpg', ann_dir=None, seg_map_suffix='.png', split=None, data_root=None, test_mode=False, ignore_index=255, reduce_zero_label=False, classes=None, palette=None, gt_seg_map_loader_cfg=dict(), file_client_args=dict(backend='disk')): self.pipeline = Compose(pipeline) self.img_dir = img_dir self.img_suffix = img_suffix self.ann_dir = ann_dir self.seg_map_suffix = seg_map_suffix self.split = split self.data_root = data_root self.test_mode = test_mode self.ignore_index = ignore_index self.reduce_zero_label = reduce_zero_label self.label_map = None self.CLASSES, self.PALETTE = self.get_classes_and_palette( classes, palette) self.gt_seg_map_loader = LoadAnnotations( reduce_zero_label=reduce_zero_label, **gt_seg_map_loader_cfg) self.file_client_args = file_client_args self.file_client = mmcv.FileClient.infer_client(self.file_client_args) if test_mode: assert self.CLASSES is not None, \ '`cls.CLASSES` or `classes` should be specified when testing' # join paths if data_root is specified if self.data_root is not None: if not osp.isabs(self.img_dir): self.img_dir = osp.join(self.data_root, self.img_dir) if not (self.ann_dir is None or osp.isabs(self.ann_dir)): self.ann_dir = osp.join(self.data_root, self.ann_dir) if not (self.split is None or osp.isabs(self.split)): self.split = osp.join(self.data_root, self.split) # load annotations self.img_infos = self.load_annotations(self.img_dir, self.img_suffix, self.ann_dir, self.seg_map_suffix, self.split) def __len__(self): """Total number of samples of data.""" return len(self.img_infos) def load_annotations(self, img_dir, img_suffix, ann_dir, seg_map_suffix, split): """Load annotation from directory. Args: img_dir (str): Path to image directory img_suffix (str): Suffix of images. ann_dir (str|None): Path to annotation directory. seg_map_suffix (str|None): Suffix of segmentation maps. split (str|None): Split txt file. If split is specified, only file with suffix in the splits will be loaded. Otherwise, all images in img_dir/ann_dir will be loaded. Default: None Returns: list[dict]: All image info of dataset. """ img_infos = [] if split is not None: lines = mmcv.list_from_file( split, file_client_args=self.file_client_args) for line in lines: img_name = line.strip() img_info = dict(filename=img_name + img_suffix) if ann_dir is not None: seg_map = img_name + seg_map_suffix img_info['ann'] = dict(seg_map=seg_map) img_infos.append(img_info) else: for img in self.file_client.list_dir_or_file( dir_path=img_dir, list_dir=False, suffix=img_suffix, recursive=True): img_info = dict(filename=img) if ann_dir is not None: seg_map = img.replace(img_suffix, seg_map_suffix) img_info['ann'] = dict(seg_map=seg_map) img_infos.append(img_info) img_infos = sorted(img_infos, key=lambda x: x['filename']) print_log(f'Loaded {len(img_infos)} images', logger=get_root_logger()) return img_infos def get_ann_info(self, idx): """Get annotation by index. Args: idx (int): Index of data. Returns: dict: Annotation info of specified index. """ return self.img_infos[idx]['ann'] def pre_pipeline(self, results): """Prepare results dict for pipeline.""" results['seg_fields'] = [] results['img_prefix'] = self.img_dir results['seg_prefix'] = self.ann_dir if self.custom_classes: results['label_map'] = self.label_map def __getitem__(self, idx): """Get training/test data after pipeline. Args: idx (int): Index of data. Returns: dict: Training/test data (with annotation if `test_mode` is set False). 
""" if self.test_mode: return self.prepare_test_img(idx) else: return self.prepare_train_img(idx) def prepare_train_img(self, idx): """Get training data and annotations after pipeline. Args: idx (int): Index of data. Returns: dict: Training data and annotation after pipeline with new keys introduced by pipeline. """ img_info = self.img_infos[idx] ann_info = self.get_ann_info(idx) results = dict(img_info=img_info, ann_info=ann_info) self.pre_pipeline(results) return self.pipeline(results) def prepare_test_img(self, idx): """Get testing data after pipeline. Args: idx (int): Index of data. Returns: dict: Testing data after pipeline with new keys introduced by pipeline. """ img_info = self.img_infos[idx] results = dict(img_info=img_info) self.pre_pipeline(results) return self.pipeline(results) def format_results(self, results, imgfile_prefix, indices=None, **kwargs): """Place holder to format result to dataset specific output.""" raise NotImplementedError def get_gt_seg_map_by_idx(self, index): """Get one ground truth segmentation map for evaluation.""" ann_info = self.get_ann_info(index) results = dict(ann_info=ann_info) self.pre_pipeline(results) self.gt_seg_map_loader(results) return results['gt_semantic_seg'] def get_gt_seg_maps(self, efficient_test=None): """Get ground truth segmentation maps for evaluation.""" if efficient_test is not None: warnings.warn( 'DeprecationWarning: ``efficient_test`` has been deprecated ' 'since MMSeg v0.16, the ``get_gt_seg_maps()`` is CPU memory ' 'friendly by default. ') for idx in range(len(self)): ann_info = self.get_ann_info(idx) results = dict(ann_info=ann_info) self.pre_pipeline(results) self.gt_seg_map_loader(results) yield results['gt_semantic_seg'] def pre_eval(self, preds, indices): """Collect eval result from each iteration. Args: preds (list[torch.Tensor] | torch.Tensor): the segmentation logit after argmax, shape (N, H, W). indices (list[int] | int): the prediction related ground truth indices. Returns: list[torch.Tensor]: (area_intersect, area_union, area_prediction, area_ground_truth). """ # In order to compat with batch inference if not isinstance(indices, list): indices = [indices] if not isinstance(preds, list): preds = [preds] pre_eval_results = [] for pred, index in zip(preds, indices): seg_map = self.get_gt_seg_map_by_idx(index) pre_eval_results.append( intersect_and_union( pred, seg_map, len(self.CLASSES), self.ignore_index, # as the label map has already been applied and zero label # has already been reduced by get_gt_seg_map_by_idx() i.e. # LoadAnnotations.__call__(), these operations should not # be duplicated. See the following issues/PRs: # https://github.com/open-mmlab/mmsegmentation/issues/1415 # https://github.com/open-mmlab/mmsegmentation/pull/1417 # https://github.com/open-mmlab/mmsegmentation/pull/2504 # for more details label_map=dict(), reduce_zero_label=False)) return pre_eval_results def get_classes_and_palette(self, classes=None, palette=None): """Get class names of current dataset. Args: classes (Sequence[str] | str | None): If classes is None, use default CLASSES defined by builtin dataset. If classes is a string, take it as a file name. The file contains the name of classes where each line contains one class name. If classes is a tuple or list, override the CLASSES defined by the dataset. palette (Sequence[Sequence[int]]] | np.ndarray | None): The palette of segmentation map. If None is given, random palette will be generated. 
Default: None """ if classes is None: self.custom_classes = False return self.CLASSES, self.PALETTE self.custom_classes = True if isinstance(classes, str): # take it as a file path class_names = mmcv.list_from_file(classes) elif isinstance(classes, (tuple, list)): class_names = classes else: raise ValueError(f'Unsupported type {type(classes)} of classes.') if self.CLASSES: if not set(class_names).issubset(self.CLASSES): raise ValueError('classes is not a subset of CLASSES.') # dictionary, its keys are the old label ids and its values # are the new label ids. # used for changing pixel labels in load_annotations. self.label_map = {} for i, c in enumerate(self.CLASSES): if c not in class_names: self.label_map[i] = 255 else: self.label_map[i] = class_names.index(c) palette = self.get_palette_for_custom_classes(class_names, palette) return class_names, palette def get_palette_for_custom_classes(self, class_names, palette=None): if self.label_map is not None: # return subset of palette palette = [] for old_id, new_id in sorted( self.label_map.items(), key=lambda x: x[1]): if new_id != 255: palette.append(self.PALETTE[old_id]) palette = type(self.PALETTE)(palette) elif palette is None: if self.PALETTE is None: # Get random state before set seed, and restore # random state later. # It will prevent loss of randomness, as the palette # may be different in each iteration if not specified. # See: https://github.com/open-mmlab/mmdetection/issues/5844 state = np.random.get_state() np.random.seed(42) # random palette palette = np.random.randint(0, 255, size=(len(class_names), 3)) np.random.set_state(state) else: palette = self.PALETTE return palette def evaluate(self, results, metric='mIoU', logger=None, gt_seg_maps=None, **kwargs): """Evaluate the dataset. Args: results (list[tuple[torch.Tensor]] | list[str]): per image pre_eval results or predict segmentation map for computing evaluation metric. metric (str | list[str]): Metrics to be evaluated. 'mIoU', 'mDice' and 'mFscore' are supported. logger (logging.Logger | None | str): Logger used for printing related information during evaluation. Default: None. gt_seg_maps (generator[ndarray]): Custom gt seg maps as input, used in ConcatDataset Returns: dict[str, float]: Default metrics. """ if isinstance(metric, str): metric = [metric] allowed_metrics = ['mIoU', 'mDice', 'mFscore'] if not set(metric).issubset(set(allowed_metrics)): raise KeyError('metric {} is not supported'.format(metric)) eval_results = {} # test a list of files if mmcv.is_list_of(results, np.ndarray) or mmcv.is_list_of( results, str): if gt_seg_maps is None: gt_seg_maps = self.get_gt_seg_maps() num_classes = len(self.CLASSES) ret_metrics = eval_metrics( results, gt_seg_maps, num_classes, self.ignore_index, metric, label_map=dict(), reduce_zero_label=False) # test a list of pre_eval_results else: ret_metrics = pre_eval_to_metrics(results, metric) # Because dataset.CLASSES is required for per-eval. 
if self.CLASSES is None: class_names = tuple(range(num_classes)) else: class_names = self.CLASSES # summary table ret_metrics_summary = OrderedDict({ ret_metric: np.round(np.nanmean(ret_metric_value) * 100, 2) for ret_metric, ret_metric_value in ret_metrics.items() }) # each class table ret_metrics.pop('aAcc', None) ret_metrics_class = OrderedDict({ ret_metric: np.round(ret_metric_value * 100, 2) for ret_metric, ret_metric_value in ret_metrics.items() }) ret_metrics_class.update({'Class': class_names}) ret_metrics_class.move_to_end('Class', last=False) # for logger class_table_data = PrettyTable() for key, val in ret_metrics_class.items(): class_table_data.add_column(key, val) summary_table_data = PrettyTable() for key, val in ret_metrics_summary.items(): if key == 'aAcc': summary_table_data.add_column(key, [val]) else: summary_table_data.add_column('m' + key, [val]) print_log('per class results:', logger) print_log('\n' + class_table_data.get_string(), logger=logger) print_log('Summary:', logger) print_log('\n' + summary_table_data.get_string(), logger=logger) # each metric dict for key, value in ret_metrics_summary.items(): if key == 'aAcc': eval_results[key] = value / 100.0 else: eval_results['m' + key] = value / 100.0 ret_metrics_class.pop('Class', None) for key, value in ret_metrics_class.items(): eval_results.update({ key + '.' + str(name): value[idx] / 100.0 for idx, name in enumerate(class_names) }) return eval_results
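# Illustrative usage sketch, not part of the module above: CustomDataset can
# be used directly on a new img_dir/ann_dir layout by overriding
# `classes`/`palette` instead of subclassing. Paths, class names and
# `train_pipeline` are placeholders.
#
#   dataset = CustomDataset(
#       pipeline=train_pipeline,
#       data_root='data/my_dataset',
#       img_dir='img_dir/train',
#       ann_dir='ann_dir/train',
#       img_suffix='.jpg',
#       seg_map_suffix='.png',
#       classes=('background', 'foreground'),
#       palette=[[0, 0, 0], [255, 0, 0]])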
18,798
37.365306
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/dark_zurich.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .cityscapes import CityscapesDataset @DATASETS.register_module() class DarkZurichDataset(CityscapesDataset): """DarkZurichDataset dataset.""" def __init__(self, **kwargs): super().__init__( img_suffix='_rgb_anon.png', seg_map_suffix='_gt_labelTrainIds.png', **kwargs)
406
26.133333
51
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/dataset_wrappers.py
# Copyright (c) OpenMMLab. All rights reserved.
import bisect
import collections
import copy
from itertools import chain

import mmcv
import numpy as np
from mmcv.utils import build_from_cfg, print_log
from torch.utils.data.dataset import ConcatDataset as _ConcatDataset

from .builder import DATASETS, PIPELINES
from .cityscapes import CityscapesDataset


@DATASETS.register_module()
class ConcatDataset(_ConcatDataset):
    """A wrapper of concatenated dataset.

    Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but supports
    evaluation and formatting results.

    Args:
        datasets (list[:obj:`Dataset`]): A list of datasets.
        separate_eval (bool): Whether to evaluate the concatenated
            dataset results separately, Defaults to True.
    """

    def __init__(self, datasets, separate_eval=True):
        super(ConcatDataset, self).__init__(datasets)
        self.CLASSES = datasets[0].CLASSES
        self.PALETTE = datasets[0].PALETTE
        self.separate_eval = separate_eval
        assert separate_eval in [True, False], \
            f'separate_eval can only be True or False, ' \
            f'but got {separate_eval}'
        if any([isinstance(ds, CityscapesDataset) for ds in datasets]):
            raise NotImplementedError(
                'Evaluating ConcatDataset containing CityscapesDataset '
                'is not supported!')

    def evaluate(self, results, logger=None, **kwargs):
        """Evaluate the results.

        Args:
            results (list[tuple[torch.Tensor]] | list[str]): per image
                pre_eval results or predict segmentation map for
                computing evaluation metric.
            logger (logging.Logger | str | None): Logger used for printing
                related information during evaluation. Default: None.

        Returns:
            dict[str: float]: evaluate results of the total dataset
                or each separate dataset if `self.separate_eval=True`.
        """
        assert len(results) == self.cumulative_sizes[-1], \
            ('Dataset and results have different sizes: '
             f'{self.cumulative_sizes[-1]} v.s. {len(results)}')

        # Check whether all the datasets support evaluation
        for dataset in self.datasets:
            assert hasattr(dataset, 'evaluate'), \
                f'{type(dataset)} does not implement evaluate function'

        if self.separate_eval:
            dataset_idx = -1
            total_eval_results = dict()
            for size, dataset in zip(self.cumulative_sizes, self.datasets):
                start_idx = 0 if dataset_idx == -1 else \
                    self.cumulative_sizes[dataset_idx]
                end_idx = self.cumulative_sizes[dataset_idx + 1]

                results_per_dataset = results[start_idx:end_idx]
                print_log(
                    f'\nEvaluating {dataset.img_dir} with '
                    f'{len(results_per_dataset)} images now',
                    logger=logger)

                eval_results_per_dataset = dataset.evaluate(
                    results_per_dataset, logger=logger, **kwargs)
                dataset_idx += 1
                for k, v in eval_results_per_dataset.items():
                    total_eval_results.update({f'{dataset_idx}_{k}': v})

            return total_eval_results

        if len(set([type(ds) for ds in self.datasets])) != 1:
            raise NotImplementedError(
                'All the datasets should have same types when '
                'self.separate_eval=False')
        else:
            if mmcv.is_list_of(results, np.ndarray) or mmcv.is_list_of(
                    results, str):
                # merge the generators of gt_seg_maps
                gt_seg_maps = chain(
                    *[dataset.get_gt_seg_maps() for dataset in self.datasets])
            else:
                # if the results are `pre_eval` results,
                # we do not need gt_seg_maps to evaluate
                gt_seg_maps = None

            eval_results = self.datasets[0].evaluate(
                results, gt_seg_maps=gt_seg_maps, logger=logger, **kwargs)
            return eval_results

    def get_dataset_idx_and_sample_idx(self, indice):
        """Return dataset and sample index when given an indice of
        ConcatDataset.
Args: indice (int): indice of sample in ConcatDataset Returns: int: the index of sub dataset the sample belong to int: the index of sample in its corresponding subset """ if indice < 0: if -indice > len(self): raise ValueError( 'absolute value of index should not exceed dataset length') indice = len(self) + indice dataset_idx = bisect.bisect_right(self.cumulative_sizes, indice) if dataset_idx == 0: sample_idx = indice else: sample_idx = indice - self.cumulative_sizes[dataset_idx - 1] return dataset_idx, sample_idx def format_results(self, results, imgfile_prefix, indices=None, **kwargs): """format result for every sample of ConcatDataset.""" if indices is None: indices = list(range(len(self))) assert isinstance(results, list), 'results must be a list.' assert isinstance(indices, list), 'indices must be a list.' ret_res = [] for i, indice in enumerate(indices): dataset_idx, sample_idx = self.get_dataset_idx_and_sample_idx( indice) res = self.datasets[dataset_idx].format_results( [results[i]], imgfile_prefix + f'/{dataset_idx}', indices=[sample_idx], **kwargs) ret_res.append(res) return sum(ret_res, []) def pre_eval(self, preds, indices): """do pre eval for every sample of ConcatDataset.""" # In order to compat with batch inference if not isinstance(indices, list): indices = [indices] if not isinstance(preds, list): preds = [preds] ret_res = [] for i, indice in enumerate(indices): dataset_idx, sample_idx = self.get_dataset_idx_and_sample_idx( indice) res = self.datasets[dataset_idx].pre_eval(preds[i], sample_idx) ret_res.append(res) return sum(ret_res, []) @DATASETS.register_module() class RepeatDataset(object): """A wrapper of repeated dataset. The length of repeated dataset will be `times` larger than the original dataset. This is useful when the data loading time is long but the dataset is small. Using RepeatDataset can reduce the data loading time between epochs. Args: dataset (:obj:`Dataset`): The dataset to be repeated. times (int): Repeat times. """ def __init__(self, dataset, times): self.dataset = dataset self.times = times self.CLASSES = dataset.CLASSES self.PALETTE = dataset.PALETTE self._ori_len = len(self.dataset) def __getitem__(self, idx): """Get item from original dataset.""" return self.dataset[idx % self._ori_len] def __len__(self): """The length is multiplied by ``times``""" return self.times * self._ori_len @DATASETS.register_module() class MultiImageMixDataset: """A wrapper of multiple images mixed dataset. Suitable for training on multiple images mixed data augmentation like mosaic and mixup. For the augmentation pipeline of mixed image data, the `get_indexes` method needs to be provided to obtain the image indexes, and you can set `skip_flags` to change the pipeline running process. Args: dataset (:obj:`CustomDataset`): The dataset to be mixed. pipeline (Sequence[dict]): Sequence of transform object or config dict to be composed. skip_type_keys (list[str], optional): Sequence of type string to be skip pipeline. Default to None. 
""" def __init__(self, dataset, pipeline, skip_type_keys=None): assert isinstance(pipeline, collections.abc.Sequence) if skip_type_keys is not None: assert all([ isinstance(skip_type_key, str) for skip_type_key in skip_type_keys ]) self._skip_type_keys = skip_type_keys self.pipeline = [] self.pipeline_types = [] for transform in pipeline: if isinstance(transform, dict): self.pipeline_types.append(transform['type']) transform = build_from_cfg(transform, PIPELINES) self.pipeline.append(transform) else: raise TypeError('pipeline must be a dict') self.dataset = dataset self.CLASSES = dataset.CLASSES self.PALETTE = dataset.PALETTE self.num_samples = len(dataset) def __len__(self): return self.num_samples def __getitem__(self, idx): results = copy.deepcopy(self.dataset[idx]) for (transform, transform_type) in zip(self.pipeline, self.pipeline_types): if self._skip_type_keys is not None and \ transform_type in self._skip_type_keys: continue if hasattr(transform, 'get_indexes'): indexes = transform.get_indexes(self.dataset) if not isinstance(indexes, collections.abc.Sequence): indexes = [indexes] mix_results = [ copy.deepcopy(self.dataset[index]) for index in indexes ] results['mix_results'] = mix_results results = transform(results) if 'mix_results' in results: results.pop('mix_results') return results def update_skip_type_keys(self, skip_type_keys): """Update skip_type_keys. It is called by an external hook. Args: skip_type_keys (list[str], optional): Sequence of type string to be skip pipeline. """ assert all([ isinstance(skip_type_key, str) for skip_type_key in skip_type_keys ]) self._skip_type_keys = skip_type_keys
10,339
36.194245
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/drive.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class DRIVEDataset(CustomDataset): """DRIVE dataset. In segmentation map annotation for DRIVE, 0 stands for background, which is included in 2 categories. ``reduce_zero_label`` is fixed to False. The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to '_manual1.png'. """ CLASSES = ('background', 'vessel') PALETTE = [[120, 120, 120], [6, 230, 230]] def __init__(self, **kwargs): super(DRIVEDataset, self).__init__( img_suffix='.png', seg_map_suffix='_manual1.png', reduce_zero_label=False, **kwargs) assert self.file_client.exists(self.img_dir)
810
27.964286
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/face.py
# Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp

from .builder import DATASETS
from .custom import CustomDataset


@DATASETS.register_module()
class FaceOccludedDataset(CustomDataset):
    """Face Occluded dataset.

    Args:
        split (str): Split txt file for the face occluded dataset.
    """

    CLASSES = ('background', 'face')

    PALETTE = [[0, 0, 0], [128, 0, 0]]

    def __init__(self, split, **kwargs):
        super(FaceOccludedDataset, self).__init__(
            img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs)
        assert osp.exists(self.img_dir) and self.split is not None
623
25
76
py
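A sketch of how the required `split` argument of FaceOccludedDataset is usually supplied; the directory layout and split file name are placeholders, not values from this repository.

face_train = dict(
    type='FaceOccludedDataset',
    data_root='data/occluded-faces',     # assumed layout
    img_dir='image',
    ann_dir='mask',
    split='train.txt',   # one sample name per line, without the '.jpg' suffix
    pipeline=[])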
mmsegmentation
mmsegmentation-master/mmseg/datasets/hrf.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class HRFDataset(CustomDataset): """HRF dataset. In segmentation map annotation for HRF, 0 stands for background, which is included in 2 categories. ``reduce_zero_label`` is fixed to False. The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to '.png'. """ CLASSES = ('background', 'vessel') PALETTE = [[120, 120, 120], [6, 230, 230]] def __init__(self, **kwargs): super(HRFDataset, self).__init__( img_suffix='.png', seg_map_suffix='.png', reduce_zero_label=False, **kwargs) assert self.file_client.exists(self.img_dir)
786
27.107143
77
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/imagenets.py
# Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp

import mmcv
import numpy as np
from PIL import Image

from mmseg.core import intersect_and_union
from mmseg.datasets.pipelines import LoadAnnotations, LoadImageFromFile
from .builder import DATASETS, PIPELINES
from .custom import CustomDataset


@PIPELINES.register_module()
class LoadImageNetSImageFromFile(LoadImageFromFile):
    """Load an image from the ImageNetS dataset.

    To avoid out of memory, images that are too large will be downsampled so
    that their longer side is at most 1000 pixels.

    Args:
        downsample_large_image (bool): Whether to downsample the large images.
            False may cause out of memory. Defaults to True.
    """

    def __init__(self, downsample_large_image=True, **kwargs):
        super().__init__(**kwargs)
        self.downsample_large_image = downsample_large_image

    def __call__(self, results):
        """Call functions to load image and get image meta information.

        Args:
            results (dict): Result dict from :obj:`mmseg.CustomDataset`.

        Returns:
            dict: The dict contains loaded image and meta information.
        """
        results = super().__call__(results)
        if not self.downsample_large_image:
            return results

        # Images that are too large (H * W > 1000 * 1000, these images are
        # included in ImageNetSDataset.LARGES) will be downsampled to 1000
        # along the longer side.
        H, W = results['img_shape'][:2]
        if H * W > pow(1000, 2):
            if H > W:
                target_size = (int(1000 * W / H), 1000)
            else:
                target_size = (1000, int(1000 * H / W))
            results['img'] = mmcv.imresize(
                results['img'], size=target_size, interpolation='bilinear')
            if self.to_float32:
                results['img'] = results['img'].astype(np.float32)

        results['img_shape'] = results['img'].shape
        results['ori_shape'] = results['img'].shape
        # Set initial values for default meta_keys
        results['pad_shape'] = results['img'].shape
        return results


@PIPELINES.register_module()
class LoadImageNetSAnnotations(LoadAnnotations):
    """Load annotations for the ImageNetS dataset.

    The annotations in ImageNet-S are saved as RGB images because 919 classes
    exceed the upper bound (255) of a single-channel gray image. For training,
    the RGB annotations are converted to gray format as R + G * 256.
    """

    def __call__(self, results):
        """Call function to load multiple types annotations.

        Args:
            results (dict): Result dict from :obj:`mmseg.CustomDataset`.

        Returns:
            dict: The dict contains loaded semantic segmentation annotations.
        """
        results = super().__call__(results)

        # The annotations in ImageNet-S are saved as RGB images,
        # because 919 > 255 (the upper bound of gray images).
        # For training, the RGB annotations are converted to
        # gray format as R + G * 256.
        results['gt_semantic_seg'] = \
            results['gt_semantic_seg'][:, :, 1] * 256 + \
            results['gt_semantic_seg'][:, :, 2]
        results['gt_semantic_seg'] = results['gt_semantic_seg'].astype(
            np.int32)
        return results


@DATASETS.register_module()
class ImageNetSDataset(CustomDataset):
    """ImageNet-S dataset.

    In segmentation map annotation for ImageNet-S, 0 stands for others, which
    is not included in 50/300/919 categories. ``ignore_index`` is fixed to
    1000. The ``img_suffix`` is fixed to '.JPEG' and ``seg_map_suffix`` is
    fixed to '.png'.
""" CLASSES50 = ('others', 'goldfish', 'tiger shark', 'goldfinch', 'tree frog', 'kuvasz', 'red fox', 'siamese cat', 'american black bear', 'ladybug', 'sulphur butterfly', 'wood rabbit', 'hamster', 'wild boar', 'gibbon', 'african elephant', 'giant panda', 'airliner', 'ashcan', 'ballpoint', 'beach wagon', 'boathouse', 'bullet train', 'cellular telephone', 'chest', 'clog', 'container ship', 'digital watch', 'dining table', 'golf ball', 'grand piano', 'iron', 'lab coat', 'mixing bowl', 'motor scooter', 'padlock', 'park bench', 'purse', 'streetcar', 'table lamp', 'television', 'toilet seat', 'umbrella', 'vase', 'water bottle', 'water tower', 'yawl', 'street sign', 'lemon', 'carbonara', 'agaric') CLASSES300 = ( 'others', 'tench', 'goldfish', 'tiger shark', 'hammerhead', 'electric ray', 'ostrich', 'goldfinch', 'house finch', 'indigo bunting', 'kite', 'common newt', 'axolotl', 'tree frog', 'tailed frog', 'mud turtle', 'banded gecko', 'american chameleon', 'whiptail', 'african chameleon', 'komodo dragon', 'american alligator', 'triceratops', 'thunder snake', 'ringneck snake', 'king snake', 'rock python', 'horned viper', 'harvestman', 'scorpion', 'garden spider', 'tick', 'african grey', 'lorikeet', 'red-breasted merganser', 'wallaby', 'koala', 'jellyfish', 'sea anemone', 'conch', 'fiddler crab', 'american lobster', 'spiny lobster', 'isopod', 'bittern', 'crane', 'limpkin', 'bustard', 'albatross', 'toy terrier', 'afghan hound', 'bluetick', 'borzoi', 'irish wolfhound', 'whippet', 'ibizan hound', 'staffordshire ' 'bullterrier', 'border terrier', 'yorkshire terrier', 'lakeland terrier', 'giant schnauzer', 'standard schnauzer', 'scotch terrier', 'lhasa', 'english setter', 'clumber', 'english springer', 'welsh springer spaniel', 'kuvasz', 'kelpie', 'doberman', 'miniature pinscher', 'malamute', 'pug', 'leonberg', 'great pyrenees', 'samoyed', 'brabancon griffon', 'cardigan', 'coyote', 'red fox', 'kit fox', 'grey fox', 'persian cat', 'siamese cat', 'cougar', 'lynx', 'tiger', 'american black bear', 'sloth bear', 'ladybug', 'leaf beetle', 'weevil', 'bee', 'cicada', 'leafhopper', 'damselfly', 'ringlet', 'cabbage butterfly', 'sulphur butterfly', 'sea cucumber', 'wood rabbit', 'hare', 'hamster', 'wild boar', 'hippopotamus', 'bighorn', 'ibex', 'badger', 'three-toed sloth', 'orangutan', 'gibbon', 'colobus', 'spider monkey', 'squirrel monkey', 'madagascar cat', 'indian elephant', 'african elephant', 'giant panda', 'barracouta', 'eel', 'coho', 'academic gown', 'accordion', 'airliner', 'ambulance', 'analog clock', 'ashcan', 'backpack', 'balloon', 'ballpoint', 'barbell', 'barn', 'bassoon', 'bath towel', 'beach wagon', 'bicycle-built-for-two', 'binoculars', 'boathouse', 'bonnet', 'bookcase', 'bow', 'brass', 'breastplate', 'bullet train', 'cannon', 'can opener', "carpenter's kit", 'cassette', 'cellular telephone', 'chain saw', 'chest', 'china cabinet', 'clog', 'combination lock', 'container ship', 'corkscrew', 'crate', 'crock pot', 'digital watch', 'dining table', 'dishwasher', 'doormat', 'dutch oven', 'electric fan', 'electric locomotive', 'envelope', 'file', 'folding chair', 'football helmet', 'freight car', 'french horn', 'fur coat', 'garbage truck', 'goblet', 'golf ball', 'grand piano', 'half track', 'hamper', 'hard disc', 'harmonica', 'harvester', 'hook', 'horizontal bar', 'horse cart', 'iron', "jack-o'-lantern", 'lab coat', 'ladle', 'letter opener', 'liner', 'mailbox', 'megalith', 'military uniform', 'milk can', 'mixing bowl', 'monastery', 'mortar', 'mosquito net', 'motor scooter', 'mountain bike', 'mountain tent', 
'mousetrap', 'necklace', 'nipple', 'ocarina', 'padlock', 'palace', 'parallel bars', 'park bench', 'pedestal', 'pencil sharpener', 'pickelhaube', 'pillow', 'planetarium', 'plastic bag', 'polaroid camera', 'pole', 'pot', 'purse', 'quilt', 'radiator', 'radio', 'radio telescope', 'rain barrel', 'reflex camera', 'refrigerator', 'rifle', 'rocking chair', 'rubber eraser', 'rule', 'running shoe', 'sewing machine', 'shield', 'shoji', 'ski', 'ski mask', 'slot', 'soap dispenser', 'soccer ball', 'sock', 'soup bowl', 'space heater', 'spider web', 'spindle', 'sports car', 'steel arch bridge', 'stethoscope', 'streetcar', 'submarine', 'swimming trunks', 'syringe', 'table lamp', 'tank', 'teddy', 'television', 'throne', 'tile roof', 'toilet seat', 'trench coat', 'trimaran', 'typewriter keyboard', 'umbrella', 'vase', 'volleyball', 'wardrobe', 'warplane', 'washer', 'water bottle', 'water tower', 'whiskey jug', 'wig', 'wine bottle', 'wok', 'wreck', 'yawl', 'yurt', 'street sign', 'traffic light', 'consomme', 'ice cream', 'bagel', 'cheeseburger', 'hotdog', 'mashed potato', 'spaghetti squash', 'bell pepper', 'cardoon', 'granny smith', 'strawberry', 'lemon', 'carbonara', 'burrito', 'cup', 'coral reef', "yellow lady's slipper", 'buckeye', 'agaric', 'gyromitra', 'earthstar', 'bolete') CLASSES919 = ( 'others', 'house finch', 'stupa', 'agaric', 'hen-of-the-woods', 'wild boar', 'kit fox', 'desk', 'beaker', 'spindle', 'lipstick', 'cardoon', 'ringneck snake', 'daisy', 'sturgeon', 'scorpion', 'pelican', 'bustard', 'rock crab', 'rock beauty', 'minivan', 'menu', 'thunder snake', 'zebra', 'partridge', 'lacewing', 'starfish', 'italian greyhound', 'marmot', 'cardigan', 'plate', 'ballpoint', 'chesapeake bay retriever', 'pirate', 'potpie', 'keeshond', 'dhole', 'waffle iron', 'cab', 'american egret', 'colobus', 'radio telescope', 'gordon setter', 'mousetrap', 'overskirt', 'hamster', 'wine bottle', 'bluetick', 'macaque', 'bullfrog', 'junco', 'tusker', 'scuba diver', 'pool table', 'samoyed', 'mailbox', 'purse', 'monastery', 'bathtub', 'window screen', 'african crocodile', 'traffic light', 'tow truck', 'radio', 'recreational vehicle', 'grey whale', 'crayfish', 'rottweiler', 'racer', 'whistle', 'pencil box', 'barometer', 'cabbage butterfly', 'sloth bear', 'rhinoceros beetle', 'guillotine', 'rocking chair', 'sports car', 'bouvier des flandres', 'border collie', 'fiddler crab', 'slot', 'go-kart', 'cocker spaniel', 'plate rack', 'common newt', 'tile roof', 'marimba', 'moped', 'terrapin', 'oxcart', 'lionfish', 'bassinet', 'rain barrel', 'american black bear', 'goose', 'half track', 'kite', 'microphone', 'shield', 'mexican hairless', 'measuring cup', 'bubble', 'platypus', 'saint bernard', 'police van', 'vase', 'lhasa', 'wardrobe', 'teapot', 'hummingbird', 'revolver', 'jinrikisha', 'mailbag', 'red-breasted merganser', 'assault rifle', 'loudspeaker', 'fig', 'american lobster', 'can opener', 'arctic fox', 'broccoli', 'long-horned beetle', 'television', 'airship', 'black stork', 'marmoset', 'panpipe', 'drumstick', 'knee pad', 'lotion', 'french loaf', 'throne', 'jeep', 'jersey', 'tiger cat', 'cliff', 'sealyham terrier', 'strawberry', 'minibus', 'goldfinch', 'goblet', 'burrito', 'harp', 'tractor', 'cornet', 'leopard', 'fly', 'fireboat', 'bolete', 'barber chair', 'consomme', 'tripod', 'breastplate', 'pineapple', 'wok', 'totem pole', 'alligator lizard', 'common iguana', 'digital clock', 'bighorn', 'siamese cat', 'bobsled', 'irish setter', 'zucchini', 'crock pot', 'loggerhead', 'irish wolfhound', 'nipple', 'rubber eraser', 'impala', 'barbell', 
'snow leopard', 'siberian husky', 'necklace', 'manhole cover', 'electric fan', 'hippopotamus', 'entlebucher', 'prison', 'doberman', 'ruffed grouse', 'coyote', 'toaster', 'puffer', 'black swan', 'schipperke', 'file', 'prairie chicken', 'hourglass', 'greater swiss mountain dog', 'pajama', 'ear', 'pedestal', 'viaduct', 'shoji', 'snowplow', 'puck', 'gyromitra', 'birdhouse', 'flatworm', 'pier', 'coral reef', 'pot', 'mortar', 'polaroid camera', 'passenger car', 'barracouta', 'banded gecko', 'black-and-tan coonhound', 'safe', 'ski', 'torch', 'green lizard', 'volleyball', 'brambling', 'solar dish', 'lawn mower', 'swing', 'hyena', 'staffordshire bullterrier', 'screw', 'toilet tissue', 'velvet', 'scale', 'stopwatch', 'sock', 'koala', 'garbage truck', 'spider monkey', 'afghan hound', 'chain', 'upright', 'flagpole', 'tree frog', 'cuirass', 'chest', 'groenendael', 'christmas stocking', 'lakeland terrier', 'perfume', 'neck brace', 'lab coat', 'carbonara', 'porcupine', 'shower curtain', 'slug', 'pitcher', 'flat-coated retriever', 'pekinese', 'oscilloscope', 'church', 'lynx', 'cowboy hat', 'table lamp', 'pug', 'crate', 'water buffalo', 'labrador retriever', 'weimaraner', 'giant schnauzer', 'stove', 'sea urchin', 'banjo', 'tiger', 'miniskirt', 'eft', 'european gallinule', 'vending machine', 'miniature schnauzer', 'maypole', 'bull mastiff', 'hoopskirt', 'coffeepot', 'four-poster', 'safety pin', 'monarch', 'beer glass', 'grasshopper', 'head cabbage', 'parking meter', 'bonnet', 'chiffonier', 'great dane', 'spider web', 'electric locomotive', 'scotch terrier', 'australian terrier', 'honeycomb', 'leafhopper', 'beer bottle', 'mud turtle', 'lifeboat', 'cassette', "potter's wheel", 'oystercatcher', 'space heater', 'coral fungus', 'sunglass', 'quail', 'triumphal arch', 'collie', 'walker hound', 'bucket', 'bee', 'komodo dragon', 'dugong', 'gibbon', 'trailer truck', 'king crab', 'cheetah', 'rifle', 'stingray', 'bison', 'ipod', 'modem', 'box turtle', 'motor scooter', 'container ship', 'vestment', 'dingo', 'radiator', 'giant panda', 'nail', 'sea slug', 'indigo bunting', 'trimaran', 'jacamar', 'chimpanzee', 'comic book', 'odometer', 'dishwasher', 'bolo tie', 'barn', 'paddlewheel', 'appenzeller', 'great white shark', 'green snake', 'jackfruit', 'llama', 'whippet', 'hay', 'leaf beetle', 'sombrero', 'ram', 'washbasin', 'cup', 'wall clock', 'acorn squash', 'spotted salamander', 'boston bull', 'border terrier', 'doormat', 'cicada', 'kimono', 'hand blower', 'ox', 'meerkat', 'space shuttle', 'african hunting dog', 'violin', 'artichoke', 'toucan', 'bulbul', 'coucal', 'red wolf', 'seat belt', 'bicycle-built-for-two', 'bow tie', 'pretzel', 'bedlington terrier', 'albatross', 'punching bag', 'cocktail shaker', 'diamondback', 'corn', 'ant', 'mountain bike', 'walking stick', 'standard schnauzer', 'power drill', 'cardigan', 'accordion', 'wire-haired fox terrier', 'streetcar', 'beach wagon', 'ibizan hound', 'hair spray', 'car mirror', 'mountain tent', 'trench coat', 'studio couch', 'pomeranian', 'dough', 'corkscrew', 'broom', 'parachute', 'band aid', 'water tower', 'teddy', 'fire engine', 'hornbill', 'hotdog', 'theater curtain', 'crane', 'malinois', 'lion', 'african elephant', 'handkerchief', 'caldron', 'shopping basket', 'gown', 'wolf spider', 'vizsla', 'electric ray', 'freight car', 'pembroke', 'feather boa', 'wallet', 'agama', 'hard disc', 'stretcher', 'sorrel', 'trilobite', 'basset', 'vulture', 'tarantula', 'hermit crab', 'king snake', 'robin', 'bernese mountain dog', 'ski mask', 'fountain pen', 'combination lock', 'yurt', 
'clumber', 'park bench', 'baboon', 'kuvasz', 'centipede', 'tabby', 'steam locomotive', 'badger', 'irish water spaniel', 'picket fence', 'gong', 'canoe', 'swimming trunks', 'submarine', 'echidna', 'bib', 'refrigerator', 'hammer', 'lemon', 'admiral', 'chihuahua', 'basenji', 'pinwheel', 'golfcart', 'bullet train', 'crib', 'muzzle', 'eggnog', 'old english sheepdog', 'tray', 'tiger beetle', 'electric guitar', 'peacock', 'soup bowl', 'wallaby', 'abacus', 'dalmatian', 'harvester', 'aircraft carrier', 'snowmobile', 'welsh springer spaniel', 'affenpinscher', 'oboe', 'cassette player', 'pencil sharpener', 'japanese spaniel', 'plunger', 'black widow', 'norfolk terrier', 'reflex camera', 'ice bear', 'redbone', 'mongoose', 'warthog', 'arabian camel', 'bittern', 'mixing bowl', 'tailed frog', 'scabbard', 'castle', 'curly-coated retriever', 'garden spider', 'folding chair', 'mouse', 'prayer rug', 'red fox', 'toy terrier', 'leonberg', 'lycaenid', 'poncho', 'goldfish', 'red-backed sandpiper', 'holster', 'hair slide', 'coho', 'komondor', 'macaw', 'maltese dog', 'megalith', 'sarong', 'green mamba', 'sea lion', 'water ouzel', 'bulletproof vest', 'sulphur-crested cockatoo', 'scottish deerhound', 'steel arch bridge', 'catamaran', 'brittany spaniel', 'redshank', 'otter', 'brabancon griffon', 'balloon', 'rule', 'planetarium', 'trombone', 'mitten', 'abaya', 'crash helmet', 'milk can', 'hartebeest', 'windsor tie', 'irish terrier', 'african chameleon', 'matchstick', 'water bottle', 'cloak', 'ground beetle', 'ashcan', 'crane', 'gila monster', 'unicycle', 'gazelle', 'wombat', 'brain coral', 'projector', 'custard apple', 'proboscis monkey', 'tibetan mastiff', 'mosque', 'plastic bag', 'backpack', 'drum', 'norwich terrier', 'pizza', 'carton', 'plane', 'gorilla', 'jigsaw puzzle', 'forklift', 'isopod', 'otterhound', 'vacuum', 'european fire salamander', 'apron', 'langur', 'boxer', 'african grey', 'ice lolly', 'toilet seat', 'golf ball', 'titi', 'drake', 'ostrich', 'magnetic compass', 'great pyrenees', 'rhodesian ridgeback', 'buckeye', 'dungeness crab', 'toy poodle', 'ptarmigan', 'amphibian', 'monitor', 'school bus', 'schooner', 'spatula', 'weevil', 'speedboat', 'sundial', 'borzoi', 'bassoon', 'bath towel', 'pill bottle', 'acorn', 'tick', 'briard', 'thimble', 'brass', 'white wolf', 'boathouse', 'yawl', 'miniature pinscher', 'barn spider', 'jean', 'water snake', 'dishrag', 'yorkshire terrier', 'hammerhead', 'typewriter keyboard', 'papillon', 'ocarina', 'washer', 'standard poodle', 'china cabinet', 'steel drum', 'swab', 'mobile home', 'german short-haired pointer', 'saluki', 'bee eater', 'rock python', 'vine snake', 'kelpie', 'harmonica', 'military uniform', 'reel', 'thatch', 'maraca', 'tricycle', 'sidewinder', 'parallel bars', 'banana', 'flute', 'paintbrush', 'sleeping bag', "yellow lady's slipper", 'three-toed sloth', 'white stork', 'notebook', 'weasel', 'tiger shark', 'football helmet', 'madagascar cat', 'dowitcher', 'wreck', 'king penguin', 'lighter', 'timber wolf', 'racket', 'digital watch', 'liner', 'hen', 'suspension bridge', 'pillow', "carpenter's kit", 'butternut squash', 'sandal', 'sussex spaniel', 'hip', 'american staffordshire terrier', 'flamingo', 'analog clock', 'black and gold garden spider', 'sea cucumber', 'indian elephant', 'syringe', 'lens cap', 'missile', 'cougar', 'diaper', 'chambered nautilus', 'garter snake', 'anemone fish', 'organ', 'limousine', 'horse cart', 'jaguar', 'frilled lizard', 'crutch', 'sea anemone', 'guenon', 'meat loaf', 'slide rule', 'saltshaker', 'pomegranate', 'acoustic guitar', 
'shopping cart', 'drilling platform', 'nematode', 'chickadee', 'academic gown', 'candle', 'norwegian elkhound', 'armadillo', 'horizontal bar', 'orangutan', 'obelisk', 'stone wall', 'cannon', 'rugby ball', 'ping-pong ball', 'window shade', 'trolleybus', 'ice cream', 'pop bottle', 'cock', 'harvestman', 'leatherback turtle', 'killer whale', 'spaghetti squash', 'chain saw', 'stinkhorn', 'espresso maker', 'loafer', 'bagel', 'ballplayer', 'skunk', 'chainlink fence', 'earthstar', 'whiptail', 'barrel', 'kerry blue terrier', 'triceratops', 'chow', 'grey fox', 'sax', 'binoculars', 'ladybug', 'silky terrier', 'gas pump', 'cradle', 'whiskey jug', 'french bulldog', 'eskimo dog', 'hog', 'hognose snake', 'pickup', 'indian cobra', 'hand-held computer', 'printer', 'pole', 'bald eagle', 'american alligator', 'dumbbell', 'umbrella', 'mink', 'shower cap', 'tank', 'quill', 'fox squirrel', 'ambulance', 'lesser panda', 'frying pan', 'letter opener', 'hook', 'strainer', 'pick', 'dragonfly', 'gar', 'piggy bank', 'envelope', 'stole', 'ibex', 'american chameleon', 'bearskin', 'microwave', 'petri dish', 'wood rabbit', 'beacon', 'dung beetle', 'warplane', 'ruddy turnstone', 'knot', 'fur coat', 'hamper', 'beagle', 'ringlet', 'mask', 'persian cat', 'cellular telephone', 'american coot', 'apiary', 'shovel', 'coffee mug', 'sewing machine', 'spoonbill', 'padlock', 'bell pepper', 'great grey owl', 'squirrel monkey', 'sulphur butterfly', 'scoreboard', 'bow', 'malamute', 'siamang', 'snail', 'remote control', 'sea snake', 'loupe', 'model t', 'english setter', 'dining table', 'face powder', 'tench', "jack-o'-lantern", 'croquet ball', 'water jug', 'airedale', 'airliner', 'guinea pig', 'hare', 'damselfly', 'thresher', 'limpkin', 'buckle', 'english springer', 'boa constrictor', 'french horn', 'black-footed ferret', 'shetland sheepdog', 'capuchin', 'cheeseburger', 'miniature poodle', 'spotlight', 'wooden spoon', 'west highland white terrier', 'wig', 'running shoe', 'cowboy boot', 'brown bear', 'iron', 'brassiere', 'magpie', 'gondola', 'grand piano', 'granny smith', 'mashed potato', 'german shepherd', 'stethoscope', 'cauliflower', 'soccer ball', 'pay-phone', 'jellyfish', 'cairn', 'polecat', 'trifle', 'photocopier', 'shih-tzu', 'orange', 'guacamole', 'hatchet', 'cello', 'egyptian cat', 'basketball', 'moving van', 'mortarboard', 'dial telephone', 'street sign', 'oil filter', 'beaver', 'spiny lobster', 'chime', 'bookcase', 'chiton', 'black grouse', 'jay', 'axolotl', 'oxygen mask', 'cricket', 'worm fence', 'indri', 'cockroach', 'mushroom', 'dandie dinmont', 'tennis ball', 'howler monkey', 'rapeseed', 'tibetan terrier', 'newfoundland', 'dutch oven', 'paddle', 'joystick', 'golden retriever', 'blenheim spaniel', 'mantis', 'soft-coated wheaten terrier', 'little blue heron', 'convertible', 'bloodhound', 'palace', 'medicine chest', 'english foxhound', 'cleaver', 'sweatshirt', 'mosquito net', 'soap dispenser', 'ladle', 'screwdriver', 'fire screen', 'binder', 'suit', 'barrow', 'clog', 'cucumber', 'baseball', 'lorikeet', 'conch', 'quilt', 'eel', 'horned viper', 'night snake', 'angora', 'pickelhaube', 'gasmask', 'patas') # Some too large images are downsampled in LoadImageNetSImageFromFile. # These images should be upsampled back in results2img. 
LARGES = { '00022800': [1225, 900], '00037230': [2082, 2522], '00011749': [1000, 1303], '00040173': [1280, 960], '00027045': [1880, 1330], '00019424': [2304, 3072], '00015496': [1728, 2304], '00025715': [1083, 1624], '00008260': [1400, 1400], '00047233': [850, 1540], '00043667': [2066, 1635], '00024274': [1920, 2560], '00028437': [1920, 2560], '00018910': [1536, 2048], '00046074': [1600, 1164], '00021215': [1024, 1540], '00034174': [960, 1362], '00007361': [960, 1280], '00030207': [1512, 1016], '00015637': [1600, 1200], '00013665': [2100, 1500], '00028501': [1200, 852], '00047237': [1624, 1182], '00026950': [1200, 1600], '00041704': [1920, 2560], '00027074': [1200, 1600], '00016473': [1200, 1200], '00012206': [2448, 3264], '00019622': [960, 1280], '00008728': [2806, 750], '00027712': [1128, 1700], '00007195': [1290, 1824], '00002942': [2560, 1920], '00037032': [1954, 2613], '00018543': [1067, 1600], '00041570': [1536, 2048], '00004422': [1728, 2304], '00044827': [800, 1280], '00046674': [1200, 1600], '00017711': [1200, 1600], '00048488': [1889, 2834], '00000706': [1501, 2001], '00032736': [1200, 1600], '00024348': [1536, 2048], '00023430': [1051, 1600], '00030496': [1350, 900], '00026543': [1280, 960], '00010969': [2560, 1920], '00025272': [1294, 1559], '00019950': [1536, 1024], '00004466': [1182, 1722], '00029917': [3072, 2304], '00014683': [1145, 1600], '00013084': [1281, 2301], '00039792': [1760, 1034], '00046246': [2448, 3264], '00004280': [984, 1440], '00009435': [1127, 1502], '00012860': [1673, 2500], '00016702': [1444, 1000], '00011278': [2048, 3072], '00048174': [1605, 2062], '00035451': [1225, 1636], '00024769': [1200, 900], '00032797': [1251, 1664], '00027924': [1453, 1697], '00010965': [1536, 2048], '00020735': [1200, 1600], '00027789': [853, 1280], '00015113': [1324, 1999], '00037571': [1251, 1586], '00030120': [1536, 2048], '00044219': [2448, 3264], '00024604': [1535, 1955], '00010926': [1200, 900], '00017509': [1536, 2048], '00042373': [924, 1104], '00037066': [1536, 2048], '00025494': [1880, 1060], '00028610': [1377, 2204], '00007196': [1202, 1600], '00030788': [2592, 1944], '00046865': [1920, 2560], '00027141': [1600, 1200], '00023215': [1200, 1600], '00000218': [1439, 1652], '00048126': [1516, 927], '00030408': [1600, 2400], '00038582': [1600, 1200], '00046959': [1304, 900], '00016988': [1242, 1656], '00017201': [1629, 1377], '00017658': [1000, 1035], '00002766': [1495, 2383], '00038573': [1600, 1071], '00042297': [1200, 1200], '00010564': [995, 1234], '00001189': [1600, 1200], '00007018': [1858, 2370], '00043554': [1200, 1600], '00000746': [1200, 1600], '00001386': [960, 1280], '00029975': [1600, 1200], '00016221': [2877, 2089], '00003152': [1200, 1600], '00002552': [1200, 1600], '00009402': [1125, 1500], '00040672': [960, 1280], '00024540': [960, 1280], '00049770': [1457, 1589], '00014533': [841, 1261], '00006228': [1417, 1063], '00034688': [1354, 2032], '00032897': [1071, 1600], '00024356': [2043, 3066], '00019656': [1318, 1984], '00035802': [2288, 2001], '00017499': [1502, 1162], '00046898': [1200, 1600], '00040883': [1024, 1280], '00031353': [1544, 1188], '00028419': [1600, 1200], '00048897': [2304, 3072], '00040683': [1296, 1728], '00042406': [848, 1200], '00036007': [900, 1200], '00010515': [1688, 1387], '00048409': [5005, 3646], '00032654': [1200, 1600], '00037955': [1200, 1600], '00038471': [3072, 2048], '00036201': [913, 1328], '00038619': [1728, 2304], '00038165': [926, 2503], '00033240': [1061, 1158], '00023086': [1200, 1600], '00041385': [1200, 1600], 
'00014066': [2304, 3072], '00049973': [1211, 1261], '00043188': [2000, 3000], '00047186': [1535, 1417], '00046975': [1560, 2431], '00034402': [1776, 2700], '00017033': [1392, 1630], '00041068': [1280, 960], '00011024': [1317, 900], '00048035': [1800, 1200], '00033286': [994, 1500], '00016613': [1152, 1536], '00044160': [888, 1200], '00021138': [902, 1128], '00022300': [798, 1293], '00034300': [1920, 2560], '00008603': [1661, 1160], '00045173': [2312, 903], '00048616': [960, 1280], '00048317': [3872, 2592], '00045470': [1920, 1800], '00043934': [1667, 2500], '00010699': [2240, 1488], '00030550': [1200, 1600], '00010516': [1704, 2272], '00001779': [1536, 2048], '00018389': [1084, 1433], '00013889': [3072, 2304], '00022440': [2112, 2816], '00024005': [2592, 1944], '00046620': [960, 1280], '00035227': [960, 1280], '00033636': [1110, 1973], '00003624': [1165, 1600], '00033400': [1200, 1600], '00013891': [1200, 1600], '00022593': [1472, 1456], '00009546': [1936, 2592], '00022022': [1182, 1740], '00022982': [1200, 1600], '00039569': [1600, 1067], '00009276': [930, 1240], '00026777': [960, 1280], '00047680': [1425, 882], '00040785': [853, 1280], '00002037': [1944, 2592], '00005813': [1098, 987], '00018328': [1128, 1242], '00022318': [1500, 1694], '00026654': [790, 1285], '00012895': [1600, 1067], '00007882': [980, 1024], '00043771': [1008, 1043], '00032990': [3621, 2539], '00034094': [1175, 1600], '00034302': [1463, 1134], '00025021': [1503, 1520], '00000771': [900, 1200], '00025149': [1600, 1200], '00005211': [1063, 1600], '00049544': [1063, 1417], '00025378': [1800, 2400], '00024287': [1200, 1600], '00013550': [2448, 3264], '00008076': [1200, 1600], '00039536': [1000, 1500], '00020331': [1024, 1280], '00002623': [1050, 1400], '00031071': [873, 1320], '00025266': [1024, 1536], '00015109': [1213, 1600], '00027390': [1200, 1600], '00018894': [1584, 901], '00049009': [900, 1203], '00026671': [1201, 1601], '00018668': [1024, 990], '00016942': [1024, 1024], '00046430': [1944, 3456], '00033261': [1341, 1644], '00017363': [2304, 2898], '00045935': [2112, 2816], '00027084': [900, 1200], '00037716': [1611, 981], '00030879': [1200, 1600], '00027539': [1534, 1024], '00030052': [1280, 852], '00011015': [2808, 2060], '00037004': [1920, 2560], '00044012': [2240, 1680], '00049818': [1704, 2272], '00003541': [1200, 1600], '00000520': [2448, 3264], '00028331': [3264, 2448], '00030244': [1200, 1600], '00039079': [1600, 1200], '00033432': [1600, 1200], '00010533': [1200, 1600], '00005916': [899, 1200], '00038903': [1052, 1592], '00025169': [1895, 850], '00049042': [1200, 1600], '00021828': [1280, 988], '00013420': [3648, 2736], '00045201': [1381, 1440], '00021857': [776, 1296], '00048810': [1168, 1263], '00047860': [2592, 3888], '00046960': [2304, 3072], '00039357': [1200, 1600], '00019620': [1536, 2048], '00026710': [1944, 2592], '00021277': [1079, 1151], '00028387': [1128, 1585], '00028796': [990, 1320], '00035149': [1064, 1600], '00020182': [1843, 1707], '00018286': [2592, 1944], '00035658': [1488, 1984], '00008180': [1024, 1633], '00018740': [1200, 1600], '00044356': [1536, 2048], '00038857': [1252, 1676], '00035014': [1200, 1600], '00044824': [1200, 1600], '00009912': [1200, 1600], '00014572': [2400, 1800], '00001585': [1600, 1067], '00047704': [1200, 1600], '00038537': [920, 1200], '00027941': [2200, 3000], '00028526': [2592, 1944], '00042353': [1280, 1024], '00043409': [2000, 1500], '00002209': [2592, 1944], '00040841': [1613, 1974], '00038889': [900, 1200], '00046941': [1200, 1600], '00014029': [846, 1269], 
'00023091': [900, 1200], '00036184': [877, 1350], '00006165': [1200, 1600], '00033991': [868, 2034], '00035078': [1680, 2240], '00045681': [1467, 1134], '00043867': [1200, 1600], '00003586': [1200, 1600], '00039024': [1283, 2400], '00048990': [1200, 1200], '00044334': [960, 1280], '00020939': [960, 1280], '00031529': [1302, 1590], '00014867': [2112, 2816], '00034239': [1536, 2048], '00031845': [1200, 1600], '00045721': [1536, 2048], '00025336': [1441, 1931], '00040323': [900, 1152], '00009133': [876, 1247], '00033687': [2357, 3657], '00038351': [1306, 1200], '00022618': [1060, 1192], '00001626': [777, 1329], '00039137': [1071, 1600], '00034896': [1426, 1590], '00048502': [1187, 1837], '00048077': [1712, 2288], '00026239': [1200, 1600], '00032687': [857, 1280], '00006639': [1498, 780], '00037738': [2112, 2816], '00035760': [1123, 1447], '00004897': [1083, 1393], '00012141': [3584, 2016], '00016278': [3234, 2281], '00006661': [1787, 3276], '00033040': [1200, 1800], '00009881': [960, 1280], '00008240': [2592, 1944], '00023506': [960, 1280], '00046982': [1693, 2480], '00049632': [2310, 1638], '00005473': [960, 1280], '00013491': [2000, 3008], '00005581': [1593, 1200], '00005196': [1417, 2133], '00049433': [1207, 1600], '00012323': [1200, 1800], '00021883': [1600, 2400], '00031877': [2448, 3264], '00046428': [1200, 1600], '00000725': [881, 1463], '00044936': [894, 1344], '00012054': [3040, 4048], '00025447': [900, 1200], '00005290': [1520, 2272], '00023326': [984, 1312], '00047891': [1067, 1600], '00026115': [1067, 1600], '00010051': [1062, 1275], '00005999': [1123, 1600], '00021752': [1071, 1600], '00041559': [1200, 1600], '00025931': [836, 1410], '00009327': [2848, 4288], '00029735': [1905, 1373], '00012922': [1024, 1547], '00042259': [1548, 1024], '00024949': [1050, 956], '00014669': [900, 1200], '00028028': [1170, 1730], '00003183': [1152, 1535], '00039304': [1050, 1680], '00014939': [1904, 1240], '00048366': [1600, 1200], '00022406': [3264, 2448], '00033363': [1125, 1500], '00041230': [1125, 1500], '00044222': [2105, 2472], '00021950': [1200, 1200], '00028475': [2691, 3515], '00002149': [900, 1600], '00033356': [1080, 1920], '00041158': [960, 1280], '00029672': [1536, 2048], '00045816': [1023, 1153], '00020471': [2076, 2716], '00012398': [1067, 1600], '00017884': [2048, 3072], '00025132': [1200, 1600], '00042429': [1362, 1980], '00021285': [1127, 1200], '00045113': [2792, 2528], '00047915': [1200, 891], '00009481': [1097, 924], '00025448': [1760, 2400], '00033911': [1759, 2197], '00044684': [1200, 1600], '00033754': [2304, 1728], '00002733': [1536, 2048], '00027371': [936, 1128], '00019941': [685, 1591], '00028479': [1944, 2592], '00018451': [1028, 1028], '00024067': [1000, 1352], '00016524': [1704, 2272], '00048926': [1944, 2592], '00020992': [1024, 1280], '00044576': [1024, 1280], '00031796': [960, 1280], '00043540': [2448, 3264], '00049250': [1056, 1408], '00030602': [2592, 3872], '00046571': [1118, 1336], '00024908': [1442, 1012], '00018903': [3072, 2304], '00032370': [1944, 2592], '00043445': [1050, 1680], '00030791': [2228, 3168], '00046866': [2057, 3072], '00047293': [1800, 2400], '00024853': [1296, 1936], '00014344': [1125, 1500], '00041327': [960, 1280], '00017867': [2592, 3872], '00037615': [1664, 2496], '00011247': [1605, 2934], '00034664': [2304, 1728], '00013733': [1024, 1280], '00009125': [1200, 1600], '00035163': [1654, 1233], '00017537': [1200, 1600], '00043423': [1536, 2048], '00035755': [1154, 900], '00021712': [1600, 1200], '00000597': [2792, 1908], '00033579': [882, 
1181], '00035830': [2112, 2816], '00005917': [920, 1380], '00029722': [2736, 3648], '00039979': [1200, 1600], '00040854': [1606, 2400], '00039884': [2848, 4288], '00003508': [1128, 1488], '00019862': [1200, 1600], '00041813': [1226, 1160], '00007121': [985, 1072], '00013315': [883, 1199], '00049822': [922, 1382], '00027622': [1434, 1680], '00047689': [1536, 2048], '00017415': [1491, 2283], '00023713': [927, 1287], '00001632': [1200, 1600], '00033104': [1200, 1600], '00017643': [1002, 1200], '00038396': [1330, 1999], '00027614': [2166, 2048], '00025962': [1600, 1200], '00015915': [1067, 1600], '00008940': [1942, 2744], '00012468': [2000, 2000], '00046953': [828, 1442], '00002084': [1067, 1600], '00040245': [2657, 1898], '00023718': [900, 1440], '00022770': [924, 1280], '00028957': [960, 1280], '00001054': [2048, 3072], '00040541': [1369, 1809], '00024869': [960, 1280], '00037655': [900, 1440], '00037200': [2171, 2575], '00037390': [1394, 1237], '00025318': [1054, 1024], '00021634': [1800, 2400], '00044217': [1003, 1024], '00014877': [1200, 1600], '00029504': [1224, 1632], '00016422': [960, 1280], '00028015': [1944, 2592], '00006235': [967, 1291], '00045909': [2272, 1704] } def __init__(self, subset=919, **kwargs): assert subset in (50, 300, 919), \ 'ImageNet-S has three subsets, i.e., '\ 'ImageNet-S50, ImageNet-S300 and ImageNet-S919.' if subset == 50: self.CLASSES = self.CLASSES50 elif subset == 300: self.CLASSES = self.CLASSES300 else: self.CLASSES = self.CLASSES919 super(ImageNetSDataset, self).__init__( img_suffix='.JPEG', seg_map_suffix='.png', reduce_zero_label=False, ignore_index=1000, **kwargs) self.subset = subset gt_seg_map_loader_cfg = kwargs.get('gt_seg_map_loader_cfg', None) self.gt_seg_map_loader = LoadImageNetSAnnotations( ) if gt_seg_map_loader_cfg is None else LoadImageNetSAnnotations( **gt_seg_map_loader_cfg) def pre_eval(self, preds, indices): """Collect eval result for ImageNet-S. In LoadImageNetSImageFromFile, the too large images have been downsampled. Here the preds should be upsampled back after argmax. Args: preds (list[torch.Tensor] | torch.Tensor): the segmentation logit after argmax, shape (N, H, W). indices (list[int] | int): the prediction related ground truth indices. Returns: list[torch.Tensor]: (area_intersect, area_union, area_prediction, area_ground_truth). """ # In order to compat with batch inference if not isinstance(indices, list): indices = [indices] if not isinstance(preds, list): preds = [preds] pre_eval_results = [] for pred, index in zip(preds, indices): seg_map = self.get_gt_seg_map_by_idx(index) pred = mmcv.imresize( pred, size=(seg_map.shape[1], seg_map.shape[0]), interpolation='nearest') pre_eval_results.append( intersect_and_union( pred, seg_map, len(self.CLASSES), self.ignore_index, # as the labels has been converted when dataset initialized # in `get_palette_for_custom_classes ` this `label_map` # should be `dict()`, see # https://github.com/open-mmlab/mmsegmentation/issues/1415 # for more ditails label_map=dict(), reduce_zero_label=self.reduce_zero_label)) return pre_eval_results def results2img(self, results, imgfile_prefix, to_label_id, indices=None): """Write the segmentation results to images for ImageNetS. The results should be converted as RGB images due to 919 (>256) categroies. In LoadImageNetSImageFromFile, the too large images have been downsampled. Here the results should be upsampled back after argmax. Args: results (list[ndarray]): Testing results of the dataset. imgfile_prefix (str): The filename prefix of the png files. 
If the prefix is "somepath/xxx", the png files will be named "somepath/xxx.png". to_label_id (bool): whether convert output to label_id for submission. indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: list[str: str]: result txt files which contains corresponding semantic segmentation images. """ if indices is None: indices = list(range(len(self))) result_files = [] for result, idx in zip(results, indices): filename = self.img_infos[idx]['filename'] directory = filename.split('/')[-2] basename = osp.splitext(osp.basename(filename))[0] png_filename = osp.join(imgfile_prefix, directory, f'{basename}.png') # The index range of output is from 0 to 919/300/50. result_rgb = np.zeros(shape=(result.shape[0], result.shape[1], 3)) result_rgb[:, :, 0] = result % 256 result_rgb[:, :, 1] = result // 256 if basename.split('_')[2] in self.LARGES.keys(): result_rgb = mmcv.imresize( result_rgb, size=(self.LARGES[basename.split('_')[2]][1], self.LARGES[basename.split('_')[2]][0]), interpolation='nearest') mmcv.mkdir_or_exist(osp.join(imgfile_prefix, directory)) output = Image.fromarray(result_rgb.astype(np.uint8)) output.save(png_filename) result_files.append(png_filename) return result_files def format_results(self, results, imgfile_prefix, to_label_id=True, indices=None): """Format the results into dir (standard format for ImageNetS evaluation). Args: results (list): Testing results of the dataset. imgfile_prefix (str | None): The prefix of images files. It includes the file path and the prefix of filename, e.g., "a/b/prefix". to_label_id (bool): whether convert output to label_id for submission. Default: False indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: tuple: (result_files, tmp_dir), result_files is a list containing the image paths, tmp_dir is the temporal directory created for saving json/png files when img_prefix is not specified. """ if indices is None: indices = list(range(len(self))) assert isinstance(results, list), 'results must be a list.' assert isinstance(indices, list), 'indices must be a list.' result_files = self.results2img(results, imgfile_prefix, to_label_id, indices) return result_files
45,305
44.080597
79
py
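A small, self-contained check of the label encoding used above: results2img writes a label id L as R = L % 256 and G = L // 256, and LoadImageNetSAnnotations reads it back as G * 256 + R (channels 1 and 2, because the PNG is decoded in BGR order). Purely illustrative.

import numpy as np

labels = np.array([[0, 255], [256, 919]], dtype=np.int32)

# Encode as in results2img (RGB channel order before saving with PIL).
rgb = np.zeros(labels.shape + (3, ), dtype=np.uint8)
rgb[..., 0] = labels % 256    # R
rgb[..., 1] = labels // 256   # G

# Simulate reading the saved PNG back in BGR order (as OpenCV/mmcv do).
bgr = rgb[..., ::-1]

# Decode as in LoadImageNetSAnnotations: G * 256 + R.
decoded = bgr[..., 1].astype(np.int32) * 256 + bgr[..., 2].astype(np.int32)
assert np.array_equal(decoded, labels)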
mmsegmentation
mmsegmentation-master/mmseg/datasets/isaid.py
# Copyright (c) OpenMMLab. All rights reserved.
import mmcv
from mmcv.utils import print_log

from ..utils import get_root_logger
from .builder import DATASETS
from .custom import CustomDataset


@DATASETS.register_module()
class iSAIDDataset(CustomDataset):
    """iSAID: A Large-scale Dataset for Instance Segmentation in Aerial
    Images.

    In segmentation map annotation for the iSAID dataset, 0 stands for
    background, which is included in the 16 categories. The ``img_suffix``
    and ``seg_map_suffix`` are both fixed to '.png'; annotation file names
    additionally carry the '_instance_color_RGB' infix (see
    ``load_annotations``).
    """

    CLASSES = ('background', 'ship', 'store_tank', 'baseball_diamond',
               'tennis_court', 'basketball_court', 'Ground_Track_Field',
               'Bridge', 'Large_Vehicle', 'Small_Vehicle', 'Helicopter',
               'Swimming_pool', 'Roundabout', 'Soccer_ball_field', 'plane',
               'Harbor')

    PALETTE = [[0, 0, 0], [0, 0, 63], [0, 63, 63], [0, 63, 0], [0, 63, 127],
               [0, 63, 191], [0, 63, 255], [0, 127, 63], [0, 127, 127],
               [0, 0, 127], [0, 0, 191], [0, 0, 255], [0, 191, 127],
               [0, 127, 191], [0, 127, 255], [0, 100, 155]]

    def __init__(self, **kwargs):
        super(iSAIDDataset, self).__init__(
            img_suffix='.png',
            seg_map_suffix='.png',
            ignore_index=255,
            **kwargs)
        assert self.file_client.exists(self.img_dir)

    def load_annotations(self,
                         img_dir,
                         img_suffix,
                         ann_dir,
                         seg_map_suffix=None,
                         split=None):
        """Load annotation from directory.

        Args:
            img_dir (str): Path to image directory.
            img_suffix (str): Suffix of images.
            ann_dir (str|None): Path to annotation directory.
            seg_map_suffix (str|None): Suffix of segmentation maps.
            split (str|None): Split txt file. If split is specified, only the
                files listed in it will be loaded. Otherwise, all images in
                img_dir/ann_dir will be loaded. Default: None

        Returns:
            list[dict]: All image info of dataset.
        """
        img_infos = []
        if split is not None:
            with open(split) as f:
                for line in f:
                    name = line.strip()
                    img_info = dict(filename=name + img_suffix)
                    if ann_dir is not None:
                        ann_name = name + '_instance_color_RGB'
                        seg_map = ann_name + seg_map_suffix
                        img_info['ann'] = dict(seg_map=seg_map)
                    img_infos.append(img_info)
        else:
            for img in mmcv.scandir(img_dir, img_suffix, recursive=True):
                img_info = dict(filename=img)
                if ann_dir is not None:
                    seg_img = img
                    seg_map = seg_img.replace(
                        img_suffix, '_instance_color_RGB' + seg_map_suffix)
                    img_info['ann'] = dict(seg_map=seg_map)
                img_infos.append(img_info)

        print_log(f'Loaded {len(img_infos)} images', logger=get_root_logger())
        return img_infos
3,283
38.566265
79
py
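A tiny illustration of the image-to-annotation name mapping implemented in load_annotations above; the patch name is made up.

img_suffix, seg_map_suffix = '.png', '.png'
img = 'P0003_0_800_0_800.png'   # hypothetical iSAID patch name
seg_map = img.replace(img_suffix, '_instance_color_RGB' + seg_map_suffix)
print(seg_map)   # P0003_0_800_0_800_instance_color_RGB.png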
mmsegmentation
mmsegmentation-master/mmseg/datasets/isprs.py
# Copyright (c) OpenMMLab. All rights reserved.
from .builder import DATASETS
from .custom import CustomDataset


@DATASETS.register_module()
class ISPRSDataset(CustomDataset):
    """ISPRS dataset.

    In segmentation map annotation for ISPRS, 0 is the ignore index.
    ``reduce_zero_label`` should be set to True. The ``img_suffix`` and
    ``seg_map_suffix`` are both fixed to '.png'.
    """
    CLASSES = ('impervious_surface', 'building', 'low_vegetation', 'tree',
               'car', 'clutter')

    PALETTE = [[255, 255, 255], [0, 0, 255], [0, 255, 255], [0, 255, 0],
               [255, 255, 0], [255, 0, 0]]

    def __init__(self, **kwargs):
        super(ISPRSDataset, self).__init__(
            img_suffix='.png',
            seg_map_suffix='.png',
            reduce_zero_label=True,
            **kwargs)
827
30.846154
74
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/loveda.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import mmcv import numpy as np from PIL import Image from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class LoveDADataset(CustomDataset): """LoveDA dataset. In segmentation map annotation for LoveDA, 0 is the ignore index. ``reduce_zero_label`` should be set to True. The ``img_suffix`` and ``seg_map_suffix`` are both fixed to '.png'. """ CLASSES = ('background', 'building', 'road', 'water', 'barren', 'forest', 'agricultural') PALETTE = [[255, 255, 255], [255, 0, 0], [255, 255, 0], [0, 0, 255], [159, 129, 183], [0, 255, 0], [255, 195, 128]] def __init__(self, **kwargs): super(LoveDADataset, self).__init__( img_suffix='.png', seg_map_suffix='.png', reduce_zero_label=True, **kwargs) def results2img(self, results, imgfile_prefix, indices=None): """Write the segmentation results to images. Args: results (list[ndarray]): Testing results of the dataset. imgfile_prefix (str): The filename prefix of the png files. If the prefix is "somepath/xxx", the png files will be named "somepath/xxx.png". indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: list[str: str]: result txt files which contains corresponding semantic segmentation images. """ mmcv.mkdir_or_exist(imgfile_prefix) result_files = [] for result, idx in zip(results, indices): filename = self.img_infos[idx]['filename'] basename = osp.splitext(osp.basename(filename))[0] png_filename = osp.join(imgfile_prefix, f'{basename}.png') # The index range of official requirement is from 0 to 6. output = Image.fromarray(result.astype(np.uint8)) output.save(png_filename) result_files.append(png_filename) return result_files def format_results(self, results, imgfile_prefix, indices=None): """Format the results into dir (standard format for LoveDA evaluation). Args: results (list): Testing results of the dataset. imgfile_prefix (str): The prefix of images files. It includes the file path and the prefix of filename, e.g., "a/b/prefix". indices (list[int], optional): Indices of input results, if not set, all the indices of the dataset will be used. Default: None. Returns: tuple: (result_files, tmp_dir), result_files is a list containing the image paths, tmp_dir is the temporal directory created for saving json/png files when img_prefix is not specified. """ if indices is None: indices = list(range(len(self))) assert isinstance(results, list), 'results must be a list.' assert isinstance(indices, list), 'indices must be a list.' result_files = self.results2img(results, imgfile_prefix, indices) return result_files
3,349
35.021505
79
py
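A self-contained sketch of what results2img above does for a single sample: save an (H, W) uint8 label map, with values in the official 0-6 range, as '<basename>.png'. The prediction and output path are placeholders.

import os.path as osp

import numpy as np
from PIL import Image

result = np.random.randint(0, 7, size=(32, 32))           # fake prediction
png_filename = osp.join('/tmp', 'loveda_demo_0001.png')   # assumed output path
Image.fromarray(result.astype(np.uint8)).save(png_filename)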
mmsegmentation
mmsegmentation-master/mmseg/datasets/night_driving.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .cityscapes import CityscapesDataset @DATASETS.register_module() class NightDrivingDataset(CityscapesDataset): """NightDrivingDataset dataset.""" def __init__(self, **kwargs): super().__init__( img_suffix='_leftImg8bit.png', seg_map_suffix='_gtCoarse_labelTrainIds.png', **kwargs)
419
27
57
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/pascal_context.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class PascalContextDataset(CustomDataset): """PascalContext dataset. In segmentation map annotation for PascalContext, 0 stands for background, which is included in 60 categories. ``reduce_zero_label`` is fixed to False. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to '.png'. Args: split (str): Split txt file for PascalContext. """ CLASSES = ('background', 'aeroplane', 'bag', 'bed', 'bedclothes', 'bench', 'bicycle', 'bird', 'boat', 'book', 'bottle', 'building', 'bus', 'cabinet', 'car', 'cat', 'ceiling', 'chair', 'cloth', 'computer', 'cow', 'cup', 'curtain', 'dog', 'door', 'fence', 'floor', 'flower', 'food', 'grass', 'ground', 'horse', 'keyboard', 'light', 'motorbike', 'mountain', 'mouse', 'person', 'plate', 'platform', 'pottedplant', 'road', 'rock', 'sheep', 'shelves', 'sidewalk', 'sign', 'sky', 'snow', 'sofa', 'table', 'track', 'train', 'tree', 'truck', 'tvmonitor', 'wall', 'water', 'window', 'wood') PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255]] def __init__(self, split, **kwargs): super(PascalContextDataset, self).__init__( img_suffix='.jpg', seg_map_suffix='.png', split=split, reduce_zero_label=False, **kwargs) assert self.file_client.exists(self.img_dir) and self.split is not None @DATASETS.register_module() class PascalContextDataset59(CustomDataset): """PascalContext dataset. In segmentation map annotation for PascalContext59, background is not included in 59 categories. ``reduce_zero_label`` is fixed to True. The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to '.png'. Args: split (str): Split txt file for PascalContext. 
""" CLASSES = ('aeroplane', 'bag', 'bed', 'bedclothes', 'bench', 'bicycle', 'bird', 'boat', 'book', 'bottle', 'building', 'bus', 'cabinet', 'car', 'cat', 'ceiling', 'chair', 'cloth', 'computer', 'cow', 'cup', 'curtain', 'dog', 'door', 'fence', 'floor', 'flower', 'food', 'grass', 'ground', 'horse', 'keyboard', 'light', 'motorbike', 'mountain', 'mouse', 'person', 'plate', 'platform', 'pottedplant', 'road', 'rock', 'sheep', 'shelves', 'sidewalk', 'sign', 'sky', 'snow', 'sofa', 'table', 'track', 'train', 'tree', 'truck', 'tvmonitor', 'wall', 'water', 'window', 'wood') PALETTE = [[180, 120, 120], [6, 230, 230], [80, 50, 50], [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255]] def __init__(self, split, **kwargs): super(PascalContextDataset59, self).__init__( img_suffix='.jpg', seg_map_suffix='.png', split=split, reduce_zero_label=True, **kwargs) assert self.file_client.exists(self.img_dir) and self.split is not None
5,239
49.384615
79
py
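For reference, a sketch of the usual ``reduce_zero_label`` remapping that distinguishes the two classes above (it is applied in LoadAnnotations, which is not part of this file): label 0 becomes the ignore index 255 and the remaining ids shift down by one, which is why PascalContextDataset59 drops the background class.

import numpy as np

gt = np.array([[0, 1], [2, 60]], dtype=np.uint8)   # 0 = background
reduced = gt.copy()
reduced[reduced == 0] = 255      # background -> ignore index
reduced = reduced - 1            # remaining ids shift down by one
reduced[reduced == 254] = 255    # keep the ignore index at 255
print(reduced)                   # [[255 0], [1 59]]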
mmsegmentation
mmsegmentation-master/mmseg/datasets/potsdam.py
# Copyright (c) OpenMMLab. All rights reserved. from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class PotsdamDataset(CustomDataset): """ISPRS Potsdam dataset. In segmentation map annotation for Potsdam dataset, 0 is the ignore index. ``reduce_zero_label`` should be set to True. The ``img_suffix`` and ``seg_map_suffix`` are both fixed to '.png'. """ CLASSES = ('impervious_surface', 'building', 'low_vegetation', 'tree', 'car', 'clutter') PALETTE = [[255, 255, 255], [0, 0, 255], [0, 255, 255], [0, 255, 0], [255, 255, 0], [255, 0, 0]] def __init__(self, **kwargs): super(PotsdamDataset, self).__init__( img_suffix='.png', seg_map_suffix='.png', reduce_zero_label=True, **kwargs)
848
31.653846
78
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/stare.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class STAREDataset(CustomDataset): """STARE dataset. In segmentation map annotation for STARE, 0 stands for background, which is included in 2 categories. ``reduce_zero_label`` is fixed to False. The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to '.ah.png'. """ CLASSES = ('background', 'vessel') PALETTE = [[120, 120, 120], [6, 230, 230]] def __init__(self, **kwargs): super(STAREDataset, self).__init__( img_suffix='.png', seg_map_suffix='.ah.png', reduce_zero_label=False, **kwargs) assert osp.exists(self.img_dir)
809
26.931034
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/voc.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp from .builder import DATASETS from .custom import CustomDataset @DATASETS.register_module() class PascalVOCDataset(CustomDataset): """Pascal VOC dataset. Args: split (str): Split txt file for Pascal VOC. """ CLASSES = ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor') PALETTE = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] def __init__(self, split, **kwargs): super(PascalVOCDataset, self).__init__( img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs) assert osp.exists(self.img_dir) and self.split is not None
1,178
37.032258
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .compose import Compose from .formatting import (Collect, ImageToTensor, ToDataContainer, ToTensor, Transpose, to_tensor) from .loading import LoadAnnotations, LoadImageFromFile from .test_time_aug import MultiScaleFlipAug from .transforms import (CLAHE, AdjustGamma, Albu, Normalize, Pad, PhotoMetricDistortion, RandomCrop, RandomCutOut, RandomFlip, RandomMosaic, RandomRotate, Rerange, Resize, RGB2Gray, SegRescale) __all__ = [ 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', 'Transpose', 'Collect', 'LoadAnnotations', 'LoadImageFromFile', 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', 'Normalize', 'SegRescale', 'PhotoMetricDistortion', 'RandomRotate', 'AdjustGamma', 'CLAHE', 'Rerange', 'RGB2Gray', 'RandomCutOut', 'RandomMosaic', 'Albu' ]
966
47.35
75
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/compose.py
# Copyright (c) OpenMMLab. All rights reserved. import collections from mmcv.utils import build_from_cfg from ..builder import PIPELINES @PIPELINES.register_module() class Compose(object): """Compose multiple transforms sequentially. Args: transforms (Sequence[dict | callable]): Sequence of transform object or config dict to be composed. """ def __init__(self, transforms): assert isinstance(transforms, collections.abc.Sequence) self.transforms = [] for transform in transforms: if isinstance(transform, dict): transform = build_from_cfg(transform, PIPELINES) self.transforms.append(transform) elif callable(transform): self.transforms.append(transform) else: raise TypeError('transform must be callable or a dict') def __call__(self, data): """Call function to apply transforms sequentially. Args: data (dict): A result dict contains the data to transform. Returns: dict: Transformed data. """ for t in self.transforms: data = t(data) if data is None: return None return data def __repr__(self): format_string = self.__class__.__name__ + '(' for t in self.transforms: format_string += '\n' format_string += f' {t}' format_string += '\n)' return format_string
1,512
27.54717
79
py
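A short, self-contained example of how Compose above handles both config dicts and plain callables (assuming mmseg is installed); the lambda transform is illustrative only.

from mmseg.datasets.pipelines import Compose

pipeline = Compose([
    dict(type='LoadImageFromFile'),    # built from the PIPELINES registry
    dict(type='RandomFlip', prob=0.5),
    lambda results: results,           # any callable is accepted as-is
])
print(pipeline)   # __repr__ lists each transform on its own line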
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/formating.py
# Copyright (c) OpenMMLab. All rights reserved. # flake8: noqa import warnings from .formatting import * warnings.warn('DeprecationWarning: mmseg.datasets.pipelines.formating will be ' 'deprecated in 2021, please replace it with ' 'mmseg.datasets.pipelines.formatting.')
301
29.2
79
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/formatting.py
# Copyright (c) OpenMMLab. All rights reserved. from collections.abc import Sequence import mmcv import numpy as np import torch from mmcv.parallel import DataContainer as DC from ..builder import PIPELINES def to_tensor(data): """Convert objects of various python types to :obj:`torch.Tensor`. Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, :class:`Sequence`, :class:`int` and :class:`float`. Args: data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to be converted. """ if isinstance(data, torch.Tensor): return data elif isinstance(data, np.ndarray): return torch.from_numpy(data) elif isinstance(data, Sequence) and not mmcv.is_str(data): return torch.tensor(data) elif isinstance(data, int): return torch.LongTensor([data]) elif isinstance(data, float): return torch.FloatTensor([data]) else: raise TypeError(f'type {type(data)} cannot be converted to tensor.') @PIPELINES.register_module() class ToTensor(object): """Convert some results to :obj:`torch.Tensor` by given keys. Args: keys (Sequence[str]): Keys that need to be converted to Tensor. """ def __init__(self, keys): self.keys = keys def __call__(self, results): """Call function to convert data in results to :obj:`torch.Tensor`. Args: results (dict): Result dict contains the data to convert. Returns: dict: The result dict contains the data converted to :obj:`torch.Tensor`. """ for key in self.keys: results[key] = to_tensor(results[key]) return results def __repr__(self): return self.__class__.__name__ + f'(keys={self.keys})' @PIPELINES.register_module() class ImageToTensor(object): """Convert image to :obj:`torch.Tensor` by given keys. The dimension order of input image is (H, W, C). The pipeline will convert it to (C, H, W). If only 2 dimension (H, W) is given, the output would be (1, H, W). Args: keys (Sequence[str]): Key of images to be converted to Tensor. """ def __init__(self, keys): self.keys = keys def __call__(self, results): """Call function to convert image in results to :obj:`torch.Tensor` and transpose the channel order. Args: results (dict): Result dict contains the image data to convert. Returns: dict: The result dict contains the image converted to :obj:`torch.Tensor` and transposed to (C, H, W) order. """ for key in self.keys: img = results[key] if len(img.shape) < 3: img = np.expand_dims(img, -1) results[key] = to_tensor(img.transpose(2, 0, 1)) return results def __repr__(self): return self.__class__.__name__ + f'(keys={self.keys})' @PIPELINES.register_module() class Transpose(object): """Transpose some results by given keys. Args: keys (Sequence[str]): Keys of results to be transposed. order (Sequence[int]): Order of transpose. """ def __init__(self, keys, order): self.keys = keys self.order = order def __call__(self, results): """Call function to convert image in results to :obj:`torch.Tensor` and transpose the channel order. Args: results (dict): Result dict contains the image data to convert. Returns: dict: The result dict contains the image converted to :obj:`torch.Tensor` and transposed to (C, H, W) order. """ for key in self.keys: results[key] = results[key].transpose(self.order) return results def __repr__(self): return self.__class__.__name__ + \ f'(keys={self.keys}, order={self.order})' @PIPELINES.register_module() class ToDataContainer(object): """Convert results to :obj:`mmcv.DataContainer` by given fields. Args: fields (Sequence[dict]): Each field is a dict like ``dict(key='xxx', **kwargs)``. 
The ``key`` in result will be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. Default: ``(dict(key='img', stack=True), dict(key='gt_semantic_seg'))``. """ def __init__(self, fields=(dict(key='img', stack=True), dict(key='gt_semantic_seg'))): self.fields = fields def __call__(self, results): """Call function to convert data in results to :obj:`mmcv.DataContainer`. Args: results (dict): Result dict contains the data to convert. Returns: dict: The result dict contains the data converted to :obj:`mmcv.DataContainer`. """ for field in self.fields: field = field.copy() key = field.pop('key') results[key] = DC(results[key], **field) return results def __repr__(self): return self.__class__.__name__ + f'(fields={self.fields})' @PIPELINES.register_module() class DefaultFormatBundle(object): """Default formatting bundle. It simplifies the pipeline of formatting common fields, including "img" and "gt_semantic_seg". These fields are formatted as follows. - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, (3)to DataContainer (stack=True) """ def __call__(self, results): """Call function to transform and format common fields in results. Args: results (dict): Result dict contains the data to convert. Returns: dict: The result dict contains the data that is formatted with default bundle. """ if 'img' in results: img = results['img'] if len(img.shape) < 3: img = np.expand_dims(img, -1) img = np.ascontiguousarray(img.transpose(2, 0, 1)) results['img'] = DC(to_tensor(img), stack=True) if 'gt_semantic_seg' in results: # convert to long results['gt_semantic_seg'] = DC( to_tensor(results['gt_semantic_seg'][None, ...].astype(np.int64)), stack=True) return results def __repr__(self): return self.__class__.__name__ @PIPELINES.register_module() class Collect(object): """Collect data from the loader relevant to the specific task. This is usually the last stage of the data loader pipeline. Typically keys is set to some subset of "img", "gt_semantic_seg". The "img_meta" item is always populated. The contents of the "img_meta" dictionary depends on "meta_keys". By default this includes: - "img_shape": shape of the image input to the network as a tuple (h, w, c). Note that images may be zero padded on the bottom/right if the batch tensor is larger than this shape. - "scale_factor": a float indicating the preprocessing scale - "flip": a boolean indicating if image flip transform was used - "filename": path to the image file - "ori_shape": original shape of the image as a tuple (h, w, c) - "pad_shape": image shape after padding - "img_norm_cfg": a dict of normalization information: - mean - per channel mean subtraction - std - per channel std divisor - to_rgb - bool indicating if bgr was converted to rgb Args: keys (Sequence[str]): Keys of results to be collected in ``data``. meta_keys (Sequence[str], optional): Meta keys to be converted to ``mmcv.DataContainer`` and collected in ``data[img_metas]``. Default: (``filename``, ``ori_filename``, ``ori_shape``, ``img_shape``, ``pad_shape``, ``scale_factor``, ``flip``, ``flip_direction``, ``img_norm_cfg``) """ def __init__(self, keys, meta_keys=('filename', 'ori_filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg')): self.keys = keys self.meta_keys = meta_keys def __call__(self, results): """Call function to collect keys in results. The keys in ``meta_keys`` will be converted to :obj:mmcv.DataContainer. 
Args: results (dict): Result dict contains the data to collect. Returns: dict: The result dict contains the following keys - keys in``self.keys`` - ``img_metas`` """ data = {} img_meta = {} for key in self.meta_keys: img_meta[key] = results[key] data['img_metas'] = DC(img_meta, cpu_only=True) for key in self.keys: data[key] = results[key] return data def __repr__(self): return self.__class__.__name__ + \ f'(keys={self.keys}, meta_keys={self.meta_keys})'
9,290
31.037931
79
py
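A minimal usage sketch (not part of the repository file above): it shows how DefaultFormatBundle and Collect from formatting.py package one loaded sample at the end of a training pipeline. The toy array shapes and meta values are illustrative assumptions.

import numpy as np

from mmseg.datasets.pipelines.formatting import Collect, DefaultFormatBundle

# Fake a `results` dict as it would look after the loading/transform steps.
results = dict(
    img=np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8),
    gt_semantic_seg=np.zeros((4, 4), dtype=np.uint8),
    filename='demo.png',
    ori_filename='demo.png',
    ori_shape=(4, 4, 3),
    img_shape=(4, 4, 3),
    pad_shape=(4, 4, 3),
    scale_factor=1.0,
    flip=False,
    flip_direction='horizontal',
    img_norm_cfg=dict(mean=np.zeros(3), std=np.ones(3), to_rgb=False))

# img -> (C, H, W) tensor, gt -> (1, H, W) long tensor, both in DataContainers.
results = DefaultFormatBundle()(results)
data = Collect(keys=['img', 'gt_semantic_seg'])(results)
print(data['img'].data.shape)              # torch.Size([3, 4, 4])
print(data['gt_semantic_seg'].data.shape)  # torch.Size([1, 4, 4])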
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/loading.py
# Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import mmcv import numpy as np from ..builder import PIPELINES @PIPELINES.register_module() class LoadImageFromFile(object): """Load an image from file. Required keys are "img_prefix" and "img_info" (a dict that must contain the key "filename"). Added or updated keys are "filename", "img", "img_shape", "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). Args: to_float32 (bool): Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False. color_type (str): The flag argument for :func:`mmcv.imfrombytes`. Defaults to 'color'. file_client_args (dict): Arguments to instantiate a FileClient. See :class:`mmcv.fileio.FileClient` for details. Defaults to ``dict(backend='disk')``. imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: 'cv2' """ def __init__(self, to_float32=False, color_type='color', file_client_args=dict(backend='disk'), imdecode_backend='cv2'): self.to_float32 = to_float32 self.color_type = color_type self.file_client_args = file_client_args.copy() self.file_client = None self.imdecode_backend = imdecode_backend def __call__(self, results): """Call functions to load image and get image meta information. Args: results (dict): Result dict from :obj:`mmseg.CustomDataset`. Returns: dict: The dict contains loaded image and meta information. """ if self.file_client is None: self.file_client = mmcv.FileClient(**self.file_client_args) if results.get('img_prefix') is not None: filename = osp.join(results['img_prefix'], results['img_info']['filename']) else: filename = results['img_info']['filename'] img_bytes = self.file_client.get(filename) img = mmcv.imfrombytes( img_bytes, flag=self.color_type, backend=self.imdecode_backend) if self.to_float32: img = img.astype(np.float32) results['filename'] = filename results['ori_filename'] = results['img_info']['filename'] results['img'] = img results['img_shape'] = img.shape results['ori_shape'] = img.shape # Set initial values for default meta_keys results['pad_shape'] = img.shape results['scale_factor'] = 1.0 num_channels = 1 if len(img.shape) < 3 else img.shape[2] results['img_norm_cfg'] = dict( mean=np.zeros(num_channels, dtype=np.float32), std=np.ones(num_channels, dtype=np.float32), to_rgb=False) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(to_float32={self.to_float32},' repr_str += f"color_type='{self.color_type}'," repr_str += f"imdecode_backend='{self.imdecode_backend}')" return repr_str @PIPELINES.register_module() class LoadAnnotations(object): """Load annotations for semantic segmentation. Args: reduce_zero_label (bool): Whether reduce all label value by 1. Usually used for datasets where 0 is background label. Default: False. file_client_args (dict): Arguments to instantiate a FileClient. See :class:`mmcv.fileio.FileClient` for details. Defaults to ``dict(backend='disk')``. imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: 'pillow' """ def __init__(self, reduce_zero_label=False, file_client_args=dict(backend='disk'), imdecode_backend='pillow'): self.reduce_zero_label = reduce_zero_label self.file_client_args = file_client_args.copy() self.file_client = None self.imdecode_backend = imdecode_backend def __call__(self, results): """Call function to load multiple types annotations. Args: results (dict): Result dict from :obj:`mmseg.CustomDataset`. 
Returns: dict: The dict contains loaded semantic segmentation annotations. """ if self.file_client is None: self.file_client = mmcv.FileClient(**self.file_client_args) if results.get('seg_prefix', None) is not None: filename = osp.join(results['seg_prefix'], results['ann_info']['seg_map']) else: filename = results['ann_info']['seg_map'] img_bytes = self.file_client.get(filename) gt_semantic_seg = mmcv.imfrombytes( img_bytes, flag='unchanged', backend=self.imdecode_backend).squeeze().astype(np.uint8) # reduce zero_label if self.reduce_zero_label: # avoid using underflow conversion gt_semantic_seg[gt_semantic_seg == 0] = 255 gt_semantic_seg = gt_semantic_seg - 1 gt_semantic_seg[gt_semantic_seg == 254] = 255 # modify if custom classes if results.get('label_map', None) is not None: # Add deep copy to solve bug of repeatedly # replace `gt_semantic_seg`, which is reported in # https://github.com/open-mmlab/mmsegmentation/pull/1445/ gt_semantic_seg_copy = gt_semantic_seg.copy() for old_id, new_id in results['label_map'].items(): gt_semantic_seg[gt_semantic_seg_copy == old_id] = new_id results['gt_semantic_seg'] = gt_semantic_seg results['seg_fields'].append('gt_semantic_seg') return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(reduce_zero_label={self.reduce_zero_label},' repr_str += f"imdecode_backend='{self.imdecode_backend}')" return repr_str
6,171
37.81761
79
py
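A self-contained usage sketch for the two loading transforms above (not part of the repository; the tiny dummy images and directory names are made up for illustration):

import mmcv
import numpy as np

from mmseg.datasets.pipelines.loading import LoadAnnotations, LoadImageFromFile

# Write a dummy image/annotation pair so the snippet runs without a dataset.
mmcv.imwrite(np.zeros((8, 8, 3), dtype=np.uint8), 'demo_img/0001.png')
mmcv.imwrite(np.zeros((8, 8), dtype=np.uint8), 'demo_ann/0001.png')

results = dict(
    img_prefix='demo_img',
    seg_prefix='demo_ann',
    img_info=dict(filename='0001.png'),
    ann_info=dict(seg_map='0001.png'),
    seg_fields=[])

results = LoadImageFromFile()(results)
results = LoadAnnotations(reduce_zero_label=False)(results)
print(results['img'].shape)              # (8, 8, 3)
print(results['gt_semantic_seg'].shape)  # (8, 8)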
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/test_time_aug.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import mmcv from ..builder import PIPELINES from .compose import Compose @PIPELINES.register_module() class MultiScaleFlipAug(object): """Test-time augmentation with multiple scales and flipping. An example configuration is as followed: .. code-block:: img_scale=(2048, 1024), img_ratios=[0.5, 1.0], flip=True, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict(type='Normalize', **img_norm_cfg), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']), ] After MultiScaleFLipAug with above configuration, the results are wrapped into lists of the same length as followed: .. code-block:: dict( img=[...], img_shape=[...], scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)] flip=[False, True, False, True] ... ) Args: transforms (list[dict]): Transforms to apply in each augmentation. img_scale (None | tuple | list[tuple]): Images scales for resizing. img_ratios (float | list[float]): Image ratios for resizing flip (bool): Whether apply flip augmentation. Default: False. flip_direction (str | list[str]): Flip augmentation directions, options are "horizontal" and "vertical". If flip_direction is list, multiple flip augmentations will be applied. It has no effect when flip == False. Default: "horizontal". """ def __init__(self, transforms, img_scale, img_ratios=None, flip=False, flip_direction='horizontal'): if flip: trans_index = { key['type']: index for index, key in enumerate(transforms) } if 'RandomFlip' in trans_index and 'Pad' in trans_index: assert trans_index['RandomFlip'] < trans_index['Pad'], \ 'Pad must be executed after RandomFlip when flip is True' self.transforms = Compose(transforms) if img_ratios is not None: img_ratios = img_ratios if isinstance(img_ratios, list) else [img_ratios] assert mmcv.is_list_of(img_ratios, float) if img_scale is None: # mode 1: given img_scale=None and a range of image ratio self.img_scale = None assert mmcv.is_list_of(img_ratios, float) elif isinstance(img_scale, tuple) and mmcv.is_list_of( img_ratios, float): assert len(img_scale) == 2 # mode 2: given a scale and a range of image ratio self.img_scale = [(int(img_scale[0] * ratio), int(img_scale[1] * ratio)) for ratio in img_ratios] else: # mode 3: given multiple scales self.img_scale = img_scale if isinstance(img_scale, list) else [img_scale] assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None self.flip = flip self.img_ratios = img_ratios self.flip_direction = flip_direction if isinstance( flip_direction, list) else [flip_direction] assert mmcv.is_list_of(self.flip_direction, str) if not self.flip and self.flip_direction != ['horizontal']: warnings.warn( 'flip_direction has no effect when flip is set to False') if (self.flip and not any([t['type'] == 'RandomFlip' for t in transforms])): warnings.warn( 'flip has no effect when RandomFlip is not in transforms') def __call__(self, results): """Call function to apply test time augment transforms on results. Args: results (dict): Result dict contains the data to transform. Returns: dict[str: list]: The augmented data, where each value is wrapped into a list. 
""" aug_data = [] if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float): h, w = results['img'].shape[:2] img_scale = [(int(w * ratio), int(h * ratio)) for ratio in self.img_ratios] else: img_scale = self.img_scale flip_aug = [False, True] if self.flip else [False] for scale in img_scale: for flip in flip_aug: for direction in self.flip_direction: _results = results.copy() _results['scale'] = scale _results['flip'] = flip _results['flip_direction'] = direction data = self.transforms(_results) aug_data.append(data) # list of dict to dict of list aug_data_dict = {key: [] for key in aug_data[0]} for data in aug_data: for key, val in data.items(): aug_data_dict[key].append(val) return aug_data_dict def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(transforms={self.transforms}, ' repr_str += f'img_scale={self.img_scale}, flip={self.flip})' repr_str += f'flip_direction={self.flip_direction}' return repr_str
5,591
38.104895
79
py
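A short sketch (illustrative values, not taken from the repository) of how MultiScaleFlipAug expands a base scale and a ratio list into the per-augmentation test scales; the inner transforms are the usual test-time steps:

from mmseg.datasets.pipelines.test_time_aug import MultiScaleFlipAug

tta = MultiScaleFlipAug(
    transforms=[
        dict(type='Resize', keep_ratio=True),
        dict(type='RandomFlip'),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='Collect', keys=['img']),
    ],
    img_scale=(2048, 1024),
    img_ratios=[0.5, 1.0],
    flip=True)
# Mode 2: one base scale multiplied by each ratio.
print(tta.img_scale)  # [(1024, 512), (2048, 1024)]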
mmsegmentation
mmsegmentation-master/mmseg/datasets/pipelines/transforms.py
# Copyright (c) OpenMMLab. All rights reserved. import copy import inspect import cv2 import mmcv import numpy as np from mmcv.utils import deprecated_api_warning, is_tuple_of from numpy import random from ..builder import PIPELINES try: import albumentations from albumentations import Compose except ImportError: albumentations = None Compose = None @PIPELINES.register_module() class ResizeToMultiple(object): """Resize images & seg to multiple of divisor. Args: size_divisor (int): images and gt seg maps need to resize to multiple of size_divisor. Default: 32. interpolation (str, optional): The interpolation mode of image resize. Default: None """ def __init__(self, size_divisor=32, interpolation=None): self.size_divisor = size_divisor self.interpolation = interpolation def __call__(self, results): """Call function to resize images, semantic segmentation map to multiple of size divisor. Args: results (dict): Result dict from loading pipeline. Returns: dict: Resized results, 'img_shape', 'pad_shape' keys are updated. """ # Align image to multiple of size divisor. img = results['img'] img = mmcv.imresize_to_multiple( img, self.size_divisor, scale_factor=1, interpolation=self.interpolation if self.interpolation else 'bilinear') results['img'] = img results['img_shape'] = img.shape results['pad_shape'] = img.shape # Align segmentation map to multiple of size divisor. for key in results.get('seg_fields', []): gt_seg = results[key] gt_seg = mmcv.imresize_to_multiple( gt_seg, self.size_divisor, scale_factor=1, interpolation='nearest') results[key] = gt_seg return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += (f'(size_divisor={self.size_divisor}, ' f'interpolation={self.interpolation})') return repr_str @PIPELINES.register_module() class Resize(object): """Resize images & seg. This transform resizes the input image to some scale. If the input dict contains the key "scale", then the scale in the input dict is used, otherwise the specified scale in the init method is used. ``img_scale`` can be None, a tuple (single-scale) or a list of tuple (multi-scale). There are 4 multiscale modes: - ``ratio_range is not None``: 1. When img_scale is None, img_scale is the shape of image in results (img_scale = results['img'].shape[:2]) and the image is resized based on the original size. (mode 1) 2. When img_scale is a tuple (single-scale), randomly sample a ratio from the ratio range and multiply it with the image scale. (mode 2) - ``ratio_range is None and multiscale_mode == "range"``: randomly sample a scale from the a range. (mode 3) - ``ratio_range is None and multiscale_mode == "value"``: randomly sample a scale from multiple scales. (mode 4) Args: img_scale (tuple or list[tuple]): Images scales for resizing. Default:None. multiscale_mode (str): Either "range" or "value". Default: 'range' ratio_range (tuple[float]): (min_ratio, max_ratio). Default: None keep_ratio (bool): Whether to keep the aspect ratio when resizing the image. Default: True min_size (int, optional): The minimum size for input and the shape of the image and seg map will not be less than ``min_size``. As the shape of model input is fixed like 'SETR' and 'BEiT'. Following the setting in these models, resized images must be bigger than the crop size in ``slide_inference``. 
Default: None """ def __init__(self, img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=True, min_size=None): if img_scale is None: self.img_scale = None else: if isinstance(img_scale, list): self.img_scale = img_scale else: self.img_scale = [img_scale] assert mmcv.is_list_of(self.img_scale, tuple) if ratio_range is not None: # mode 1: given img_scale=None and a range of image ratio # mode 2: given a scale and a range of image ratio assert self.img_scale is None or len(self.img_scale) == 1 else: # mode 3 and 4: given multiple scales or a range of scales assert multiscale_mode in ['value', 'range'] self.multiscale_mode = multiscale_mode self.ratio_range = ratio_range self.keep_ratio = keep_ratio self.min_size = min_size @staticmethod def random_select(img_scales): """Randomly select an img_scale from given candidates. Args: img_scales (list[tuple]): Images scales for selection. Returns: (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, where ``img_scale`` is the selected image scale and ``scale_idx`` is the selected index in the given candidates. """ assert mmcv.is_list_of(img_scales, tuple) scale_idx = np.random.randint(len(img_scales)) img_scale = img_scales[scale_idx] return img_scale, scale_idx @staticmethod def random_sample(img_scales): """Randomly sample an img_scale when ``multiscale_mode=='range'``. Args: img_scales (list[tuple]): Images scale range for sampling. There must be two tuples in img_scales, which specify the lower and upper bound of image scales. Returns: (tuple, None): Returns a tuple ``(img_scale, None)``, where ``img_scale`` is sampled scale and None is just a placeholder to be consistent with :func:`random_select`. """ assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 img_scale_long = [max(s) for s in img_scales] img_scale_short = [min(s) for s in img_scales] long_edge = np.random.randint( min(img_scale_long), max(img_scale_long) + 1) short_edge = np.random.randint( min(img_scale_short), max(img_scale_short) + 1) img_scale = (long_edge, short_edge) return img_scale, None @staticmethod def random_sample_ratio(img_scale, ratio_range): """Randomly sample an img_scale when ``ratio_range`` is specified. A ratio will be randomly sampled from the range specified by ``ratio_range``. Then it would be multiplied with ``img_scale`` to generate sampled scale. Args: img_scale (tuple): Images scale base to multiply with ratio. ratio_range (tuple[float]): The minimum and maximum ratio to scale the ``img_scale``. Returns: (tuple, None): Returns a tuple ``(scale, None)``, where ``scale`` is sampled ratio multiplied with ``img_scale`` and None is just a placeholder to be consistent with :func:`random_select`. """ assert isinstance(img_scale, tuple) and len(img_scale) == 2 min_ratio, max_ratio = ratio_range assert min_ratio <= max_ratio ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) return scale, None def _random_scale(self, results): """Randomly sample an img_scale according to ``ratio_range`` and ``multiscale_mode``. If ``ratio_range`` is specified, a ratio will be sampled and be multiplied with ``img_scale``. If multiple scales are specified by ``img_scale``, a scale will be sampled according to ``multiscale_mode``. Otherwise, single scale will be used. Args: results (dict): Result dict from :obj:`dataset`. Returns: dict: Two new keys 'scale` and 'scale_idx` are added into ``results``, which would be used by subsequent pipelines. 
""" if self.ratio_range is not None: if self.img_scale is None: h, w = results['img'].shape[:2] scale, scale_idx = self.random_sample_ratio((w, h), self.ratio_range) else: scale, scale_idx = self.random_sample_ratio( self.img_scale[0], self.ratio_range) elif len(self.img_scale) == 1: scale, scale_idx = self.img_scale[0], 0 elif self.multiscale_mode == 'range': scale, scale_idx = self.random_sample(self.img_scale) elif self.multiscale_mode == 'value': scale, scale_idx = self.random_select(self.img_scale) else: raise NotImplementedError results['scale'] = scale results['scale_idx'] = scale_idx def _resize_img(self, results): """Resize images with ``results['scale']``.""" if self.keep_ratio: if self.min_size is not None: # TODO: Now 'min_size' is an 'int' which means the minimum # shape of images is (min_size, min_size, 3). 'min_size' # with tuple type will be supported, i.e. the width and # height are not equal. if min(results['scale']) < self.min_size: new_short = self.min_size else: new_short = min(results['scale']) h, w = results['img'].shape[:2] if h > w: new_h, new_w = new_short * h / w, new_short else: new_h, new_w = new_short, new_short * w / h results['scale'] = (new_h, new_w) img, scale_factor = mmcv.imrescale( results['img'], results['scale'], return_scale=True) # the w_scale and h_scale has minor difference # a real fix should be done in the mmcv.imrescale in the future new_h, new_w = img.shape[:2] h, w = results['img'].shape[:2] w_scale = new_w / w h_scale = new_h / h else: img, w_scale, h_scale = mmcv.imresize( results['img'], results['scale'], return_scale=True) scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], dtype=np.float32) results['img'] = img results['img_shape'] = img.shape results['pad_shape'] = img.shape # in case that there is no padding results['scale_factor'] = scale_factor results['keep_ratio'] = self.keep_ratio def _resize_seg(self, results): """Resize semantic segmentation map with ``results['scale']``.""" for key in results.get('seg_fields', []): if self.keep_ratio: gt_seg = mmcv.imrescale( results[key], results['scale'], interpolation='nearest') else: gt_seg = mmcv.imresize( results[key], results['scale'], interpolation='nearest') results[key] = gt_seg def __call__(self, results): """Call function to resize images, bounding boxes, masks, semantic segmentation map. Args: results (dict): Result dict from loading pipeline. Returns: dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', 'keep_ratio' keys are added into result dict. """ if 'scale' not in results: self._random_scale(results) self._resize_img(results) self._resize_seg(results) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += (f'(img_scale={self.img_scale}, ' f'multiscale_mode={self.multiscale_mode}, ' f'ratio_range={self.ratio_range}, ' f'keep_ratio={self.keep_ratio})') return repr_str @PIPELINES.register_module() class RandomFlip(object): """Flip the image & seg. If the input dict contains the key "flip", then the flag will be used, otherwise it will be randomly decided by a ratio specified in the init method. Args: prob (float, optional): The flipping probability. Default: None. direction(str, optional): The flipping direction. Options are 'horizontal' and 'vertical'. Default: 'horizontal'. 
""" @deprecated_api_warning({'flip_ratio': 'prob'}, cls_name='RandomFlip') def __init__(self, prob=None, direction='horizontal'): self.prob = prob self.direction = direction if prob is not None: assert prob >= 0 and prob <= 1 assert direction in ['horizontal', 'vertical'] def __call__(self, results): """Call function to flip bounding boxes, masks, semantic segmentation maps. Args: results (dict): Result dict from loading pipeline. Returns: dict: Flipped results, 'flip', 'flip_direction' keys are added into result dict. """ if 'flip' not in results: flip = True if np.random.rand() < self.prob else False results['flip'] = flip if 'flip_direction' not in results: results['flip_direction'] = self.direction if results['flip']: # flip image results['img'] = mmcv.imflip( results['img'], direction=results['flip_direction']) # flip segs for key in results.get('seg_fields', []): # use copy() to make numpy stride positive results[key] = mmcv.imflip( results[key], direction=results['flip_direction']).copy() return results def __repr__(self): return self.__class__.__name__ + f'(prob={self.prob})' @PIPELINES.register_module() class Pad(object): """Pad the image & mask. There are two padding modes: (1) pad to a fixed size and (2) pad to the minimum size that is divisible by some number. Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", Args: size (tuple, optional): Fixed padding size. size_divisor (int, optional): The divisor of padded size. pad_val (float, optional): Padding value. Default: 0. seg_pad_val (float, optional): Padding value of segmentation map. Default: 255. """ def __init__(self, size=None, size_divisor=None, pad_val=0, seg_pad_val=255): self.size = size self.size_divisor = size_divisor self.pad_val = pad_val self.seg_pad_val = seg_pad_val # only one of size and size_divisor should be valid assert size is not None or size_divisor is not None assert size is None or size_divisor is None def _pad_img(self, results): """Pad images according to ``self.size``.""" if self.size is not None: padded_img = mmcv.impad( results['img'], shape=self.size, pad_val=self.pad_val) elif self.size_divisor is not None: padded_img = mmcv.impad_to_multiple( results['img'], self.size_divisor, pad_val=self.pad_val) results['img'] = padded_img results['pad_shape'] = padded_img.shape results['pad_fixed_size'] = self.size results['pad_size_divisor'] = self.size_divisor def _pad_seg(self, results): """Pad masks according to ``results['pad_shape']``.""" for key in results.get('seg_fields', []): results[key] = mmcv.impad( results[key], shape=results['pad_shape'][:2], pad_val=self.seg_pad_val) def __call__(self, results): """Call function to pad images, masks, semantic segmentation maps. Args: results (dict): Result dict from loading pipeline. Returns: dict: Updated result dict. """ self._pad_img(results) self._pad_seg(results) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(size={self.size}, size_divisor={self.size_divisor}, ' \ f'pad_val={self.pad_val})' return repr_str @PIPELINES.register_module() class Normalize(object): """Normalize the image. Added key is "img_norm_cfg". Args: mean (sequence): Mean values of 3 channels. std (sequence): Std values of 3 channels. to_rgb (bool): Whether to convert the image from BGR to RGB, default is true. """ def __init__(self, mean, std, to_rgb=True): self.mean = np.array(mean, dtype=np.float32) self.std = np.array(std, dtype=np.float32) self.to_rgb = to_rgb def __call__(self, results): """Call function to normalize images. 
Args: results (dict): Result dict from loading pipeline. Returns: dict: Normalized results, 'img_norm_cfg' key is added into result dict. """ results['img'] = mmcv.imnormalize(results['img'], self.mean, self.std, self.to_rgb) results['img_norm_cfg'] = dict( mean=self.mean, std=self.std, to_rgb=self.to_rgb) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(mean={self.mean}, std={self.std}, to_rgb=' \ f'{self.to_rgb})' return repr_str @PIPELINES.register_module() class Rerange(object): """Rerange the image pixel value. Args: min_value (float or int): Minimum value of the reranged image. Default: 0. max_value (float or int): Maximum value of the reranged image. Default: 255. """ def __init__(self, min_value=0, max_value=255): assert isinstance(min_value, float) or isinstance(min_value, int) assert isinstance(max_value, float) or isinstance(max_value, int) assert min_value < max_value self.min_value = min_value self.max_value = max_value def __call__(self, results): """Call function to rerange images. Args: results (dict): Result dict from loading pipeline. Returns: dict: Reranged results. """ img = results['img'] img_min_value = np.min(img) img_max_value = np.max(img) assert img_min_value < img_max_value # rerange to [0, 1] img = (img - img_min_value) / (img_max_value - img_min_value) # rerange to [min_value, max_value] img = img * (self.max_value - self.min_value) + self.min_value results['img'] = img return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(min_value={self.min_value}, max_value={self.max_value})' return repr_str @PIPELINES.register_module() class CLAHE(object): """Use CLAHE method to process the image. See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. Graphics Gems, 1994:474-485.` for more information. Args: clip_limit (float): Threshold for contrast limiting. Default: 40.0. tile_grid_size (tuple[int]): Size of grid for histogram equalization. Input image will be divided into equally sized rectangular tiles. It defines the number of tiles in row and column. Default: (8, 8). """ def __init__(self, clip_limit=40.0, tile_grid_size=(8, 8)): assert isinstance(clip_limit, (float, int)) self.clip_limit = clip_limit assert is_tuple_of(tile_grid_size, int) assert len(tile_grid_size) == 2 self.tile_grid_size = tile_grid_size def __call__(self, results): """Call function to Use CLAHE method process images. Args: results (dict): Result dict from loading pipeline. Returns: dict: Processed results. """ for i in range(results['img'].shape[2]): results['img'][:, :, i] = mmcv.clahe( np.array(results['img'][:, :, i], dtype=np.uint8), self.clip_limit, self.tile_grid_size) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(clip_limit={self.clip_limit}, '\ f'tile_grid_size={self.tile_grid_size})' return repr_str @PIPELINES.register_module() class RandomCrop(object): """Random crop the image & seg. Args: crop_size (tuple): Expected size after cropping, (h, w). cat_max_ratio (float): The maximum ratio that single category could occupy. 
""" def __init__(self, crop_size, cat_max_ratio=1., ignore_index=255): assert crop_size[0] > 0 and crop_size[1] > 0 self.crop_size = crop_size self.cat_max_ratio = cat_max_ratio self.ignore_index = ignore_index def get_crop_bbox(self, img): """Randomly get a crop bounding box.""" margin_h = max(img.shape[0] - self.crop_size[0], 0) margin_w = max(img.shape[1] - self.crop_size[1], 0) offset_h = np.random.randint(0, margin_h + 1) offset_w = np.random.randint(0, margin_w + 1) crop_y1, crop_y2 = offset_h, offset_h + self.crop_size[0] crop_x1, crop_x2 = offset_w, offset_w + self.crop_size[1] return crop_y1, crop_y2, crop_x1, crop_x2 def crop(self, img, crop_bbox): """Crop from ``img``""" crop_y1, crop_y2, crop_x1, crop_x2 = crop_bbox img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] return img def __call__(self, results): """Call function to randomly crop images, semantic segmentation maps. Args: results (dict): Result dict from loading pipeline. Returns: dict: Randomly cropped results, 'img_shape' key in result dict is updated according to crop size. """ img = results['img'] crop_bbox = self.get_crop_bbox(img) if self.cat_max_ratio < 1.: # Repeat 10 times for _ in range(10): seg_temp = self.crop(results['gt_semantic_seg'], crop_bbox) labels, cnt = np.unique(seg_temp, return_counts=True) cnt = cnt[labels != self.ignore_index] if len(cnt) > 1 and np.max(cnt) / np.sum( cnt) < self.cat_max_ratio: break crop_bbox = self.get_crop_bbox(img) # crop the image img = self.crop(img, crop_bbox) img_shape = img.shape results['img'] = img results['img_shape'] = img_shape # crop semantic seg for key in results.get('seg_fields', []): results[key] = self.crop(results[key], crop_bbox) return results def __repr__(self): return self.__class__.__name__ + f'(crop_size={self.crop_size})' @PIPELINES.register_module() class RandomRotate(object): """Rotate the image & seg. Args: prob (float): The rotation probability. degree (float, tuple[float]): Range of degrees to select from. If degree is a number instead of tuple like (min, max), the range of degree will be (``-degree``, ``+degree``) pad_val (float, optional): Padding value of image. Default: 0. seg_pad_val (float, optional): Padding value of segmentation map. Default: 255. center (tuple[float], optional): Center point (w, h) of the rotation in the source image. If not specified, the center of the image will be used. Default: None. auto_bound (bool): Whether to adjust the image size to cover the whole rotated image. Default: False """ def __init__(self, prob, degree, pad_val=0, seg_pad_val=255, center=None, auto_bound=False): self.prob = prob assert prob >= 0 and prob <= 1 if isinstance(degree, (float, int)): assert degree > 0, f'degree {degree} should be positive' self.degree = (-degree, degree) else: self.degree = degree assert len(self.degree) == 2, f'degree {self.degree} should be a ' \ f'tuple of (min, max)' self.pal_val = pad_val self.seg_pad_val = seg_pad_val self.center = center self.auto_bound = auto_bound def __call__(self, results): """Call function to rotate image, semantic segmentation maps. Args: results (dict): Result dict from loading pipeline. Returns: dict: Rotated results. 
""" rotate = True if np.random.rand() < self.prob else False degree = np.random.uniform(min(*self.degree), max(*self.degree)) if rotate: # rotate image results['img'] = mmcv.imrotate( results['img'], angle=degree, border_value=self.pal_val, center=self.center, auto_bound=self.auto_bound) # rotate segs for key in results.get('seg_fields', []): results[key] = mmcv.imrotate( results[key], angle=degree, border_value=self.seg_pad_val, center=self.center, auto_bound=self.auto_bound, interpolation='nearest') return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(prob={self.prob}, ' \ f'degree={self.degree}, ' \ f'pad_val={self.pal_val}, ' \ f'seg_pad_val={self.seg_pad_val}, ' \ f'center={self.center}, ' \ f'auto_bound={self.auto_bound})' return repr_str @PIPELINES.register_module() class RGB2Gray(object): """Convert RGB image to grayscale image. This transform calculate the weighted mean of input image channels with ``weights`` and then expand the channels to ``out_channels``. When ``out_channels`` is None, the number of output channels is the same as input channels. Args: out_channels (int): Expected number of output channels after transforming. Default: None. weights (tuple[float]): The weights to calculate the weighted mean. Default: (0.299, 0.587, 0.114). """ def __init__(self, out_channels=None, weights=(0.299, 0.587, 0.114)): assert out_channels is None or out_channels > 0 self.out_channels = out_channels assert isinstance(weights, tuple) for item in weights: assert isinstance(item, (float, int)) self.weights = weights def __call__(self, results): """Call function to convert RGB image to grayscale image. Args: results (dict): Result dict from loading pipeline. Returns: dict: Result dict with grayscale image. """ img = results['img'] assert len(img.shape) == 3 assert img.shape[2] == len(self.weights) weights = np.array(self.weights).reshape((1, 1, -1)) img = (img * weights).sum(2, keepdims=True) if self.out_channels is None: img = img.repeat(weights.shape[2], axis=2) else: img = img.repeat(self.out_channels, axis=2) results['img'] = img results['img_shape'] = img.shape return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(out_channels={self.out_channels}, ' \ f'weights={self.weights})' return repr_str @PIPELINES.register_module() class AdjustGamma(object): """Using gamma correction to process the image. Args: gamma (float or int): Gamma value used in gamma correction. Default: 1.0. """ def __init__(self, gamma=1.0): assert isinstance(gamma, float) or isinstance(gamma, int) assert gamma > 0 self.gamma = gamma inv_gamma = 1.0 / gamma self.table = np.array([(i / 255.0)**inv_gamma * 255 for i in np.arange(256)]).astype('uint8') def __call__(self, results): """Call function to process the image with gamma correction. Args: results (dict): Result dict from loading pipeline. Returns: dict: Processed results. """ results['img'] = mmcv.lut_transform( np.array(results['img'], dtype=np.uint8), self.table) return results def __repr__(self): return self.__class__.__name__ + f'(gamma={self.gamma})' @PIPELINES.register_module() class SegRescale(object): """Rescale semantic segmentation maps. Args: scale_factor (float): The scale factor of the final output. """ def __init__(self, scale_factor=1): self.scale_factor = scale_factor def __call__(self, results): """Call function to scale the semantic segmentation map. Args: results (dict): Result dict from loading pipeline. Returns: dict: Result dict with semantic segmentation map scaled. 
""" for key in results.get('seg_fields', []): if self.scale_factor != 1: results[key] = mmcv.imrescale( results[key], self.scale_factor, interpolation='nearest') return results def __repr__(self): return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' @PIPELINES.register_module() class PhotoMetricDistortion(object): """Apply photometric distortion to image sequentially, every transformation is applied with a probability of 0.5. The position of random contrast is in second or second to last. 1. random brightness 2. random contrast (mode 0) 3. convert color from BGR to HSV 4. random saturation 5. random hue 6. convert color from HSV to BGR 7. random contrast (mode 1) Args: brightness_delta (int): delta of brightness. contrast_range (tuple): range of contrast. saturation_range (tuple): range of saturation. hue_delta (int): delta of hue. """ def __init__(self, brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18): self.brightness_delta = brightness_delta self.contrast_lower, self.contrast_upper = contrast_range self.saturation_lower, self.saturation_upper = saturation_range self.hue_delta = hue_delta def convert(self, img, alpha=1, beta=0): """Multiple with alpha and add beat with clip.""" img = img.astype(np.float32) * alpha + beta img = np.clip(img, 0, 255) return img.astype(np.uint8) def brightness(self, img): """Brightness distortion.""" if random.randint(2): return self.convert( img, beta=random.uniform(-self.brightness_delta, self.brightness_delta)) return img def contrast(self, img): """Contrast distortion.""" if random.randint(2): return self.convert( img, alpha=random.uniform(self.contrast_lower, self.contrast_upper)) return img def saturation(self, img): """Saturation distortion.""" if random.randint(2): img = mmcv.bgr2hsv(img) img[:, :, 1] = self.convert( img[:, :, 1], alpha=random.uniform(self.saturation_lower, self.saturation_upper)) img = mmcv.hsv2bgr(img) return img def hue(self, img): """Hue distortion.""" if random.randint(2): img = mmcv.bgr2hsv(img) img[:, :, 0] = (img[:, :, 0].astype(int) + random.randint(-self.hue_delta, self.hue_delta)) % 180 img = mmcv.hsv2bgr(img) return img def __call__(self, results): """Call function to perform photometric distortion on images. Args: results (dict): Result dict from loading pipeline. Returns: dict: Result dict with images distorted. """ img = results['img'] # random brightness img = self.brightness(img) # mode == 0 --> do random contrast first # mode == 1 --> do random contrast last mode = random.randint(2) if mode == 1: img = self.contrast(img) # random saturation img = self.saturation(img) # random hue img = self.hue(img) # random contrast if mode == 0: img = self.contrast(img) results['img'] = img return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += (f'(brightness_delta={self.brightness_delta}, ' f'contrast_range=({self.contrast_lower}, ' f'{self.contrast_upper}), ' f'saturation_range=({self.saturation_lower}, ' f'{self.saturation_upper}), ' f'hue_delta={self.hue_delta})') return repr_str @PIPELINES.register_module() class RandomCutOut(object): """CutOut operation. Randomly drop some regions of image used in `Cutout <https://arxiv.org/abs/1708.04552>`_. Args: prob (float): cutout probability. n_holes (int | tuple[int, int]): Number of regions to be dropped. If it is given as a list, number of holes will be randomly selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. 
cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate shape of dropped regions. It can be `tuple[int, int]` to use a fixed cutout shape, or `list[tuple[int, int]]` to randomly choose shape from the list. cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The candidate ratio of dropped regions. It can be `tuple[float, float]` to use a fixed ratio or `list[tuple[float, float]]` to randomly choose ratio from the list. Please note that `cutout_shape` and `cutout_ratio` cannot be both given at the same time. fill_in (tuple[float, float, float] | tuple[int, int, int]): The value of pixel to fill in the dropped regions. Default: (0, 0, 0). seg_fill_in (int): The labels of pixel to fill in the dropped regions. If seg_fill_in is None, skip. Default: None. """ def __init__(self, prob, n_holes, cutout_shape=None, cutout_ratio=None, fill_in=(0, 0, 0), seg_fill_in=None): assert 0 <= prob and prob <= 1 assert (cutout_shape is None) ^ (cutout_ratio is None), \ 'Either cutout_shape or cutout_ratio should be specified.' assert (isinstance(cutout_shape, (list, tuple)) or isinstance(cutout_ratio, (list, tuple))) if isinstance(n_holes, tuple): assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] else: n_holes = (n_holes, n_holes) if seg_fill_in is not None: assert (isinstance(seg_fill_in, int) and 0 <= seg_fill_in and seg_fill_in <= 255) self.prob = prob self.n_holes = n_holes self.fill_in = fill_in self.seg_fill_in = seg_fill_in self.with_ratio = cutout_ratio is not None self.candidates = cutout_ratio if self.with_ratio else cutout_shape if not isinstance(self.candidates, list): self.candidates = [self.candidates] def __call__(self, results): """Call function to drop some regions of image.""" cutout = True if np.random.rand() < self.prob else False if cutout: h, w, c = results['img'].shape n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) for _ in range(n_holes): x1 = np.random.randint(0, w) y1 = np.random.randint(0, h) index = np.random.randint(0, len(self.candidates)) if not self.with_ratio: cutout_w, cutout_h = self.candidates[index] else: cutout_w = int(self.candidates[index][0] * w) cutout_h = int(self.candidates[index][1] * h) x2 = np.clip(x1 + cutout_w, 0, w) y2 = np.clip(y1 + cutout_h, 0, h) results['img'][y1:y2, x1:x2, :] = self.fill_in if self.seg_fill_in is not None: for key in results.get('seg_fields', []): results[key][y1:y2, x1:x2] = self.seg_fill_in return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(prob={self.prob}, ' repr_str += f'n_holes={self.n_holes}, ' repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio else f'cutout_shape={self.candidates}, ') repr_str += f'fill_in={self.fill_in}, ' repr_str += f'seg_fill_in={self.seg_fill_in})' return repr_str @PIPELINES.register_module() class RandomMosaic(object): """Mosaic augmentation. Given 4 images, mosaic transform combines them into one output image. The output image is composed of the parts from each sub- image. .. code:: text mosaic transform center_x +------------------------------+ | pad | pad | | +-----------+ | | | | | | | image1 |--------+ | | | | | | | | | image2 | | center_y |----+-------------+-----------| | | cropped | | |pad | image3 | image4 | | | | | +----|-------------+-----------+ | | +-------------+ The mosaic transform steps are as follows: 1. Choose the mosaic center as the intersections of 4 images 2. Get the left top image according to the index, and randomly sample another 3 images from the custom dataset. 3. 
Sub image will be cropped if image is larger than mosaic patch Args: prob (float): mosaic probability. img_scale (Sequence[int]): Image size after mosaic pipeline of a single image. The size of the output image is four times that of a single image. The output image comprises 4 single images. Default: (640, 640). center_ratio_range (Sequence[float]): Center ratio range of mosaic output. Default: (0.5, 1.5). pad_val (int): Pad value. Default: 0. seg_pad_val (int): Pad value of segmentation map. Default: 255. """ def __init__(self, prob, img_scale=(640, 640), center_ratio_range=(0.5, 1.5), pad_val=0, seg_pad_val=255): assert 0 <= prob and prob <= 1 assert isinstance(img_scale, tuple) self.prob = prob self.img_scale = img_scale self.center_ratio_range = center_ratio_range self.pad_val = pad_val self.seg_pad_val = seg_pad_val def __call__(self, results): """Call function to make a mosaic of image. Args: results (dict): Result dict. Returns: dict: Result dict with mosaic transformed. """ mosaic = True if np.random.rand() < self.prob else False if mosaic: results = self._mosaic_transform_img(results) results = self._mosaic_transform_seg(results) return results def get_indexes(self, dataset): """Call function to collect indexes. Args: dataset (:obj:`MultiImageMixDataset`): The dataset. Returns: list: indexes. """ indexes = [random.randint(0, len(dataset)) for _ in range(3)] return indexes def _mosaic_transform_img(self, results): """Mosaic transform function. Args: results (dict): Result dict. Returns: dict: Updated result dict. """ assert 'mix_results' in results if len(results['img'].shape) == 3: mosaic_img = np.full( (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2), 3), self.pad_val, dtype=results['img'].dtype) else: mosaic_img = np.full( (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2)), self.pad_val, dtype=results['img'].dtype) # mosaic center x, y self.center_x = int( random.uniform(*self.center_ratio_range) * self.img_scale[1]) self.center_y = int( random.uniform(*self.center_ratio_range) * self.img_scale[0]) center_position = (self.center_x, self.center_y) loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right') for i, loc in enumerate(loc_strs): if loc == 'top_left': result_patch = copy.deepcopy(results) else: result_patch = copy.deepcopy(results['mix_results'][i - 1]) img_i = result_patch['img'] h_i, w_i = img_i.shape[:2] # keep_ratio resize scale_ratio_i = min(self.img_scale[0] / h_i, self.img_scale[1] / w_i) img_i = mmcv.imresize( img_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i))) # compute the combine parameters paste_coord, crop_coord = self._mosaic_combine( loc, center_position, img_i.shape[:2][::-1]) x1_p, y1_p, x2_p, y2_p = paste_coord x1_c, y1_c, x2_c, y2_c = crop_coord # crop and paste image mosaic_img[y1_p:y2_p, x1_p:x2_p] = img_i[y1_c:y2_c, x1_c:x2_c] results['img'] = mosaic_img results['img_shape'] = mosaic_img.shape results['ori_shape'] = mosaic_img.shape return results def _mosaic_transform_seg(self, results): """Mosaic transform function for label annotations. Args: results (dict): Result dict. Returns: dict: Updated result dict. 
""" assert 'mix_results' in results for key in results.get('seg_fields', []): mosaic_seg = np.full( (int(self.img_scale[0] * 2), int(self.img_scale[1] * 2)), self.seg_pad_val, dtype=results[key].dtype) # mosaic center x, y center_position = (self.center_x, self.center_y) loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right') for i, loc in enumerate(loc_strs): if loc == 'top_left': result_patch = copy.deepcopy(results) else: result_patch = copy.deepcopy(results['mix_results'][i - 1]) gt_seg_i = result_patch[key] h_i, w_i = gt_seg_i.shape[:2] # keep_ratio resize scale_ratio_i = min(self.img_scale[0] / h_i, self.img_scale[1] / w_i) gt_seg_i = mmcv.imresize( gt_seg_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i)), interpolation='nearest') # compute the combine parameters paste_coord, crop_coord = self._mosaic_combine( loc, center_position, gt_seg_i.shape[:2][::-1]) x1_p, y1_p, x2_p, y2_p = paste_coord x1_c, y1_c, x2_c, y2_c = crop_coord # crop and paste image mosaic_seg[y1_p:y2_p, x1_p:x2_p] = gt_seg_i[y1_c:y2_c, x1_c:x2_c] results[key] = mosaic_seg return results def _mosaic_combine(self, loc, center_position_xy, img_shape_wh): """Calculate global coordinate of mosaic image and local coordinate of cropped sub-image. Args: loc (str): Index for the sub-image, loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right'). center_position_xy (Sequence[float]): Mixing center for 4 images, (x, y). img_shape_wh (Sequence[int]): Width and height of sub-image Returns: tuple[tuple[float]]: Corresponding coordinate of pasting and cropping - paste_coord (tuple): paste corner coordinate in mosaic image. - crop_coord (tuple): crop corner coordinate in mosaic image. """ assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right') if loc == 'top_left': # index0 to top left part of image x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ max(center_position_xy[1] - img_shape_wh[1], 0), \ center_position_xy[0], \ center_position_xy[1] crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - ( y2 - y1), img_shape_wh[0], img_shape_wh[1] elif loc == 'top_right': # index1 to top right part of image x1, y1, x2, y2 = center_position_xy[0], \ max(center_position_xy[1] - img_shape_wh[1], 0), \ min(center_position_xy[0] + img_shape_wh[0], self.img_scale[1] * 2), \ center_position_xy[1] crop_coord = 0, img_shape_wh[1] - (y2 - y1), min( img_shape_wh[0], x2 - x1), img_shape_wh[1] elif loc == 'bottom_left': # index2 to bottom left part of image x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ center_position_xy[1], \ center_position_xy[0], \ min(self.img_scale[0] * 2, center_position_xy[1] + img_shape_wh[1]) crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min( y2 - y1, img_shape_wh[1]) else: # index3 to bottom right part of image x1, y1, x2, y2 = center_position_xy[0], \ center_position_xy[1], \ min(center_position_xy[0] + img_shape_wh[0], self.img_scale[1] * 2), \ min(self.img_scale[0] * 2, center_position_xy[1] + img_shape_wh[1]) crop_coord = 0, 0, min(img_shape_wh[0], x2 - x1), min(y2 - y1, img_shape_wh[1]) paste_coord = x1, y1, x2, y2 return paste_coord, crop_coord def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(prob={self.prob}, ' repr_str += f'img_scale={self.img_scale}, ' repr_str += f'center_ratio_range={self.center_ratio_range}, ' repr_str += f'pad_val={self.pad_val}, ' repr_str += f'seg_pad_val={self.pad_val})' return repr_str @PIPELINES.register_module() class Albu: """Albumentation augmentation. 
Adds custom transformations from Albumentations library. Please, visit `https://albumentations.readthedocs.io` to get more information. An example of ``transforms`` is as followed: .. code-block:: [ dict( type='ShiftScaleRotate', shift_limit=0.0625, scale_limit=0.0, rotate_limit=0, interpolation=1, p=0.5), dict( type='RandomBrightnessContrast', brightness_limit=[0.1, 0.3], contrast_limit=[0.1, 0.3], p=0.2), dict(type='ChannelShuffle', p=0.1), dict( type='OneOf', transforms=[ dict(type='Blur', blur_limit=3, p=1.0), dict(type='MedianBlur', blur_limit=3, p=1.0) ], p=0.1), ] Args: transforms (list[dict]): A list of albu transformations keymap (dict): Contains {'input key':'albumentation-style key'} update_pad_shape (bool): Whether to update padding shape according to \ the output shape of the last transform """ def __init__(self, transforms, keymap=None, update_pad_shape=False): if Compose is None: raise ImportError( 'albumentations is not installed, ' 'we suggest install albumentation by ' '"pip install albumentations>=0.3.2 --no-binary qudida,albumentations"' # noqa ) # Args will be modified later, copying it will be safer transforms = copy.deepcopy(transforms) self.transforms = transforms self.filter_lost_elements = False self.update_pad_shape = update_pad_shape self.aug = Compose([self.albu_builder(t) for t in self.transforms]) if not keymap: self.keymap_to_albu = { 'img': 'image', 'gt_masks': 'masks', } else: self.keymap_to_albu = copy.deepcopy(keymap) self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()} def albu_builder(self, cfg): """Import a module from albumentations. It inherits some of :func:`build_from_cfg` logic. Args: cfg (dict): Config dict. It should at least contain the key "type". Returns: obj: The constructed object. """ assert isinstance(cfg, dict) and 'type' in cfg args = cfg.copy() obj_type = args.pop('type') if mmcv.is_str(obj_type): if albumentations is None: raise ImportError( 'albumentations is not installed, ' 'we suggest install albumentation by ' '"pip install albumentations>=0.3.2 --no-binary qudida,albumentations"' # noqa ) obj_cls = getattr(albumentations, obj_type) elif inspect.isclass(obj_type): obj_cls = obj_type else: raise TypeError( f'type must be a str or valid type, but got {type(obj_type)}') if 'transforms' in args: args['transforms'] = [ self.albu_builder(transform) for transform in args['transforms'] ] return obj_cls(**args) @staticmethod def mapper(d, keymap): """Dictionary mapper. Renames keys according to keymap provided. Args: d (dict): old dict keymap (dict): {'old_key':'new_key'} Returns: dict: new dict. """ updated_dict = {} for k, _ in zip(d.keys(), d.values()): new_k = keymap.get(k, k) updated_dict[new_k] = d[k] return updated_dict def __call__(self, results): # dict to albumentations format results = self.mapper(results, self.keymap_to_albu) # Convert to RGB since Albumentations works with RGB images results['image'] = cv2.cvtColor(results['image'], cv2.COLOR_BGR2RGB) results = self.aug(**results) # Convert back to BGR results['image'] = cv2.cvtColor(results['image'], cv2.COLOR_RGB2BGR) # back to the original format results = self.mapper(results, self.keymap_back) # update final shape if self.update_pad_shape: results['pad_shape'] = results['img'].shape return results def __repr__(self): repr_str = self.__class__.__name__ + f'(transforms={self.transforms})' return repr_str
53,629
35.041667
99
py
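For reference, a typical Cityscapes-style training pipeline assembled from the transforms defined above (a sketch based on the standard configs; the concrete sizes and probabilities are common defaults, not requirements):

from mmseg.datasets.pipelines.compose import Compose

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 1024)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
# Builds each transform through the PIPELINES registry.
pipeline = Compose(train_pipeline)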
mmsegmentation
mmsegmentation-master/mmseg/datasets/samplers/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .distributed_sampler import DistributedSampler

__all__ = ['DistributedSampler']
134
26
51
py
mmsegmentation
mmsegmentation-master/mmseg/datasets/samplers/distributed_sampler.py
# Copyright (c) OpenMMLab. All rights reserved.
from __future__ import division
from typing import Iterator, Optional

import torch
from torch.utils.data import Dataset
from torch.utils.data import DistributedSampler as _DistributedSampler

from mmseg.core.utils import sync_random_seed
from mmseg.utils import get_device


class DistributedSampler(_DistributedSampler):
    """DistributedSampler inheriting from
    `torch.utils.data.DistributedSampler`.

    Args:
        dataset (Dataset): the dataset to be loaded.
        num_replicas (int, optional): Number of processes participating in
            distributed training. By default, world_size is retrieved from the
            current distributed group.
        rank (int, optional): Rank of the current process within
            num_replicas. By default, rank is retrieved from the current
            distributed group.
        shuffle (bool): If True (default), sampler will shuffle the indices.
        seed (int): random seed used to shuffle the sampler if
            :attr:`shuffle=True`. This number should be identical across all
            processes in the distributed group. Default: ``0``.
    """

    def __init__(self,
                 dataset: Dataset,
                 num_replicas: Optional[int] = None,
                 rank: Optional[int] = None,
                 shuffle: bool = True,
                 seed=0) -> None:
        super().__init__(
            dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)

        # In distributed sampling, different ranks should sample
        # non-overlapped data in the dataset. Therefore, this function
        # is used to make sure that each rank shuffles the data indices
        # in the same order based on the same seed. Then different ranks
        # could use different indices to select non-overlapped data from the
        # same data list.
        device = get_device()
        self.seed = sync_random_seed(seed, device)

    def __iter__(self) -> Iterator:
        """
        Yields:
            Iterator: iterator of indices for rank.
        """
        # deterministically shuffle based on epoch
        if self.shuffle:
            g = torch.Generator()
            # When :attr:`shuffle=True`, this ensures all replicas
            # use a different random ordering for each epoch.
            # Otherwise, the next iteration of this sampler will
            # yield the same ordering.
            g.manual_seed(self.epoch + self.seed)
            indices = torch.randperm(len(self.dataset), generator=g).tolist()
        else:
            indices = torch.arange(len(self.dataset)).tolist()

        # add extra samples to make it evenly divisible
        indices += indices[:(self.total_size - len(indices))]
        assert len(indices) == self.total_size

        # subsample
        indices = indices[self.rank:self.total_size:self.num_replicas]
        assert len(indices) == self.num_samples

        return iter(indices)
2,975
39.216216
79
py
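A usage sketch for the sampler above (not from the repository): explicit num_replicas/rank are passed so the snippet also runs outside an initialized torch.distributed process group, where those values would normally be picked up automatically:

import torch
from torch.utils.data import DataLoader, TensorDataset

from mmseg.datasets.samplers import DistributedSampler

dataset = TensorDataset(torch.arange(10))
sampler = DistributedSampler(
    dataset, num_replicas=2, rank=0, shuffle=True, seed=0)
loader = DataLoader(dataset, batch_size=2, sampler=sampler)

sampler.set_epoch(0)  # keep per-epoch shuffling identical across ranks
for batch in loader:  # rank 0 sees 5 of the 10 samples
    print(batch)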
mmsegmentation
mmsegmentation-master/mmseg/models/__init__.py
# Copyright (c) OpenMMLab. All rights reserved.
from .backbones import *  # noqa: F401,F403
from .builder import (BACKBONES, HEADS, LOSSES, SEGMENTORS, build_backbone,
                      build_head, build_loss, build_segmentor)
from .decode_heads import *  # noqa: F401,F403
from .losses import *  # noqa: F401,F403
from .necks import *  # noqa: F401,F403
from .segmentors import *  # noqa: F401,F403

__all__ = [
    'BACKBONES', 'HEADS', 'LOSSES', 'SEGMENTORS', 'build_backbone',
    'build_head', 'build_loss', 'build_segmentor'
]
537
37.428571
75
py
mmsegmentation
mmsegmentation-master/mmseg/models/builder.py
# Copyright (c) OpenMMLab. All rights reserved.
import warnings

from mmcv.cnn import MODELS as MMCV_MODELS
from mmcv.cnn.bricks.registry import ATTENTION as MMCV_ATTENTION
from mmcv.utils import Registry

MODELS = Registry('models', parent=MMCV_MODELS)
ATTENTION = Registry('attention', parent=MMCV_ATTENTION)

BACKBONES = MODELS
NECKS = MODELS
HEADS = MODELS
LOSSES = MODELS
SEGMENTORS = MODELS


def build_backbone(cfg):
    """Build backbone."""
    return BACKBONES.build(cfg)


def build_neck(cfg):
    """Build neck."""
    return NECKS.build(cfg)


def build_head(cfg):
    """Build head."""
    return HEADS.build(cfg)


def build_loss(cfg):
    """Build loss."""
    return LOSSES.build(cfg)


def build_segmentor(cfg, train_cfg=None, test_cfg=None):
    """Build segmentor."""
    if train_cfg is not None or test_cfg is not None:
        warnings.warn(
            'train_cfg and test_cfg are deprecated, '
            'please specify them in model', UserWarning)
    assert cfg.get('train_cfg') is None or train_cfg is None, \
        'train_cfg specified in both outer field and model field '
    assert cfg.get('test_cfg') is None or test_cfg is None, \
        'test_cfg specified in both outer field and model field '
    return SEGMENTORS.build(
        cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
1,335
25.72
71
py
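Because `BACKBONES`, `NECKS`, `HEADS`, `LOSSES` and `SEGMENTORS` are all aliases of the single `MODELS` registry, any registered class can be instantiated from a plain config dict. A small, hedged example follows; the ResNet arguments are illustrative and not taken from a shipped config.

import torch

from mmseg.models import build_backbone

# Build a registered backbone purely from a config dict; every build_* helper
# above forwards to the same underlying MODELS registry.
backbone = build_backbone(
    dict(type='ResNet', depth=18, num_stages=4, out_indices=(0, 1, 2, 3)))
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 64, 64))
print([f.shape for f in feats])  # one feature map per entry in out_indices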
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .beit import BEiT from .bisenetv1 import BiSeNetV1 from .bisenetv2 import BiSeNetV2 from .cgnet import CGNet from .erfnet import ERFNet from .fast_scnn import FastSCNN from .hrnet import HRNet from .icnet import ICNet from .mae import MAE from .mit import MixVisionTransformer from .mobilenet_v2 import MobileNetV2 from .mobilenet_v3 import MobileNetV3 from .mscan import MSCAN from .resnest import ResNeSt from .resnet import ResNet, ResNetV1c, ResNetV1d from .resnext import ResNeXt from .stdc import STDCContextPathNet, STDCNet from .swin import SwinTransformer from .timm_backbone import TIMMBackbone from .twins import PCPVT, SVT from .unet import UNet from .vit import VisionTransformer __all__ = [ 'ResNet', 'ResNetV1c', 'ResNetV1d', 'ResNeXt', 'HRNet', 'FastSCNN', 'ResNeSt', 'MobileNetV2', 'UNet', 'CGNet', 'MobileNetV3', 'VisionTransformer', 'SwinTransformer', 'MixVisionTransformer', 'BiSeNetV1', 'BiSeNetV2', 'ICNet', 'TIMMBackbone', 'ERFNet', 'PCPVT', 'SVT', 'STDCNet', 'STDCContextPathNet', 'BEiT', 'MAE', 'MSCAN' ]
1,104
33.53125
73
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/beit.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import build_norm_layer from mmcv.cnn.bricks.drop import build_dropout from mmcv.cnn.utils.weight_init import (constant_init, kaiming_init, trunc_normal_) from mmcv.runner import BaseModule, ModuleList, _load_checkpoint from torch.nn.modules.batchnorm import _BatchNorm from torch.nn.modules.utils import _pair as to_2tuple from mmseg.utils import get_root_logger from ..builder import BACKBONES from ..utils import PatchEmbed from .vit import TransformerEncoderLayer as VisionTransformerEncoderLayer try: from scipy import interpolate except ImportError: interpolate = None class BEiTAttention(BaseModule): """Window based multi-head self-attention (W-MSA) module with relative position bias. Args: embed_dims (int): Number of input channels. num_heads (int): Number of attention heads. window_size (tuple[int]): The height and width of the window. bias (bool): The option to add leanable bias for q, k, v. If bias is True, it will add leanable bias. If bias is 'qv_bias', it will only add leanable bias for q, v. If bias is False, it will not add bias for q, k, v. Default to 'qv_bias'. qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. attn_drop_rate (float): Dropout ratio of attention weight. Default: 0.0 proj_drop_rate (float): Dropout ratio of output. Default: 0. init_cfg (dict | None, optional): The Config for initialization. Default: None. """ def __init__(self, embed_dims, num_heads, window_size, bias='qv_bias', qk_scale=None, attn_drop_rate=0., proj_drop_rate=0., init_cfg=None, **kwargs): super().__init__(init_cfg=init_cfg) self.embed_dims = embed_dims self.num_heads = num_heads head_embed_dims = embed_dims // num_heads self.bias = bias self.scale = qk_scale or head_embed_dims**-0.5 qkv_bias = bias if bias == 'qv_bias': self._init_qv_bias() qkv_bias = False self.window_size = window_size self._init_rel_pos_embedding() self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop_rate) self.proj = nn.Linear(embed_dims, embed_dims) self.proj_drop = nn.Dropout(proj_drop_rate) def _init_qv_bias(self): self.q_bias = nn.Parameter(torch.zeros(self.embed_dims)) self.v_bias = nn.Parameter(torch.zeros(self.embed_dims)) def _init_rel_pos_embedding(self): Wh, Ww = self.window_size # cls to token & token 2 cls & cls to cls self.num_relative_distance = (2 * Wh - 1) * (2 * Ww - 1) + 3 # relative_position_bias_table shape is (2*Wh-1 * 2*Ww-1 + 3, nH) self.relative_position_bias_table = nn.Parameter( torch.zeros(self.num_relative_distance, self.num_heads)) # get pair-wise relative position index for # each token inside the window coords_h = torch.arange(Wh) coords_w = torch.arange(Ww) # coords shape is (2, Wh, Ww) coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # coords_flatten shape is (2, Wh*Ww) coords_flatten = torch.flatten(coords, 1) relative_coords = ( coords_flatten[:, :, None] - coords_flatten[:, None, :]) # relative_coords shape is (Wh*Ww, Wh*Ww, 2) relative_coords = relative_coords.permute(1, 2, 0).contiguous() # shift to start from 0 relative_coords[:, :, 0] += Wh - 1 relative_coords[:, :, 1] += Ww - 1 relative_coords[:, :, 0] *= 2 * Ww - 1 relative_position_index = torch.zeros( size=(Wh * Ww + 1, ) * 2, dtype=relative_coords.dtype) # relative_position_index shape is (Wh*Ww, Wh*Ww) relative_position_index[1:, 1:] = relative_coords.sum(-1) 
relative_position_index[0, 0:] = self.num_relative_distance - 3 relative_position_index[0:, 0] = self.num_relative_distance - 2 relative_position_index[0, 0] = self.num_relative_distance - 1 self.register_buffer('relative_position_index', relative_position_index) def init_weights(self): trunc_normal_(self.relative_position_bias_table, std=0.02) def forward(self, x): """ Args: x (tensor): input features with shape of (num_windows*B, N, C). """ B, N, C = x.shape if self.bias == 'qv_bias': k_bias = torch.zeros_like(self.v_bias, requires_grad=False) qkv_bias = torch.cat((self.q_bias, k_bias, self.v_bias)) qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) else: qkv = self.qkv(x) qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) q, k, v = qkv[0], qkv[1], qkv[2] q = q * self.scale attn = (q @ k.transpose(-2, -1)) if self.relative_position_bias_table is not None: Wh = self.window_size[0] Ww = self.window_size[1] relative_position_bias = self.relative_position_bias_table[ self.relative_position_index.view(-1)].view( Wh * Ww + 1, Wh * Ww + 1, -1) relative_position_bias = relative_position_bias.permute( 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww attn = attn + relative_position_bias.unsqueeze(0) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x class BEiTTransformerEncoderLayer(VisionTransformerEncoderLayer): """Implements one encoder layer in Vision Transformer. Args: embed_dims (int): The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. attn_drop_rate (float): The drop out rate for attention layer. Default: 0.0. drop_path_rate (float): Stochastic depth rate. Default 0.0. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. bias (bool): The option to add leanable bias for q, k, v. If bias is True, it will add leanable bias. If bias is 'qv_bias', it will only add leanable bias for q, v. If bias is False, it will not add bias for q, k, v. Default to 'qv_bias'. act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). window_size (tuple[int], optional): The height and width of the window. Default: None. init_values (float, optional): Initialize the values of BEiTAttention and FFN with learnable scaling. Default: None. 
""" def __init__(self, embed_dims, num_heads, feedforward_channels, attn_drop_rate=0., drop_path_rate=0., num_fcs=2, bias='qv_bias', act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), window_size=None, attn_cfg=dict(), ffn_cfg=dict(add_identity=False), init_values=None): attn_cfg.update(dict(window_size=window_size, qk_scale=None)) super(BEiTTransformerEncoderLayer, self).__init__( embed_dims=embed_dims, num_heads=num_heads, feedforward_channels=feedforward_channels, attn_drop_rate=attn_drop_rate, drop_path_rate=0., drop_rate=0., num_fcs=num_fcs, qkv_bias=bias, act_cfg=act_cfg, norm_cfg=norm_cfg, attn_cfg=attn_cfg, ffn_cfg=ffn_cfg) # NOTE: drop path for stochastic depth, we shall see if # this is better than dropout here dropout_layer = dict(type='DropPath', drop_prob=drop_path_rate) self.drop_path = build_dropout( dropout_layer) if dropout_layer else nn.Identity() self.gamma_1 = nn.Parameter( init_values * torch.ones((embed_dims)), requires_grad=True) self.gamma_2 = nn.Parameter( init_values * torch.ones((embed_dims)), requires_grad=True) def build_attn(self, attn_cfg): self.attn = BEiTAttention(**attn_cfg) def forward(self, x): x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x))) x = x + self.drop_path(self.gamma_2 * self.ffn(self.norm2(x))) return x @BACKBONES.register_module() class BEiT(BaseModule): """BERT Pre-Training of Image Transformers. Args: img_size (int | tuple): Input image size. Default: 224. patch_size (int): The patch size. Default: 16. in_channels (int): Number of input channels. Default: 3. embed_dims (int): Embedding dimension. Default: 768. num_layers (int): Depth of transformer. Default: 12. num_heads (int): Number of attention heads. Default: 12. mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. Default: 4. out_indices (list | tuple | int): Output from which stages. Default: -1. qv_bias (bool): Enable bias for qv if True. Default: True. attn_drop_rate (float): The drop out rate for attention layer. Default 0.0 drop_path_rate (float): Stochastic depth rate. Default 0.0. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN') act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). patch_norm (bool): Whether to add a norm in PatchEmbed Block. Default: False. final_norm (bool): Whether to add a additional layer to normalize final feature map. Default: False. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. pretrained (str, optional): Model pretrained path. Default: None. init_values (float): Initialize the values of BEiTAttention and FFN with learnable scaling. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
""" def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dims=768, num_layers=12, num_heads=12, mlp_ratio=4, out_indices=-1, qv_bias=True, attn_drop_rate=0., drop_path_rate=0., norm_cfg=dict(type='LN'), act_cfg=dict(type='GELU'), patch_norm=False, final_norm=False, num_fcs=2, norm_eval=False, pretrained=None, init_values=0.1, init_cfg=None): super(BEiT, self).__init__(init_cfg=init_cfg) if isinstance(img_size, int): img_size = to_2tuple(img_size) elif isinstance(img_size, tuple): if len(img_size) == 1: img_size = to_2tuple(img_size[0]) assert len(img_size) == 2, \ f'The size of image should have length 1 or 2, ' \ f'but got {len(img_size)}' assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be set at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is not None: raise TypeError('pretrained must be a str or None') self.in_channels = in_channels self.img_size = img_size self.patch_size = patch_size self.norm_eval = norm_eval self.pretrained = pretrained self.num_layers = num_layers self.embed_dims = embed_dims self.num_heads = num_heads self.mlp_ratio = mlp_ratio self.attn_drop_rate = attn_drop_rate self.drop_path_rate = drop_path_rate self.num_fcs = num_fcs self.qv_bias = qv_bias self.act_cfg = act_cfg self.norm_cfg = norm_cfg self.patch_norm = patch_norm self.init_values = init_values self.window_size = (img_size[0] // patch_size, img_size[1] // patch_size) self.patch_shape = self.window_size self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dims)) self._build_patch_embedding() self._build_layers() if isinstance(out_indices, int): if out_indices == -1: out_indices = num_layers - 1 self.out_indices = [out_indices] elif isinstance(out_indices, list) or isinstance(out_indices, tuple): self.out_indices = out_indices else: raise TypeError('out_indices must be type of int, list or tuple') self.final_norm = final_norm if final_norm: self.norm1_name, norm1 = build_norm_layer( norm_cfg, embed_dims, postfix=1) self.add_module(self.norm1_name, norm1) def _build_patch_embedding(self): """Build patch embedding layer.""" self.patch_embed = PatchEmbed( in_channels=self.in_channels, embed_dims=self.embed_dims, conv_type='Conv2d', kernel_size=self.patch_size, stride=self.patch_size, padding=0, norm_cfg=self.norm_cfg if self.patch_norm else None, init_cfg=None) def _build_layers(self): """Build transformer encoding layers.""" dpr = [ x.item() for x in torch.linspace(0, self.drop_path_rate, self.num_layers) ] self.layers = ModuleList() for i in range(self.num_layers): self.layers.append( BEiTTransformerEncoderLayer( embed_dims=self.embed_dims, num_heads=self.num_heads, feedforward_channels=self.mlp_ratio * self.embed_dims, attn_drop_rate=self.attn_drop_rate, drop_path_rate=dpr[i], num_fcs=self.num_fcs, bias='qv_bias' if self.qv_bias else False, act_cfg=self.act_cfg, norm_cfg=self.norm_cfg, window_size=self.window_size, init_values=self.init_values)) @property def norm1(self): return getattr(self, self.norm1_name) def _geometric_sequence_interpolation(self, src_size, dst_size, sequence, num): """Get new sequence via geometric sequence interpolation. Args: src_size (int): Pos_embedding size in pre-trained model. dst_size (int): Pos_embedding size in the current model. sequence (tensor): The relative position bias of the pretrain model after removing the extra tokens. 
num (int): Number of attention heads. Returns: new_sequence (tensor): Geometric sequence interpolate the pre-trained relative position bias to the size of the current model. """ def geometric_progression(a, r, n): return a * (1.0 - r**n) / (1.0 - r) # Here is a binary function. left, right = 1.01, 1.5 while right - left > 1e-6: q = (left + right) / 2.0 gp = geometric_progression(1, q, src_size // 2) if gp > dst_size // 2: right = q else: left = q # The position of each interpolated point is determined # by the ratio obtained by dichotomy. dis = [] cur = 1 for i in range(src_size // 2): dis.append(cur) cur += q**(i + 1) r_ids = [-_ for _ in reversed(dis)] x = r_ids + [0] + dis y = r_ids + [0] + dis t = dst_size // 2.0 dx = np.arange(-t, t + 0.1, 1.0) dy = np.arange(-t, t + 0.1, 1.0) # Interpolation functions are being executed and called. new_sequence = [] for i in range(num): z = sequence[:, i].view(src_size, src_size).float().numpy() f = interpolate.interp2d(x, y, z, kind='cubic') new_sequence.append( torch.Tensor(f(dx, dy)).contiguous().view(-1, 1).to(sequence)) new_sequence = torch.cat(new_sequence, dim=-1) return new_sequence def resize_rel_pos_embed(self, checkpoint): """Resize relative pos_embed weights. This function is modified from https://github.com/microsoft/unilm/blob/master/beit/semantic_segmentation/mmcv_custom/checkpoint.py. # noqa: E501 Copyright (c) Microsoft Corporation Licensed under the MIT License Args: checkpoint (dict): Key and value of the pretrain model. Returns: state_dict (dict): Interpolate the relative pos_embed weights in the pre-train model to the current model size. """ if 'state_dict' in checkpoint: state_dict = checkpoint['state_dict'] else: state_dict = checkpoint all_keys = list(state_dict.keys()) for key in all_keys: if 'relative_position_index' in key: state_dict.pop(key) # In order to keep the center of pos_bias as consistent as # possible after interpolation, and vice versa in the edge # area, the geometric sequence interpolation method is adopted. if 'relative_position_bias_table' in key: rel_pos_bias = state_dict[key] src_num_pos, num_attn_heads = rel_pos_bias.size() dst_num_pos, _ = self.state_dict()[key].size() dst_patch_shape = self.patch_shape if dst_patch_shape[0] != dst_patch_shape[1]: raise NotImplementedError() # Count the number of extra tokens. 
num_extra_tokens = dst_num_pos - ( dst_patch_shape[0] * 2 - 1) * ( dst_patch_shape[1] * 2 - 1) src_size = int((src_num_pos - num_extra_tokens)**0.5) dst_size = int((dst_num_pos - num_extra_tokens)**0.5) if src_size != dst_size: extra_tokens = rel_pos_bias[-num_extra_tokens:, :] rel_pos_bias = rel_pos_bias[:-num_extra_tokens, :] new_rel_pos_bias = self._geometric_sequence_interpolation( src_size, dst_size, rel_pos_bias, num_attn_heads) new_rel_pos_bias = torch.cat( (new_rel_pos_bias, extra_tokens), dim=0) state_dict[key] = new_rel_pos_bias return state_dict def init_weights(self): def _init_weights(m): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if isinstance(m, nn.Linear) and m.bias is not None: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.LayerNorm): nn.init.constant_(m.bias, 0) nn.init.constant_(m.weight, 1.0) self.apply(_init_weights) if (isinstance(self.init_cfg, dict) and self.init_cfg.get('type') == 'Pretrained'): logger = get_root_logger() checkpoint = _load_checkpoint( self.init_cfg['checkpoint'], logger=logger, map_location='cpu') state_dict = self.resize_rel_pos_embed(checkpoint) self.load_state_dict(state_dict, False) elif self.init_cfg is not None: super(BEiT, self).init_weights() else: # We only implement the 'jax_impl' initialization implemented at # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 # Copyright 2019 Ross Wightman # Licensed under the Apache License, Version 2.0 (the "License") trunc_normal_(self.cls_token, std=.02) for n, m in self.named_modules(): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if m.bias is not None: if 'ffn' in n: nn.init.normal_(m.bias, mean=0., std=1e-6) else: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.Conv2d): kaiming_init(m, mode='fan_in', bias=0.) elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): constant_init(m, val=1.0, bias=0.) def forward(self, inputs): B = inputs.shape[0] x, hw_shape = self.patch_embed(inputs) # stole cls_tokens impl from Phil Wang, thanks cls_tokens = self.cls_token.expand(B, -1, -1) x = torch.cat((cls_tokens, x), dim=1) outs = [] for i, layer in enumerate(self.layers): x = layer(x) if i == len(self.layers) - 1: if self.final_norm: x = self.norm1(x) if i in self.out_indices: # Remove class token and reshape token for decoder head out = x[:, 1:] B, _, C = out.shape out = out.reshape(B, hw_shape[0], hw_shape[1], C).permute(0, 3, 1, 2).contiguous() outs.append(out) return tuple(outs) def train(self, mode=True): super(BEiT, self).train(mode) if mode and self.norm_eval: for m in self.modules(): if isinstance(m, nn.LayerNorm): m.eval()
22,986
40.048214
128
py
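A quick way to sanity-check the geometry described in the `BEiT` docstring (window size = img_size // patch_size, class token stripped before reshaping) is a random-weight forward pass. This is an illustrative sketch, not a recipe from the repo; the chosen `out_indices` are arbitrary and no pretrained checkpoint (and hence no relative-position interpolation) is involved.

import torch

from mmseg.models import build_backbone

# Random-weight BEiT forward pass; out_indices picked arbitrarily for the demo.
beit = build_backbone(
    dict(
        type='BEiT',
        img_size=224,
        patch_size=16,
        embed_dims=768,
        num_layers=12,
        num_heads=12,
        out_indices=(3, 5, 7, 11)))
beit.init_weights()
beit.eval()

with torch.no_grad():
    outs = beit(torch.randn(1, 3, 224, 224))
# Each output drops the class token and is reshaped to
# (B, embed_dims, img_size // patch_size, img_size // patch_size).
print([o.shape for o in outs])  # expected: 4 x torch.Size([1, 768, 14, 14])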
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/bisenetv1.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import ConvModule from mmcv.runner import BaseModule from mmseg.ops import resize from ..builder import BACKBONES, build_backbone class SpatialPath(BaseModule): """Spatial Path to preserve the spatial size of the original input image and encode affluent spatial information. Args: in_channels(int): The number of channels of input image. Default: 3. num_channels (Tuple[int]): The number of channels of each layers in Spatial Path. Default: (64, 64, 64, 128). Returns: x (torch.Tensor): Feature map for Feature Fusion Module. """ def __init__(self, in_channels=3, num_channels=(64, 64, 64, 128), conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(SpatialPath, self).__init__(init_cfg=init_cfg) assert len(num_channels) == 4, 'Length of input channels \ of Spatial Path must be 4!' self.layers = [] for i in range(len(num_channels)): layer_name = f'layer{i + 1}' self.layers.append(layer_name) if i == 0: self.add_module( layer_name, ConvModule( in_channels=in_channels, out_channels=num_channels[i], kernel_size=7, stride=2, padding=3, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) elif i == len(num_channels) - 1: self.add_module( layer_name, ConvModule( in_channels=num_channels[i - 1], out_channels=num_channels[i], kernel_size=1, stride=1, padding=0, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) else: self.add_module( layer_name, ConvModule( in_channels=num_channels[i - 1], out_channels=num_channels[i], kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) def forward(self, x): for i, layer_name in enumerate(self.layers): layer_stage = getattr(self, layer_name) x = layer_stage(x) return x class AttentionRefinementModule(BaseModule): """Attention Refinement Module (ARM) to refine the features of each stage. Args: in_channels (int): The number of input channels. out_channels (int): The number of output channels. Returns: x_out (torch.Tensor): Feature map of Attention Refinement Module. """ def __init__(self, in_channels, out_channel, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(AttentionRefinementModule, self).__init__(init_cfg=init_cfg) self.conv_layer = ConvModule( in_channels=in_channels, out_channels=out_channel, kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.atten_conv_layer = nn.Sequential( nn.AdaptiveAvgPool2d((1, 1)), ConvModule( in_channels=out_channel, out_channels=out_channel, kernel_size=1, bias=False, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=None), nn.Sigmoid()) def forward(self, x): x = self.conv_layer(x) x_atten = self.atten_conv_layer(x) x_out = x * x_atten return x_out class ContextPath(BaseModule): """Context Path to provide sufficient receptive field. Args: backbone_cfg:(dict): Config of backbone of Context Path. context_channels (Tuple[int]): The number of channel numbers of various modules in Context Path. Default: (128, 256, 512). align_corners (bool, optional): The align_corners argument of resize operation. Default: False. Returns: x_16_up, x_32_up (torch.Tensor, torch.Tensor): Two feature maps undergoing upsampling from 1/16 and 1/32 downsampling feature maps. These two feature maps are used for Feature Fusion Module and Auxiliary Head. 
""" def __init__(self, backbone_cfg, context_channels=(128, 256, 512), align_corners=False, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(ContextPath, self).__init__(init_cfg=init_cfg) assert len(context_channels) == 3, 'Length of input channels \ of Context Path must be 3!' self.backbone = build_backbone(backbone_cfg) self.align_corners = align_corners self.arm16 = AttentionRefinementModule(context_channels[1], context_channels[0]) self.arm32 = AttentionRefinementModule(context_channels[2], context_channels[0]) self.conv_head32 = ConvModule( in_channels=context_channels[0], out_channels=context_channels[0], kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.conv_head16 = ConvModule( in_channels=context_channels[0], out_channels=context_channels[0], kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.gap_conv = nn.Sequential( nn.AdaptiveAvgPool2d((1, 1)), ConvModule( in_channels=context_channels[2], out_channels=context_channels[0], kernel_size=1, stride=1, padding=0, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) def forward(self, x): x_4, x_8, x_16, x_32 = self.backbone(x) x_gap = self.gap_conv(x_32) x_32_arm = self.arm32(x_32) x_32_sum = x_32_arm + x_gap x_32_up = resize(input=x_32_sum, size=x_16.shape[2:], mode='nearest') x_32_up = self.conv_head32(x_32_up) x_16_arm = self.arm16(x_16) x_16_sum = x_16_arm + x_32_up x_16_up = resize(input=x_16_sum, size=x_8.shape[2:], mode='nearest') x_16_up = self.conv_head16(x_16_up) return x_16_up, x_32_up class FeatureFusionModule(BaseModule): """Feature Fusion Module to fuse low level output feature of Spatial Path and high level output feature of Context Path. Args: in_channels (int): The number of input channels. out_channels (int): The number of output channels. Returns: x_out (torch.Tensor): Feature map of Feature Fusion Module. """ def __init__(self, in_channels, out_channels, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(FeatureFusionModule, self).__init__(init_cfg=init_cfg) self.conv1 = ConvModule( in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1, padding=0, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.gap = nn.AdaptiveAvgPool2d((1, 1)) self.conv_atten = nn.Sequential( ConvModule( in_channels=out_channels, out_channels=out_channels, kernel_size=1, stride=1, padding=0, bias=False, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg), nn.Sigmoid()) def forward(self, x_sp, x_cp): x_concat = torch.cat([x_sp, x_cp], dim=1) x_fuse = self.conv1(x_concat) x_atten = self.gap(x_fuse) # Note: No BN and more 1x1 conv in paper. x_atten = self.conv_atten(x_atten) x_atten = x_fuse * x_atten x_out = x_atten + x_fuse return x_out @BACKBONES.register_module() class BiSeNetV1(BaseModule): """BiSeNetV1 backbone. This backbone is the implementation of `BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation <https://arxiv.org/abs/1808.00897>`_. Args: backbone_cfg:(dict): Config of backbone of Context Path. in_channels (int): The number of channels of input image. Default: 3. spatial_channels (Tuple[int]): Size of channel numbers of various layers in Spatial Path. Default: (64, 64, 64, 128). context_channels (Tuple[int]): Size of channel numbers of various modules in Context Path. Default: (128, 256, 512). out_indices (Tuple[int] | int, optional): Output from which stages. Default: (0, 1, 2). 
align_corners (bool, optional): The align_corners argument of resize operation in Bilateral Guided Aggregation Layer. Default: False. out_channels(int): The number of channels of output. It must be the same with `in_channels` of decode_head. Default: 256. """ def __init__(self, backbone_cfg, in_channels=3, spatial_channels=(64, 64, 64, 128), context_channels=(128, 256, 512), out_indices=(0, 1, 2), align_corners=False, out_channels=256, conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), act_cfg=dict(type='ReLU'), init_cfg=None): super(BiSeNetV1, self).__init__(init_cfg=init_cfg) assert len(spatial_channels) == 4, 'Length of input channels \ of Spatial Path must be 4!' assert len(context_channels) == 3, 'Length of input channels \ of Context Path must be 3!' self.out_indices = out_indices self.align_corners = align_corners self.context_path = ContextPath(backbone_cfg, context_channels, self.align_corners) self.spatial_path = SpatialPath(in_channels, spatial_channels) self.ffm = FeatureFusionModule(context_channels[1], out_channels) self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg def forward(self, x): # stole refactoring code from Coin Cheung, thanks x_context8, x_context16 = self.context_path(x) x_spatial = self.spatial_path(x) x_fuse = self.ffm(x_spatial, x_context8) outs = [x_fuse, x_context8, x_context16] outs = [outs[i] for i in self.out_indices] return tuple(outs)
12,006
35.057057
78
py
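`BiSeNetV1` needs a context-path backbone that yields four feature maps whose channel counts match `context_channels`; a ResNet-18 (stage channels 64/128/256/512) fits the default (128, 256, 512) layout. The snippet below is a hedged smoke test with made-up input sizes, not an official config.

import torch

from mmseg.models import build_backbone

# ResNet-18 stages provide the 256/512-channel features expected by the
# default context_channels=(128, 256, 512).
bisenetv1 = build_backbone(
    dict(
        type='BiSeNetV1',
        backbone_cfg=dict(
            type='ResNet', depth=18, num_stages=4, out_indices=(0, 1, 2, 3)),
        out_indices=(0, 1, 2),
        out_channels=256))
bisenetv1.eval()

with torch.no_grad():
    outs = bisenetv1(torch.randn(1, 3, 128, 256))
# outs[0]: FFM output at 1/8; outs[1]/outs[2]: context features at 1/8 and 1/16.
print([o.shape for o in outs])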
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/bisenetv2.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import (ConvModule, DepthwiseSeparableConvModule, build_activation_layer, build_norm_layer) from mmcv.runner import BaseModule from mmseg.ops import resize from ..builder import BACKBONES class DetailBranch(BaseModule): """Detail Branch with wide channels and shallow layers to capture low-level details and generate high-resolution feature representation. Args: detail_channels (Tuple[int]): Size of channel numbers of each stage in Detail Branch, in paper it has 3 stages. Default: (64, 64, 128). in_channels (int): Number of channels of input image. Default: 3. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Returns: x (torch.Tensor): Feature map of Detail Branch. """ def __init__(self, detail_channels=(64, 64, 128), in_channels=3, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(DetailBranch, self).__init__(init_cfg=init_cfg) detail_branch = [] for i in range(len(detail_channels)): if i == 0: detail_branch.append( nn.Sequential( ConvModule( in_channels=in_channels, out_channels=detail_channels[i], kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg), ConvModule( in_channels=detail_channels[i], out_channels=detail_channels[i], kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg))) else: detail_branch.append( nn.Sequential( ConvModule( in_channels=detail_channels[i - 1], out_channels=detail_channels[i], kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg), ConvModule( in_channels=detail_channels[i], out_channels=detail_channels[i], kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg), ConvModule( in_channels=detail_channels[i], out_channels=detail_channels[i], kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg))) self.detail_branch = nn.ModuleList(detail_branch) def forward(self, x): for stage in self.detail_branch: x = stage(x) return x class StemBlock(BaseModule): """Stem Block at the beginning of Semantic Branch. Args: in_channels (int): Number of input channels. Default: 3. out_channels (int): Number of output channels. Default: 16. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Returns: x (torch.Tensor): First feature map in Semantic Branch. 
""" def __init__(self, in_channels=3, out_channels=16, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(StemBlock, self).__init__(init_cfg=init_cfg) self.conv_first = ConvModule( in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.convs = nn.Sequential( ConvModule( in_channels=out_channels, out_channels=out_channels // 2, kernel_size=1, stride=1, padding=0, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg), ConvModule( in_channels=out_channels // 2, out_channels=out_channels, kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) self.pool = nn.MaxPool2d( kernel_size=3, stride=2, padding=1, ceil_mode=False) self.fuse_last = ConvModule( in_channels=out_channels * 2, out_channels=out_channels, kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) def forward(self, x): x = self.conv_first(x) x_left = self.convs(x) x_right = self.pool(x) x = self.fuse_last(torch.cat([x_left, x_right], dim=1)) return x class GELayer(BaseModule): """Gather-and-Expansion Layer. Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. exp_ratio (int): Expansion ratio for middle channels. Default: 6. stride (int): Stride of GELayer. Default: 1 conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Returns: x (torch.Tensor): Intermediate feature map in Semantic Branch. """ def __init__(self, in_channels, out_channels, exp_ratio=6, stride=1, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(GELayer, self).__init__(init_cfg=init_cfg) mid_channel = in_channels * exp_ratio self.conv1 = ConvModule( in_channels=in_channels, out_channels=in_channels, kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) if stride == 1: self.dwconv = nn.Sequential( # ReLU in ConvModule not shown in paper ConvModule( in_channels=in_channels, out_channels=mid_channel, kernel_size=3, stride=stride, padding=1, groups=in_channels, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) self.shortcut = None else: self.dwconv = nn.Sequential( ConvModule( in_channels=in_channels, out_channels=mid_channel, kernel_size=3, stride=stride, padding=1, groups=in_channels, bias=False, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=None), # ReLU in ConvModule not shown in paper ConvModule( in_channels=mid_channel, out_channels=mid_channel, kernel_size=3, stride=1, padding=1, groups=mid_channel, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg), ) self.shortcut = nn.Sequential( DepthwiseSeparableConvModule( in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=1, dw_norm_cfg=norm_cfg, dw_act_cfg=None, pw_norm_cfg=norm_cfg, pw_act_cfg=None, )) self.conv2 = nn.Sequential( ConvModule( in_channels=mid_channel, out_channels=out_channels, kernel_size=1, stride=1, padding=0, bias=False, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=None, )) self.act = build_activation_layer(act_cfg) def forward(self, x): identity = x x = self.conv1(x) x = self.dwconv(x) x = self.conv2(x) if self.shortcut is not None: shortcut = self.shortcut(identity) x = x + shortcut else: x = x + 
identity x = self.act(x) return x class CEBlock(BaseModule): """Context Embedding Block for large receptive filed in Semantic Branch. Args: in_channels (int): Number of input channels. Default: 3. out_channels (int): Number of output channels. Default: 16. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Returns: x (torch.Tensor): Last feature map in Semantic Branch. """ def __init__(self, in_channels=3, out_channels=16, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(CEBlock, self).__init__(init_cfg=init_cfg) self.in_channels = in_channels self.out_channels = out_channels self.gap = nn.Sequential( nn.AdaptiveAvgPool2d((1, 1)), build_norm_layer(norm_cfg, self.in_channels)[1]) self.conv_gap = ConvModule( in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=1, stride=1, padding=0, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) # Note: in paper here is naive conv2d, no bn-relu self.conv_last = ConvModule( in_channels=self.out_channels, out_channels=self.out_channels, kernel_size=3, stride=1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) def forward(self, x): identity = x x = self.gap(x) x = self.conv_gap(x) x = identity + x x = self.conv_last(x) return x class SemanticBranch(BaseModule): """Semantic Branch which is lightweight with narrow channels and deep layers to obtain high-level semantic context. Args: semantic_channels(Tuple[int]): Size of channel numbers of various stages in Semantic Branch. Default: (16, 32, 64, 128). in_channels (int): Number of channels of input image. Default: 3. exp_ratio (int): Expansion ratio for middle channels. Default: 6. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Returns: semantic_outs (List[torch.Tensor]): List of several feature maps for auxiliary heads (Booster) and Bilateral Guided Aggregation Layer. 
""" def __init__(self, semantic_channels=(16, 32, 64, 128), in_channels=3, exp_ratio=6, init_cfg=None): super(SemanticBranch, self).__init__(init_cfg=init_cfg) self.in_channels = in_channels self.semantic_channels = semantic_channels self.semantic_stages = [] for i in range(len(semantic_channels)): stage_name = f'stage{i + 1}' self.semantic_stages.append(stage_name) if i == 0: self.add_module( stage_name, StemBlock(self.in_channels, semantic_channels[i])) elif i == (len(semantic_channels) - 1): self.add_module( stage_name, nn.Sequential( GELayer(semantic_channels[i - 1], semantic_channels[i], exp_ratio, 2), GELayer(semantic_channels[i], semantic_channels[i], exp_ratio, 1), GELayer(semantic_channels[i], semantic_channels[i], exp_ratio, 1), GELayer(semantic_channels[i], semantic_channels[i], exp_ratio, 1))) else: self.add_module( stage_name, nn.Sequential( GELayer(semantic_channels[i - 1], semantic_channels[i], exp_ratio, 2), GELayer(semantic_channels[i], semantic_channels[i], exp_ratio, 1))) self.add_module(f'stage{len(semantic_channels)}_CEBlock', CEBlock(semantic_channels[-1], semantic_channels[-1])) self.semantic_stages.append(f'stage{len(semantic_channels)}_CEBlock') def forward(self, x): semantic_outs = [] for stage_name in self.semantic_stages: semantic_stage = getattr(self, stage_name) x = semantic_stage(x) semantic_outs.append(x) return semantic_outs class BGALayer(BaseModule): """Bilateral Guided Aggregation Layer to fuse the complementary information from both Detail Branch and Semantic Branch. Args: out_channels (int): Number of output channels. Default: 128. align_corners (bool): align_corners argument of F.interpolate. Default: False. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Returns: output (torch.Tensor): Output feature map for Segment heads. 
""" def __init__(self, out_channels=128, align_corners=False, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(BGALayer, self).__init__(init_cfg=init_cfg) self.out_channels = out_channels self.align_corners = align_corners self.detail_dwconv = nn.Sequential( DepthwiseSeparableConvModule( in_channels=self.out_channels, out_channels=self.out_channels, kernel_size=3, stride=1, padding=1, dw_norm_cfg=norm_cfg, dw_act_cfg=None, pw_norm_cfg=None, pw_act_cfg=None, )) self.detail_down = nn.Sequential( ConvModule( in_channels=self.out_channels, out_channels=self.out_channels, kernel_size=3, stride=2, padding=1, bias=False, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=None), nn.AvgPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=False)) self.semantic_conv = nn.Sequential( ConvModule( in_channels=self.out_channels, out_channels=self.out_channels, kernel_size=3, stride=1, padding=1, bias=False, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=None)) self.semantic_dwconv = nn.Sequential( DepthwiseSeparableConvModule( in_channels=self.out_channels, out_channels=self.out_channels, kernel_size=3, stride=1, padding=1, dw_norm_cfg=norm_cfg, dw_act_cfg=None, pw_norm_cfg=None, pw_act_cfg=None, )) self.conv = ConvModule( in_channels=self.out_channels, out_channels=self.out_channels, kernel_size=3, stride=1, padding=1, inplace=True, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg, ) def forward(self, x_d, x_s): detail_dwconv = self.detail_dwconv(x_d) detail_down = self.detail_down(x_d) semantic_conv = self.semantic_conv(x_s) semantic_dwconv = self.semantic_dwconv(x_s) semantic_conv = resize( input=semantic_conv, size=detail_dwconv.shape[2:], mode='bilinear', align_corners=self.align_corners) fuse_1 = detail_dwconv * torch.sigmoid(semantic_conv) fuse_2 = detail_down * torch.sigmoid(semantic_dwconv) fuse_2 = resize( input=fuse_2, size=fuse_1.shape[2:], mode='bilinear', align_corners=self.align_corners) output = self.conv(fuse_1 + fuse_2) return output @BACKBONES.register_module() class BiSeNetV2(BaseModule): """BiSeNetV2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation. This backbone is the implementation of `BiSeNetV2 <https://arxiv.org/abs/2004.02147>`_. Args: in_channels (int): Number of channel of input image. Default: 3. detail_channels (Tuple[int], optional): Channels of each stage in Detail Branch. Default: (64, 64, 128). semantic_channels (Tuple[int], optional): Channels of each stage in Semantic Branch. Default: (16, 32, 64, 128). See Table 1 and Figure 3 of paper for more details. semantic_expansion_ratio (int, optional): The expansion factor expanding channel number of middle channels in Semantic Branch. Default: 6. bga_channels (int, optional): Number of middle channels in Bilateral Guided Aggregation Layer. Default: 128. out_indices (Tuple[int] | int, optional): Output from which stages. Default: (0, 1, 2, 3, 4). align_corners (bool, optional): The align_corners argument of resize operation in Bilateral Guided Aggregation Layer. Default: False. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
""" def __init__(self, in_channels=3, detail_channels=(64, 64, 128), semantic_channels=(16, 32, 64, 128), semantic_expansion_ratio=6, bga_channels=128, out_indices=(0, 1, 2, 3, 4), align_corners=False, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): if init_cfg is None: init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] super(BiSeNetV2, self).__init__(init_cfg=init_cfg) self.in_channels = in_channels self.out_indices = out_indices self.detail_channels = detail_channels self.semantic_channels = semantic_channels self.semantic_expansion_ratio = semantic_expansion_ratio self.bga_channels = bga_channels self.align_corners = align_corners self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.detail = DetailBranch(self.detail_channels, self.in_channels) self.semantic = SemanticBranch(self.semantic_channels, self.in_channels, self.semantic_expansion_ratio) self.bga = BGALayer(self.bga_channels, self.align_corners) def forward(self, x): # stole refactoring code from Coin Cheung, thanks x_detail = self.detail(x) x_semantic_lst = self.semantic(x) x_head = self.bga(x_detail, x_semantic_lst[-1]) outs = [x_head] + x_semantic_lst[:-1] outs = [outs[i] for i in self.out_indices] return tuple(outs)
23,042
35.987159
79
py
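`BiSeNetV2` is self-contained (Detail Branch, Semantic Branch and BGA layer are all defined above), so the default construction already runs end to end. A minimal sketch with an arbitrary input size divisible by 32; none of this comes from a shipped config.

import torch

from mmseg.models import build_backbone

bisenetv2 = build_backbone(dict(type='BiSeNetV2'))
bisenetv2.eval()

with torch.no_grad():
    outs = bisenetv2(torch.randn(1, 3, 128, 256))
# outs[0] is the BGA-fused 1/8 feature for the decode head; outs[1:] are the
# Semantic Branch stages (strides 4, 8, 16, 32) used by the auxiliary heads.
print([o.shape for o in outs])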
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/cgnet.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import torch import torch.nn as nn import torch.utils.checkpoint as cp from mmcv.cnn import ConvModule, build_conv_layer, build_norm_layer from mmcv.runner import BaseModule from mmcv.utils.parrots_wrapper import _BatchNorm from ..builder import BACKBONES class GlobalContextExtractor(nn.Module): """Global Context Extractor for CGNet. This class is employed to refine the joint feature of both local feature and surrounding context. Args: channel (int): Number of input feature channels. reduction (int): Reductions for global context extractor. Default: 16. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. """ def __init__(self, channel, reduction=16, with_cp=False): super(GlobalContextExtractor, self).__init__() self.channel = channel self.reduction = reduction assert reduction >= 1 and channel >= reduction self.with_cp = with_cp self.avg_pool = nn.AdaptiveAvgPool2d(1) self.fc = nn.Sequential( nn.Linear(channel, channel // reduction), nn.ReLU(inplace=True), nn.Linear(channel // reduction, channel), nn.Sigmoid()) def forward(self, x): def _inner_forward(x): num_batch, num_channel = x.size()[:2] y = self.avg_pool(x).view(num_batch, num_channel) y = self.fc(y).view(num_batch, num_channel, 1, 1) return x * y if self.with_cp and x.requires_grad: out = cp.checkpoint(_inner_forward, x) else: out = _inner_forward(x) return out class ContextGuidedBlock(nn.Module): """Context Guided Block for CGNet. This class consists of four components: local feature extractor, surrounding feature extractor, joint feature extractor and global context extractor. Args: in_channels (int): Number of input feature channels. out_channels (int): Number of output feature channels. dilation (int): Dilation rate for surrounding context extractor. Default: 2. reduction (int): Reduction for global context extractor. Default: 16. skip_connect (bool): Add input to output or not. Default: True. downsample (bool): Downsample the input to 1/2 or not. Default: False. conv_cfg (dict): Config dict for convolution layer. Default: None, which means using conv2d. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN', requires_grad=True). act_cfg (dict): Config dict for activation layer. Default: dict(type='PReLU'). with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. 
""" def __init__(self, in_channels, out_channels, dilation=2, reduction=16, skip_connect=True, downsample=False, conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), act_cfg=dict(type='PReLU'), with_cp=False): super(ContextGuidedBlock, self).__init__() self.with_cp = with_cp self.downsample = downsample channels = out_channels if downsample else out_channels // 2 if 'type' in act_cfg and act_cfg['type'] == 'PReLU': act_cfg['num_parameters'] = channels kernel_size = 3 if downsample else 1 stride = 2 if downsample else 1 padding = (kernel_size - 1) // 2 self.conv1x1 = ConvModule( in_channels, channels, kernel_size, stride, padding, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.f_loc = build_conv_layer( conv_cfg, channels, channels, kernel_size=3, padding=1, groups=channels, bias=False) self.f_sur = build_conv_layer( conv_cfg, channels, channels, kernel_size=3, padding=dilation, groups=channels, dilation=dilation, bias=False) self.bn = build_norm_layer(norm_cfg, 2 * channels)[1] self.activate = nn.PReLU(2 * channels) if downsample: self.bottleneck = build_conv_layer( conv_cfg, 2 * channels, out_channels, kernel_size=1, bias=False) self.skip_connect = skip_connect and not downsample self.f_glo = GlobalContextExtractor(out_channels, reduction, with_cp) def forward(self, x): def _inner_forward(x): out = self.conv1x1(x) loc = self.f_loc(out) sur = self.f_sur(out) joi_feat = torch.cat([loc, sur], 1) # the joint feature joi_feat = self.bn(joi_feat) joi_feat = self.activate(joi_feat) if self.downsample: joi_feat = self.bottleneck(joi_feat) # channel = out_channels # f_glo is employed to refine the joint feature out = self.f_glo(joi_feat) if self.skip_connect: return x + out else: return out if self.with_cp and x.requires_grad: out = cp.checkpoint(_inner_forward, x) else: out = _inner_forward(x) return out class InputInjection(nn.Module): """Downsampling module for CGNet.""" def __init__(self, num_downsampling): super(InputInjection, self).__init__() self.pool = nn.ModuleList() for i in range(num_downsampling): self.pool.append(nn.AvgPool2d(3, stride=2, padding=1)) def forward(self, x): for pool in self.pool: x = pool(x) return x @BACKBONES.register_module() class CGNet(BaseModule): """CGNet backbone. This backbone is the implementation of `A Light-weight Context Guided Network for Semantic Segmentation <https://arxiv.org/abs/1811.08201>`_. Args: in_channels (int): Number of input image channels. Normally 3. num_channels (tuple[int]): Numbers of feature channels at each stages. Default: (32, 64, 128). num_blocks (tuple[int]): Numbers of CG blocks at stage 1 and stage 2. Default: (3, 21). dilations (tuple[int]): Dilation rate for surrounding context extractors at stage 1 and stage 2. Default: (2, 4). reductions (tuple[int]): Reductions for global context extractors at stage 1 and stage 2. Default: (8, 16). conv_cfg (dict): Config dict for convolution layer. Default: None, which means using conv2d. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN', requires_grad=True). act_cfg (dict): Config dict for activation layer. Default: dict(type='PReLU'). norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. pretrained (str, optional): model pretrained path. 
Default: None init_cfg (dict or list[dict], optional): Initialization config dict. Default: None """ def __init__(self, in_channels=3, num_channels=(32, 64, 128), num_blocks=(3, 21), dilations=(2, 4), reductions=(8, 16), conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), act_cfg=dict(type='PReLU'), norm_eval=False, with_cp=False, pretrained=None, init_cfg=None): super(CGNet, self).__init__(init_cfg) assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be setting at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is a deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer=['Conv2d', 'Linear']), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']), dict(type='Constant', val=0, layer='PReLU') ] else: raise TypeError('pretrained must be a str or None') self.in_channels = in_channels self.num_channels = num_channels assert isinstance(self.num_channels, tuple) and len( self.num_channels) == 3 self.num_blocks = num_blocks assert isinstance(self.num_blocks, tuple) and len(self.num_blocks) == 2 self.dilations = dilations assert isinstance(self.dilations, tuple) and len(self.dilations) == 2 self.reductions = reductions assert isinstance(self.reductions, tuple) and len(self.reductions) == 2 self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg if 'type' in self.act_cfg and self.act_cfg['type'] == 'PReLU': self.act_cfg['num_parameters'] = num_channels[0] self.norm_eval = norm_eval self.with_cp = with_cp cur_channels = in_channels self.stem = nn.ModuleList() for i in range(3): self.stem.append( ConvModule( cur_channels, num_channels[0], 3, 2 if i == 0 else 1, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) cur_channels = num_channels[0] self.inject_2x = InputInjection(1) # down-sample for Input, factor=2 self.inject_4x = InputInjection(2) # down-sample for Input, factor=4 cur_channels += in_channels self.norm_prelu_0 = nn.Sequential( build_norm_layer(norm_cfg, cur_channels)[1], nn.PReLU(cur_channels)) # stage 1 self.level1 = nn.ModuleList() for i in range(num_blocks[0]): self.level1.append( ContextGuidedBlock( cur_channels if i == 0 else num_channels[1], num_channels[1], dilations[0], reductions[0], downsample=(i == 0), conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg, with_cp=with_cp)) # CG block cur_channels = 2 * num_channels[1] + in_channels self.norm_prelu_1 = nn.Sequential( build_norm_layer(norm_cfg, cur_channels)[1], nn.PReLU(cur_channels)) # stage 2 self.level2 = nn.ModuleList() for i in range(num_blocks[1]): self.level2.append( ContextGuidedBlock( cur_channels if i == 0 else num_channels[2], num_channels[2], dilations[1], reductions[1], downsample=(i == 0), conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg, with_cp=with_cp)) # CG block cur_channels = 2 * num_channels[2] self.norm_prelu_2 = nn.Sequential( build_norm_layer(norm_cfg, cur_channels)[1], nn.PReLU(cur_channels)) def forward(self, x): output = [] # stage 0 inp_2x = self.inject_2x(x) inp_4x = self.inject_4x(x) for layer in self.stem: x = layer(x) x = self.norm_prelu_0(torch.cat([x, inp_2x], 1)) output.append(x) # stage 1 for i, layer in enumerate(self.level1): x = layer(x) if i == 0: down1 = x x = self.norm_prelu_1(torch.cat([x, down1, inp_4x], 1)) output.append(x) # stage 2 for i, layer in enumerate(self.level2): x = layer(x) if i == 0: down2 = x x = 
self.norm_prelu_2(torch.cat([down2, x], 1)) output.append(x) return output def train(self, mode=True): """Convert the model into training mode will keeping the normalization layer freezed.""" super(CGNet, self).train(mode) if mode and self.norm_eval: for m in self.modules(): # trick: eval have effect on BatchNorm only if isinstance(m, _BatchNorm): m.eval()
13,412
34.959786
79
py
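`CGNet` keeps contextual detail by re-injecting downsampled copies of the input at each stage, so its three outputs sit at strides 2, 4 and 8 with the concatenated channel counts built up in `forward`. A hedged smoke test with default settings and an illustrative input size:

import torch

from mmseg.models import build_backbone

cgnet = build_backbone(dict(type='CGNet'))
cgnet.eval()

with torch.no_grad():
    outs = cgnet(torch.randn(1, 3, 128, 256))
# Stage outputs at strides 2/4/8; channels include the injected input copies,
# e.g. 32 + 3, 2 * 64 + 3 and 2 * 128 with the default num_channels.
print([o.shape for o in outs])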
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/erfnet.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import build_activation_layer, build_conv_layer, build_norm_layer from mmcv.runner import BaseModule from mmseg.ops import resize from ..builder import BACKBONES class DownsamplerBlock(BaseModule): """Downsampler block of ERFNet. This module is a little different from basical ConvModule. The features from Conv and MaxPool layers are concatenated before BatchNorm. Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. """ def __init__(self, in_channels, out_channels, conv_cfg=None, norm_cfg=dict(type='BN', eps=1e-3), act_cfg=dict(type='ReLU'), init_cfg=None): super(DownsamplerBlock, self).__init__(init_cfg=init_cfg) self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.conv = build_conv_layer( self.conv_cfg, in_channels, out_channels - in_channels, kernel_size=3, stride=2, padding=1) self.pool = nn.MaxPool2d(kernel_size=2, stride=2) self.bn = build_norm_layer(self.norm_cfg, out_channels)[1] self.act = build_activation_layer(self.act_cfg) def forward(self, input): conv_out = self.conv(input) pool_out = self.pool(input) pool_out = resize( input=pool_out, size=conv_out.size()[2:], mode='bilinear', align_corners=False) output = torch.cat([conv_out, pool_out], 1) output = self.bn(output) output = self.act(output) return output class NonBottleneck1d(BaseModule): """Non-bottleneck block of ERFNet. Args: channels (int): Number of channels in Non-bottleneck block. drop_rate (float): Probability of an element to be zeroed. Default 0. dilation (int): Dilation rate for last two conv layers. Default 1. num_conv_layer (int): Number of 3x1 and 1x3 convolution layers. Default 2. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
""" def __init__(self, channels, drop_rate=0, dilation=1, num_conv_layer=2, conv_cfg=None, norm_cfg=dict(type='BN', eps=1e-3), act_cfg=dict(type='ReLU'), init_cfg=None): super(NonBottleneck1d, self).__init__(init_cfg=init_cfg) self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.act = build_activation_layer(self.act_cfg) self.convs_layers = nn.ModuleList() for conv_layer in range(num_conv_layer): first_conv_padding = (1, 0) if conv_layer == 0 else (dilation, 0) first_conv_dilation = 1 if conv_layer == 0 else (dilation, 1) second_conv_padding = (0, 1) if conv_layer == 0 else (0, dilation) second_conv_dilation = 1 if conv_layer == 0 else (1, dilation) self.convs_layers.append( build_conv_layer( self.conv_cfg, channels, channels, kernel_size=(3, 1), stride=1, padding=first_conv_padding, bias=True, dilation=first_conv_dilation)) self.convs_layers.append(self.act) self.convs_layers.append( build_conv_layer( self.conv_cfg, channels, channels, kernel_size=(1, 3), stride=1, padding=second_conv_padding, bias=True, dilation=second_conv_dilation)) self.convs_layers.append( build_norm_layer(self.norm_cfg, channels)[1]) if conv_layer == 0: self.convs_layers.append(self.act) else: self.convs_layers.append(nn.Dropout(p=drop_rate)) def forward(self, input): output = input for conv in self.convs_layers: output = conv(output) output = self.act(output + input) return output class UpsamplerBlock(BaseModule): """Upsampler block of ERFNet. Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. conv_cfg (dict | None): Config of conv layers. Default: None. norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN'). act_cfg (dict): Config of activation layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. """ def __init__(self, in_channels, out_channels, conv_cfg=None, norm_cfg=dict(type='BN', eps=1e-3), act_cfg=dict(type='ReLU'), init_cfg=None): super(UpsamplerBlock, self).__init__(init_cfg=init_cfg) self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.conv = nn.ConvTranspose2d( in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=2, padding=1, output_padding=1, bias=True) self.bn = build_norm_layer(self.norm_cfg, out_channels)[1] self.act = build_activation_layer(self.act_cfg) def forward(self, input): output = self.conv(input) output = self.bn(output) output = self.act(output) return output @BACKBONES.register_module() class ERFNet(BaseModule): """ERFNet backbone. This backbone is the implementation of `ERFNet: Efficient Residual Factorized ConvNet for Real-time SemanticSegmentation <https://ieeexplore.ieee.org/document/8063438>`_. Args: in_channels (int): The number of channels of input image. Default: 3. enc_downsample_channels (Tuple[int]): Size of channel numbers of various Downsampler block in encoder. Default: (16, 64, 128). enc_stage_non_bottlenecks (Tuple[int]): Number of stages of Non-bottleneck block in encoder. Default: (5, 8). enc_non_bottleneck_dilations (Tuple[int]): Dilation rate of each stage of Non-bottleneck block of encoder. Default: (2, 4, 8, 16). enc_non_bottleneck_channels (Tuple[int]): Size of channel numbers of various Non-bottleneck block in encoder. Default: (64, 128). dec_upsample_channels (Tuple[int]): Size of channel numbers of various Deconvolution block in decoder. Default: (64, 16). dec_stages_non_bottleneck (Tuple[int]): Number of stages of Non-bottleneck block in decoder. Default: (2, 2). 
dec_non_bottleneck_channels (Tuple[int]): Size of channel numbers of various Non-bottleneck block in decoder. Default: (64, 16). drop_rate (float): Probability of an element to be zeroed. Default 0.1. """ def __init__(self, in_channels=3, enc_downsample_channels=(16, 64, 128), enc_stage_non_bottlenecks=(5, 8), enc_non_bottleneck_dilations=(2, 4, 8, 16), enc_non_bottleneck_channels=(64, 128), dec_upsample_channels=(64, 16), dec_stages_non_bottleneck=(2, 2), dec_non_bottleneck_channels=(64, 16), dropout_ratio=0.1, conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), act_cfg=dict(type='ReLU'), init_cfg=None): super(ERFNet, self).__init__(init_cfg=init_cfg) assert len(enc_downsample_channels) \ == len(dec_upsample_channels)+1, 'Number of downsample\ block of encoder does not \ match number of upsample block of decoder!' assert len(enc_downsample_channels) \ == len(enc_stage_non_bottlenecks)+1, 'Number of \ downsample block of encoder does not match \ number of Non-bottleneck block of encoder!' assert len(enc_downsample_channels) \ == len(enc_non_bottleneck_channels)+1, 'Number of \ downsample block of encoder does not match \ number of channels of Non-bottleneck block of encoder!' assert enc_stage_non_bottlenecks[-1] \ % len(enc_non_bottleneck_dilations) == 0, 'Number of \ Non-bottleneck block of encoder does not match \ number of Non-bottleneck block of encoder!' assert len(dec_upsample_channels) \ == len(dec_stages_non_bottleneck), 'Number of \ upsample block of decoder does not match \ number of Non-bottleneck block of decoder!' assert len(dec_stages_non_bottleneck) \ == len(dec_non_bottleneck_channels), 'Number of \ Non-bottleneck block of decoder does not match \ number of channels of Non-bottleneck block of decoder!' self.in_channels = in_channels self.enc_downsample_channels = enc_downsample_channels self.enc_stage_non_bottlenecks = enc_stage_non_bottlenecks self.enc_non_bottleneck_dilations = enc_non_bottleneck_dilations self.enc_non_bottleneck_channels = enc_non_bottleneck_channels self.dec_upsample_channels = dec_upsample_channels self.dec_stages_non_bottleneck = dec_stages_non_bottleneck self.dec_non_bottleneck_channels = dec_non_bottleneck_channels self.dropout_ratio = dropout_ratio self.encoder = nn.ModuleList() self.decoder = nn.ModuleList() self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.encoder.append( DownsamplerBlock(self.in_channels, enc_downsample_channels[0])) for i in range(len(enc_downsample_channels) - 1): self.encoder.append( DownsamplerBlock(enc_downsample_channels[i], enc_downsample_channels[i + 1])) # Last part of encoder is some dilated NonBottleneck1d blocks. 
if i == len(enc_downsample_channels) - 2: iteration_times = int(enc_stage_non_bottlenecks[-1] / len(enc_non_bottleneck_dilations)) for j in range(iteration_times): for k in range(len(enc_non_bottleneck_dilations)): self.encoder.append( NonBottleneck1d(enc_downsample_channels[-1], self.dropout_ratio, enc_non_bottleneck_dilations[k])) else: for j in range(enc_stage_non_bottlenecks[i]): self.encoder.append( NonBottleneck1d(enc_downsample_channels[i + 1], self.dropout_ratio)) for i in range(len(dec_upsample_channels)): if i == 0: self.decoder.append( UpsamplerBlock(enc_downsample_channels[-1], dec_non_bottleneck_channels[i])) else: self.decoder.append( UpsamplerBlock(dec_non_bottleneck_channels[i - 1], dec_non_bottleneck_channels[i])) for j in range(dec_stages_non_bottleneck[i]): self.decoder.append( NonBottleneck1d(dec_non_bottleneck_channels[i])) def forward(self, x): for enc in self.encoder: x = enc(x) for dec in self.decoder: x = dec(x) return [x]
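

# --- Editor's note: the short demo below is an added usage sketch, not part
# of the original erfnet.py. It assumes torch and mmsegmentation are
# installed; the 64x64 input size is purely illustrative.
if __name__ == '__main__':
    import torch

    model = ERFNet()  # defaults: (16, 64, 128) encoder / (64, 16) decoder
    model.eval()
    with torch.no_grad():
        feats = model(torch.rand(1, 3, 64, 64))
    # The encoder downsamples by 8 and the decoder upsamples by 4, so the
    # single returned map should be a 16-channel feature at half the input
    # resolution, i.e. torch.Size([1, 16, 32, 32]).
    print(feats[0].shape)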
13068
38.60303
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/fast_scnn.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule from mmcv.runner import BaseModule from mmseg.models.decode_heads.psp_head import PPM from mmseg.ops import resize from ..builder import BACKBONES from ..utils import InvertedResidual class LearningToDownsample(nn.Module): """Learning to downsample module. Args: in_channels (int): Number of input channels. dw_channels (tuple[int]): Number of output channels of the first and the second depthwise conv (dwconv) layers. out_channels (int): Number of output channels of the whole 'learning to downsample' module. conv_cfg (dict | None): Config of conv layers. Default: None norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN') act_cfg (dict): Config of activation layers. Default: dict(type='ReLU') dw_act_cfg (dict): In DepthwiseSeparableConvModule, activation config of depthwise ConvModule. If it is 'default', it will be the same as `act_cfg`. Default: None. """ def __init__(self, in_channels, dw_channels, out_channels, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), dw_act_cfg=None): super(LearningToDownsample, self).__init__() self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.dw_act_cfg = dw_act_cfg dw_channels1 = dw_channels[0] dw_channels2 = dw_channels[1] self.conv = ConvModule( in_channels, dw_channels1, 3, stride=2, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.dsconv1 = DepthwiseSeparableConvModule( dw_channels1, dw_channels2, kernel_size=3, stride=2, padding=1, norm_cfg=self.norm_cfg, dw_act_cfg=self.dw_act_cfg) self.dsconv2 = DepthwiseSeparableConvModule( dw_channels2, out_channels, kernel_size=3, stride=2, padding=1, norm_cfg=self.norm_cfg, dw_act_cfg=self.dw_act_cfg) def forward(self, x): x = self.conv(x) x = self.dsconv1(x) x = self.dsconv2(x) return x class GlobalFeatureExtractor(nn.Module): """Global feature extractor module. Args: in_channels (int): Number of input channels of the GFE module. Default: 64 block_channels (tuple[int]): Tuple of ints. Each int specifies the number of output channels of each Inverted Residual module. Default: (64, 96, 128) out_channels(int): Number of output channels of the GFE module. Default: 128 expand_ratio (int): Adjusts number of channels of the hidden layer in InvertedResidual by this amount. Default: 6 num_blocks (tuple[int]): Tuple of ints. Each int specifies the number of times each Inverted Residual module is repeated. The repeated Inverted Residual modules are called a 'group'. Default: (3, 3, 3) strides (tuple[int]): Tuple of ints. Each int specifies the downsampling factor of each 'group'. Default: (2, 2, 1) pool_scales (tuple[int]): Tuple of ints. Each int specifies the parameter required in 'global average pooling' within PPM. Default: (1, 2, 3, 6) conv_cfg (dict | None): Config of conv layers. Default: None norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN') act_cfg (dict): Config of activation layers. Default: dict(type='ReLU') align_corners (bool): align_corners argument of F.interpolate. 
Default: False """ def __init__(self, in_channels=64, block_channels=(64, 96, 128), out_channels=128, expand_ratio=6, num_blocks=(3, 3, 3), strides=(2, 2, 1), pool_scales=(1, 2, 3, 6), conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), align_corners=False): super(GlobalFeatureExtractor, self).__init__() self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg assert len(block_channels) == len(num_blocks) == 3 self.bottleneck1 = self._make_layer(in_channels, block_channels[0], num_blocks[0], strides[0], expand_ratio) self.bottleneck2 = self._make_layer(block_channels[0], block_channels[1], num_blocks[1], strides[1], expand_ratio) self.bottleneck3 = self._make_layer(block_channels[1], block_channels[2], num_blocks[2], strides[2], expand_ratio) self.ppm = PPM( pool_scales, block_channels[2], block_channels[2] // 4, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg, align_corners=align_corners) self.out = ConvModule( block_channels[2] * 2, out_channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def _make_layer(self, in_channels, out_channels, blocks, stride=1, expand_ratio=6): layers = [ InvertedResidual( in_channels, out_channels, stride, expand_ratio, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) ] for i in range(1, blocks): layers.append( InvertedResidual( out_channels, out_channels, 1, expand_ratio, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg)) return nn.Sequential(*layers) def forward(self, x): x = self.bottleneck1(x) x = self.bottleneck2(x) x = self.bottleneck3(x) x = torch.cat([x, *self.ppm(x)], dim=1) x = self.out(x) return x class FeatureFusionModule(nn.Module): """Feature fusion module. Args: higher_in_channels (int): Number of input channels of the higher-resolution branch. lower_in_channels (int): Number of input channels of the lower-resolution branch. out_channels (int): Number of output channels. conv_cfg (dict | None): Config of conv layers. Default: None norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN') dwconv_act_cfg (dict): Config of activation layers in 3x3 conv. Default: dict(type='ReLU'). conv_act_cfg (dict): Config of activation layers in the two 1x1 conv. Default: None. align_corners (bool): align_corners argument of F.interpolate. Default: False. 
""" def __init__(self, higher_in_channels, lower_in_channels, out_channels, conv_cfg=None, norm_cfg=dict(type='BN'), dwconv_act_cfg=dict(type='ReLU'), conv_act_cfg=None, align_corners=False): super(FeatureFusionModule, self).__init__() self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.dwconv_act_cfg = dwconv_act_cfg self.conv_act_cfg = conv_act_cfg self.align_corners = align_corners self.dwconv = ConvModule( lower_in_channels, out_channels, 3, padding=1, groups=out_channels, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.dwconv_act_cfg) self.conv_lower_res = ConvModule( out_channels, out_channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.conv_act_cfg) self.conv_higher_res = ConvModule( higher_in_channels, out_channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.conv_act_cfg) self.relu = nn.ReLU(True) def forward(self, higher_res_feature, lower_res_feature): lower_res_feature = resize( lower_res_feature, size=higher_res_feature.size()[2:], mode='bilinear', align_corners=self.align_corners) lower_res_feature = self.dwconv(lower_res_feature) lower_res_feature = self.conv_lower_res(lower_res_feature) higher_res_feature = self.conv_higher_res(higher_res_feature) out = higher_res_feature + lower_res_feature return self.relu(out) @BACKBONES.register_module() class FastSCNN(BaseModule): """Fast-SCNN Backbone. This backbone is the implementation of `Fast-SCNN: Fast Semantic Segmentation Network <https://arxiv.org/abs/1902.04502>`_. Args: in_channels (int): Number of input image channels. Default: 3. downsample_dw_channels (tuple[int]): Number of output channels after the first conv layer & the second conv layer in Learning-To-Downsample (LTD) module. Default: (32, 48). global_in_channels (int): Number of input channels of Global Feature Extractor(GFE). Equal to number of output channels of LTD. Default: 64. global_block_channels (tuple[int]): Tuple of integers that describe the output channels for each of the MobileNet-v2 bottleneck residual blocks in GFE. Default: (64, 96, 128). global_block_strides (tuple[int]): Tuple of integers that describe the strides (downsampling factors) for each of the MobileNet-v2 bottleneck residual blocks in GFE. Default: (2, 2, 1). global_out_channels (int): Number of output channels of GFE. Default: 128. higher_in_channels (int): Number of input channels of the higher resolution branch in FFM. Equal to global_in_channels. Default: 64. lower_in_channels (int): Number of input channels of the lower resolution branch in FFM. Equal to global_out_channels. Default: 128. fusion_out_channels (int): Number of output channels of FFM. Default: 128. out_indices (tuple): Tuple of indices of list [higher_res_features, lower_res_features, fusion_output]. Often set to (0,1,2) to enable aux. heads. Default: (0, 1, 2). conv_cfg (dict | None): Config of conv layers. Default: None norm_cfg (dict | None): Config of norm layers. Default: dict(type='BN') act_cfg (dict): Config of activation layers. Default: dict(type='ReLU') align_corners (bool): align_corners argument of F.interpolate. Default: False dw_act_cfg (dict): In DepthwiseSeparableConvModule, activation config of depthwise ConvModule. If it is 'default', it will be the same as `act_cfg`. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. 
Default: None """ def __init__(self, in_channels=3, downsample_dw_channels=(32, 48), global_in_channels=64, global_block_channels=(64, 96, 128), global_block_strides=(2, 2, 1), global_out_channels=128, higher_in_channels=64, lower_in_channels=128, fusion_out_channels=128, out_indices=(0, 1, 2), conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), align_corners=False, dw_act_cfg=None, init_cfg=None): super(FastSCNN, self).__init__(init_cfg) if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] if global_in_channels != higher_in_channels: raise AssertionError('Global Input Channels must be the same \ with Higher Input Channels!') elif global_out_channels != lower_in_channels: raise AssertionError('Global Output Channels must be the same \ with Lower Input Channels!') self.in_channels = in_channels self.downsample_dw_channels1 = downsample_dw_channels[0] self.downsample_dw_channels2 = downsample_dw_channels[1] self.global_in_channels = global_in_channels self.global_block_channels = global_block_channels self.global_block_strides = global_block_strides self.global_out_channels = global_out_channels self.higher_in_channels = higher_in_channels self.lower_in_channels = lower_in_channels self.fusion_out_channels = fusion_out_channels self.out_indices = out_indices self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.align_corners = align_corners self.learning_to_downsample = LearningToDownsample( in_channels, downsample_dw_channels, global_in_channels, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg, dw_act_cfg=dw_act_cfg) self.global_feature_extractor = GlobalFeatureExtractor( global_in_channels, global_block_channels, global_out_channels, strides=self.global_block_strides, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg, align_corners=self.align_corners) self.feature_fusion = FeatureFusionModule( higher_in_channels, lower_in_channels, fusion_out_channels, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, dwconv_act_cfg=self.act_cfg, align_corners=self.align_corners) def forward(self, x): higher_res_features = self.learning_to_downsample(x) lower_res_features = self.global_feature_extractor(higher_res_features) fusion_output = self.feature_fusion(higher_res_features, lower_res_features) outs = [higher_res_features, lower_res_features, fusion_output] outs = [outs[i] for i in self.out_indices] return tuple(outs)
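

# --- Editor's note: added usage sketch, not part of the original
# fast_scnn.py. It assumes torch and mmsegmentation are importable; the
# 256x256 input is illustrative and the expected shapes follow from the
# default arguments documented above.
if __name__ == '__main__':
    import torch

    model = FastSCNN()
    model.eval()
    with torch.no_grad():
        outs = model(torch.rand(1, 3, 256, 256))
    # Expected with the defaults: higher-res features (stride 8, 64 channels),
    # lower-res features (stride 32, 128 channels) and the fused output
    # (stride 8, 128 channels).
    for out in outs:
        print(out.shape)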
15660
37.197561
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/hrnet.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import torch.nn as nn from mmcv.cnn import build_conv_layer, build_norm_layer from mmcv.runner import BaseModule, ModuleList, Sequential from mmcv.utils.parrots_wrapper import _BatchNorm from mmseg.ops import Upsample, resize from ..builder import BACKBONES from .resnet import BasicBlock, Bottleneck class HRModule(BaseModule): """High-Resolution Module for HRNet. In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange is in this module. """ def __init__(self, num_branches, blocks, num_blocks, in_channels, num_channels, multiscale_output=True, with_cp=False, conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), block_init_cfg=None, init_cfg=None): super(HRModule, self).__init__(init_cfg) self.block_init_cfg = block_init_cfg self._check_branches(num_branches, num_blocks, in_channels, num_channels) self.in_channels = in_channels self.num_branches = num_branches self.multiscale_output = multiscale_output self.norm_cfg = norm_cfg self.conv_cfg = conv_cfg self.with_cp = with_cp self.branches = self._make_branches(num_branches, blocks, num_blocks, num_channels) self.fuse_layers = self._make_fuse_layers() self.relu = nn.ReLU(inplace=False) def _check_branches(self, num_branches, num_blocks, in_channels, num_channels): """Check branches configuration.""" if num_branches != len(num_blocks): error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_BLOCKS(' \ f'{len(num_blocks)})' raise ValueError(error_msg) if num_branches != len(num_channels): error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_CHANNELS(' \ f'{len(num_channels)})' raise ValueError(error_msg) if num_branches != len(in_channels): error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_INCHANNELS(' \ f'{len(in_channels)})' raise ValueError(error_msg) def _make_one_branch(self, branch_index, block, num_blocks, num_channels, stride=1): """Build one branch.""" downsample = None if stride != 1 or \ self.in_channels[branch_index] != \ num_channels[branch_index] * block.expansion: downsample = nn.Sequential( build_conv_layer( self.conv_cfg, self.in_channels[branch_index], num_channels[branch_index] * block.expansion, kernel_size=1, stride=stride, bias=False), build_norm_layer(self.norm_cfg, num_channels[branch_index] * block.expansion)[1]) layers = [] layers.append( block( self.in_channels[branch_index], num_channels[branch_index], stride, downsample=downsample, with_cp=self.with_cp, norm_cfg=self.norm_cfg, conv_cfg=self.conv_cfg, init_cfg=self.block_init_cfg)) self.in_channels[branch_index] = \ num_channels[branch_index] * block.expansion for i in range(1, num_blocks[branch_index]): layers.append( block( self.in_channels[branch_index], num_channels[branch_index], with_cp=self.with_cp, norm_cfg=self.norm_cfg, conv_cfg=self.conv_cfg, init_cfg=self.block_init_cfg)) return Sequential(*layers) def _make_branches(self, num_branches, block, num_blocks, num_channels): """Build multiple branch.""" branches = [] for i in range(num_branches): branches.append( self._make_one_branch(i, block, num_blocks, num_channels)) return ModuleList(branches) def _make_fuse_layers(self): """Build fuse layer.""" if self.num_branches == 1: return None num_branches = self.num_branches in_channels = self.in_channels fuse_layers = [] num_out_branches = num_branches if self.multiscale_output else 1 for i in range(num_out_branches): fuse_layer = [] for j in range(num_branches): if j > i: fuse_layer.append( nn.Sequential( build_conv_layer( self.conv_cfg, in_channels[j], in_channels[i], 
kernel_size=1, stride=1, padding=0, bias=False), build_norm_layer(self.norm_cfg, in_channels[i])[1], # we set align_corners=False for HRNet Upsample( scale_factor=2**(j - i), mode='bilinear', align_corners=False))) elif j == i: fuse_layer.append(None) else: conv_downsamples = [] for k in range(i - j): if k == i - j - 1: conv_downsamples.append( nn.Sequential( build_conv_layer( self.conv_cfg, in_channels[j], in_channels[i], kernel_size=3, stride=2, padding=1, bias=False), build_norm_layer(self.norm_cfg, in_channels[i])[1])) else: conv_downsamples.append( nn.Sequential( build_conv_layer( self.conv_cfg, in_channels[j], in_channels[j], kernel_size=3, stride=2, padding=1, bias=False), build_norm_layer(self.norm_cfg, in_channels[j])[1], nn.ReLU(inplace=False))) fuse_layer.append(nn.Sequential(*conv_downsamples)) fuse_layers.append(nn.ModuleList(fuse_layer)) return nn.ModuleList(fuse_layers) def forward(self, x): """Forward function.""" if self.num_branches == 1: return [self.branches[0](x[0])] for i in range(self.num_branches): x[i] = self.branches[i](x[i]) x_fuse = [] for i in range(len(self.fuse_layers)): y = 0 for j in range(self.num_branches): if i == j: y += x[j] elif j > i: y = y + resize( self.fuse_layers[i][j](x[j]), size=x[i].shape[2:], mode='bilinear', align_corners=False) else: y += self.fuse_layers[i][j](x[j]) x_fuse.append(self.relu(y)) return x_fuse @BACKBONES.register_module() class HRNet(BaseModule): """HRNet backbone. This backbone is the implementation of `High-Resolution Representations for Labeling Pixels and Regions <https://arxiv.org/abs/1904.04514>`_. Args: extra (dict): Detailed configuration for each stage of HRNet. There must be 4 stages, the configuration for each stage must have 5 keys: - num_modules (int): The number of HRModule in this stage. - num_branches (int): The number of branches in the HRModule. - block (str): The type of convolution block. - num_blocks (tuple): The number of blocks in each branch. The length must be equal to num_branches. - num_channels (tuple): The number of channels in each branch. The length must be equal to num_branches. in_channels (int): Number of input image channels. Normally 3. conv_cfg (dict): Dictionary to construct and config conv layer. Default: None. norm_cfg (dict): Dictionary to construct and config norm layer. Use `BN` by default. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. frozen_stages (int): Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1. zero_init_residual (bool): Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: False. multiscale_output (bool): Whether to output multi-level features produced by multiple branches. If False, only the first level feature will be output. Default: True. pretrained (str, optional): Model pretrained path. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
Example: >>> from mmseg.models import HRNet >>> import torch >>> extra = dict( >>> stage1=dict( >>> num_modules=1, >>> num_branches=1, >>> block='BOTTLENECK', >>> num_blocks=(4, ), >>> num_channels=(64, )), >>> stage2=dict( >>> num_modules=1, >>> num_branches=2, >>> block='BASIC', >>> num_blocks=(4, 4), >>> num_channels=(32, 64)), >>> stage3=dict( >>> num_modules=4, >>> num_branches=3, >>> block='BASIC', >>> num_blocks=(4, 4, 4), >>> num_channels=(32, 64, 128)), >>> stage4=dict( >>> num_modules=3, >>> num_branches=4, >>> block='BASIC', >>> num_blocks=(4, 4, 4, 4), >>> num_channels=(32, 64, 128, 256))) >>> self = HRNet(extra, in_channels=1) >>> self.eval() >>> inputs = torch.rand(1, 1, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 32, 8, 8) (1, 64, 4, 4) (1, 128, 2, 2) (1, 256, 1, 1) """ blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} def __init__(self, extra, in_channels=3, conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=False, with_cp=False, frozen_stages=-1, zero_init_residual=False, multiscale_output=True, pretrained=None, init_cfg=None): super(HRNet, self).__init__(init_cfg) self.pretrained = pretrained self.zero_init_residual = zero_init_residual assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be setting at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] else: raise TypeError('pretrained must be a str or None') # Assert configurations of 4 stages are in extra assert 'stage1' in extra and 'stage2' in extra \ and 'stage3' in extra and 'stage4' in extra # Assert whether the length of `num_blocks` and `num_channels` are # equal to `num_branches` for i in range(4): cfg = extra[f'stage{i + 1}'] assert len(cfg['num_blocks']) == cfg['num_branches'] and \ len(cfg['num_channels']) == cfg['num_branches'] self.extra = extra self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.norm_eval = norm_eval self.with_cp = with_cp self.frozen_stages = frozen_stages # stem net self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) self.conv1 = build_conv_layer( self.conv_cfg, in_channels, 64, kernel_size=3, stride=2, padding=1, bias=False) self.add_module(self.norm1_name, norm1) self.conv2 = build_conv_layer( self.conv_cfg, 64, 64, kernel_size=3, stride=2, padding=1, bias=False) self.add_module(self.norm2_name, norm2) self.relu = nn.ReLU(inplace=True) # stage 1 self.stage1_cfg = self.extra['stage1'] num_channels = self.stage1_cfg['num_channels'][0] block_type = self.stage1_cfg['block'] num_blocks = self.stage1_cfg['num_blocks'][0] block = self.blocks_dict[block_type] stage1_out_channels = num_channels * block.expansion self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) # stage 2 self.stage2_cfg = self.extra['stage2'] num_channels = self.stage2_cfg['num_channels'] block_type = self.stage2_cfg['block'] block = self.blocks_dict[block_type] num_channels = [channel * block.expansion for channel in num_channels] self.transition1 = self._make_transition_layer([stage1_out_channels], num_channels) self.stage2, pre_stage_channels = self._make_stage( 
self.stage2_cfg, num_channels) # stage 3 self.stage3_cfg = self.extra['stage3'] num_channels = self.stage3_cfg['num_channels'] block_type = self.stage3_cfg['block'] block = self.blocks_dict[block_type] num_channels = [channel * block.expansion for channel in num_channels] self.transition2 = self._make_transition_layer(pre_stage_channels, num_channels) self.stage3, pre_stage_channels = self._make_stage( self.stage3_cfg, num_channels) # stage 4 self.stage4_cfg = self.extra['stage4'] num_channels = self.stage4_cfg['num_channels'] block_type = self.stage4_cfg['block'] block = self.blocks_dict[block_type] num_channels = [channel * block.expansion for channel in num_channels] self.transition3 = self._make_transition_layer(pre_stage_channels, num_channels) self.stage4, pre_stage_channels = self._make_stage( self.stage4_cfg, num_channels, multiscale_output=multiscale_output) self._freeze_stages() @property def norm1(self): """nn.Module: the normalization layer named "norm1" """ return getattr(self, self.norm1_name) @property def norm2(self): """nn.Module: the normalization layer named "norm2" """ return getattr(self, self.norm2_name) def _make_transition_layer(self, num_channels_pre_layer, num_channels_cur_layer): """Make transition layer.""" num_branches_cur = len(num_channels_cur_layer) num_branches_pre = len(num_channels_pre_layer) transition_layers = [] for i in range(num_branches_cur): if i < num_branches_pre: if num_channels_cur_layer[i] != num_channels_pre_layer[i]: transition_layers.append( nn.Sequential( build_conv_layer( self.conv_cfg, num_channels_pre_layer[i], num_channels_cur_layer[i], kernel_size=3, stride=1, padding=1, bias=False), build_norm_layer(self.norm_cfg, num_channels_cur_layer[i])[1], nn.ReLU(inplace=True))) else: transition_layers.append(None) else: conv_downsamples = [] for j in range(i + 1 - num_branches_pre): in_channels = num_channels_pre_layer[-1] out_channels = num_channels_cur_layer[i] \ if j == i - num_branches_pre else in_channels conv_downsamples.append( nn.Sequential( build_conv_layer( self.conv_cfg, in_channels, out_channels, kernel_size=3, stride=2, padding=1, bias=False), build_norm_layer(self.norm_cfg, out_channels)[1], nn.ReLU(inplace=True))) transition_layers.append(nn.Sequential(*conv_downsamples)) return nn.ModuleList(transition_layers) def _make_layer(self, block, inplanes, planes, blocks, stride=1): """Make each layer.""" downsample = None if stride != 1 or inplanes != planes * block.expansion: downsample = nn.Sequential( build_conv_layer( self.conv_cfg, inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) layers = [] block_init_cfg = None if self.pretrained is None and not hasattr( self, 'init_cfg') and self.zero_init_residual: if block is BasicBlock: block_init_cfg = dict( type='Constant', val=0, override=dict(name='norm2')) elif block is Bottleneck: block_init_cfg = dict( type='Constant', val=0, override=dict(name='norm3')) layers.append( block( inplanes, planes, stride, downsample=downsample, with_cp=self.with_cp, norm_cfg=self.norm_cfg, conv_cfg=self.conv_cfg, init_cfg=block_init_cfg)) inplanes = planes * block.expansion for i in range(1, blocks): layers.append( block( inplanes, planes, with_cp=self.with_cp, norm_cfg=self.norm_cfg, conv_cfg=self.conv_cfg, init_cfg=block_init_cfg)) return Sequential(*layers) def _make_stage(self, layer_config, in_channels, multiscale_output=True): """Make each stage.""" num_modules = layer_config['num_modules'] num_branches = 
layer_config['num_branches'] num_blocks = layer_config['num_blocks'] num_channels = layer_config['num_channels'] block = self.blocks_dict[layer_config['block']] hr_modules = [] block_init_cfg = None if self.pretrained is None and not hasattr( self, 'init_cfg') and self.zero_init_residual: if block is BasicBlock: block_init_cfg = dict( type='Constant', val=0, override=dict(name='norm2')) elif block is Bottleneck: block_init_cfg = dict( type='Constant', val=0, override=dict(name='norm3')) for i in range(num_modules): # multi_scale_output is only used for the last module if not multiscale_output and i == num_modules - 1: reset_multiscale_output = False else: reset_multiscale_output = True hr_modules.append( HRModule( num_branches, block, num_blocks, in_channels, num_channels, reset_multiscale_output, with_cp=self.with_cp, norm_cfg=self.norm_cfg, conv_cfg=self.conv_cfg, block_init_cfg=block_init_cfg)) return Sequential(*hr_modules), in_channels def _freeze_stages(self): """Freeze stages param and norm stats.""" if self.frozen_stages >= 0: self.norm1.eval() self.norm2.eval() for m in [self.conv1, self.norm1, self.conv2, self.norm2]: for param in m.parameters(): param.requires_grad = False for i in range(1, self.frozen_stages + 1): if i == 1: m = getattr(self, f'layer{i}') t = getattr(self, f'transition{i}') elif i == 4: m = getattr(self, f'stage{i}') else: m = getattr(self, f'stage{i}') t = getattr(self, f'transition{i}') m.eval() for param in m.parameters(): param.requires_grad = False t.eval() for param in t.parameters(): param.requires_grad = False def forward(self, x): """Forward function.""" x = self.conv1(x) x = self.norm1(x) x = self.relu(x) x = self.conv2(x) x = self.norm2(x) x = self.relu(x) x = self.layer1(x) x_list = [] for i in range(self.stage2_cfg['num_branches']): if self.transition1[i] is not None: x_list.append(self.transition1[i](x)) else: x_list.append(x) y_list = self.stage2(x_list) x_list = [] for i in range(self.stage3_cfg['num_branches']): if self.transition2[i] is not None: x_list.append(self.transition2[i](y_list[-1])) else: x_list.append(y_list[i]) y_list = self.stage3(x_list) x_list = [] for i in range(self.stage4_cfg['num_branches']): if self.transition3[i] is not None: x_list.append(self.transition3[i](y_list[-1])) else: x_list.append(y_list[i]) y_list = self.stage4(x_list) return y_list def train(self, mode=True): """Convert the model into training mode will keeping the normalization layer freezed.""" super(HRNet, self).train(mode) self._freeze_stages() if mode and self.norm_eval: for m in self.modules(): # trick: eval have effect on BatchNorm only if isinstance(m, _BatchNorm): m.eval()
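

# --- Editor's note: added sketch, not part of the original hrnet.py. The
# class docstring above already shows direct instantiation, so this sketch
# instead builds the backbone through the BACKBONES registry, the way the
# ``backbone`` dict of an mmseg config is consumed. It assumes mmsegmentation
# is installed; the stage settings mirror the docstring example.
if __name__ == '__main__':
    import torch

    from mmseg.models import build_backbone

    cfg = dict(
        type='HRNet',
        in_channels=3,
        extra=dict(
            stage1=dict(
                num_modules=1,
                num_branches=1,
                block='BOTTLENECK',
                num_blocks=(4, ),
                num_channels=(64, )),
            stage2=dict(
                num_modules=1,
                num_branches=2,
                block='BASIC',
                num_blocks=(4, 4),
                num_channels=(32, 64)),
            stage3=dict(
                num_modules=4,
                num_branches=3,
                block='BASIC',
                num_blocks=(4, 4, 4),
                num_channels=(32, 64, 128)),
            stage4=dict(
                num_modules=3,
                num_branches=4,
                block='BASIC',
                num_blocks=(4, 4, 4, 4),
                num_channels=(32, 64, 128, 256))))
    model = build_backbone(cfg)
    model.eval()
    with torch.no_grad():
        outs = model(torch.rand(1, 3, 64, 64))
    # Four branches at strides 4/8/16/32 with 32/64/128/256 channels.
    for out in outs:
        print(tuple(out.shape))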
25112
38.055988
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/icnet.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import ConvModule from mmcv.runner import BaseModule from mmseg.ops import resize from ..builder import BACKBONES, build_backbone from ..decode_heads.psp_head import PPM @BACKBONES.register_module() class ICNet(BaseModule): """ICNet for Real-Time Semantic Segmentation on High-Resolution Images. This backbone is the implementation of `ICNet <https://arxiv.org/abs/1704.08545>`_. Args: backbone_cfg (dict): Config dict to build backbone. Usually it is ResNet but it can also be other backbones. in_channels (int): The number of input image channels. Default: 3. layer_channels (Sequence[int]): The numbers of feature channels at layer 2 and layer 4 in ResNet. It can also be other backbones. Default: (512, 2048). light_branch_middle_channels (int): The number of channels of the middle layer in light branch. Default: 32. psp_out_channels (int): The number of channels of the output of PSP module. Default: 512. out_channels (Sequence[int]): The numbers of output feature channels at each branches. Default: (64, 256, 256). pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid Module. Default: (1, 2, 3, 6). conv_cfg (dict): Dictionary to construct and config conv layer. Default: None. norm_cfg (dict): Dictionary to construct and config norm layer. Default: dict(type='BN'). act_cfg (dict): Dictionary to construct and config act layer. Default: dict(type='ReLU'). align_corners (bool): align_corners argument of F.interpolate. Default: False. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. """ def __init__(self, backbone_cfg, in_channels=3, layer_channels=(512, 2048), light_branch_middle_channels=32, psp_out_channels=512, out_channels=(64, 256, 256), pool_scales=(1, 2, 3, 6), conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), act_cfg=dict(type='ReLU'), align_corners=False, init_cfg=None): if backbone_cfg is None: raise TypeError('backbone_cfg must be passed from config file!') if init_cfg is None: init_cfg = [ dict(type='Kaiming', mode='fan_out', layer='Conv2d'), dict(type='Constant', val=1, layer='_BatchNorm'), dict(type='Normal', mean=0.01, layer='Linear') ] super(ICNet, self).__init__(init_cfg=init_cfg) self.align_corners = align_corners self.backbone = build_backbone(backbone_cfg) # Note: Default `ceil_mode` is false in nn.MaxPool2d, set # `ceil_mode=True` to keep information in the corner of feature map. 
self.backbone.maxpool = nn.MaxPool2d( kernel_size=3, stride=2, padding=1, ceil_mode=True) self.psp_modules = PPM( pool_scales=pool_scales, in_channels=layer_channels[1], channels=psp_out_channels, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg, align_corners=align_corners) self.psp_bottleneck = ConvModule( layer_channels[1] + len(pool_scales) * psp_out_channels, psp_out_channels, 3, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) self.conv_sub1 = nn.Sequential( ConvModule( in_channels=in_channels, out_channels=light_branch_middle_channels, kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg), ConvModule( in_channels=light_branch_middle_channels, out_channels=light_branch_middle_channels, kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg), ConvModule( in_channels=light_branch_middle_channels, out_channels=out_channels[0], kernel_size=3, stride=2, padding=1, conv_cfg=conv_cfg, norm_cfg=norm_cfg)) self.conv_sub2 = ConvModule( layer_channels[0], out_channels[1], 1, conv_cfg=conv_cfg, norm_cfg=norm_cfg) self.conv_sub4 = ConvModule( psp_out_channels, out_channels[2], 1, conv_cfg=conv_cfg, norm_cfg=norm_cfg) def forward(self, x): output = [] # sub 1 output.append(self.conv_sub1(x)) # sub 2 x = resize( x, scale_factor=0.5, mode='bilinear', align_corners=self.align_corners) x = self.backbone.stem(x) x = self.backbone.maxpool(x) x = self.backbone.layer1(x) x = self.backbone.layer2(x) output.append(self.conv_sub2(x)) # sub 4 x = resize( x, scale_factor=0.5, mode='bilinear', align_corners=self.align_corners) x = self.backbone.layer3(x) x = self.backbone.layer4(x) psp_outs = self.psp_modules(x) + [x] psp_outs = torch.cat(psp_outs, dim=1) x = self.psp_bottleneck(psp_outs) output.append(self.conv_sub4(x)) return output
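

# --- Editor's note: added usage sketch, not part of the original icnet.py.
# ICNet needs an inner backbone with a deep stem (``self.backbone.stem`` is
# called in ``forward``), so the ResNetV1c settings below mirror the ICNet
# configs shipped with mmsegmentation; its layer2/layer4 widths (512/2048)
# match the default ``layer_channels=(512, 2048)``. torch and mmsegmentation
# are assumed to be installed and the 512x512 input is illustrative.
if __name__ == '__main__':
    import torch

    backbone_cfg = dict(
        type='ResNetV1c',
        in_channels=3,
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        dilations=(1, 1, 2, 4),
        strides=(1, 2, 1, 1),
        norm_cfg=dict(type='BN', requires_grad=True))
    model = ICNet(backbone_cfg=backbone_cfg)
    model.eval()
    with torch.no_grad():
        outs = model(torch.rand(1, 3, 512, 512))
    # Three branches: the light stride-8 branch (64 channels), the stride-16
    # middle branch (256 channels) and the stride-32 PSP branch (256 channels).
    for out in outs:
        print(tuple(out.shape))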
5887
34.257485
76
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/mae.py
# Copyright (c) OpenMMLab. All rights reserved.import math import math import torch import torch.nn as nn from mmcv.cnn.utils.weight_init import (constant_init, kaiming_init, trunc_normal_) from mmcv.runner import ModuleList, _load_checkpoint from torch.nn.modules.batchnorm import _BatchNorm from mmseg.utils import get_root_logger from ..builder import BACKBONES from .beit import BEiT, BEiTAttention, BEiTTransformerEncoderLayer class MAEAttention(BEiTAttention): """Multi-head self-attention with relative position bias used in MAE. This module is different from ``BEiTAttention`` by initializing the relative bias table with zeros. """ def init_weights(self): """Initialize relative position bias with zeros.""" # As MAE initializes relative position bias as zeros and this class # inherited from BEiT which initializes relative position bias # with `trunc_normal`, `init_weights` here does # nothing and just passes directly pass class MAETransformerEncoderLayer(BEiTTransformerEncoderLayer): """Implements one encoder layer in Vision Transformer. This module is different from ``BEiTTransformerEncoderLayer`` by replacing ``BEiTAttention`` with ``MAEAttention``. """ def build_attn(self, attn_cfg): self.attn = MAEAttention(**attn_cfg) @BACKBONES.register_module() class MAE(BEiT): """VisionTransformer with support for patch. Args: img_size (int | tuple): Input image size. Default: 224. patch_size (int): The patch size. Default: 16. in_channels (int): Number of input channels. Default: 3. embed_dims (int): embedding dimension. Default: 768. num_layers (int): depth of transformer. Default: 12. num_heads (int): number of attention heads. Default: 12. mlp_ratio (int): ratio of mlp hidden dim to embedding dim. Default: 4. out_indices (list | tuple | int): Output from which stages. Default: -1. attn_drop_rate (float): The drop out rate for attention layer. Default 0.0 drop_path_rate (float): stochastic depth rate. Default 0.0. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN') act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). patch_norm (bool): Whether to add a norm in PatchEmbed Block. Default: False. final_norm (bool): Whether to add a additional layer to normalize final feature map. Default: False. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. pretrained (str, optional): model pretrained path. Default: None. init_values (float): Initialize the values of Attention and FFN with learnable scaling. Defaults to 0.1. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
""" def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dims=768, num_layers=12, num_heads=12, mlp_ratio=4, out_indices=-1, attn_drop_rate=0., drop_path_rate=0., norm_cfg=dict(type='LN'), act_cfg=dict(type='GELU'), patch_norm=False, final_norm=False, num_fcs=2, norm_eval=False, pretrained=None, init_values=0.1, init_cfg=None): super(MAE, self).__init__( img_size=img_size, patch_size=patch_size, in_channels=in_channels, embed_dims=embed_dims, num_layers=num_layers, num_heads=num_heads, mlp_ratio=mlp_ratio, out_indices=out_indices, qv_bias=False, attn_drop_rate=attn_drop_rate, drop_path_rate=drop_path_rate, norm_cfg=norm_cfg, act_cfg=act_cfg, patch_norm=patch_norm, final_norm=final_norm, num_fcs=num_fcs, norm_eval=norm_eval, pretrained=pretrained, init_values=init_values, init_cfg=init_cfg) self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dims)) self.num_patches = self.patch_shape[0] * self.patch_shape[1] self.pos_embed = nn.Parameter( torch.zeros(1, self.num_patches + 1, embed_dims)) def _build_layers(self): dpr = [ x.item() for x in torch.linspace(0, self.drop_path_rate, self.num_layers) ] self.layers = ModuleList() for i in range(self.num_layers): self.layers.append( MAETransformerEncoderLayer( embed_dims=self.embed_dims, num_heads=self.num_heads, feedforward_channels=self.mlp_ratio * self.embed_dims, attn_drop_rate=self.attn_drop_rate, drop_path_rate=dpr[i], num_fcs=self.num_fcs, bias=True, act_cfg=self.act_cfg, norm_cfg=self.norm_cfg, window_size=self.patch_shape, init_values=self.init_values)) def fix_init_weight(self): """Rescale the initialization according to layer id. This function is copied from https://github.com/microsoft/unilm/blob/master/beit/modeling_pretrain.py. # noqa: E501 Copyright (c) Microsoft Corporation Licensed under the MIT License """ def rescale(param, layer_id): param.div_(math.sqrt(2.0 * layer_id)) for layer_id, layer in enumerate(self.layers): rescale(layer.attn.proj.weight.data, layer_id + 1) rescale(layer.ffn.layers[1].weight.data, layer_id + 1) def init_weights(self): def _init_weights(m): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if isinstance(m, nn.Linear) and m.bias is not None: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.LayerNorm): nn.init.constant_(m.bias, 0) nn.init.constant_(m.weight, 1.0) self.apply(_init_weights) self.fix_init_weight() if (isinstance(self.init_cfg, dict) and self.init_cfg.get('type') == 'Pretrained'): logger = get_root_logger() checkpoint = _load_checkpoint( self.init_cfg['checkpoint'], logger=logger, map_location='cpu') state_dict = self.resize_rel_pos_embed(checkpoint) state_dict = self.resize_abs_pos_embed(state_dict) self.load_state_dict(state_dict, False) elif self.init_cfg is not None: super(MAE, self).init_weights() else: # We only implement the 'jax_impl' initialization implemented at # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 # Copyright 2019 Ross Wightman # Licensed under the Apache License, Version 2.0 (the "License") trunc_normal_(self.cls_token, std=.02) for n, m in self.named_modules(): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if m.bias is not None: if 'ffn' in n: nn.init.normal_(m.bias, mean=0., std=1e-6) else: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.Conv2d): kaiming_init(m, mode='fan_in', bias=0.) elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): constant_init(m, val=1.0, bias=0.) 
def resize_abs_pos_embed(self, state_dict): if 'pos_embed' in state_dict: pos_embed_checkpoint = state_dict['pos_embed'] embedding_size = pos_embed_checkpoint.shape[-1] num_extra_tokens = self.pos_embed.shape[-2] - self.num_patches # height (== width) for the checkpoint position embedding orig_size = int( (pos_embed_checkpoint.shape[-2] - num_extra_tokens)**0.5) # height (== width) for the new position embedding new_size = int(self.num_patches**0.5) # class_token and dist_token are kept unchanged if orig_size != new_size: extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] # only the position tokens are interpolated pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute( 0, 3, 1, 2) pos_tokens = torch.nn.functional.interpolate( pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False) pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) state_dict['pos_embed'] = new_pos_embed return state_dict def forward(self, inputs): B = inputs.shape[0] x, hw_shape = self.patch_embed(inputs) # stole cls_tokens impl from Phil Wang, thanks cls_tokens = self.cls_token.expand(B, -1, -1) x = torch.cat((cls_tokens, x), dim=1) x = x + self.pos_embed outs = [] for i, layer in enumerate(self.layers): x = layer(x) if i == len(self.layers) - 1: if self.final_norm: x = self.norm1(x) if i in self.out_indices: out = x[:, 1:] B, _, C = out.shape out = out.reshape(B, hw_shape[0], hw_shape[1], C).permute(0, 3, 1, 2).contiguous() outs.append(out) return tuple(outs)
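

# --- Editor's note: added usage sketch, not part of the original mae.py. It
# assumes torch and mmsegmentation are installed; weights are random here
# (``init_weights`` with a pretrained checkpoint would normally be used), and
# the 224x224 input matches the default ``img_size``.
if __name__ == '__main__':
    import torch

    model = MAE(img_size=224, patch_size=16)
    model.eval()
    with torch.no_grad():
        outs = model(torch.rand(1, 3, 224, 224))
    # With 16x16 patches on a 224x224 image the default (last-layer) output
    # should be a (1, 768, 14, 14) feature map; the class token is dropped in
    # ``forward`` before reshaping.
    for out in outs:
        print(out.shape)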
10647
39.641221
128
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/mit.py
# Copyright (c) OpenMMLab. All rights reserved. import math import warnings import torch import torch.nn as nn import torch.utils.checkpoint as cp from mmcv.cnn import Conv2d, build_activation_layer, build_norm_layer from mmcv.cnn.bricks.drop import build_dropout from mmcv.cnn.bricks.transformer import MultiheadAttention from mmcv.cnn.utils.weight_init import (constant_init, normal_init, trunc_normal_init) from mmcv.runner import BaseModule, ModuleList, Sequential from ..builder import BACKBONES from ..utils import PatchEmbed, nchw_to_nlc, nlc_to_nchw class MixFFN(BaseModule): """An implementation of MixFFN of Segformer. The differences between MixFFN & FFN: 1. Use 1X1 Conv to replace Linear layer. 2. Introduce 3X3 Conv to encode positional information. Args: embed_dims (int): The feature dimension. Same as `MultiheadAttention`. Defaults: 256. feedforward_channels (int): The hidden dimension of FFNs. Defaults: 1024. act_cfg (dict, optional): The activation config for FFNs. Default: dict(type='ReLU') ffn_drop (float, optional): Probability of an element to be zeroed in FFN. Default 0.0. dropout_layer (obj:`ConfigDict`): The dropout_layer used when adding the shortcut. init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. Default: None. """ def __init__(self, embed_dims, feedforward_channels, act_cfg=dict(type='GELU'), ffn_drop=0., dropout_layer=None, init_cfg=None): super(MixFFN, self).__init__(init_cfg) self.embed_dims = embed_dims self.feedforward_channels = feedforward_channels self.act_cfg = act_cfg self.activate = build_activation_layer(act_cfg) in_channels = embed_dims fc1 = Conv2d( in_channels=in_channels, out_channels=feedforward_channels, kernel_size=1, stride=1, bias=True) # 3x3 depth wise conv to provide positional encode information pe_conv = Conv2d( in_channels=feedforward_channels, out_channels=feedforward_channels, kernel_size=3, stride=1, padding=(3 - 1) // 2, bias=True, groups=feedforward_channels) fc2 = Conv2d( in_channels=feedforward_channels, out_channels=in_channels, kernel_size=1, stride=1, bias=True) drop = nn.Dropout(ffn_drop) layers = [fc1, pe_conv, self.activate, drop, fc2, drop] self.layers = Sequential(*layers) self.dropout_layer = build_dropout( dropout_layer) if dropout_layer else torch.nn.Identity() def forward(self, x, hw_shape, identity=None): out = nlc_to_nchw(x, hw_shape) out = self.layers(out) out = nchw_to_nlc(out) if identity is None: identity = x return identity + self.dropout_layer(out) class EfficientMultiheadAttention(MultiheadAttention): """An implementation of Efficient Multi-head Attention of Segformer. This module is modified from MultiheadAttention which is a module from mmcv.cnn.bricks.transformer. Args: embed_dims (int): The embedding dimension. num_heads (int): Parallel attention heads. attn_drop (float): A Dropout layer on attn_output_weights. Default: 0.0. proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. Default: 0.0. dropout_layer (obj:`ConfigDict`): The dropout_layer used when adding the shortcut. Default: None. init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. Default: None. batch_first (bool): Key, Query and Value are shape of (batch, n, embed_dim) or (n, batch, embed_dim). Default: False. qkv_bias (bool): enable bias for qkv if True. Default True. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). sr_ratio (int): The ratio of spatial reduction of Efficient Multi-head Attention of Segformer. Default: 1. 
""" def __init__(self, embed_dims, num_heads, attn_drop=0., proj_drop=0., dropout_layer=None, init_cfg=None, batch_first=True, qkv_bias=False, norm_cfg=dict(type='LN'), sr_ratio=1): super().__init__( embed_dims, num_heads, attn_drop, proj_drop, dropout_layer=dropout_layer, init_cfg=init_cfg, batch_first=batch_first, bias=qkv_bias) self.sr_ratio = sr_ratio if sr_ratio > 1: self.sr = Conv2d( in_channels=embed_dims, out_channels=embed_dims, kernel_size=sr_ratio, stride=sr_ratio) # The ret[0] of build_norm_layer is norm name. self.norm = build_norm_layer(norm_cfg, embed_dims)[1] # handle the BC-breaking from https://github.com/open-mmlab/mmcv/pull/1418 # noqa from mmseg import digit_version, mmcv_version if mmcv_version < digit_version('1.3.17'): warnings.warn('The legacy version of forward function in' 'EfficientMultiheadAttention is deprecated in' 'mmcv>=1.3.17 and will no longer support in the' 'future. Please upgrade your mmcv.') self.forward = self.legacy_forward def forward(self, x, hw_shape, identity=None): x_q = x if self.sr_ratio > 1: x_kv = nlc_to_nchw(x, hw_shape) x_kv = self.sr(x_kv) x_kv = nchw_to_nlc(x_kv) x_kv = self.norm(x_kv) else: x_kv = x if identity is None: identity = x_q # Because the dataflow('key', 'query', 'value') of # ``torch.nn.MultiheadAttention`` is (num_query, batch, # embed_dims), We should adjust the shape of dataflow from # batch_first (batch, num_query, embed_dims) to num_query_first # (num_query ,batch, embed_dims), and recover ``attn_output`` # from num_query_first to batch_first. if self.batch_first: x_q = x_q.transpose(0, 1) x_kv = x_kv.transpose(0, 1) out = self.attn(query=x_q, key=x_kv, value=x_kv)[0] if self.batch_first: out = out.transpose(0, 1) return identity + self.dropout_layer(self.proj_drop(out)) def legacy_forward(self, x, hw_shape, identity=None): """multi head attention forward in mmcv version < 1.3.17.""" x_q = x if self.sr_ratio > 1: x_kv = nlc_to_nchw(x, hw_shape) x_kv = self.sr(x_kv) x_kv = nchw_to_nlc(x_kv) x_kv = self.norm(x_kv) else: x_kv = x if identity is None: identity = x_q # `need_weights=True` will let nn.MultiHeadAttention # `return attn_output, attn_output_weights.sum(dim=1) / num_heads` # The `attn_output_weights.sum(dim=1)` may cause cuda error. So, we set # `need_weights=False` to ignore `attn_output_weights.sum(dim=1)`. # This issue - `https://github.com/pytorch/pytorch/issues/37583` report # the error that large scale tensor sum operation may cause cuda error. out = self.attn(query=x_q, key=x_kv, value=x_kv, need_weights=False)[0] return identity + self.dropout_layer(self.proj_drop(out)) class TransformerEncoderLayer(BaseModule): """Implements one encoder layer in Segformer. Args: embed_dims (int): The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. drop_rate (float): Probability of an element to be zeroed. after the feed forward layer. Default 0.0. attn_drop_rate (float): The drop out rate for attention layer. Default 0.0. drop_path_rate (float): stochastic depth rate. Default 0.0. qkv_bias (bool): enable bias for qkv if True. Default: True. act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). batch_first (bool): Key, Query and Value are shape of (batch, n, embed_dim) or (n, batch, embed_dim). Default: False. init_cfg (dict, optional): Initialization config dict. Default:None. 
sr_ratio (int): The ratio of spatial reduction of Efficient Multi-head Attention of Segformer. Default: 1. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. """ def __init__(self, embed_dims, num_heads, feedforward_channels, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., qkv_bias=True, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), batch_first=True, sr_ratio=1, with_cp=False): super(TransformerEncoderLayer, self).__init__() # The ret[0] of build_norm_layer is norm name. self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] self.attn = EfficientMultiheadAttention( embed_dims=embed_dims, num_heads=num_heads, attn_drop=attn_drop_rate, proj_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), batch_first=batch_first, qkv_bias=qkv_bias, norm_cfg=norm_cfg, sr_ratio=sr_ratio) # The ret[0] of build_norm_layer is norm name. self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] self.ffn = MixFFN( embed_dims=embed_dims, feedforward_channels=feedforward_channels, ffn_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), act_cfg=act_cfg) self.with_cp = with_cp def forward(self, x, hw_shape): def _inner_forward(x): x = self.attn(self.norm1(x), hw_shape, identity=x) x = self.ffn(self.norm2(x), hw_shape, identity=x) return x if self.with_cp and x.requires_grad: x = cp.checkpoint(_inner_forward, x) else: x = _inner_forward(x) return x @BACKBONES.register_module() class MixVisionTransformer(BaseModule): """The backbone of Segformer. This backbone is the implementation of `SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers <https://arxiv.org/abs/2105.15203>`_. Args: in_channels (int): Number of input channels. Default: 3. embed_dims (int): Embedding dimension. Default: 768. num_stags (int): The num of stages. Default: 4. num_layers (Sequence[int]): The layer number of each transformer encode layer. Default: [3, 4, 6, 3]. num_heads (Sequence[int]): The attention heads of each transformer encode layer. Default: [1, 2, 4, 8]. patch_sizes (Sequence[int]): The patch_size of each overlapped patch embedding. Default: [7, 3, 3, 3]. strides (Sequence[int]): The stride of each overlapped patch embedding. Default: [4, 2, 2, 2]. sr_ratios (Sequence[int]): The spatial reduction rate of each transformer encode layer. Default: [8, 4, 2, 1]. out_indices (Sequence[int] | int): Output from which stages. Default: (0, 1, 2, 3). mlp_ratio (int): ratio of mlp hidden dim to embedding dim. Default: 4. qkv_bias (bool): Enable bias for qkv if True. Default: True. drop_rate (float): Probability of an element to be zeroed. Default 0.0 attn_drop_rate (float): The drop out rate for attention layer. Default 0.0 drop_path_rate (float): stochastic depth rate. Default 0.0 norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN') act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). pretrained (str, optional): model pretrained path. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. 
""" def __init__(self, in_channels=3, embed_dims=64, num_stages=4, num_layers=[3, 4, 6, 3], num_heads=[1, 2, 4, 8], patch_sizes=[7, 3, 3, 3], strides=[4, 2, 2, 2], sr_ratios=[8, 4, 2, 1], out_indices=(0, 1, 2, 3), mlp_ratio=4, qkv_bias=True, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN', eps=1e-6), pretrained=None, init_cfg=None, with_cp=False): super(MixVisionTransformer, self).__init__(init_cfg=init_cfg) assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be set at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is not None: raise TypeError('pretrained must be a str or None') self.embed_dims = embed_dims self.num_stages = num_stages self.num_layers = num_layers self.num_heads = num_heads self.patch_sizes = patch_sizes self.strides = strides self.sr_ratios = sr_ratios self.with_cp = with_cp assert num_stages == len(num_layers) == len(num_heads) \ == len(patch_sizes) == len(strides) == len(sr_ratios) self.out_indices = out_indices assert max(out_indices) < self.num_stages # transformer encoder dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, sum(num_layers)) ] # stochastic num_layer decay rule cur = 0 self.layers = ModuleList() for i, num_layer in enumerate(num_layers): embed_dims_i = embed_dims * num_heads[i] patch_embed = PatchEmbed( in_channels=in_channels, embed_dims=embed_dims_i, kernel_size=patch_sizes[i], stride=strides[i], padding=patch_sizes[i] // 2, norm_cfg=norm_cfg) layer = ModuleList([ TransformerEncoderLayer( embed_dims=embed_dims_i, num_heads=num_heads[i], feedforward_channels=mlp_ratio * embed_dims_i, drop_rate=drop_rate, attn_drop_rate=attn_drop_rate, drop_path_rate=dpr[cur + idx], qkv_bias=qkv_bias, act_cfg=act_cfg, norm_cfg=norm_cfg, with_cp=with_cp, sr_ratio=sr_ratios[i]) for idx in range(num_layer) ]) in_channels = embed_dims_i # The ret[0] of build_norm_layer is norm name. norm = build_norm_layer(norm_cfg, embed_dims_i)[1] self.layers.append(ModuleList([patch_embed, layer, norm])) cur += num_layer def init_weights(self): if self.init_cfg is None: for m in self.modules(): if isinstance(m, nn.Linear): trunc_normal_init(m, std=.02, bias=0.) elif isinstance(m, nn.LayerNorm): constant_init(m, val=1.0, bias=0.) elif isinstance(m, nn.Conv2d): fan_out = m.kernel_size[0] * m.kernel_size[ 1] * m.out_channels fan_out //= m.groups normal_init( m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) else: super(MixVisionTransformer, self).init_weights() def forward(self, x): outs = [] for i, layer in enumerate(self.layers): x, hw_shape = layer[0](x) for block in layer[1]: x = block(x, hw_shape) x = layer[2](x) x = nlc_to_nchw(x, hw_shape) if i in self.out_indices: outs.append(x) return outs
17,527
37.864745
89
py
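A minimal usage sketch for the `MixVisionTransformer` backbone in the `mit.py` file above (not part of the original source; it assumes `mmsegmentation` and `mmcv-full` are installed, and the shapes quoted follow from the default constructor arguments shown in the code):

```python
# Hypothetical smoke test for the MixVisionTransformer (SegFormer encoder) backbone.
import torch

from mmseg.models import build_backbone

# The class is registered in BACKBONES, so it can be built from a config dict.
backbone = build_backbone(dict(type='MixVisionTransformer'))
backbone.init_weights()
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 512, 512))

# With the defaults, stage i outputs embed_dims * num_heads[i] channels,
# i.e. 64/128/256/512 channels at strides 4/8/16/32.
for out in outs:
    print(tuple(out.shape))
```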
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/mobilenet_v2.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import torch.nn as nn from mmcv.cnn import ConvModule from mmcv.runner import BaseModule from torch.nn.modules.batchnorm import _BatchNorm from ..builder import BACKBONES from ..utils import InvertedResidual, make_divisible @BACKBONES.register_module() class MobileNetV2(BaseModule): """MobileNetV2 backbone. This backbone is the implementation of `MobileNetV2: Inverted Residuals and Linear Bottlenecks <https://arxiv.org/abs/1801.04381>`_. Args: widen_factor (float): Width multiplier, multiply number of channels in each layer by this amount. Default: 1.0. strides (Sequence[int], optional): Strides of the first block of each layer. If not specified, default config in ``arch_setting`` will be used. dilations (Sequence[int]): Dilation of each layer. out_indices (None or Sequence[int]): Output from which stages. Default: (7, ). frozen_stages (int): Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters. conv_cfg (dict): Config dict for convolution layer. Default: None, which means using conv2d. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN'). act_cfg (dict): Config dict for activation layer. Default: dict(type='ReLU6'). norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. pretrained (str, optional): model pretrained path. Default: None init_cfg (dict or list[dict], optional): Initialization config dict. Default: None """ # Parameters to build layers. 3 parameters are needed to construct a # layer, from left to right: expand_ratio, channel, num_blocks. arch_settings = [[1, 16, 1], [6, 24, 2], [6, 32, 3], [6, 64, 4], [6, 96, 3], [6, 160, 3], [6, 320, 1]] def __init__(self, widen_factor=1., strides=(1, 2, 2, 2, 1, 2, 1), dilations=(1, 1, 1, 1, 1, 1, 1), out_indices=(1, 2, 4, 6), frozen_stages=-1, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU6'), norm_eval=False, with_cp=False, pretrained=None, init_cfg=None): super(MobileNetV2, self).__init__(init_cfg) self.pretrained = pretrained assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be setting at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is a deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] else: raise TypeError('pretrained must be a str or None') self.widen_factor = widen_factor self.strides = strides self.dilations = dilations assert len(strides) == len(dilations) == len(self.arch_settings) self.out_indices = out_indices for index in out_indices: if index not in range(0, 7): raise ValueError('the item in out_indices must in ' f'range(0, 7). But received {index}') if frozen_stages not in range(-1, 7): raise ValueError('frozen_stages must be in range(-1, 7). 
' f'But received {frozen_stages}') self.out_indices = out_indices self.frozen_stages = frozen_stages self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.norm_eval = norm_eval self.with_cp = with_cp self.in_channels = make_divisible(32 * widen_factor, 8) self.conv1 = ConvModule( in_channels=3, out_channels=self.in_channels, kernel_size=3, stride=2, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.layers = [] for i, layer_cfg in enumerate(self.arch_settings): expand_ratio, channel, num_blocks = layer_cfg stride = self.strides[i] dilation = self.dilations[i] out_channels = make_divisible(channel * widen_factor, 8) inverted_res_layer = self.make_layer( out_channels=out_channels, num_blocks=num_blocks, stride=stride, dilation=dilation, expand_ratio=expand_ratio) layer_name = f'layer{i + 1}' self.add_module(layer_name, inverted_res_layer) self.layers.append(layer_name) def make_layer(self, out_channels, num_blocks, stride, dilation, expand_ratio): """Stack InvertedResidual blocks to build a layer for MobileNetV2. Args: out_channels (int): out_channels of block. num_blocks (int): Number of blocks. stride (int): Stride of the first block. dilation (int): Dilation of the first block. expand_ratio (int): Expand the number of channels of the hidden layer in InvertedResidual by this ratio. """ layers = [] for i in range(num_blocks): layers.append( InvertedResidual( self.in_channels, out_channels, stride if i == 0 else 1, expand_ratio=expand_ratio, dilation=dilation if i == 0 else 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg, with_cp=self.with_cp)) self.in_channels = out_channels return nn.Sequential(*layers) def forward(self, x): x = self.conv1(x) outs = [] for i, layer_name in enumerate(self.layers): layer = getattr(self, layer_name) x = layer(x) if i in self.out_indices: outs.append(x) if len(outs) == 1: return outs[0] else: return tuple(outs) def _freeze_stages(self): if self.frozen_stages >= 0: for param in self.conv1.parameters(): param.requires_grad = False for i in range(1, self.frozen_stages + 1): layer = getattr(self, f'layer{i}') layer.eval() for param in layer.parameters(): param.requires_grad = False def train(self, mode=True): super(MobileNetV2, self).train(mode) self._freeze_stages() if mode and self.norm_eval: for m in self.modules(): if isinstance(m, _BatchNorm): m.eval()
7,640
37.590909
78
py
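A hedged usage sketch for the `MobileNetV2` backbone above (hypothetical snippet, assuming `mmsegmentation` is installed; the channel counts in the comments are read off `arch_settings` with `widen_factor=1.0`):

```python
# Hypothetical smoke test for the MobileNetV2 backbone.
import torch

from mmseg.models import build_backbone

# Default out_indices=(1, 2, 4, 6) selects layer2/layer3/layer5/layer7.
backbone = build_backbone(dict(type='MobileNetV2', widen_factor=1.0))
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 512, 512))

# Expected channels 24/32/96/320 at output strides 4/8/16/32.
for out in outs:
    print(tuple(out.shape))
```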
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/mobilenet_v3.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import mmcv from mmcv.cnn import ConvModule from mmcv.cnn.bricks import Conv2dAdaptivePadding from mmcv.runner import BaseModule from torch.nn.modules.batchnorm import _BatchNorm from ..builder import BACKBONES from ..utils import InvertedResidualV3 as InvertedResidual @BACKBONES.register_module() class MobileNetV3(BaseModule): """MobileNetV3 backbone. This backbone is the improved implementation of `Searching for MobileNetV3 <https://ieeexplore.ieee.org/document/9008835>`_. Args: arch (str): Architecture of mobilnetv3, from {'small', 'large'}. Default: 'small'. conv_cfg (dict): Config dict for convolution layer. Default: None, which means using conv2d. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN'). out_indices (tuple[int]): Output from which layer. Default: (0, 1, 12). frozen_stages (int): Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. pretrained (str, optional): model pretrained path. Default: None init_cfg (dict or list[dict], optional): Initialization config dict. Default: None """ # Parameters to build each block: # [kernel size, mid channels, out channels, with_se, act type, stride] arch_settings = { 'small': [[3, 16, 16, True, 'ReLU', 2], # block0 layer1 os=4 [3, 72, 24, False, 'ReLU', 2], # block1 layer2 os=8 [3, 88, 24, False, 'ReLU', 1], [5, 96, 40, True, 'HSwish', 2], # block2 layer4 os=16 [5, 240, 40, True, 'HSwish', 1], [5, 240, 40, True, 'HSwish', 1], [5, 120, 48, True, 'HSwish', 1], # block3 layer7 os=16 [5, 144, 48, True, 'HSwish', 1], [5, 288, 96, True, 'HSwish', 2], # block4 layer9 os=32 [5, 576, 96, True, 'HSwish', 1], [5, 576, 96, True, 'HSwish', 1]], 'large': [[3, 16, 16, False, 'ReLU', 1], # block0 layer1 os=2 [3, 64, 24, False, 'ReLU', 2], # block1 layer2 os=4 [3, 72, 24, False, 'ReLU', 1], [5, 72, 40, True, 'ReLU', 2], # block2 layer4 os=8 [5, 120, 40, True, 'ReLU', 1], [5, 120, 40, True, 'ReLU', 1], [3, 240, 80, False, 'HSwish', 2], # block3 layer7 os=16 [3, 200, 80, False, 'HSwish', 1], [3, 184, 80, False, 'HSwish', 1], [3, 184, 80, False, 'HSwish', 1], [3, 480, 112, True, 'HSwish', 1], # block4 layer11 os=16 [3, 672, 112, True, 'HSwish', 1], [5, 672, 160, True, 'HSwish', 2], # block5 layer13 os=32 [5, 960, 160, True, 'HSwish', 1], [5, 960, 160, True, 'HSwish', 1]] } # yapf: disable def __init__(self, arch='small', conv_cfg=None, norm_cfg=dict(type='BN'), out_indices=(0, 1, 12), frozen_stages=-1, reduction_factor=1, norm_eval=False, with_cp=False, pretrained=None, init_cfg=None): super(MobileNetV3, self).__init__(init_cfg) self.pretrained = pretrained assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be setting at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is a deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] else: raise TypeError('pretrained must be a str or None') assert arch in self.arch_settings assert 
isinstance(reduction_factor, int) and reduction_factor > 0 assert mmcv.is_tuple_of(out_indices, int) for index in out_indices: if index not in range(0, len(self.arch_settings[arch]) + 2): raise ValueError( 'the item in out_indices must in ' f'range(0, {len(self.arch_settings[arch])+2}). ' f'But received {index}') if frozen_stages not in range(-1, len(self.arch_settings[arch]) + 2): raise ValueError('frozen_stages must be in range(-1, ' f'{len(self.arch_settings[arch])+2}). ' f'But received {frozen_stages}') self.arch = arch self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.out_indices = out_indices self.frozen_stages = frozen_stages self.reduction_factor = reduction_factor self.norm_eval = norm_eval self.with_cp = with_cp self.layers = self._make_layer() def _make_layer(self): layers = [] # build the first layer (layer0) in_channels = 16 layer = ConvModule( in_channels=3, out_channels=in_channels, kernel_size=3, stride=2, padding=1, conv_cfg=dict(type='Conv2dAdaptivePadding'), norm_cfg=self.norm_cfg, act_cfg=dict(type='HSwish')) self.add_module('layer0', layer) layers.append('layer0') layer_setting = self.arch_settings[self.arch] for i, params in enumerate(layer_setting): (kernel_size, mid_channels, out_channels, with_se, act, stride) = params if self.arch == 'large' and i >= 12 or self.arch == 'small' and \ i >= 8: mid_channels = mid_channels // self.reduction_factor out_channels = out_channels // self.reduction_factor if with_se: se_cfg = dict( channels=mid_channels, ratio=4, act_cfg=(dict(type='ReLU'), dict(type='HSigmoid', bias=3.0, divisor=6.0))) else: se_cfg = None layer = InvertedResidual( in_channels=in_channels, out_channels=out_channels, mid_channels=mid_channels, kernel_size=kernel_size, stride=stride, se_cfg=se_cfg, with_expand_conv=(in_channels != mid_channels), conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=dict(type=act), with_cp=self.with_cp) in_channels = out_channels layer_name = 'layer{}'.format(i + 1) self.add_module(layer_name, layer) layers.append(layer_name) # build the last layer # block5 layer12 os=32 for small model # block6 layer16 os=32 for large model layer = ConvModule( in_channels=in_channels, out_channels=576 if self.arch == 'small' else 960, kernel_size=1, stride=1, dilation=4, padding=0, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=dict(type='HSwish')) layer_name = 'layer{}'.format(len(layer_setting) + 1) self.add_module(layer_name, layer) layers.append(layer_name) # next, convert backbone MobileNetV3 to a semantic segmentation version if self.arch == 'small': self.layer4.depthwise_conv.conv.stride = (1, 1) self.layer9.depthwise_conv.conv.stride = (1, 1) for i in range(4, len(layers)): layer = getattr(self, layers[i]) if isinstance(layer, InvertedResidual): modified_module = layer.depthwise_conv.conv else: modified_module = layer.conv if i < 9: modified_module.dilation = (2, 2) pad = 2 else: modified_module.dilation = (4, 4) pad = 4 if not isinstance(modified_module, Conv2dAdaptivePadding): # Adjust padding pad *= (modified_module.kernel_size[0] - 1) // 2 modified_module.padding = (pad, pad) else: self.layer7.depthwise_conv.conv.stride = (1, 1) self.layer13.depthwise_conv.conv.stride = (1, 1) for i in range(7, len(layers)): layer = getattr(self, layers[i]) if isinstance(layer, InvertedResidual): modified_module = layer.depthwise_conv.conv else: modified_module = layer.conv if i < 13: modified_module.dilation = (2, 2) pad = 2 else: modified_module.dilation = (4, 4) pad = 4 if not isinstance(modified_module, Conv2dAdaptivePadding): # 
Adjust padding pad *= (modified_module.kernel_size[0] - 1) // 2 modified_module.padding = (pad, pad) return layers def forward(self, x): outs = [] for i, layer_name in enumerate(self.layers): layer = getattr(self, layer_name) x = layer(x) if i in self.out_indices: outs.append(x) return outs def _freeze_stages(self): for i in range(self.frozen_stages + 1): layer = getattr(self, f'layer{i}') layer.eval() for param in layer.parameters(): param.requires_grad = False def train(self, mode=True): super(MobileNetV3, self).train(mode) self._freeze_stages() if mode and self.norm_eval: for m in self.modules(): if isinstance(m, _BatchNorm): m.eval()
10,845
39.470149
79
py
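A small usage sketch for the `MobileNetV3` backbone above (not part of the file; assumes `mmsegmentation` is installed). With the default `arch='small'` and `out_indices=(0, 1, 12)`, the backbone returns the stem, the first inverted-residual layer and the final 1x1 conv; the comments below follow from the dilation surgery in `_make_layer`:

```python
# Hypothetical smoke test for the MobileNetV3 backbone.
import torch

from mmseg.models import build_backbone

backbone = build_backbone(dict(type='MobileNetV3', arch='small'))
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 512, 512))

# Expected channels 16/16/576; the last map stays at 1/8 resolution because
# the strides of layer4 and layer9 are reset to 1 and dilation is used instead.
for out in outs:
    print(tuple(out.shape))
```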
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/mscan.py
# Copyright (c) OpenMMLab. All rights reserved. # Originally from https://github.com/visual-attention-network/segnext # Licensed under the Apache License, Version 2.0 (the "License") import math import warnings import torch import torch.nn as nn from mmcv.cnn import build_activation_layer, build_norm_layer from mmcv.cnn.bricks import DropPath from mmcv.cnn.utils.weight_init import (constant_init, normal_init, trunc_normal_init) from mmcv.runner import BaseModule from mmseg.models.builder import BACKBONES class Mlp(BaseModule): """Multi Layer Perceptron (MLP) Module. Args: in_features (int): The dimension of input features. hidden_features (int): The dimension of hidden features. Defaults: None. out_features (int): The dimension of output features. Defaults: None. act_cfg (dict): Config dict for activation layer in block. Default: dict(type='GELU'). drop (float): The number of dropout rate in MLP block. Defaults: 0.0. """ def __init__(self, in_features, hidden_features=None, out_features=None, act_cfg=dict(type='GELU'), drop=0.): super().__init__() out_features = out_features or in_features hidden_features = hidden_features or in_features self.fc1 = nn.Conv2d(in_features, hidden_features, 1) self.dwconv = nn.Conv2d( hidden_features, hidden_features, 3, 1, 1, bias=True, groups=hidden_features) self.act = build_activation_layer(act_cfg) self.fc2 = nn.Conv2d(hidden_features, out_features, 1) self.drop = nn.Dropout(drop) def forward(self, x): """Forward function.""" x = self.fc1(x) x = self.dwconv(x) x = self.act(x) x = self.drop(x) x = self.fc2(x) x = self.drop(x) return x class StemConv(BaseModule): """Stem Block at the beginning of Semantic Branch. Args: in_channels (int): The dimension of input channels. out_channels (int): The dimension of output channels. act_cfg (dict): Config dict for activation layer in block. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Defaults: dict(type='SyncBN', requires_grad=True). """ def __init__(self, in_channels, out_channels, act_cfg=dict(type='GELU'), norm_cfg=dict(type='SyncBN', requires_grad=True)): super(StemConv, self).__init__() self.proj = nn.Sequential( nn.Conv2d( in_channels, out_channels // 2, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)), build_norm_layer(norm_cfg, out_channels // 2)[1], build_activation_layer(act_cfg), nn.Conv2d( out_channels // 2, out_channels, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)), build_norm_layer(norm_cfg, out_channels)[1], ) def forward(self, x): """Forward function.""" x = self.proj(x) _, _, H, W = x.size() x = x.flatten(2).transpose(1, 2) return x, H, W class MSCAAttention(BaseModule): """Attention Module in Multi-Scale Convolutional Attention Module (MSCA). Args: channels (int): The dimension of channels. kernel_sizes (list): The size of attention kernel. Defaults: [5, [1, 7], [1, 11], [1, 21]]. paddings (list): The number of corresponding padding value in attention module. Defaults: [2, [0, 3], [0, 5], [0, 10]]. 
""" def __init__(self, channels, kernel_sizes=[5, [1, 7], [1, 11], [1, 21]], paddings=[2, [0, 3], [0, 5], [0, 10]]): super().__init__() self.conv0 = nn.Conv2d( channels, channels, kernel_size=kernel_sizes[0], padding=paddings[0], groups=channels) for i, (kernel_size, padding) in enumerate(zip(kernel_sizes[1:], paddings[1:])): kernel_size_ = [kernel_size, kernel_size[::-1]] padding_ = [padding, padding[::-1]] conv_name = [f'conv{i}_1', f'conv{i}_2'] for i_kernel, i_pad, i_conv in zip(kernel_size_, padding_, conv_name): self.add_module( i_conv, nn.Conv2d( channels, channels, tuple(i_kernel), padding=i_pad, groups=channels)) self.conv3 = nn.Conv2d(channels, channels, 1) def forward(self, x): """Forward function.""" u = x.clone() attn = self.conv0(x) # Multi-Scale Feature extraction attn_0 = self.conv0_1(attn) attn_0 = self.conv0_2(attn_0) attn_1 = self.conv1_1(attn) attn_1 = self.conv1_2(attn_1) attn_2 = self.conv2_1(attn) attn_2 = self.conv2_2(attn_2) attn = attn + attn_0 + attn_1 + attn_2 # Channel Mixing attn = self.conv3(attn) # Convolutional Attention x = attn * u return x class MSCASpatialAttention(BaseModule): """Spatial Attention Module in Multi-Scale Convolutional Attention Module (MSCA). Args: in_channels (int): The dimension of channels. attention_kernel_sizes (list): The size of attention kernel. Defaults: [5, [1, 7], [1, 11], [1, 21]]. attention_kernel_paddings (list): The number of corresponding padding value in attention module. Defaults: [2, [0, 3], [0, 5], [0, 10]]. act_cfg (dict): Config dict for activation layer in block. Default: dict(type='GELU'). """ def __init__(self, in_channels, attention_kernel_sizes=[5, [1, 7], [1, 11], [1, 21]], attention_kernel_paddings=[2, [0, 3], [0, 5], [0, 10]], act_cfg=dict(type='GELU')): super().__init__() self.proj_1 = nn.Conv2d(in_channels, in_channels, 1) self.activation = build_activation_layer(act_cfg) self.spatial_gating_unit = MSCAAttention(in_channels, attention_kernel_sizes, attention_kernel_paddings) self.proj_2 = nn.Conv2d(in_channels, in_channels, 1) def forward(self, x): """Forward function.""" shorcut = x.clone() x = self.proj_1(x) x = self.activation(x) x = self.spatial_gating_unit(x) x = self.proj_2(x) x = x + shorcut return x class MSCABlock(BaseModule): """Basic Multi-Scale Convolutional Attention Block. It leverage the large- kernel attention (LKA) mechanism to build both channel and spatial attention. In each branch, it uses two depth-wise strip convolutions to approximate standard depth-wise convolutions with large kernels. The kernel size for each branch is set to 7, 11, and 21, respectively. Args: channels (int): The dimension of channels. attention_kernel_sizes (list): The size of attention kernel. Defaults: [5, [1, 7], [1, 11], [1, 21]]. attention_kernel_paddings (list): The number of corresponding padding value in attention module. Defaults: [2, [0, 3], [0, 5], [0, 10]]. mlp_ratio (float): The ratio of multiple input dimension to calculate hidden feature in MLP layer. Defaults: 4.0. drop (float): The number of dropout rate in MLP block. Defaults: 0.0. drop_path (float): The ratio of drop paths. Defaults: 0.0. act_cfg (dict): Config dict for activation layer in block. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Defaults: dict(type='SyncBN', requires_grad=True). 
""" def __init__(self, channels, attention_kernel_sizes=[5, [1, 7], [1, 11], [1, 21]], attention_kernel_paddings=[2, [0, 3], [0, 5], [0, 10]], mlp_ratio=4., drop=0., drop_path=0., act_cfg=dict(type='GELU'), norm_cfg=dict(type='SyncBN', requires_grad=True)): super().__init__() self.norm1 = build_norm_layer(norm_cfg, channels)[1] self.attn = MSCASpatialAttention(channels, attention_kernel_sizes, attention_kernel_paddings, act_cfg) self.drop_path = DropPath( drop_path) if drop_path > 0. else nn.Identity() self.norm2 = build_norm_layer(norm_cfg, channels)[1] mlp_hidden_channels = int(channels * mlp_ratio) self.mlp = Mlp( in_features=channels, hidden_features=mlp_hidden_channels, act_cfg=act_cfg, drop=drop) layer_scale_init_value = 1e-2 self.layer_scale_1 = nn.Parameter( layer_scale_init_value * torch.ones((channels)), requires_grad=True) self.layer_scale_2 = nn.Parameter( layer_scale_init_value * torch.ones((channels)), requires_grad=True) def forward(self, x, H, W): """Forward function.""" B, N, C = x.shape x = x.permute(0, 2, 1).view(B, C, H, W) x = x + self.drop_path( self.layer_scale_1.unsqueeze(-1).unsqueeze(-1) * self.attn(self.norm1(x))) x = x + self.drop_path( self.layer_scale_2.unsqueeze(-1).unsqueeze(-1) * self.mlp(self.norm2(x))) x = x.view(B, C, N).permute(0, 2, 1) return x class OverlapPatchEmbed(BaseModule): """Image to Patch Embedding. Args: patch_size (int): The patch size. Defaults: 7. stride (int): Stride of the convolutional layer. Default: 4. in_channels (int): The number of input channels. Defaults: 3. embed_dims (int): The dimensions of embedding. Defaults: 768. norm_cfg (dict): Config dict for normalization layer. Defaults: dict(type='SyncBN', requires_grad=True). """ def __init__(self, patch_size=7, stride=4, in_channels=3, embed_dim=768, norm_cfg=dict(type='SyncBN', requires_grad=True)): super().__init__() self.proj = nn.Conv2d( in_channels, embed_dim, kernel_size=patch_size, stride=stride, padding=patch_size // 2) self.norm = build_norm_layer(norm_cfg, embed_dim)[1] def forward(self, x): """Forward function.""" x = self.proj(x) _, _, H, W = x.shape x = self.norm(x) x = x.flatten(2).transpose(1, 2) return x, H, W @BACKBONES.register_module() class MSCAN(BaseModule): """SegNeXt Multi-Scale Convolutional Attention Network (MCSAN) backbone. This backbone is the implementation of `SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation <https://arxiv.org/abs/2209.08575>`_. Inspiration from https://github.com/visual-attention-network/segnext. Args: in_channels (int): The number of input channels. Defaults: 3. embed_dims (list[int]): Embedding dimension. Defaults: [64, 128, 256, 512]. mlp_ratios (list[int]): Ratio of mlp hidden dim to embedding dim. Defaults: [4, 4, 4, 4]. drop_rate (float): Dropout rate. Defaults: 0. drop_path_rate (float): Stochastic depth rate. Defaults: 0. depths (list[int]): Depths of each Swin Transformer stage. Default: [3, 4, 6, 3]. num_stages (int): MSCAN stages. Default: 4. attention_kernel_sizes (list): Size of attention kernel in Attention Module (Figure 2(b) of original paper). Defaults: [5, [1, 7], [1, 11], [1, 21]]. attention_kernel_paddings (list): Size of attention paddings in Attention Module (Figure 2(b) of original paper). Defaults: [2, [0, 3], [0, 5], [0, 10]]. norm_cfg (dict): Config of norm layers. Defaults: dict(type='SyncBN', requires_grad=True). pretrained (str, optional): model pretrained path. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
""" def __init__(self, in_channels=3, embed_dims=[64, 128, 256, 512], mlp_ratios=[4, 4, 4, 4], drop_rate=0., drop_path_rate=0., depths=[3, 4, 6, 3], num_stages=4, attention_kernel_sizes=[5, [1, 7], [1, 11], [1, 21]], attention_kernel_paddings=[2, [0, 3], [0, 5], [0, 10]], act_cfg=dict(type='GELU'), norm_cfg=dict(type='SyncBN', requires_grad=True), pretrained=None, init_cfg=None): super(MSCAN, self).__init__(init_cfg=init_cfg) assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be set at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is not None: raise TypeError('pretrained must be a str or None') self.depths = depths self.num_stages = num_stages dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) ] # stochastic depth decay rule cur = 0 for i in range(num_stages): if i == 0: patch_embed = StemConv(3, embed_dims[0], norm_cfg=norm_cfg) else: patch_embed = OverlapPatchEmbed( patch_size=7 if i == 0 else 3, stride=4 if i == 0 else 2, in_channels=in_channels if i == 0 else embed_dims[i - 1], embed_dim=embed_dims[i], norm_cfg=norm_cfg) block = nn.ModuleList([ MSCABlock( channels=embed_dims[i], attention_kernel_sizes=attention_kernel_sizes, attention_kernel_paddings=attention_kernel_paddings, mlp_ratio=mlp_ratios[i], drop=drop_rate, drop_path=dpr[cur + j], act_cfg=act_cfg, norm_cfg=norm_cfg) for j in range(depths[i]) ]) norm = nn.LayerNorm(embed_dims[i]) cur += depths[i] setattr(self, f'patch_embed{i + 1}', patch_embed) setattr(self, f'block{i + 1}', block) setattr(self, f'norm{i + 1}', norm) def init_weights(self): """Initialize modules of MSCAN.""" print('init cfg', self.init_cfg) if self.init_cfg is None: for m in self.modules(): if isinstance(m, nn.Linear): trunc_normal_init(m, std=.02, bias=0.) elif isinstance(m, nn.LayerNorm): constant_init(m, val=1.0, bias=0.) elif isinstance(m, nn.Conv2d): fan_out = m.kernel_size[0] * m.kernel_size[ 1] * m.out_channels fan_out //= m.groups normal_init( m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) else: super(MSCAN, self).init_weights() def forward(self, x): """Forward function.""" B = x.shape[0] outs = [] for i in range(self.num_stages): patch_embed = getattr(self, f'patch_embed{i + 1}') block = getattr(self, f'block{i + 1}') norm = getattr(self, f'norm{i + 1}') x, H, W = patch_embed(x) for blk in block: x = blk(x, H, W) x = norm(x) x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() outs.append(x) return outs
16,888
34.934043
79
py
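A usage sketch for the `MSCAN` backbone above (not part of the file; assumes `mmsegmentation` is installed). The default `norm_cfg` is `SyncBN`, which needs an initialized distributed process group, so plain `BN` is substituted for a single-process check; the embedding dimensions below are illustrative values, not necessarily an official SegNeXt variant:

```python
# Hypothetical smoke test for the MSCAN backbone.
import torch

from mmseg.models import build_backbone

backbone = build_backbone(
    dict(
        type='MSCAN',
        embed_dims=[32, 64, 160, 256],
        depths=[3, 3, 5, 2],
        norm_cfg=dict(type='BN', requires_grad=True)))  # BN instead of SyncBN
backbone.init_weights()
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 512, 512))

# Four feature maps at strides 4/8/16/32 with the channels given by embed_dims.
for out in outs:
    print(tuple(out.shape))
```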
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/resnest.py
# Copyright (c) OpenMMLab. All rights reserved. import math import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint as cp from mmcv.cnn import build_conv_layer, build_norm_layer from ..builder import BACKBONES from ..utils import ResLayer from .resnet import Bottleneck as _Bottleneck from .resnet import ResNetV1d class RSoftmax(nn.Module): """Radix Softmax module in ``SplitAttentionConv2d``. Args: radix (int): Radix of input. groups (int): Groups of input. """ def __init__(self, radix, groups): super().__init__() self.radix = radix self.groups = groups def forward(self, x): batch = x.size(0) if self.radix > 1: x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) x = F.softmax(x, dim=1) x = x.reshape(batch, -1) else: x = torch.sigmoid(x) return x class SplitAttentionConv2d(nn.Module): """Split-Attention Conv2d in ResNeSt. Args: in_channels (int): Same as nn.Conv2d. out_channels (int): Same as nn.Conv2d. kernel_size (int | tuple[int]): Same as nn.Conv2d. stride (int | tuple[int]): Same as nn.Conv2d. padding (int | tuple[int]): Same as nn.Conv2d. dilation (int | tuple[int]): Same as nn.Conv2d. groups (int): Same as nn.Conv2d. radix (int): Radix of SpltAtConv2d. Default: 2 reduction_factor (int): Reduction factor of inter_channels. Default: 4. conv_cfg (dict): Config dict for convolution layer. Default: None, which means using conv2d. norm_cfg (dict): Config dict for normalization layer. Default: None. dcn (dict): Config dict for DCN. Default: None. """ def __init__(self, in_channels, channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, radix=2, reduction_factor=4, conv_cfg=None, norm_cfg=dict(type='BN'), dcn=None): super(SplitAttentionConv2d, self).__init__() inter_channels = max(in_channels * radix // reduction_factor, 32) self.radix = radix self.groups = groups self.channels = channels self.with_dcn = dcn is not None self.dcn = dcn fallback_on_stride = False if self.with_dcn: fallback_on_stride = self.dcn.pop('fallback_on_stride', False) if self.with_dcn and not fallback_on_stride: assert conv_cfg is None, 'conv_cfg must be None for DCN' conv_cfg = dcn self.conv = build_conv_layer( conv_cfg, in_channels, channels * radix, kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups * radix, bias=False) self.norm0_name, norm0 = build_norm_layer( norm_cfg, channels * radix, postfix=0) self.add_module(self.norm0_name, norm0) self.relu = nn.ReLU(inplace=True) self.fc1 = build_conv_layer( None, channels, inter_channels, 1, groups=self.groups) self.norm1_name, norm1 = build_norm_layer( norm_cfg, inter_channels, postfix=1) self.add_module(self.norm1_name, norm1) self.fc2 = build_conv_layer( None, inter_channels, channels * radix, 1, groups=self.groups) self.rsoftmax = RSoftmax(radix, groups) @property def norm0(self): """nn.Module: the normalization layer named "norm0" """ return getattr(self, self.norm0_name) @property def norm1(self): """nn.Module: the normalization layer named "norm1" """ return getattr(self, self.norm1_name) def forward(self, x): x = self.conv(x) x = self.norm0(x) x = self.relu(x) batch, rchannel = x.shape[:2] batch = x.size(0) if self.radix > 1: splits = x.view(batch, self.radix, -1, *x.shape[2:]) gap = splits.sum(dim=1) else: gap = x gap = F.adaptive_avg_pool2d(gap, 1) gap = self.fc1(gap) gap = self.norm1(gap) gap = self.relu(gap) atten = self.fc2(gap) atten = self.rsoftmax(atten).view(batch, -1, 1, 1) if self.radix > 1: attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) out = 
torch.sum(attens * splits, dim=1) else: out = atten * x return out.contiguous() class Bottleneck(_Bottleneck): """Bottleneck block for ResNeSt. Args: inplane (int): Input planes of this block. planes (int): Middle planes of this block. groups (int): Groups of conv2. width_per_group (int): Width per group of conv2. 64x4d indicates ``groups=64, width_per_group=4`` and 32x8d indicates ``groups=32, width_per_group=8``. radix (int): Radix of SpltAtConv2d. Default: 2 reduction_factor (int): Reduction factor of inter_channels in SplitAttentionConv2d. Default: 4. avg_down_stride (bool): Whether to use average pool for stride in Bottleneck. Default: True. kwargs (dict): Key word arguments for base class. """ expansion = 4 def __init__(self, inplanes, planes, groups=1, base_width=4, base_channels=64, radix=2, reduction_factor=4, avg_down_stride=True, **kwargs): """Bottleneck block for ResNeSt.""" super(Bottleneck, self).__init__(inplanes, planes, **kwargs) if groups == 1: width = self.planes else: width = math.floor(self.planes * (base_width / base_channels)) * groups self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 self.norm1_name, norm1 = build_norm_layer( self.norm_cfg, width, postfix=1) self.norm3_name, norm3 = build_norm_layer( self.norm_cfg, self.planes * self.expansion, postfix=3) self.conv1 = build_conv_layer( self.conv_cfg, self.inplanes, width, kernel_size=1, stride=self.conv1_stride, bias=False) self.add_module(self.norm1_name, norm1) self.with_modulated_dcn = False self.conv2 = SplitAttentionConv2d( width, width, kernel_size=3, stride=1 if self.avg_down_stride else self.conv2_stride, padding=self.dilation, dilation=self.dilation, groups=groups, radix=radix, reduction_factor=reduction_factor, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, dcn=self.dcn) delattr(self, self.norm2_name) if self.avg_down_stride: self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) self.conv3 = build_conv_layer( self.conv_cfg, width, self.planes * self.expansion, kernel_size=1, bias=False) self.add_module(self.norm3_name, norm3) def forward(self, x): def _inner_forward(x): identity = x out = self.conv1(x) out = self.norm1(out) out = self.relu(out) if self.with_plugins: out = self.forward_plugin(out, self.after_conv1_plugin_names) out = self.conv2(out) if self.avg_down_stride: out = self.avd_layer(out) if self.with_plugins: out = self.forward_plugin(out, self.after_conv2_plugin_names) out = self.conv3(out) out = self.norm3(out) if self.with_plugins: out = self.forward_plugin(out, self.after_conv3_plugin_names) if self.downsample is not None: identity = self.downsample(x) out += identity return out if self.with_cp and x.requires_grad: out = cp.checkpoint(_inner_forward, x) else: out = _inner_forward(x) out = self.relu(out) return out @BACKBONES.register_module() class ResNeSt(ResNetV1d): """ResNeSt backbone. This backbone is the implementation of `ResNeSt: Split-Attention Networks <https://arxiv.org/abs/2004.08955>`_. Args: groups (int): Number of groups of Bottleneck. Default: 1 base_width (int): Base width of Bottleneck. Default: 4 radix (int): Radix of SpltAtConv2d. Default: 2 reduction_factor (int): Reduction factor of inter_channels in SplitAttentionConv2d. Default: 4. avg_down_stride (bool): Whether to use average pool for stride in Bottleneck. Default: True. kwargs (dict): Keyword arguments for ResNet. 
""" arch_settings = { 50: (Bottleneck, (3, 4, 6, 3)), 101: (Bottleneck, (3, 4, 23, 3)), 152: (Bottleneck, (3, 8, 36, 3)), 200: (Bottleneck, (3, 24, 36, 3)) } def __init__(self, groups=1, base_width=4, radix=2, reduction_factor=4, avg_down_stride=True, **kwargs): self.groups = groups self.base_width = base_width self.radix = radix self.reduction_factor = reduction_factor self.avg_down_stride = avg_down_stride super(ResNeSt, self).__init__(**kwargs) def make_res_layer(self, **kwargs): """Pack all blocks in a stage into a ``ResLayer``.""" return ResLayer( groups=self.groups, base_width=self.base_width, base_channels=self.base_channels, radix=self.radix, reduction_factor=self.reduction_factor, avg_down_stride=self.avg_down_stride, **kwargs)
10,259
31.163009
79
py
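A usage sketch for the `ResNeSt` backbone above (hypothetical snippet, assuming `mmsegmentation` is installed). The strides/dilations shown are the common output-stride-8 segmentation setting; the remaining arguments stay at their defaults, which do not necessarily match the official pretrained checkpoints:

```python
# Hypothetical smoke test for the ResNeSt backbone with dilated stages.
import torch

from mmseg.models import build_backbone

backbone = build_backbone(
    dict(
        type='ResNeSt',
        depth=50,
        radix=2,
        reduction_factor=4,
        avg_down_stride=True,
        strides=(1, 2, 1, 1),
        dilations=(1, 1, 2, 4)))
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 512, 512))

# Channels 256/512/1024/2048; the last two stages stay at 1/8 resolution.
for out in outs:
    print(tuple(out.shape))
```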
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/resnet.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import torch.nn as nn import torch.utils.checkpoint as cp from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer from mmcv.runner import BaseModule from mmcv.utils.parrots_wrapper import _BatchNorm from ..builder import BACKBONES from ..utils import ResLayer class BasicBlock(BaseModule): """Basic block for ResNet.""" expansion = 1 def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, style='pytorch', with_cp=False, conv_cfg=None, norm_cfg=dict(type='BN'), dcn=None, plugins=None, init_cfg=None): super(BasicBlock, self).__init__(init_cfg) assert dcn is None, 'Not implemented yet.' assert plugins is None, 'Not implemented yet.' self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) self.conv1 = build_conv_layer( conv_cfg, inplanes, planes, 3, stride=stride, padding=dilation, dilation=dilation, bias=False) self.add_module(self.norm1_name, norm1) self.conv2 = build_conv_layer( conv_cfg, planes, planes, 3, padding=1, bias=False) self.add_module(self.norm2_name, norm2) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride self.dilation = dilation self.with_cp = with_cp @property def norm1(self): """nn.Module: normalization layer after the first convolution layer""" return getattr(self, self.norm1_name) @property def norm2(self): """nn.Module: normalization layer after the second convolution layer""" return getattr(self, self.norm2_name) def forward(self, x): """Forward function.""" def _inner_forward(x): identity = x out = self.conv1(x) out = self.norm1(out) out = self.relu(out) out = self.conv2(out) out = self.norm2(out) if self.downsample is not None: identity = self.downsample(x) out += identity return out if self.with_cp and x.requires_grad: out = cp.checkpoint(_inner_forward, x) else: out = _inner_forward(x) out = self.relu(out) return out class Bottleneck(BaseModule): """Bottleneck block for ResNet. If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is "caffe", the stride-two layer is the first 1x1 conv layer. 
""" expansion = 4 def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, style='pytorch', with_cp=False, conv_cfg=None, norm_cfg=dict(type='BN'), dcn=None, plugins=None, init_cfg=None): super(Bottleneck, self).__init__(init_cfg) assert style in ['pytorch', 'caffe'] assert dcn is None or isinstance(dcn, dict) assert plugins is None or isinstance(plugins, list) if plugins is not None: allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] assert all(p['position'] in allowed_position for p in plugins) self.inplanes = inplanes self.planes = planes self.stride = stride self.dilation = dilation self.style = style self.with_cp = with_cp self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.dcn = dcn self.with_dcn = dcn is not None self.plugins = plugins self.with_plugins = plugins is not None if self.with_plugins: # collect plugins for conv1/conv2/conv3 self.after_conv1_plugins = [ plugin['cfg'] for plugin in plugins if plugin['position'] == 'after_conv1' ] self.after_conv2_plugins = [ plugin['cfg'] for plugin in plugins if plugin['position'] == 'after_conv2' ] self.after_conv3_plugins = [ plugin['cfg'] for plugin in plugins if plugin['position'] == 'after_conv3' ] if self.style == 'pytorch': self.conv1_stride = 1 self.conv2_stride = stride else: self.conv1_stride = stride self.conv2_stride = 1 self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) self.norm3_name, norm3 = build_norm_layer( norm_cfg, planes * self.expansion, postfix=3) self.conv1 = build_conv_layer( conv_cfg, inplanes, planes, kernel_size=1, stride=self.conv1_stride, bias=False) self.add_module(self.norm1_name, norm1) fallback_on_stride = False if self.with_dcn: fallback_on_stride = dcn.pop('fallback_on_stride', False) if not self.with_dcn or fallback_on_stride: self.conv2 = build_conv_layer( conv_cfg, planes, planes, kernel_size=3, stride=self.conv2_stride, padding=dilation, dilation=dilation, bias=False) else: assert self.conv_cfg is None, 'conv_cfg must be None for DCN' self.conv2 = build_conv_layer( dcn, planes, planes, kernel_size=3, stride=self.conv2_stride, padding=dilation, dilation=dilation, bias=False) self.add_module(self.norm2_name, norm2) self.conv3 = build_conv_layer( conv_cfg, planes, planes * self.expansion, kernel_size=1, bias=False) self.add_module(self.norm3_name, norm3) self.relu = nn.ReLU(inplace=True) self.downsample = downsample if self.with_plugins: self.after_conv1_plugin_names = self.make_block_plugins( planes, self.after_conv1_plugins) self.after_conv2_plugin_names = self.make_block_plugins( planes, self.after_conv2_plugins) self.after_conv3_plugin_names = self.make_block_plugins( planes * self.expansion, self.after_conv3_plugins) def make_block_plugins(self, in_channels, plugins): """make plugins for block. Args: in_channels (int): Input channels of plugin. plugins (list[dict]): List of plugins cfg to build. Returns: list[str]: List of the names of plugin. 
""" assert isinstance(plugins, list) plugin_names = [] for plugin in plugins: plugin = plugin.copy() name, layer = build_plugin_layer( plugin, in_channels=in_channels, postfix=plugin.pop('postfix', '')) assert not hasattr(self, name), f'duplicate plugin {name}' self.add_module(name, layer) plugin_names.append(name) return plugin_names def forward_plugin(self, x, plugin_names): """Forward function for plugins.""" out = x for name in plugin_names: out = getattr(self, name)(x) return out @property def norm1(self): """nn.Module: normalization layer after the first convolution layer""" return getattr(self, self.norm1_name) @property def norm2(self): """nn.Module: normalization layer after the second convolution layer""" return getattr(self, self.norm2_name) @property def norm3(self): """nn.Module: normalization layer after the third convolution layer""" return getattr(self, self.norm3_name) def forward(self, x): """Forward function.""" def _inner_forward(x): identity = x out = self.conv1(x) out = self.norm1(out) out = self.relu(out) if self.with_plugins: out = self.forward_plugin(out, self.after_conv1_plugin_names) out = self.conv2(out) out = self.norm2(out) out = self.relu(out) if self.with_plugins: out = self.forward_plugin(out, self.after_conv2_plugin_names) out = self.conv3(out) out = self.norm3(out) if self.with_plugins: out = self.forward_plugin(out, self.after_conv3_plugin_names) if self.downsample is not None: identity = self.downsample(x) out += identity return out if self.with_cp and x.requires_grad: out = cp.checkpoint(_inner_forward, x) else: out = _inner_forward(x) out = self.relu(out) return out @BACKBONES.register_module() class ResNet(BaseModule): """ResNet backbone. This backbone is the improved implementation of `Deep Residual Learning for Image Recognition <https://arxiv.org/abs/1512.03385>`_. Args: depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. in_channels (int): Number of input image channels. Default: 3. stem_channels (int): Number of stem channels. Default: 64. base_channels (int): Number of base channels of res layer. Default: 64. num_stages (int): Resnet stages, normally 4. Default: 4. strides (Sequence[int]): Strides of the first block of each stage. Default: (1, 2, 2, 2). dilations (Sequence[int]): Dilation of each stage. Default: (1, 1, 1, 1). out_indices (Sequence[int]): Output from which stages. Default: (0, 1, 2, 3). style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer. Default: 'pytorch'. deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv. Default: False. avg_down (bool): Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False. frozen_stages (int): Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1. conv_cfg (dict | None): Dictionary to construct and config conv layer. When conv_cfg is None, cfg will be set to dict(type='Conv2d'). Default: None. norm_cfg (dict): Dictionary to construct and config norm layer. Default: dict(type='BN', requires_grad=True). norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. dcn (dict | None): Dictionary to construct and config DCN conv layer. When dcn is not None, conv_cfg must be None. Default: None. stage_with_dcn (Sequence[bool]): Whether to set DCN conv for each stage. 
The length of stage_with_dcn is equal to num_stages. Default: (False, False, False, False). plugins (list[dict]): List of plugins for stages, each dict contains: - cfg (dict, required): Cfg dict to build plugin. - position (str, required): Position inside block to insert plugin, options: 'after_conv1', 'after_conv2', 'after_conv3'. - stages (tuple[bool], optional): Stages to apply plugin, length should be same as 'num_stages'. Default: None. multi_grid (Sequence[int]|None): Multi grid dilation rates of last stage. Default: None. contract_dilation (bool): Whether contract first dilation of each layer Default: False. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. zero_init_residual (bool): Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True. pretrained (str, optional): model pretrained path. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Example: >>> from mmseg.models import ResNet >>> import torch >>> self = ResNet(depth=18) >>> self.eval() >>> inputs = torch.rand(1, 3, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 64, 8, 8) (1, 128, 4, 4) (1, 256, 2, 2) (1, 512, 1, 1) """ arch_settings = { 18: (BasicBlock, (2, 2, 2, 2)), 34: (BasicBlock, (3, 4, 6, 3)), 50: (Bottleneck, (3, 4, 6, 3)), 101: (Bottleneck, (3, 4, 23, 3)), 152: (Bottleneck, (3, 8, 36, 3)) } def __init__(self, depth, in_channels=3, stem_channels=64, base_channels=64, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=False, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, multi_grid=None, contract_dilation=False, with_cp=False, zero_init_residual=True, pretrained=None, init_cfg=None): super(ResNet, self).__init__(init_cfg) if depth not in self.arch_settings: raise KeyError(f'invalid depth {depth} for resnet') self.pretrained = pretrained self.zero_init_residual = zero_init_residual block_init_cfg = None assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be setting at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is a deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] block = self.arch_settings[depth][0] if self.zero_init_residual: if block is BasicBlock: block_init_cfg = dict( type='Constant', val=0, override=dict(name='norm2')) elif block is Bottleneck: block_init_cfg = dict( type='Constant', val=0, override=dict(name='norm3')) else: raise TypeError('pretrained must be a str or None') self.depth = depth self.stem_channels = stem_channels self.base_channels = base_channels self.num_stages = num_stages assert num_stages >= 1 and num_stages <= 4 self.strides = strides self.dilations = dilations assert len(strides) == len(dilations) == num_stages self.out_indices = out_indices assert max(out_indices) < num_stages self.style = style self.deep_stem = deep_stem self.avg_down = avg_down self.frozen_stages = frozen_stages self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.with_cp = with_cp 
self.norm_eval = norm_eval self.dcn = dcn self.stage_with_dcn = stage_with_dcn if dcn is not None: assert len(stage_with_dcn) == num_stages self.plugins = plugins self.multi_grid = multi_grid self.contract_dilation = contract_dilation self.block, stage_blocks = self.arch_settings[depth] self.stage_blocks = stage_blocks[:num_stages] self.inplanes = stem_channels self._make_stem_layer(in_channels, stem_channels) self.res_layers = [] for i, num_blocks in enumerate(self.stage_blocks): stride = strides[i] dilation = dilations[i] dcn = self.dcn if self.stage_with_dcn[i] else None if plugins is not None: stage_plugins = self.make_stage_plugins(plugins, i) else: stage_plugins = None # multi grid is applied to last layer only stage_multi_grid = multi_grid if i == len( self.stage_blocks) - 1 else None planes = base_channels * 2**i res_layer = self.make_res_layer( block=self.block, inplanes=self.inplanes, planes=planes, num_blocks=num_blocks, stride=stride, dilation=dilation, style=self.style, avg_down=self.avg_down, with_cp=with_cp, conv_cfg=conv_cfg, norm_cfg=norm_cfg, dcn=dcn, plugins=stage_plugins, multi_grid=stage_multi_grid, contract_dilation=contract_dilation, init_cfg=block_init_cfg) self.inplanes = planes * self.block.expansion layer_name = f'layer{i+1}' self.add_module(layer_name, res_layer) self.res_layers.append(layer_name) self._freeze_stages() self.feat_dim = self.block.expansion * base_channels * 2**( len(self.stage_blocks) - 1) def make_stage_plugins(self, plugins, stage_idx): """make plugins for ResNet 'stage_idx'th stage . Currently we support to insert 'context_block', 'empirical_attention_block', 'nonlocal_block' into the backbone like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of Bottleneck. An example of plugins format could be : >>> plugins=[ ... dict(cfg=dict(type='xxx', arg1='xxx'), ... stages=(False, True, True, True), ... position='after_conv2'), ... dict(cfg=dict(type='yyy'), ... stages=(True, True, True, True), ... position='after_conv3'), ... dict(cfg=dict(type='zzz', postfix='1'), ... stages=(True, True, True, True), ... position='after_conv3'), ... dict(cfg=dict(type='zzz', postfix='2'), ... stages=(True, True, True, True), ... position='after_conv3') ... ] >>> self = ResNet(depth=18) >>> stage_plugins = self.make_stage_plugins(plugins, 0) >>> assert len(stage_plugins) == 3 Suppose 'stage_idx=0', the structure of blocks in the stage would be: conv1-> conv2->conv3->yyy->zzz1->zzz2 Suppose 'stage_idx=1', the structure of blocks in the stage would be: conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 If stages is missing, the plugin would be applied to all stages. Args: plugins (list[dict]): List of plugins cfg to build. The postfix is required if multiple same type plugins are inserted. 
stage_idx (int): Index of stage to build Returns: list[dict]: Plugins for current stage """ stage_plugins = [] for plugin in plugins: plugin = plugin.copy() stages = plugin.pop('stages', None) assert stages is None or len(stages) == self.num_stages # whether to insert plugin into current stage if stages is None or stages[stage_idx]: stage_plugins.append(plugin) return stage_plugins def make_res_layer(self, **kwargs): """Pack all blocks in a stage into a ``ResLayer``.""" return ResLayer(**kwargs) @property def norm1(self): """nn.Module: the normalization layer named "norm1" """ return getattr(self, self.norm1_name) def _make_stem_layer(self, in_channels, stem_channels): """Make stem layer for ResNet.""" if self.deep_stem: self.stem = nn.Sequential( build_conv_layer( self.conv_cfg, in_channels, stem_channels // 2, kernel_size=3, stride=2, padding=1, bias=False), build_norm_layer(self.norm_cfg, stem_channels // 2)[1], nn.ReLU(inplace=True), build_conv_layer( self.conv_cfg, stem_channels // 2, stem_channels // 2, kernel_size=3, stride=1, padding=1, bias=False), build_norm_layer(self.norm_cfg, stem_channels // 2)[1], nn.ReLU(inplace=True), build_conv_layer( self.conv_cfg, stem_channels // 2, stem_channels, kernel_size=3, stride=1, padding=1, bias=False), build_norm_layer(self.norm_cfg, stem_channels)[1], nn.ReLU(inplace=True)) else: self.conv1 = build_conv_layer( self.conv_cfg, in_channels, stem_channels, kernel_size=7, stride=2, padding=3, bias=False) self.norm1_name, norm1 = build_norm_layer( self.norm_cfg, stem_channels, postfix=1) self.add_module(self.norm1_name, norm1) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) def _freeze_stages(self): """Freeze stages param and norm stats.""" if self.frozen_stages >= 0: if self.deep_stem: self.stem.eval() for param in self.stem.parameters(): param.requires_grad = False else: self.norm1.eval() for m in [self.conv1, self.norm1]: for param in m.parameters(): param.requires_grad = False for i in range(1, self.frozen_stages + 1): m = getattr(self, f'layer{i}') m.eval() for param in m.parameters(): param.requires_grad = False def forward(self, x): """Forward function.""" if self.deep_stem: x = self.stem(x) else: x = self.conv1(x) x = self.norm1(x) x = self.relu(x) x = self.maxpool(x) outs = [] for i, layer_name in enumerate(self.res_layers): res_layer = getattr(self, layer_name) x = res_layer(x) if i in self.out_indices: outs.append(x) return tuple(outs) def train(self, mode=True): """Convert the model into training mode while keep normalization layer freezed.""" super(ResNet, self).train(mode) self._freeze_stages() if mode and self.norm_eval: for m in self.modules(): # trick: eval have effect on BatchNorm only if isinstance(m, _BatchNorm): m.eval() @BACKBONES.register_module() class ResNetV1c(ResNet): """ResNetV1c variant described in [1]_. Compared with default ResNet(ResNetV1b), ResNetV1c replaces the 7x7 conv in the input stem with three 3x3 convs. For more details please refer to `Bag of Tricks for Image Classification with Convolutional Neural Networks <https://arxiv.org/abs/1812.01187>`_. """ def __init__(self, **kwargs): super(ResNetV1c, self).__init__( deep_stem=True, avg_down=False, **kwargs) @BACKBONES.register_module() class ResNetV1d(ResNet): """ResNetV1d variant described in [1]_. Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs. 
And in the downsampling block, a 2x2 avg_pool with stride 2 is added before conv, whose stride is changed to 1. """ def __init__(self, **kwargs): super(ResNetV1d, self).__init__( deep_stem=True, avg_down=True, **kwargs)
25,804
35.090909
79
py
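Beyond the docstring example, a sketch of the dilated "r50-d8" style setting that segmentation configs in this repository typically use (again a hypothetical snippet, assuming `mmsegmentation` is installed):

```python
# Hypothetical sketch of a dilated ResNet-50 (output stride 8) backbone.
import torch

from mmseg.models import build_backbone

backbone = build_backbone(
    dict(
        type='ResNetV1c',          # deep 3x3 stem variant registered above
        depth=50,
        strides=(1, 2, 1, 1),      # no spatial downsampling in stages 3 and 4
        dilations=(1, 1, 2, 4),    # dilation compensates the removed strides
        out_indices=(0, 1, 2, 3),
        contract_dilation=True))
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 512, 512))

# (1, 256, 128, 128), (1, 512, 64, 64), (1, 1024, 64, 64), (1, 2048, 64, 64)
for out in outs:
    print(tuple(out.shape))
```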
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/resnext.py
# Copyright (c) OpenMMLab. All rights reserved. import math from mmcv.cnn import build_conv_layer, build_norm_layer from ..builder import BACKBONES from ..utils import ResLayer from .resnet import Bottleneck as _Bottleneck from .resnet import ResNet class Bottleneck(_Bottleneck): """Bottleneck block for ResNeXt. If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is "caffe", the stride-two layer is the first 1x1 conv layer. """ def __init__(self, inplanes, planes, groups=1, base_width=4, base_channels=64, **kwargs): super(Bottleneck, self).__init__(inplanes, planes, **kwargs) if groups == 1: width = self.planes else: width = math.floor(self.planes * (base_width / base_channels)) * groups self.norm1_name, norm1 = build_norm_layer( self.norm_cfg, width, postfix=1) self.norm2_name, norm2 = build_norm_layer( self.norm_cfg, width, postfix=2) self.norm3_name, norm3 = build_norm_layer( self.norm_cfg, self.planes * self.expansion, postfix=3) self.conv1 = build_conv_layer( self.conv_cfg, self.inplanes, width, kernel_size=1, stride=self.conv1_stride, bias=False) self.add_module(self.norm1_name, norm1) fallback_on_stride = False self.with_modulated_dcn = False if self.with_dcn: fallback_on_stride = self.dcn.pop('fallback_on_stride', False) if not self.with_dcn or fallback_on_stride: self.conv2 = build_conv_layer( self.conv_cfg, width, width, kernel_size=3, stride=self.conv2_stride, padding=self.dilation, dilation=self.dilation, groups=groups, bias=False) else: assert self.conv_cfg is None, 'conv_cfg must be None for DCN' self.conv2 = build_conv_layer( self.dcn, width, width, kernel_size=3, stride=self.conv2_stride, padding=self.dilation, dilation=self.dilation, groups=groups, bias=False) self.add_module(self.norm2_name, norm2) self.conv3 = build_conv_layer( self.conv_cfg, width, self.planes * self.expansion, kernel_size=1, bias=False) self.add_module(self.norm3_name, norm3) @BACKBONES.register_module() class ResNeXt(ResNet): """ResNeXt backbone. This backbone is the implementation of `Aggregated Residual Transformations for Deep Neural Networks <https://arxiv.org/abs/1611.05431>`_. Args: depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. in_channels (int): Number of input image channels. Normally 3. num_stages (int): Resnet stages, normally 4. groups (int): Group of resnext. base_width (int): Base width of resnext. strides (Sequence[int]): Strides of the first block of each stage. dilations (Sequence[int]): Dilation of each stage. out_indices (Sequence[int]): Output from which stages. style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer. frozen_stages (int): Stages to be frozen (all param fixed). -1 means not freezing any parameters. norm_cfg (dict): dictionary to construct and config norm layer. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. zero_init_residual (bool): whether to use zero init for last norm layer in resblocks to let them behave as identity. Example: >>> from mmseg.models import ResNeXt >>> import torch >>> self = ResNeXt(depth=50) >>> self.eval() >>> inputs = torch.rand(1, 3, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... 
print(tuple(level_out.shape)) (1, 256, 8, 8) (1, 512, 4, 4) (1, 1024, 2, 2) (1, 2048, 1, 1) """ arch_settings = { 50: (Bottleneck, (3, 4, 6, 3)), 101: (Bottleneck, (3, 4, 23, 3)), 152: (Bottleneck, (3, 8, 36, 3)) } def __init__(self, groups=1, base_width=4, **kwargs): self.groups = groups self.base_width = base_width super(ResNeXt, self).__init__(**kwargs) def make_res_layer(self, **kwargs): """Pack all blocks in a stage into a ``ResLayer``""" return ResLayer( groups=self.groups, base_width=self.base_width, base_channels=self.base_channels, **kwargs)
5,321
34.245033
79
py
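A usage sketch for the `ResNeXt` backbone above showing the grouped-convolution arguments (hypothetical snippet, assumes `mmsegmentation` is installed):

```python
# Hypothetical smoke test for a ResNeXt-50 (32x4d) backbone.
import torch

from mmseg.models import build_backbone

# groups=32 with base_width=4 is the classic "32x4d" bottleneck layout.
backbone = build_backbone(
    dict(type='ResNeXt', depth=50, groups=32, base_width=4))
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 224, 224))

# Channels 256/512/1024/2048 at strides 4/8/16/32 with the default strides.
for out in outs:
    print(tuple(out.shape))
```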
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/stdc.py
# Copyright (c) OpenMMLab. All rights reserved. """Modified from https://github.com/MichaelFan01/STDC-Seg.""" import torch import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import ConvModule from mmcv.runner.base_module import BaseModule, ModuleList, Sequential from mmseg.ops import resize from ..builder import BACKBONES, build_backbone from .bisenetv1 import AttentionRefinementModule class STDCModule(BaseModule): """STDCModule. Args: in_channels (int): The number of input channels. out_channels (int): The number of output channels before scaling. stride (int): The number of stride for the first conv layer. norm_cfg (dict): Config dict for normalization layer. Default: None. act_cfg (dict): The activation config for conv layers. num_convs (int): Numbers of conv layers. fusion_type (str): Type of fusion operation. Default: 'add'. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. """ def __init__(self, in_channels, out_channels, stride, norm_cfg=None, act_cfg=None, num_convs=4, fusion_type='add', init_cfg=None): super(STDCModule, self).__init__(init_cfg=init_cfg) assert num_convs > 1 assert fusion_type in ['add', 'cat'] self.stride = stride self.with_downsample = True if self.stride == 2 else False self.fusion_type = fusion_type self.layers = ModuleList() conv_0 = ConvModule( in_channels, out_channels // 2, kernel_size=1, norm_cfg=norm_cfg) if self.with_downsample: self.downsample = ConvModule( out_channels // 2, out_channels // 2, kernel_size=3, stride=2, padding=1, groups=out_channels // 2, norm_cfg=norm_cfg, act_cfg=None) if self.fusion_type == 'add': self.layers.append(nn.Sequential(conv_0, self.downsample)) self.skip = Sequential( ConvModule( in_channels, in_channels, kernel_size=3, stride=2, padding=1, groups=in_channels, norm_cfg=norm_cfg, act_cfg=None), ConvModule( in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None)) else: self.layers.append(conv_0) self.skip = nn.AvgPool2d(kernel_size=3, stride=2, padding=1) else: self.layers.append(conv_0) for i in range(1, num_convs): out_factor = 2**(i + 1) if i != num_convs - 1 else 2**i self.layers.append( ConvModule( out_channels // 2**i, out_channels // out_factor, kernel_size=3, stride=1, padding=1, norm_cfg=norm_cfg, act_cfg=act_cfg)) def forward(self, inputs): if self.fusion_type == 'add': out = self.forward_add(inputs) else: out = self.forward_cat(inputs) return out def forward_add(self, inputs): layer_outputs = [] x = inputs.clone() for layer in self.layers: x = layer(x) layer_outputs.append(x) if self.with_downsample: inputs = self.skip(inputs) return torch.cat(layer_outputs, dim=1) + inputs def forward_cat(self, inputs): x0 = self.layers[0](inputs) layer_outputs = [x0] for i, layer in enumerate(self.layers[1:]): if i == 0: if self.with_downsample: x = layer(self.downsample(x0)) else: x = layer(x0) else: x = layer(x) layer_outputs.append(x) if self.with_downsample: layer_outputs[0] = self.skip(x0) return torch.cat(layer_outputs, dim=1) class FeatureFusionModule(BaseModule): """Feature Fusion Module. This module is different from FeatureFusionModule in BiSeNetV1. It uses two ConvModules in `self.attention` whose inter channel number is calculated by given `scale_factor`, while FeatureFusionModule in BiSeNetV1 only uses one ConvModule in `self.conv_atten`. Args: in_channels (int): The number of input channels. out_channels (int): The number of output channels. scale_factor (int): The number of channel scale factor. Default: 4. 
norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN'). act_cfg (dict): The activation config for conv layers. Default: dict(type='ReLU'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. """ def __init__(self, in_channels, out_channels, scale_factor=4, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), init_cfg=None): super(FeatureFusionModule, self).__init__(init_cfg=init_cfg) channels = out_channels // scale_factor self.conv0 = ConvModule( in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=act_cfg) self.attention = nn.Sequential( nn.AdaptiveAvgPool2d((1, 1)), ConvModule( out_channels, channels, 1, norm_cfg=None, bias=False, act_cfg=act_cfg), ConvModule( channels, out_channels, 1, norm_cfg=None, bias=False, act_cfg=None), nn.Sigmoid()) def forward(self, spatial_inputs, context_inputs): inputs = torch.cat([spatial_inputs, context_inputs], dim=1) x = self.conv0(inputs) attn = self.attention(x) x_attn = x * attn return x_attn + x @BACKBONES.register_module() class STDCNet(BaseModule): """This backbone is the implementation of `Rethinking BiSeNet For Real-time Semantic Segmentation <https://arxiv.org/abs/2104.13188>`_. Args: stdc_type (int): The type of backbone structure, `STDCNet1` and`STDCNet2` denotes two main backbones in paper, whose FLOPs is 813M and 1446M, respectively. in_channels (int): The num of input_channels. channels (tuple[int]): The output channels for each stage. bottleneck_type (str): The type of STDC Module type, the value must be 'add' or 'cat'. norm_cfg (dict): Config dict for normalization layer. act_cfg (dict): The activation config for conv layers. num_convs (int): Numbers of conv layer at each STDC Module. Default: 4. with_final_conv (bool): Whether add a conv layer at the Module output. Default: True. pretrained (str, optional): Model pretrained path. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. Example: >>> import torch >>> stdc_type = 'STDCNet1' >>> in_channels = 3 >>> channels = (32, 64, 256, 512, 1024) >>> bottleneck_type = 'cat' >>> inputs = torch.rand(1, 3, 1024, 2048) >>> self = STDCNet(stdc_type, in_channels, ... channels, bottleneck_type).eval() >>> outputs = self.forward(inputs) >>> for i in range(len(outputs)): ... print(f'outputs[{i}].shape = {outputs[i].shape}') outputs[0].shape = torch.Size([1, 256, 128, 256]) outputs[1].shape = torch.Size([1, 512, 64, 128]) outputs[2].shape = torch.Size([1, 1024, 32, 64]) """ arch_settings = { 'STDCNet1': [(2, 1), (2, 1), (2, 1)], 'STDCNet2': [(2, 1, 1, 1), (2, 1, 1, 1, 1), (2, 1, 1)] } def __init__(self, stdc_type, in_channels, channels, bottleneck_type, norm_cfg, act_cfg, num_convs=4, with_final_conv=False, pretrained=None, init_cfg=None): super(STDCNet, self).__init__(init_cfg=init_cfg) assert stdc_type in self.arch_settings, \ f'invalid structure {stdc_type} for STDCNet.' assert bottleneck_type in ['add', 'cat'],\ f'bottleneck_type must be `add` or `cat`, got {bottleneck_type}' assert len(channels) == 5,\ f'invalid channels length {len(channels)} for STDCNet.' 
self.in_channels = in_channels self.channels = channels self.stage_strides = self.arch_settings[stdc_type] self.prtrained = pretrained self.num_convs = num_convs self.with_final_conv = with_final_conv self.stages = ModuleList([ ConvModule( self.in_channels, self.channels[0], kernel_size=3, stride=2, padding=1, norm_cfg=norm_cfg, act_cfg=act_cfg), ConvModule( self.channels[0], self.channels[1], kernel_size=3, stride=2, padding=1, norm_cfg=norm_cfg, act_cfg=act_cfg) ]) # `self.num_shallow_features` is the number of shallow modules in # `STDCNet`, which is noted as `Stage1` and `Stage2` in original paper. # They are both not used for following modules like Attention # Refinement Module and Feature Fusion Module. # Thus they would be cut from `outs`. Please refer to Figure 4 # of original paper for more details. self.num_shallow_features = len(self.stages) for strides in self.stage_strides: idx = len(self.stages) - 1 self.stages.append( self._make_stage(self.channels[idx], self.channels[idx + 1], strides, norm_cfg, act_cfg, bottleneck_type)) # After appending, `self.stages` is a ModuleList including several # shallow modules and STDCModules. # (len(self.stages) == # self.num_shallow_features + len(self.stage_strides)) if self.with_final_conv: self.final_conv = ConvModule( self.channels[-1], max(1024, self.channels[-1]), 1, norm_cfg=norm_cfg, act_cfg=act_cfg) def _make_stage(self, in_channels, out_channels, strides, norm_cfg, act_cfg, bottleneck_type): layers = [] for i, stride in enumerate(strides): layers.append( STDCModule( in_channels if i == 0 else out_channels, out_channels, stride, norm_cfg, act_cfg, num_convs=self.num_convs, fusion_type=bottleneck_type)) return Sequential(*layers) def forward(self, x): outs = [] for stage in self.stages: x = stage(x) outs.append(x) if self.with_final_conv: outs[-1] = self.final_conv(outs[-1]) outs = outs[self.num_shallow_features:] return tuple(outs) @BACKBONES.register_module() class STDCContextPathNet(BaseModule): """STDCNet with Context Path. The `outs` below is a list of three feature maps from deep to shallow, whose height and width is from small to big, respectively. The biggest feature map of `outs` is outputted for `STDCHead`, where Detail Loss would be calculated by Detail Ground-truth. The other two feature maps are used for Attention Refinement Module, respectively. Besides, the biggest feature map of `outs` and the last output of Attention Refinement Module are concatenated for Feature Fusion Module. Then, this fusion feature map `feat_fuse` would be outputted for `decode_head`. More details please refer to Figure 4 of original paper. Args: backbone_cfg (dict): Config dict for stdc backbone. last_in_channels (tuple(int)), The number of channels of last two feature maps from stdc backbone. Default: (1024, 512). out_channels (int): The channels of output feature maps. Default: 128. ffm_cfg (dict): Config dict for Feature Fusion Module. Default: `dict(in_channels=512, out_channels=256, scale_factor=4)`. upsample_mode (str): Algorithm used for upsampling: ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | ``'trilinear'``. Default: ``'nearest'``. align_corners (str): align_corners argument of F.interpolate. It must be `None` if upsample_mode is ``'nearest'``. Default: None. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN'). init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
Return: outputs (tuple): The tuple of list of output feature map for auxiliary heads and decoder head. """ def __init__(self, backbone_cfg, last_in_channels=(1024, 512), out_channels=128, ffm_cfg=dict( in_channels=512, out_channels=256, scale_factor=4), upsample_mode='nearest', align_corners=None, norm_cfg=dict(type='BN'), init_cfg=None): super(STDCContextPathNet, self).__init__(init_cfg=init_cfg) self.backbone = build_backbone(backbone_cfg) self.arms = ModuleList() self.convs = ModuleList() for channels in last_in_channels: self.arms.append(AttentionRefinementModule(channels, out_channels)) self.convs.append( ConvModule( out_channels, out_channels, 3, padding=1, norm_cfg=norm_cfg)) self.conv_avg = ConvModule( last_in_channels[0], out_channels, 1, norm_cfg=norm_cfg) self.ffm = FeatureFusionModule(**ffm_cfg) self.upsample_mode = upsample_mode self.align_corners = align_corners def forward(self, x): outs = list(self.backbone(x)) avg = F.adaptive_avg_pool2d(outs[-1], 1) avg_feat = self.conv_avg(avg) feature_up = resize( avg_feat, size=outs[-1].shape[2:], mode=self.upsample_mode, align_corners=self.align_corners) arms_out = [] for i in range(len(self.arms)): x_arm = self.arms[i](outs[len(outs) - 1 - i]) + feature_up feature_up = resize( x_arm, size=outs[len(outs) - 1 - i - 1].shape[2:], mode=self.upsample_mode, align_corners=self.align_corners) feature_up = self.convs[i](feature_up) arms_out.append(feature_up) feat_fuse = self.ffm(outs[0], arms_out[1]) # The `outputs` has four feature maps. # `outs[0]` is outputted for `STDCHead` auxiliary head. # Two feature maps of `arms_out` are outputted for auxiliary head. # `feat_fuse` is outputted for decoder head. outputs = [outs[0]] + list(arms_out) + [feat_fuse] return tuple(outputs)
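

# --- Illustrative usage sketch (not part of the original file) ---
# A minimal example of wiring STDCNet into STDCContextPathNet. The backbone
# settings and the ffm_cfg channel numbers below are assumed example values,
# not the reference training settings: concatenating the 256-channel spatial
# map with the 128-channel context map gives 384 input channels for the
# Feature Fusion Module. Refer to the repository configs for the settings
# actually used in experiments.
if __name__ == '__main__':
    backbone_cfg = dict(
        type='STDCNet',
        stdc_type='STDCNet1',
        in_channels=3,
        channels=(32, 64, 256, 512, 1024),
        bottleneck_type='cat',
        norm_cfg=dict(type='BN'),
        act_cfg=dict(type='ReLU'),
        with_final_conv=False)
    model = STDCContextPathNet(
        backbone_cfg=backbone_cfg,
        last_in_channels=(1024, 512),
        out_channels=128,
        ffm_cfg=dict(in_channels=384, out_channels=256,
                     scale_factor=4)).eval()
    with torch.no_grad():
        # Returns the stride-8 backbone map, two refined context maps and
        # the fused feature map for the decode head.
        for out in model(torch.rand(1, 3, 256, 512)):
            print(tuple(out.shape))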
16,158
37.200946
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/swin.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings from collections import OrderedDict from copy import deepcopy import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint as cp from mmcv.cnn import build_norm_layer from mmcv.cnn.bricks.transformer import FFN, build_dropout from mmcv.cnn.utils.weight_init import (constant_init, trunc_normal_, trunc_normal_init) from mmcv.runner import (BaseModule, CheckpointLoader, ModuleList, load_state_dict) from mmcv.utils import to_2tuple from ...utils import get_root_logger from ..builder import BACKBONES from ..utils.embed import PatchEmbed, PatchMerging class WindowMSA(BaseModule): """Window based multi-head self-attention (W-MSA) module with relative position bias. Args: embed_dims (int): Number of input channels. num_heads (int): Number of attention heads. window_size (tuple[int]): The height and width of the window. qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. Default: True. qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. attn_drop_rate (float, optional): Dropout ratio of attention weight. Default: 0.0 proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. init_cfg (dict | None, optional): The Config for initialization. Default: None. """ def __init__(self, embed_dims, num_heads, window_size, qkv_bias=True, qk_scale=None, attn_drop_rate=0., proj_drop_rate=0., init_cfg=None): super().__init__(init_cfg=init_cfg) self.embed_dims = embed_dims self.window_size = window_size # Wh, Ww self.num_heads = num_heads head_embed_dims = embed_dims // num_heads self.scale = qk_scale or head_embed_dims**-0.5 # define a parameter table of relative position bias self.relative_position_bias_table = nn.Parameter( torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH # About 2x faster than original impl Wh, Ww = self.window_size rel_index_coords = self.double_step_seq(2 * Ww - 1, Wh, 1, Ww) rel_position_index = rel_index_coords + rel_index_coords.T rel_position_index = rel_position_index.flip(1).contiguous() self.register_buffer('relative_position_index', rel_position_index) self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop_rate) self.proj = nn.Linear(embed_dims, embed_dims) self.proj_drop = nn.Dropout(proj_drop_rate) self.softmax = nn.Softmax(dim=-1) def init_weights(self): trunc_normal_(self.relative_position_bias_table, std=0.02) def forward(self, x, mask=None): """ Args: x (tensor): input features with shape of (num_windows*B, N, C) mask (tensor | None, Optional): mask with shape of (num_windows, Wh*Ww, Wh*Ww), value should be between (-inf, 0]. 
""" B, N, C = x.shape qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) # make torchscript happy (cannot use tensor as tuple) q, k, v = qkv[0], qkv[1], qkv[2] q = q * self.scale attn = (q @ k.transpose(-2, -1)) relative_position_bias = self.relative_position_bias_table[ self.relative_position_index.view(-1)].view( self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH relative_position_bias = relative_position_bias.permute( 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww attn = attn + relative_position_bias.unsqueeze(0) if mask is not None: nW = mask.shape[0] attn = attn.view(B // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) attn = attn.view(-1, self.num_heads, N, N) attn = self.softmax(attn) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x @staticmethod def double_step_seq(step1, len1, step2, len2): seq1 = torch.arange(0, step1 * len1, step1) seq2 = torch.arange(0, step2 * len2, step2) return (seq1[:, None] + seq2[None, :]).reshape(1, -1) class ShiftWindowMSA(BaseModule): """Shifted Window Multihead Self-Attention Module. Args: embed_dims (int): Number of input channels. num_heads (int): Number of attention heads. window_size (int): The height and width of the window. shift_size (int, optional): The shift step of each window towards right-bottom. If zero, act as regular window-msa. Defaults to 0. qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. Default: True qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Defaults: None. attn_drop_rate (float, optional): Dropout ratio of attention weight. Defaults: 0. proj_drop_rate (float, optional): Dropout ratio of output. Defaults: 0. dropout_layer (dict, optional): The dropout_layer used before output. Defaults: dict(type='DropPath', drop_prob=0.). init_cfg (dict, optional): The extra config for initialization. Default: None. 
""" def __init__(self, embed_dims, num_heads, window_size, shift_size=0, qkv_bias=True, qk_scale=None, attn_drop_rate=0, proj_drop_rate=0, dropout_layer=dict(type='DropPath', drop_prob=0.), init_cfg=None): super().__init__(init_cfg=init_cfg) self.window_size = window_size self.shift_size = shift_size assert 0 <= self.shift_size < self.window_size self.w_msa = WindowMSA( embed_dims=embed_dims, num_heads=num_heads, window_size=to_2tuple(window_size), qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop_rate=attn_drop_rate, proj_drop_rate=proj_drop_rate, init_cfg=None) self.drop = build_dropout(dropout_layer) def forward(self, query, hw_shape): B, L, C = query.shape H, W = hw_shape assert L == H * W, 'input feature has wrong size' query = query.view(B, H, W, C) # pad feature maps to multiples of window size pad_r = (self.window_size - W % self.window_size) % self.window_size pad_b = (self.window_size - H % self.window_size) % self.window_size query = F.pad(query, (0, 0, 0, pad_r, 0, pad_b)) H_pad, W_pad = query.shape[1], query.shape[2] # cyclic shift if self.shift_size > 0: shifted_query = torch.roll( query, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) # calculate attention mask for SW-MSA img_mask = torch.zeros((1, H_pad, W_pad, 1), device=query.device) h_slices = (slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None)) w_slices = (slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None)) cnt = 0 for h in h_slices: for w in w_slices: img_mask[:, h, w, :] = cnt cnt += 1 # nW, window_size, window_size, 1 mask_windows = self.window_partition(img_mask) mask_windows = mask_windows.view( -1, self.window_size * self.window_size) attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( attn_mask == 0, float(0.0)) else: shifted_query = query attn_mask = None # nW*B, window_size, window_size, C query_windows = self.window_partition(shifted_query) # nW*B, window_size*window_size, C query_windows = query_windows.view(-1, self.window_size**2, C) # W-MSA/SW-MSA (nW*B, window_size*window_size, C) attn_windows = self.w_msa(query_windows, mask=attn_mask) # merge windows attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) # B H' W' C shifted_x = self.window_reverse(attn_windows, H_pad, W_pad) # reverse cyclic shift if self.shift_size > 0: x = torch.roll( shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) else: x = shifted_x if pad_r > 0 or pad_b: x = x[:, :H, :W, :].contiguous() x = x.view(B, H * W, C) x = self.drop(x) return x def window_reverse(self, windows, H, W): """ Args: windows: (num_windows*B, window_size, window_size, C) H (int): Height of image W (int): Width of image Returns: x: (B, H, W, C) """ window_size = self.window_size B = int(windows.shape[0] / (H * W / window_size / window_size)) x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) return x def window_partition(self, x): """ Args: x: (B, H, W, C) Returns: windows: (num_windows*B, window_size, window_size, C) """ B, H, W, C = x.shape window_size = self.window_size x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) windows = x.permute(0, 1, 3, 2, 4, 5).contiguous() windows = windows.view(-1, window_size, window_size, C) return windows class SwinBlock(BaseModule): """" Args: embed_dims (int): 
The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. window_size (int, optional): The local window scale. Default: 7. shift (bool, optional): whether to shift window or not. Default False. qkv_bias (bool, optional): enable bias for qkv if True. Default: True. qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. drop_rate (float, optional): Dropout rate. Default: 0. attn_drop_rate (float, optional): Attention dropout rate. Default: 0. drop_path_rate (float, optional): Stochastic depth rate. Default: 0. act_cfg (dict, optional): The config dict of activation function. Default: dict(type='GELU'). norm_cfg (dict, optional): The config dict of normalization. Default: dict(type='LN'). with_cp (bool, optional): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. init_cfg (dict | list | None, optional): The init config. Default: None. """ def __init__(self, embed_dims, num_heads, feedforward_channels, window_size=7, shift=False, qkv_bias=True, qk_scale=None, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), with_cp=False, init_cfg=None): super(SwinBlock, self).__init__(init_cfg=init_cfg) self.with_cp = with_cp self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] self.attn = ShiftWindowMSA( embed_dims=embed_dims, num_heads=num_heads, window_size=window_size, shift_size=window_size // 2 if shift else 0, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop_rate=attn_drop_rate, proj_drop_rate=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), init_cfg=None) self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] self.ffn = FFN( embed_dims=embed_dims, feedforward_channels=feedforward_channels, num_fcs=2, ffn_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), act_cfg=act_cfg, add_identity=True, init_cfg=None) def forward(self, x, hw_shape): def _inner_forward(x): identity = x x = self.norm1(x) x = self.attn(x, hw_shape) x = x + identity identity = x x = self.norm2(x) x = self.ffn(x, identity=identity) return x if self.with_cp and x.requires_grad: x = cp.checkpoint(_inner_forward, x) else: x = _inner_forward(x) return x class SwinBlockSequence(BaseModule): """Implements one stage in Swin Transformer. Args: embed_dims (int): The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. depth (int): The number of blocks in this stage. window_size (int, optional): The local window scale. Default: 7. qkv_bias (bool, optional): enable bias for qkv if True. Default: True. qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. drop_rate (float, optional): Dropout rate. Default: 0. attn_drop_rate (float, optional): Attention dropout rate. Default: 0. drop_path_rate (float | list[float], optional): Stochastic depth rate. Default: 0. downsample (BaseModule | None, optional): The downsample operation module. Default: None. act_cfg (dict, optional): The config dict of activation function. Default: dict(type='GELU'). norm_cfg (dict, optional): The config dict of normalization. Default: dict(type='LN'). with_cp (bool, optional): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. init_cfg (dict | list | None, optional): The init config. Default: None. 
""" def __init__(self, embed_dims, num_heads, feedforward_channels, depth, window_size=7, qkv_bias=True, qk_scale=None, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., downsample=None, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), with_cp=False, init_cfg=None): super().__init__(init_cfg=init_cfg) if isinstance(drop_path_rate, list): drop_path_rates = drop_path_rate assert len(drop_path_rates) == depth else: drop_path_rates = [deepcopy(drop_path_rate) for _ in range(depth)] self.blocks = ModuleList() for i in range(depth): block = SwinBlock( embed_dims=embed_dims, num_heads=num_heads, feedforward_channels=feedforward_channels, window_size=window_size, shift=False if i % 2 == 0 else True, qkv_bias=qkv_bias, qk_scale=qk_scale, drop_rate=drop_rate, attn_drop_rate=attn_drop_rate, drop_path_rate=drop_path_rates[i], act_cfg=act_cfg, norm_cfg=norm_cfg, with_cp=with_cp, init_cfg=None) self.blocks.append(block) self.downsample = downsample def forward(self, x, hw_shape): for block in self.blocks: x = block(x, hw_shape) if self.downsample: x_down, down_hw_shape = self.downsample(x, hw_shape) return x_down, down_hw_shape, x, hw_shape else: return x, hw_shape, x, hw_shape @BACKBONES.register_module() class SwinTransformer(BaseModule): """Swin Transformer backbone. This backbone is the implementation of `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows <https://arxiv.org/abs/2103.14030>`_. Inspiration from https://github.com/microsoft/Swin-Transformer. Args: pretrain_img_size (int | tuple[int]): The size of input image when pretrain. Defaults: 224. in_channels (int): The num of input channels. Defaults: 3. embed_dims (int): The feature dimension. Default: 96. patch_size (int | tuple[int]): Patch size. Default: 4. window_size (int): Window size. Default: 7. mlp_ratio (int | float): Ratio of mlp hidden dim to embedding dim. Default: 4. depths (tuple[int]): Depths of each Swin Transformer stage. Default: (2, 2, 6, 2). num_heads (tuple[int]): Parallel attention heads of each Swin Transformer stage. Default: (3, 6, 12, 24). strides (tuple[int]): The patch merging or patch embedding stride of each Swin Transformer stage. (In swin, we set kernel size equal to stride.) Default: (4, 2, 2, 2). out_indices (tuple[int]): Output from which stages. Default: (0, 1, 2, 3). qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. patch_norm (bool): If add a norm layer for patch embed and patch merging. Default: True. drop_rate (float): Dropout rate. Defaults: 0. attn_drop_rate (float): Attention dropout rate. Default: 0. drop_path_rate (float): Stochastic depth rate. Defaults: 0.1. use_abs_pos_embed (bool): If True, add absolute position embedding to the patch embedding. Defaults: False. act_cfg (dict): Config dict for activation layer. Default: dict(type='LN'). norm_cfg (dict): Config dict for normalization layer at output of backone. Defaults: dict(type='LN'). with_cp (bool, optional): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. pretrained (str, optional): model pretrained path. Default: None. frozen_stages (int): Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. init_cfg (dict, optional): The Config for initialization. Defaults to None. 
""" def __init__(self, pretrain_img_size=224, in_channels=3, embed_dims=96, patch_size=4, window_size=7, mlp_ratio=4, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24), strides=(4, 2, 2, 2), out_indices=(0, 1, 2, 3), qkv_bias=True, qk_scale=None, patch_norm=True, drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, use_abs_pos_embed=False, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), with_cp=False, pretrained=None, frozen_stages=-1, init_cfg=None): self.frozen_stages = frozen_stages if isinstance(pretrain_img_size, int): pretrain_img_size = to_2tuple(pretrain_img_size) elif isinstance(pretrain_img_size, tuple): if len(pretrain_img_size) == 1: pretrain_img_size = to_2tuple(pretrain_img_size[0]) assert len(pretrain_img_size) == 2, \ f'The size of image should have length 1 or 2, ' \ f'but got {len(pretrain_img_size)}' assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be specified at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: init_cfg = init_cfg else: raise TypeError('pretrained must be a str or None') super(SwinTransformer, self).__init__(init_cfg=init_cfg) num_layers = len(depths) self.out_indices = out_indices self.use_abs_pos_embed = use_abs_pos_embed assert strides[0] == patch_size, 'Use non-overlapping patch embed.' self.patch_embed = PatchEmbed( in_channels=in_channels, embed_dims=embed_dims, conv_type='Conv2d', kernel_size=patch_size, stride=strides[0], padding='corner', norm_cfg=norm_cfg if patch_norm else None, init_cfg=None) if self.use_abs_pos_embed: patch_row = pretrain_img_size[0] // patch_size patch_col = pretrain_img_size[1] // patch_size num_patches = patch_row * patch_col self.absolute_pos_embed = nn.Parameter( torch.zeros((1, num_patches, embed_dims))) self.drop_after_pos = nn.Dropout(p=drop_rate) # set stochastic depth decay rule total_depth = sum(depths) dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, total_depth) ] self.stages = ModuleList() in_channels = embed_dims for i in range(num_layers): if i < num_layers - 1: downsample = PatchMerging( in_channels=in_channels, out_channels=2 * in_channels, stride=strides[i + 1], norm_cfg=norm_cfg if patch_norm else None, init_cfg=None) else: downsample = None stage = SwinBlockSequence( embed_dims=in_channels, num_heads=num_heads[i], feedforward_channels=int(mlp_ratio * in_channels), depth=depths[i], window_size=window_size, qkv_bias=qkv_bias, qk_scale=qk_scale, drop_rate=drop_rate, attn_drop_rate=attn_drop_rate, drop_path_rate=dpr[sum(depths[:i]):sum(depths[:i + 1])], downsample=downsample, act_cfg=act_cfg, norm_cfg=norm_cfg, with_cp=with_cp, init_cfg=None) self.stages.append(stage) if downsample: in_channels = downsample.out_channels self.num_features = [int(embed_dims * 2**i) for i in range(num_layers)] # Add a norm layer for each output for i in out_indices: layer = build_norm_layer(norm_cfg, self.num_features[i])[1] layer_name = f'norm{i}' self.add_module(layer_name, layer) def train(self, mode=True): """Convert the model into training mode while keep layers freezed.""" super(SwinTransformer, self).train(mode) self._freeze_stages() def _freeze_stages(self): if self.frozen_stages >= 0: self.patch_embed.eval() for param in self.patch_embed.parameters(): param.requires_grad = False if self.use_abs_pos_embed: self.absolute_pos_embed.requires_grad = False self.drop_after_pos.eval() for i in range(1, 
self.frozen_stages + 1): if (i - 1) in self.out_indices: norm_layer = getattr(self, f'norm{i-1}') norm_layer.eval() for param in norm_layer.parameters(): param.requires_grad = False m = self.stages[i - 1] m.eval() for param in m.parameters(): param.requires_grad = False def init_weights(self): logger = get_root_logger() if self.init_cfg is None: logger.warn(f'No pre-trained weights for ' f'{self.__class__.__name__}, ' f'training start from scratch') if self.use_abs_pos_embed: trunc_normal_(self.absolute_pos_embed, std=0.02) for m in self.modules(): if isinstance(m, nn.Linear): trunc_normal_init(m, std=.02, bias=0.) elif isinstance(m, nn.LayerNorm): constant_init(m, val=1.0, bias=0.) else: assert 'checkpoint' in self.init_cfg, f'Only support ' \ f'specify `Pretrained` in ' \ f'`init_cfg` in ' \ f'{self.__class__.__name__} ' ckpt = CheckpointLoader.load_checkpoint( self.init_cfg['checkpoint'], logger=logger, map_location='cpu') if 'state_dict' in ckpt: _state_dict = ckpt['state_dict'] elif 'model' in ckpt: _state_dict = ckpt['model'] else: _state_dict = ckpt state_dict = OrderedDict() for k, v in _state_dict.items(): if k.startswith('backbone.'): state_dict[k[9:]] = v else: state_dict[k] = v # strip prefix of state_dict if list(state_dict.keys())[0].startswith('module.'): state_dict = {k[7:]: v for k, v in state_dict.items()} # reshape absolute position embedding if state_dict.get('absolute_pos_embed') is not None: absolute_pos_embed = state_dict['absolute_pos_embed'] N1, L, C1 = absolute_pos_embed.size() N2, C2, H, W = self.absolute_pos_embed.size() if N1 != N2 or C1 != C2 or L != H * W: logger.warning('Error in loading absolute_pos_embed, pass') else: state_dict['absolute_pos_embed'] = absolute_pos_embed.view( N2, H, W, C2).permute(0, 3, 1, 2).contiguous() # interpolate position bias table if needed relative_position_bias_table_keys = [ k for k in state_dict.keys() if 'relative_position_bias_table' in k ] for table_key in relative_position_bias_table_keys: table_pretrained = state_dict[table_key] table_current = self.state_dict()[table_key] L1, nH1 = table_pretrained.size() L2, nH2 = table_current.size() if nH1 != nH2: logger.warning(f'Error in loading {table_key}, pass') elif L1 != L2: S1 = int(L1**0.5) S2 = int(L2**0.5) table_pretrained_resized = F.interpolate( table_pretrained.permute(1, 0).reshape(1, nH1, S1, S1), size=(S2, S2), mode='bicubic') state_dict[table_key] = table_pretrained_resized.view( nH2, L2).permute(1, 0).contiguous() # load state_dict load_state_dict(self, state_dict, strict=False, logger=logger) def forward(self, x): x, hw_shape = self.patch_embed(x) if self.use_abs_pos_embed: x = x + self.absolute_pos_embed x = self.drop_after_pos(x) outs = [] for i, stage in enumerate(self.stages): x, hw_shape, out, out_hw_shape = stage(x, hw_shape) if i in self.out_indices: norm_layer = getattr(self, f'norm{i}') out = norm_layer(out) out = out.view(-1, *out_hw_shape, self.num_features[i]).permute(0, 3, 1, 2).contiguous() outs.append(out) return outs
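

# --- Illustrative usage sketch (not part of the original file) ---
# A minimal smoke test for the SwinTransformer backbone defined above, using
# Swin-T style arguments. Training-quality settings (pretrained weights,
# drop_path_rate, window size, etc.) come from the repository configs; the
# values here only exercise the forward pass.
if __name__ == '__main__':
    model = SwinTransformer(
        embed_dims=96,
        depths=(2, 2, 6, 2),
        num_heads=(3, 6, 12, 24),
        window_size=7,
        out_indices=(0, 1, 2, 3)).eval()
    with torch.no_grad():
        # Feature maps with strides 4, 8, 16 and 32
        # (channels 96/192/384/768 for embed_dims=96).
        for out in model(torch.rand(1, 3, 224, 224)):
            print(tuple(out.shape))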
29,838
38.417437
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/timm_backbone.py
# Copyright (c) OpenMMLab. All rights reserved. try: import timm except ImportError: timm = None from mmcv.cnn.bricks.registry import NORM_LAYERS from mmcv.runner import BaseModule from ..builder import BACKBONES @BACKBONES.register_module() class TIMMBackbone(BaseModule): """Wrapper to use backbones from timm library. More details can be found in `timm <https://github.com/rwightman/pytorch-image-models>`_ . Args: model_name (str): Name of timm model to instantiate. pretrained (bool): Load pretrained weights if True. checkpoint_path (str): Path of checkpoint to load after model is initialized. in_channels (int): Number of input image channels. Default: 3. init_cfg (dict, optional): Initialization config dict **kwargs: Other timm & model specific arguments. """ def __init__( self, model_name, features_only=True, pretrained=True, checkpoint_path='', in_channels=3, init_cfg=None, **kwargs, ): if timm is None: raise RuntimeError('timm is not installed') super(TIMMBackbone, self).__init__(init_cfg) if 'norm_layer' in kwargs: kwargs['norm_layer'] = NORM_LAYERS.get(kwargs['norm_layer']) self.timm_model = timm.create_model( model_name=model_name, features_only=features_only, pretrained=pretrained, in_chans=in_channels, checkpoint_path=checkpoint_path, **kwargs, ) # Make unused parameters None self.timm_model.global_pool = None self.timm_model.fc = None self.timm_model.classifier = None # Hack to use pretrained weights from timm if pretrained or checkpoint_path: self._is_init = True def forward(self, x): features = self.timm_model(x) return features
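

# --- Illustrative usage sketch (not part of the original file) ---
# A minimal example of wrapping a timm model as an MMSegmentation backbone.
# It requires the optional `timm` package; 'resnet18' is only an example
# model name, and any timm model that supports `features_only=True` can be
# used the same way.
if __name__ == '__main__':
    import torch

    model = TIMMBackbone(
        model_name='resnet18', features_only=True, pretrained=False).eval()
    with torch.no_grad():
        for feat in model(torch.rand(1, 3, 224, 224)):
            print(tuple(feat.shape))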
1,948
29.453125
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/twins.py
# Copyright (c) OpenMMLab. All rights reserved. import math import warnings import torch import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import build_norm_layer from mmcv.cnn.bricks.drop import build_dropout from mmcv.cnn.bricks.transformer import FFN from mmcv.cnn.utils.weight_init import (constant_init, normal_init, trunc_normal_init) from mmcv.runner import BaseModule, ModuleList from torch.nn.modules.batchnorm import _BatchNorm from mmseg.models.backbones.mit import EfficientMultiheadAttention from mmseg.models.builder import BACKBONES from ..utils.embed import PatchEmbed class GlobalSubsampledAttention(EfficientMultiheadAttention): """Global Sub-sampled Attention (Spatial Reduction Attention) This module is modified from EfficientMultiheadAttention, which is a module from mmseg.models.backbones.mit.py. Specifically, there is no difference between `GlobalSubsampledAttention` and `EfficientMultiheadAttention`, `GlobalSubsampledAttention` is built as a brand new class because it is renamed as `Global sub-sampled attention (GSA)` in paper. Args: embed_dims (int): The embedding dimension. num_heads (int): Parallel attention heads. attn_drop (float): A Dropout layer on attn_output_weights. Default: 0.0. proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. Default: 0.0. dropout_layer (obj:`ConfigDict`): The dropout_layer used when adding the shortcut. Default: None. batch_first (bool): Key, Query and Value are shape of (batch, n, embed_dims) or (n, batch, embed_dims). Default: False. qkv_bias (bool): enable bias for qkv if True. Default: True. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). sr_ratio (int): The ratio of spatial reduction of GSA of PCPVT. Default: 1. init_cfg (dict, optional): The Config for initialization. Defaults to None. """ def __init__(self, embed_dims, num_heads, attn_drop=0., proj_drop=0., dropout_layer=None, batch_first=True, qkv_bias=True, norm_cfg=dict(type='LN'), sr_ratio=1, init_cfg=None): super(GlobalSubsampledAttention, self).__init__( embed_dims, num_heads, attn_drop=attn_drop, proj_drop=proj_drop, dropout_layer=dropout_layer, batch_first=batch_first, qkv_bias=qkv_bias, norm_cfg=norm_cfg, sr_ratio=sr_ratio, init_cfg=init_cfg) class GSAEncoderLayer(BaseModule): """Implements one encoder layer with GSA. Args: embed_dims (int): The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. drop_rate (float): Probability of an element to be zeroed after the feed forward layer. Default: 0.0. attn_drop_rate (float): The drop out rate for attention layer. Default: 0.0. drop_path_rate (float): Stochastic depth rate. Default 0.0. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. qkv_bias (bool): Enable bias for qkv if True. Default: True act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). sr_ratio (float): Kernel_size of conv in Attention modules. Default: 1. init_cfg (dict, optional): The Config for initialization. Defaults to None. 
""" def __init__(self, embed_dims, num_heads, feedforward_channels, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., num_fcs=2, qkv_bias=True, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), sr_ratio=1., init_cfg=None): super(GSAEncoderLayer, self).__init__(init_cfg=init_cfg) self.norm1 = build_norm_layer(norm_cfg, embed_dims, postfix=1)[1] self.attn = GlobalSubsampledAttention( embed_dims=embed_dims, num_heads=num_heads, attn_drop=attn_drop_rate, proj_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), qkv_bias=qkv_bias, norm_cfg=norm_cfg, sr_ratio=sr_ratio) self.norm2 = build_norm_layer(norm_cfg, embed_dims, postfix=2)[1] self.ffn = FFN( embed_dims=embed_dims, feedforward_channels=feedforward_channels, num_fcs=num_fcs, ffn_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), act_cfg=act_cfg, add_identity=False) self.drop_path = build_dropout( dict(type='DropPath', drop_prob=drop_path_rate) ) if drop_path_rate > 0. else nn.Identity() def forward(self, x, hw_shape): x = x + self.drop_path(self.attn(self.norm1(x), hw_shape, identity=0.)) x = x + self.drop_path(self.ffn(self.norm2(x))) return x class LocallyGroupedSelfAttention(BaseModule): """Locally-grouped Self Attention (LSA) module. Args: embed_dims (int): Number of input channels. num_heads (int): Number of attention heads. Default: 8 qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. Default: False. qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. attn_drop_rate (float, optional): Dropout ratio of attention weight. Default: 0.0 proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. window_size(int): Window size of LSA. Default: 1. init_cfg (dict, optional): The Config for initialization. Defaults to None. """ def __init__(self, embed_dims, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop_rate=0., proj_drop_rate=0., window_size=1, init_cfg=None): super(LocallyGroupedSelfAttention, self).__init__(init_cfg=init_cfg) assert embed_dims % num_heads == 0, f'dim {embed_dims} should be ' \ f'divided by num_heads ' \ f'{num_heads}.' 
self.embed_dims = embed_dims self.num_heads = num_heads head_dim = embed_dims // num_heads self.scale = qk_scale or head_dim**-0.5 self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop_rate) self.proj = nn.Linear(embed_dims, embed_dims) self.proj_drop = nn.Dropout(proj_drop_rate) self.window_size = window_size def forward(self, x, hw_shape): b, n, c = x.shape h, w = hw_shape x = x.view(b, h, w, c) # pad feature maps to multiples of Local-groups pad_l = pad_t = 0 pad_r = (self.window_size - w % self.window_size) % self.window_size pad_b = (self.window_size - h % self.window_size) % self.window_size x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) # calculate attention mask for LSA Hp, Wp = x.shape[1:-1] _h, _w = Hp // self.window_size, Wp // self.window_size mask = torch.zeros((1, Hp, Wp), device=x.device) mask[:, -pad_b:, :].fill_(1) mask[:, :, -pad_r:].fill_(1) # [B, _h, _w, window_size, window_size, C] x = x.reshape(b, _h, self.window_size, _w, self.window_size, c).transpose(2, 3) mask = mask.reshape(1, _h, self.window_size, _w, self.window_size).transpose(2, 3).reshape( 1, _h * _w, self.window_size * self.window_size) # [1, _h*_w, window_size*window_size, window_size*window_size] attn_mask = mask.unsqueeze(2) - mask.unsqueeze(3) attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-1000.0)).masked_fill( attn_mask == 0, float(0.0)) # [3, B, _w*_h, nhead, window_size*window_size, dim] qkv = self.qkv(x).reshape(b, _h * _w, self.window_size * self.window_size, 3, self.num_heads, c // self.num_heads).permute( 3, 0, 1, 4, 2, 5) q, k, v = qkv[0], qkv[1], qkv[2] # [B, _h*_w, n_head, window_size*window_size, window_size*window_size] attn = (q @ k.transpose(-2, -1)) * self.scale attn = attn + attn_mask.unsqueeze(2) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) attn = (attn @ v).transpose(2, 3).reshape(b, _h, _w, self.window_size, self.window_size, c) x = attn.transpose(2, 3).reshape(b, _h * self.window_size, _w * self.window_size, c) if pad_r > 0 or pad_b > 0: x = x[:, :h, :w, :].contiguous() x = x.reshape(b, n, c) x = self.proj(x) x = self.proj_drop(x) return x class LSAEncoderLayer(BaseModule): """Implements one encoder layer in Twins-SVT. Args: embed_dims (int): The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. drop_rate (float): Probability of an element to be zeroed after the feed forward layer. Default: 0.0. attn_drop_rate (float, optional): Dropout ratio of attention weight. Default: 0.0 drop_path_rate (float): Stochastic depth rate. Default 0.0. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. qkv_bias (bool): Enable bias for qkv if True. Default: True qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. Default: None. act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). window_size (int): Window size of LSA. Default: 1. init_cfg (dict, optional): The Config for initialization. Defaults to None. 
""" def __init__(self, embed_dims, num_heads, feedforward_channels, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., num_fcs=2, qkv_bias=True, qk_scale=None, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), window_size=1, init_cfg=None): super(LSAEncoderLayer, self).__init__(init_cfg=init_cfg) self.norm1 = build_norm_layer(norm_cfg, embed_dims, postfix=1)[1] self.attn = LocallyGroupedSelfAttention(embed_dims, num_heads, qkv_bias, qk_scale, attn_drop_rate, drop_rate, window_size) self.norm2 = build_norm_layer(norm_cfg, embed_dims, postfix=2)[1] self.ffn = FFN( embed_dims=embed_dims, feedforward_channels=feedforward_channels, num_fcs=num_fcs, ffn_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), act_cfg=act_cfg, add_identity=False) self.drop_path = build_dropout( dict(type='DropPath', drop_prob=drop_path_rate) ) if drop_path_rate > 0. else nn.Identity() def forward(self, x, hw_shape): x = x + self.drop_path(self.attn(self.norm1(x), hw_shape)) x = x + self.drop_path(self.ffn(self.norm2(x))) return x class ConditionalPositionEncoding(BaseModule): """The Conditional Position Encoding (CPE) module. The CPE is the implementation of 'Conditional Positional Encodings for Vision Transformers <https://arxiv.org/abs/2102.10882>'_. Args: in_channels (int): Number of input channels. embed_dims (int): The feature dimension. Default: 768. stride (int): Stride of conv layer. Default: 1. """ def __init__(self, in_channels, embed_dims=768, stride=1, init_cfg=None): super(ConditionalPositionEncoding, self).__init__(init_cfg=init_cfg) self.proj = nn.Conv2d( in_channels, embed_dims, kernel_size=3, stride=stride, padding=1, bias=True, groups=embed_dims) self.stride = stride def forward(self, x, hw_shape): b, n, c = x.shape h, w = hw_shape feat_token = x cnn_feat = feat_token.transpose(1, 2).view(b, c, h, w) if self.stride == 1: x = self.proj(cnn_feat) + cnn_feat else: x = self.proj(cnn_feat) x = x.flatten(2).transpose(1, 2) return x @BACKBONES.register_module() class PCPVT(BaseModule): """The backbone of Twins-PCPVT. This backbone is the implementation of `Twins: Revisiting the Design of Spatial Attention in Vision Transformers <https://arxiv.org/abs/1512.03385>`_. Args: in_channels (int): Number of input channels. Default: 3. embed_dims (list): Embedding dimension. Default: [64, 128, 256, 512]. patch_sizes (list): The patch sizes. Default: [4, 2, 2, 2]. strides (list): The strides. Default: [4, 2, 2, 2]. num_heads (int): Number of attention heads. Default: [1, 2, 4, 8]. mlp_ratios (int): Ratio of mlp hidden dim to embedding dim. Default: [4, 4, 4, 4]. out_indices (tuple[int]): Output from which stages. Default: (0, 1, 2, 3). qkv_bias (bool): Enable bias for qkv if True. Default: False. drop_rate (float): Probability of an element to be zeroed. Default 0. attn_drop_rate (float): The drop out rate for attention layer. Default 0.0 drop_path_rate (float): Stochastic depth rate. Default 0.0 norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN') depths (list): Depths of each stage. Default [3, 4, 6, 3] sr_ratios (list): Kernel_size of conv in each Attn module in Transformer encoder layer. Default: [8, 4, 2, 1]. norm_after_stage(bool): Add extra norm. Default False. init_cfg (dict, optional): The Config for initialization. Defaults to None. 
""" def __init__(self, in_channels=3, embed_dims=[64, 128, 256, 512], patch_sizes=[4, 2, 2, 2], strides=[4, 2, 2, 2], num_heads=[1, 2, 4, 8], mlp_ratios=[4, 4, 4, 4], out_indices=(0, 1, 2, 3), qkv_bias=False, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_cfg=dict(type='LN'), depths=[3, 4, 6, 3], sr_ratios=[8, 4, 2, 1], norm_after_stage=False, pretrained=None, init_cfg=None): super(PCPVT, self).__init__(init_cfg=init_cfg) assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be set at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is not None: raise TypeError('pretrained must be a str or None') self.depths = depths # patch_embed self.patch_embeds = ModuleList() self.position_encoding_drops = ModuleList() self.layers = ModuleList() for i in range(len(depths)): self.patch_embeds.append( PatchEmbed( in_channels=in_channels if i == 0 else embed_dims[i - 1], embed_dims=embed_dims[i], conv_type='Conv2d', kernel_size=patch_sizes[i], stride=strides[i], padding='corner', norm_cfg=norm_cfg)) self.position_encoding_drops.append(nn.Dropout(p=drop_rate)) self.position_encodings = ModuleList([ ConditionalPositionEncoding(embed_dim, embed_dim) for embed_dim in embed_dims ]) # transformer encoder dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) ] # stochastic depth decay rule cur = 0 for k in range(len(depths)): _block = ModuleList([ GSAEncoderLayer( embed_dims=embed_dims[k], num_heads=num_heads[k], feedforward_channels=mlp_ratios[k] * embed_dims[k], attn_drop_rate=attn_drop_rate, drop_rate=drop_rate, drop_path_rate=dpr[cur + i], num_fcs=2, qkv_bias=qkv_bias, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), sr_ratio=sr_ratios[k]) for i in range(depths[k]) ]) self.layers.append(_block) cur += depths[k] self.norm_name, norm = build_norm_layer( norm_cfg, embed_dims[-1], postfix=1) self.out_indices = out_indices self.norm_after_stage = norm_after_stage if self.norm_after_stage: self.norm_list = ModuleList() for dim in embed_dims: self.norm_list.append(build_norm_layer(norm_cfg, dim)[1]) def init_weights(self): if self.init_cfg is not None: super(PCPVT, self).init_weights() else: for m in self.modules(): if isinstance(m, nn.Linear): trunc_normal_init(m, std=.02, bias=0.) elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): constant_init(m, val=1.0, bias=0.) elif isinstance(m, nn.Conv2d): fan_out = m.kernel_size[0] * m.kernel_size[ 1] * m.out_channels fan_out //= m.groups normal_init( m, mean=0, std=math.sqrt(2.0 / fan_out), bias=0) def forward(self, x): outputs = list() b = x.shape[0] for i in range(len(self.depths)): x, hw_shape = self.patch_embeds[i](x) h, w = hw_shape x = self.position_encoding_drops[i](x) for j, blk in enumerate(self.layers[i]): x = blk(x, hw_shape) if j == 0: x = self.position_encodings[i](x, hw_shape) if self.norm_after_stage: x = self.norm_list[i](x) x = x.reshape(b, h, w, -1).permute(0, 3, 1, 2).contiguous() if i in self.out_indices: outputs.append(x) return tuple(outputs) @BACKBONES.register_module() class SVT(PCPVT): """The backbone of Twins-SVT. This backbone is the implementation of `Twins: Revisiting the Design of Spatial Attention in Vision Transformers <https://arxiv.org/abs/1512.03385>`_. Args: in_channels (int): Number of input channels. Default: 3. embed_dims (list): Embedding dimension. Default: [64, 128, 256, 512]. 
patch_sizes (list): The patch sizes. Default: [4, 2, 2, 2]. strides (list): The strides. Default: [4, 2, 2, 2]. num_heads (int): Number of attention heads. Default: [1, 2, 4]. mlp_ratios (int): Ratio of mlp hidden dim to embedding dim. Default: [4, 4, 4]. out_indices (tuple[int]): Output from which stages. Default: (0, 1, 2, 3). qkv_bias (bool): Enable bias for qkv if True. Default: False. drop_rate (float): Dropout rate. Default 0. attn_drop_rate (float): Dropout ratio of attention weight. Default 0.0 drop_path_rate (float): Stochastic depth rate. Default 0.2. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN') depths (list): Depths of each stage. Default [4, 4, 4]. sr_ratios (list): Kernel_size of conv in each Attn module in Transformer encoder layer. Default: [4, 2, 1]. windiow_sizes (list): Window size of LSA. Default: [7, 7, 7], input_features_slice(bool): Input features need slice. Default: False. norm_after_stage(bool): Add extra norm. Default False. strides (list): Strides in patch-Embedding modules. Default: (2, 2, 2) init_cfg (dict, optional): The Config for initialization. Defaults to None. """ def __init__(self, in_channels=3, embed_dims=[64, 128, 256], patch_sizes=[4, 2, 2, 2], strides=[4, 2, 2, 2], num_heads=[1, 2, 4], mlp_ratios=[4, 4, 4], out_indices=(0, 1, 2, 3), qkv_bias=False, drop_rate=0., attn_drop_rate=0., drop_path_rate=0.2, norm_cfg=dict(type='LN'), depths=[4, 4, 4], sr_ratios=[4, 2, 1], windiow_sizes=[7, 7, 7], norm_after_stage=True, pretrained=None, init_cfg=None): super(SVT, self).__init__(in_channels, embed_dims, patch_sizes, strides, num_heads, mlp_ratios, out_indices, qkv_bias, drop_rate, attn_drop_rate, drop_path_rate, norm_cfg, depths, sr_ratios, norm_after_stage, pretrained, init_cfg) # transformer encoder dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) ] # stochastic depth decay rule for k in range(len(depths)): for i in range(depths[k]): if i % 2 == 0: self.layers[k][i] = \ LSAEncoderLayer( embed_dims=embed_dims[k], num_heads=num_heads[k], feedforward_channels=mlp_ratios[k] * embed_dims[k], drop_rate=drop_rate, attn_drop_rate=attn_drop_rate, drop_path_rate=dpr[sum(depths[:k])+i], qkv_bias=qkv_bias, window_size=windiow_sizes[k])
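

# --- Illustrative usage sketch (not part of the original file) ---
# A minimal smoke test for the Twins-PCPVT backbone defined above, relying on
# the argument defaults (embed_dims=[64, 128, 256, 512], depths=[3, 4, 6, 3]).
# The input resolution is only an example; real training settings live in the
# repository configs, and SVT is constructed analogously with its own
# defaults.
if __name__ == '__main__':
    model = PCPVT(norm_after_stage=True).eval()
    with torch.no_grad():
        # Feature maps with strides 4, 8, 16 and 32
        # (channels 64/128/256/512 with the default ``embed_dims``).
        for out in model(torch.rand(1, 3, 224, 224)):
            print(tuple(out.shape))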
23,822
39.44652
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/unet.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings import torch.nn as nn import torch.utils.checkpoint as cp from mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, build_norm_layer) from mmcv.runner import BaseModule from mmcv.utils.parrots_wrapper import _BatchNorm from mmseg.ops import Upsample from ..builder import BACKBONES from ..utils import UpConvBlock class BasicConvBlock(nn.Module): """Basic convolutional block for UNet. This module consists of several plain convolutional layers. Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. num_convs (int): Number of convolutional layers. Default: 2. stride (int): Whether use stride convolution to downsample the input feature map. If stride=2, it only uses stride convolution in the first convolutional layer to downsample the input feature map. Options are 1 or 2. Default: 1. dilation (int): Whether use dilated convolution to expand the receptive field. Set dilation rate of each convolutional layer and the dilation rate of the first convolutional layer is always 1. Default: 1. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. conv_cfg (dict | None): Config dict for convolution layer. Default: None. norm_cfg (dict | None): Config dict for normalization layer. Default: dict(type='BN'). act_cfg (dict | None): Config dict for activation layer in ConvModule. Default: dict(type='ReLU'). dcn (bool): Use deformable convolution in convolutional layer or not. Default: None. plugins (dict): plugins for convolutional layers. Default: None. """ def __init__(self, in_channels, out_channels, num_convs=2, stride=1, dilation=1, with_cp=False, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), dcn=None, plugins=None): super(BasicConvBlock, self).__init__() assert dcn is None, 'Not implemented yet.' assert plugins is None, 'Not implemented yet.' self.with_cp = with_cp convs = [] for i in range(num_convs): convs.append( ConvModule( in_channels=in_channels if i == 0 else out_channels, out_channels=out_channels, kernel_size=3, stride=stride if i == 0 else 1, dilation=1 if i == 0 else dilation, padding=1 if i == 0 else dilation, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) self.convs = nn.Sequential(*convs) def forward(self, x): """Forward function.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self.convs, x) else: out = self.convs(x) return out @UPSAMPLE_LAYERS.register_module() class DeconvModule(nn.Module): """Deconvolution upsample module in decoder for UNet (2X upsample). This module uses deconvolution to upsample feature map in the decoder of UNet. Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. norm_cfg (dict | None): Config dict for normalization layer. Default: dict(type='BN'). act_cfg (dict | None): Config dict for activation layer in ConvModule. Default: dict(type='ReLU'). kernel_size (int): Kernel size of the convolutional layer. Default: 4. 
""" def __init__(self, in_channels, out_channels, with_cp=False, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), *, kernel_size=4, scale_factor=2): super(DeconvModule, self).__init__() assert (kernel_size - scale_factor >= 0) and\ (kernel_size - scale_factor) % 2 == 0,\ f'kernel_size should be greater than or equal to scale_factor '\ f'and (kernel_size - scale_factor) should be even numbers, '\ f'while the kernel size is {kernel_size} and scale_factor is '\ f'{scale_factor}.' stride = scale_factor padding = (kernel_size - scale_factor) // 2 self.with_cp = with_cp deconv = nn.ConvTranspose2d( in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding) norm_name, norm = build_norm_layer(norm_cfg, out_channels) activate = build_activation_layer(act_cfg) self.deconv_upsamping = nn.Sequential(deconv, norm, activate) def forward(self, x): """Forward function.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self.deconv_upsamping, x) else: out = self.deconv_upsamping(x) return out @UPSAMPLE_LAYERS.register_module() class InterpConv(nn.Module): """Interpolation upsample module in decoder for UNet. This module uses interpolation to upsample feature map in the decoder of UNet. It consists of one interpolation upsample layer and one convolutional layer. It can be one interpolation upsample layer followed by one convolutional layer (conv_first=False) or one convolutional layer followed by one interpolation upsample layer (conv_first=True). Args: in_channels (int): Number of input channels. out_channels (int): Number of output channels. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. norm_cfg (dict | None): Config dict for normalization layer. Default: dict(type='BN'). act_cfg (dict | None): Config dict for activation layer in ConvModule. Default: dict(type='ReLU'). conv_cfg (dict | None): Config dict for convolution layer. Default: None. conv_first (bool): Whether convolutional layer or interpolation upsample layer first. Default: False. It means interpolation upsample layer followed by one convolutional layer. kernel_size (int): Kernel size of the convolutional layer. Default: 1. stride (int): Stride of the convolutional layer. Default: 1. padding (int): Padding of the convolutional layer. Default: 1. upsample_cfg (dict): Interpolation config of the upsample layer. Default: dict( scale_factor=2, mode='bilinear', align_corners=False). """ def __init__(self, in_channels, out_channels, with_cp=False, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), *, conv_cfg=None, conv_first=False, kernel_size=1, stride=1, padding=0, upsample_cfg=dict( scale_factor=2, mode='bilinear', align_corners=False)): super(InterpConv, self).__init__() self.with_cp = with_cp conv = ConvModule( in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) upsample = Upsample(**upsample_cfg) if conv_first: self.interp_upsample = nn.Sequential(conv, upsample) else: self.interp_upsample = nn.Sequential(upsample, conv) def forward(self, x): """Forward function.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self.interp_upsample, x) else: out = self.interp_upsample(x) return out @BACKBONES.register_module() class UNet(BaseModule): """UNet backbone. This backbone is the implementation of `U-Net: Convolutional Networks for Biomedical Image Segmentation <https://arxiv.org/abs/1505.04597>`_. 
Args: in_channels (int): Number of input image channels. Default" 3. base_channels (int): Number of base channels of each stage. The output channels of the first stage. Default: 64. num_stages (int): Number of stages in encoder, normally 5. Default: 5. strides (Sequence[int 1 | 2]): Strides of each stage in encoder. len(strides) is equal to num_stages. Normally the stride of the first stage in encoder is 1. If strides[i]=2, it uses stride convolution to downsample in the correspondence encoder stage. Default: (1, 1, 1, 1, 1). enc_num_convs (Sequence[int]): Number of convolutional layers in the convolution block of the correspondence encoder stage. Default: (2, 2, 2, 2, 2). dec_num_convs (Sequence[int]): Number of convolutional layers in the convolution block of the correspondence decoder stage. Default: (2, 2, 2, 2). downsamples (Sequence[int]): Whether use MaxPool to downsample the feature map after the first stage of encoder (stages: [1, num_stages)). If the correspondence encoder stage use stride convolution (strides[i]=2), it will never use MaxPool to downsample, even downsamples[i-1]=True. Default: (True, True, True, True). enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. Default: (1, 1, 1, 1, 1). dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. Default: (1, 1, 1, 1). with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. conv_cfg (dict | None): Config dict for convolution layer. Default: None. norm_cfg (dict | None): Config dict for normalization layer. Default: dict(type='BN'). act_cfg (dict | None): Config dict for activation layer in ConvModule. Default: dict(type='ReLU'). upsample_cfg (dict): The upsample config of the upsample module in decoder. Default: dict(type='InterpConv'). norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. dcn (bool): Use deformable convolution in convolutional layer or not. Default: None. plugins (dict): plugins for convolutional layers. Default: None. pretrained (str, optional): model pretrained path. Default: None init_cfg (dict or list[dict], optional): Initialization config dict. Default: None Notice: The input image size should be divisible by the whole downsample rate of the encoder. More detail of the whole downsample rate can be found in UNet._check_input_divisible. 
""" def __init__(self, in_channels=3, base_channels=64, num_stages=5, strides=(1, 1, 1, 1, 1), enc_num_convs=(2, 2, 2, 2, 2), dec_num_convs=(2, 2, 2, 2), downsamples=(True, True, True, True), enc_dilations=(1, 1, 1, 1, 1), dec_dilations=(1, 1, 1, 1), with_cp=False, conv_cfg=None, norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'), upsample_cfg=dict(type='InterpConv'), norm_eval=False, dcn=None, plugins=None, pretrained=None, init_cfg=None): super(UNet, self).__init__(init_cfg) self.pretrained = pretrained assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be setting at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is a deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is None: if init_cfg is None: self.init_cfg = [ dict(type='Kaiming', layer='Conv2d'), dict( type='Constant', val=1, layer=['_BatchNorm', 'GroupNorm']) ] else: raise TypeError('pretrained must be a str or None') assert dcn is None, 'Not implemented yet.' assert plugins is None, 'Not implemented yet.' assert len(strides) == num_stages, \ 'The length of strides should be equal to num_stages, '\ f'while the strides is {strides}, the length of '\ f'strides is {len(strides)}, and the num_stages is '\ f'{num_stages}.' assert len(enc_num_convs) == num_stages, \ 'The length of enc_num_convs should be equal to num_stages, '\ f'while the enc_num_convs is {enc_num_convs}, the length of '\ f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ f'{num_stages}.' assert len(dec_num_convs) == (num_stages-1), \ 'The length of dec_num_convs should be equal to (num_stages-1), '\ f'while the dec_num_convs is {dec_num_convs}, the length of '\ f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ f'{num_stages}.' assert len(downsamples) == (num_stages-1), \ 'The length of downsamples should be equal to (num_stages-1), '\ f'while the downsamples is {downsamples}, the length of '\ f'downsamples is {len(downsamples)}, and the num_stages is '\ f'{num_stages}.' assert len(enc_dilations) == num_stages, \ 'The length of enc_dilations should be equal to num_stages, '\ f'while the enc_dilations is {enc_dilations}, the length of '\ f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ f'{num_stages}.' assert len(dec_dilations) == (num_stages-1), \ 'The length of dec_dilations should be equal to (num_stages-1), '\ f'while the dec_dilations is {dec_dilations}, the length of '\ f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ f'{num_stages}.' 
self.num_stages = num_stages self.strides = strides self.downsamples = downsamples self.norm_eval = norm_eval self.base_channels = base_channels self.encoder = nn.ModuleList() self.decoder = nn.ModuleList() for i in range(num_stages): enc_conv_block = [] if i != 0: if strides[i] == 1 and downsamples[i - 1]: enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) upsample = (strides[i] != 1 or downsamples[i - 1]) self.decoder.append( UpConvBlock( conv_block=BasicConvBlock, in_channels=base_channels * 2**i, skip_channels=base_channels * 2**(i - 1), out_channels=base_channels * 2**(i - 1), num_convs=dec_num_convs[i - 1], stride=1, dilation=dec_dilations[i - 1], with_cp=with_cp, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg, upsample_cfg=upsample_cfg if upsample else None, dcn=None, plugins=None)) enc_conv_block.append( BasicConvBlock( in_channels=in_channels, out_channels=base_channels * 2**i, num_convs=enc_num_convs[i], stride=strides[i], dilation=enc_dilations[i], with_cp=with_cp, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg, dcn=None, plugins=None)) self.encoder.append((nn.Sequential(*enc_conv_block))) in_channels = base_channels * 2**i def forward(self, x): self._check_input_divisible(x) enc_outs = [] for enc in self.encoder: x = enc(x) enc_outs.append(x) dec_outs = [x] for i in reversed(range(len(self.decoder))): x = self.decoder[i](enc_outs[i], x) dec_outs.append(x) return dec_outs def train(self, mode=True): """Convert the model into training mode while keep normalization layer freezed.""" super(UNet, self).train(mode) if mode and self.norm_eval: for m in self.modules(): # trick: eval have effect on BatchNorm only if isinstance(m, _BatchNorm): m.eval() def _check_input_divisible(self, x): h, w = x.shape[-2:] whole_downsample_rate = 1 for i in range(1, self.num_stages): if self.strides[i] == 2 or self.downsamples[i - 1]: whole_downsample_rate *= 2 assert (h % whole_downsample_rate == 0) \ and (w % whole_downsample_rate == 0),\ f'The input image size {(h, w)} should be divisible by the whole '\ f'downsample rate {whole_downsample_rate}, when num_stages is '\ f'{self.num_stages}, strides is {self.strides}, and downsamples '\ f'is {self.downsamples}.'
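A minimal usage sketch of the UNet backbone above (assuming mmseg and its dependencies are installed; `base_channels=4` is an illustrative small value). The default 5-stage configuration has a whole downsample rate of 2**4 = 16, so the input height and width must be divisible by 16.

```python
import torch

from mmseg.models.backbones import UNet

# Tiny width only for the demo; the default is base_channels=64.
backbone = UNet(in_channels=3, base_channels=4)
backbone.eval()
with torch.no_grad():
    dec_outs = backbone(torch.randn(1, 3, 64, 64))  # 64 is divisible by 16
# One output per decoder stage, from the coarsest to the finest resolution.
for feat in dec_outs:
    print(feat.shape)
```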
18,611
41.396355
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/backbones/vit.py
# Copyright (c) OpenMMLab. All rights reserved. import math import warnings import torch import torch.nn as nn import torch.utils.checkpoint as cp from mmcv.cnn import build_norm_layer from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention from mmcv.cnn.utils.weight_init import (constant_init, kaiming_init, trunc_normal_) from mmcv.runner import (BaseModule, CheckpointLoader, ModuleList, load_state_dict) from torch.nn.modules.batchnorm import _BatchNorm from torch.nn.modules.utils import _pair as to_2tuple from mmseg.ops import resize from mmseg.utils import get_root_logger from ..builder import BACKBONES from ..utils import PatchEmbed class TransformerEncoderLayer(BaseModule): """Implements one encoder layer in Vision Transformer. Args: embed_dims (int): The feature dimension. num_heads (int): Parallel attention heads. feedforward_channels (int): The hidden dimension for FFNs. drop_rate (float): Probability of an element to be zeroed after the feed forward layer. Default: 0.0. attn_drop_rate (float): The drop out rate for attention layer. Default: 0.0. drop_path_rate (float): stochastic depth rate. Default 0.0. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. qkv_bias (bool): enable bias for qkv if True. Default: True act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN'). batch_first (bool): Key, Query and Value are shape of (batch, n, embed_dim) or (n, batch, embed_dim). Default: True. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. """ def __init__(self, embed_dims, num_heads, feedforward_channels, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., num_fcs=2, qkv_bias=True, act_cfg=dict(type='GELU'), norm_cfg=dict(type='LN'), batch_first=True, attn_cfg=dict(), ffn_cfg=dict(), with_cp=False): super(TransformerEncoderLayer, self).__init__() self.norm1_name, norm1 = build_norm_layer( norm_cfg, embed_dims, postfix=1) self.add_module(self.norm1_name, norm1) attn_cfg.update( dict( embed_dims=embed_dims, num_heads=num_heads, attn_drop=attn_drop_rate, proj_drop=drop_rate, batch_first=batch_first, bias=qkv_bias)) self.build_attn(attn_cfg) self.norm2_name, norm2 = build_norm_layer( norm_cfg, embed_dims, postfix=2) self.add_module(self.norm2_name, norm2) ffn_cfg.update( dict( embed_dims=embed_dims, feedforward_channels=feedforward_channels, num_fcs=num_fcs, ffn_drop=drop_rate, dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate) if drop_path_rate > 0 else None, act_cfg=act_cfg)) self.build_ffn(ffn_cfg) self.with_cp = with_cp def build_attn(self, attn_cfg): self.attn = MultiheadAttention(**attn_cfg) def build_ffn(self, ffn_cfg): self.ffn = FFN(**ffn_cfg) @property def norm1(self): return getattr(self, self.norm1_name) @property def norm2(self): return getattr(self, self.norm2_name) def forward(self, x): def _inner_forward(x): x = self.attn(self.norm1(x), identity=x) x = self.ffn(self.norm2(x), identity=x) return x if self.with_cp and x.requires_grad: x = cp.checkpoint(_inner_forward, x) else: x = _inner_forward(x) return x @BACKBONES.register_module() class VisionTransformer(BaseModule): """Vision Transformer. This backbone is the implementation of `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`_. Args: img_size (int | tuple): Input image size. Default: 224. patch_size (int): The patch size. Default: 16. 
in_channels (int): Number of input channels. Default: 3. embed_dims (int): embedding dimension. Default: 768. num_layers (int): depth of transformer. Default: 12. num_heads (int): number of attention heads. Default: 12. mlp_ratio (int): ratio of mlp hidden dim to embedding dim. Default: 4. out_indices (list | tuple | int): Output from which stages. Default: -1. qkv_bias (bool): enable bias for qkv if True. Default: True. drop_rate (float): Probability of an element to be zeroed. Default 0.0 attn_drop_rate (float): The drop out rate for attention layer. Default 0.0 drop_path_rate (float): stochastic depth rate. Default 0.0 with_cls_token (bool): Whether concatenating class token into image tokens as transformer input. Default: True. output_cls_token (bool): Whether output the cls_token. If set True, `with_cls_token` must be True. Default: False. norm_cfg (dict): Config dict for normalization layer. Default: dict(type='LN') act_cfg (dict): The activation config for FFNs. Default: dict(type='GELU'). patch_norm (bool): Whether to add a norm in PatchEmbed Block. Default: False. final_norm (bool): Whether to add a additional layer to normalize final feature map. Default: False. interpolate_mode (str): Select the interpolate mode for position embeding vector resize. Default: bicubic. num_fcs (int): The number of fully-connected layers for FFNs. Default: 2. norm_eval (bool): Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False. with_cp (bool): Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False. pretrained (str, optional): model pretrained path. Default: None. init_cfg (dict or list[dict], optional): Initialization config dict. Default: None. 
""" def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dims=768, num_layers=12, num_heads=12, mlp_ratio=4, out_indices=-1, qkv_bias=True, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., with_cls_token=True, output_cls_token=False, norm_cfg=dict(type='LN'), act_cfg=dict(type='GELU'), patch_norm=False, final_norm=False, interpolate_mode='bicubic', num_fcs=2, norm_eval=False, with_cp=False, pretrained=None, init_cfg=None): super(VisionTransformer, self).__init__(init_cfg=init_cfg) if isinstance(img_size, int): img_size = to_2tuple(img_size) elif isinstance(img_size, tuple): if len(img_size) == 1: img_size = to_2tuple(img_size[0]) assert len(img_size) == 2, \ f'The size of image should have length 1 or 2, ' \ f'but got {len(img_size)}' if output_cls_token: assert with_cls_token is True, f'with_cls_token must be True if' \ f'set output_cls_token to True, but got {with_cls_token}' assert not (init_cfg and pretrained), \ 'init_cfg and pretrained cannot be set at the same time' if isinstance(pretrained, str): warnings.warn('DeprecationWarning: pretrained is deprecated, ' 'please use "init_cfg" instead') self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) elif pretrained is not None: raise TypeError('pretrained must be a str or None') self.img_size = img_size self.patch_size = patch_size self.interpolate_mode = interpolate_mode self.norm_eval = norm_eval self.with_cp = with_cp self.pretrained = pretrained self.patch_embed = PatchEmbed( in_channels=in_channels, embed_dims=embed_dims, conv_type='Conv2d', kernel_size=patch_size, stride=patch_size, padding='corner', norm_cfg=norm_cfg if patch_norm else None, init_cfg=None, ) num_patches = (img_size[0] // patch_size) * \ (img_size[1] // patch_size) self.with_cls_token = with_cls_token self.output_cls_token = output_cls_token self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dims)) self.pos_embed = nn.Parameter( torch.zeros(1, num_patches + 1, embed_dims)) self.drop_after_pos = nn.Dropout(p=drop_rate) if isinstance(out_indices, int): if out_indices == -1: out_indices = num_layers - 1 self.out_indices = [out_indices] elif isinstance(out_indices, list) or isinstance(out_indices, tuple): self.out_indices = out_indices else: raise TypeError('out_indices must be type of int, list or tuple') dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, num_layers) ] # stochastic depth decay rule self.layers = ModuleList() for i in range(num_layers): self.layers.append( TransformerEncoderLayer( embed_dims=embed_dims, num_heads=num_heads, feedforward_channels=mlp_ratio * embed_dims, attn_drop_rate=attn_drop_rate, drop_rate=drop_rate, drop_path_rate=dpr[i], num_fcs=num_fcs, qkv_bias=qkv_bias, act_cfg=act_cfg, norm_cfg=norm_cfg, with_cp=with_cp, batch_first=True)) self.final_norm = final_norm if final_norm: self.norm1_name, norm1 = build_norm_layer( norm_cfg, embed_dims, postfix=1) self.add_module(self.norm1_name, norm1) @property def norm1(self): return getattr(self, self.norm1_name) def init_weights(self): if (isinstance(self.init_cfg, dict) and self.init_cfg.get('type') == 'Pretrained'): logger = get_root_logger() checkpoint = CheckpointLoader.load_checkpoint( self.init_cfg['checkpoint'], logger=logger, map_location='cpu') if 'state_dict' in checkpoint: state_dict = checkpoint['state_dict'] else: state_dict = checkpoint if 'pos_embed' in state_dict.keys(): if self.pos_embed.shape != state_dict['pos_embed'].shape: logger.info(msg=f'Resize the pos_embed shape from ' f'{state_dict["pos_embed"].shape} to ' 
f'{self.pos_embed.shape}') h, w = self.img_size pos_size = int( math.sqrt(state_dict['pos_embed'].shape[1] - 1)) state_dict['pos_embed'] = self.resize_pos_embed( state_dict['pos_embed'], (h // self.patch_size, w // self.patch_size), (pos_size, pos_size), self.interpolate_mode) load_state_dict(self, state_dict, strict=False, logger=logger) elif self.init_cfg is not None: super(VisionTransformer, self).init_weights() else: # We only implement the 'jax_impl' initialization implemented at # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 trunc_normal_(self.pos_embed, std=.02) trunc_normal_(self.cls_token, std=.02) for n, m in self.named_modules(): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if m.bias is not None: if 'ffn' in n: nn.init.normal_(m.bias, mean=0., std=1e-6) else: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.Conv2d): kaiming_init(m, mode='fan_in', bias=0.) elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): constant_init(m, val=1.0, bias=0.) def _pos_embeding(self, patched_img, hw_shape, pos_embed): """Positioning embeding method. Resize the pos_embed, if the input image size doesn't match the training size. Args: patched_img (torch.Tensor): The patched image, it should be shape of [B, L1, C]. hw_shape (tuple): The downsampled image resolution. pos_embed (torch.Tensor): The pos_embed weighs, it should be shape of [B, L2, c]. Return: torch.Tensor: The pos encoded image feature. """ assert patched_img.ndim == 3 and pos_embed.ndim == 3, \ 'the shapes of patched_img and pos_embed must be [B, L, C]' x_len, pos_len = patched_img.shape[1], pos_embed.shape[1] if x_len != pos_len: if pos_len == (self.img_size[0] // self.patch_size) * ( self.img_size[1] // self.patch_size) + 1: pos_h = self.img_size[0] // self.patch_size pos_w = self.img_size[1] // self.patch_size else: raise ValueError( 'Unexpected shape of pos_embed, got {}.'.format( pos_embed.shape)) pos_embed = self.resize_pos_embed(pos_embed, hw_shape, (pos_h, pos_w), self.interpolate_mode) return self.drop_after_pos(patched_img + pos_embed) @staticmethod def resize_pos_embed(pos_embed, input_shpae, pos_shape, mode): """Resize pos_embed weights. Resize pos_embed using bicubic interpolate method. Args: pos_embed (torch.Tensor): Position embedding weights. input_shpae (tuple): Tuple for (downsampled input image height, downsampled input image width). pos_shape (tuple): The resolution of downsampled origin training image. mode (str): Algorithm used for upsampling: ``'nearest'`` | ``'linear'`` | ``'bilinear'`` | ``'bicubic'`` | ``'trilinear'``. 
Default: ``'nearest'`` Return: torch.Tensor: The resized pos_embed of shape [B, L_new, C] """ assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' pos_h, pos_w = pos_shape # keep dim for easy deployment cls_token_weight = pos_embed[:, 0:1] pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] pos_embed_weight = pos_embed_weight.reshape( 1, pos_h, pos_w, pos_embed.shape[2]).permute(0, 3, 1, 2) pos_embed_weight = resize( pos_embed_weight, size=input_shpae, align_corners=False, mode=mode) pos_embed_weight = torch.flatten(pos_embed_weight, 2).transpose(1, 2) pos_embed = torch.cat((cls_token_weight, pos_embed_weight), dim=1) return pos_embed def forward(self, inputs): B = inputs.shape[0] x, hw_shape = self.patch_embed(inputs) # stole cls_tokens impl from Phil Wang, thanks cls_tokens = self.cls_token.expand(B, -1, -1) x = torch.cat((cls_tokens, x), dim=1) x = self._pos_embeding(x, hw_shape, self.pos_embed) if not self.with_cls_token: # Remove class token for transformer encoder input x = x[:, 1:] outs = [] for i, layer in enumerate(self.layers): x = layer(x) if i == len(self.layers) - 1: if self.final_norm: x = self.norm1(x) if i in self.out_indices: if self.with_cls_token: # Remove class token and reshape token for decoder head out = x[:, 1:] else: out = x B, _, C = out.shape out = out.reshape(B, hw_shape[0], hw_shape[1], C).permute(0, 3, 1, 2).contiguous() if self.output_cls_token: out = [out, x[:, 0]] outs.append(out) return tuple(outs) def train(self, mode=True): super(VisionTransformer, self).train(mode) if mode and self.norm_eval: for m in self.modules(): if isinstance(m, nn.LayerNorm): m.eval()
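A minimal sketch of the VisionTransformer backbone above, with a deliberately tiny, illustrative configuration (real configs use e.g. `embed_dims=768`, `num_layers=12`). A 224x224 input with 16x16 patches yields a 14x14 token grid; by default only the last layer's feature map is returned.

```python
import torch

from mmseg.models.backbones import VisionTransformer

vit = VisionTransformer(
    img_size=224, patch_size=16, embed_dims=192, num_layers=2, num_heads=3)
outs = vit(torch.randn(1, 3, 224, 224))
print(outs[0].shape)  # expected: (1, 192, 14, 14)
```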
17,876
39.537415
128
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/__init__.py
# Copyright (c) OpenMMLab. All rights reserved. from .ann_head import ANNHead from .apc_head import APCHead from .aspp_head import ASPPHead from .cc_head import CCHead from .da_head import DAHead from .dm_head import DMHead from .dnl_head import DNLHead from .dpt_head import DPTHead from .ema_head import EMAHead from .enc_head import EncHead from .fcn_head import FCNHead from .fpn_head import FPNHead from .gc_head import GCHead from .ham_head import LightHamHead from .isa_head import ISAHead from .knet_head import IterativeDecodeHead, KernelUpdateHead, KernelUpdator from .lraspp_head import LRASPPHead from .nl_head import NLHead from .ocr_head import OCRHead from .point_head import PointHead from .psa_head import PSAHead from .psp_head import PSPHead from .segformer_head import SegformerHead from .segmenter_mask_head import SegmenterMaskTransformerHead from .sep_aspp_head import DepthwiseSeparableASPPHead from .sep_fcn_head import DepthwiseSeparableFCNHead from .setr_mla_head import SETRMLAHead from .setr_up_head import SETRUPHead from .stdc_head import STDCHead from .uper_head import UPerHead __all__ = [ 'FCNHead', 'PSPHead', 'ASPPHead', 'PSAHead', 'NLHead', 'GCHead', 'CCHead', 'UPerHead', 'DepthwiseSeparableASPPHead', 'ANNHead', 'DAHead', 'OCRHead', 'EncHead', 'DepthwiseSeparableFCNHead', 'FPNHead', 'EMAHead', 'DNLHead', 'PointHead', 'APCHead', 'DMHead', 'LRASPPHead', 'SETRUPHead', 'SETRMLAHead', 'DPTHead', 'SETRMLAHead', 'SegmenterMaskTransformerHead', 'SegformerHead', 'ISAHead', 'STDCHead', 'IterativeDecodeHead', 'KernelUpdateHead', 'KernelUpdator', 'LightHamHead' ]
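The heads exported above are normally not instantiated by hand; they are built from config dicts through the HEADS registry. A minimal sketch (the field values are illustrative only):

```python
from mmseg.models import build_head

head = build_head(
    dict(type='FCNHead', in_channels=64, channels=32, num_classes=19))
print(type(head).__name__)  # FCNHead
```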
1,626
37.738095
78
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/ann_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import ConvModule from ..builder import HEADS from ..utils import SelfAttentionBlock as _SelfAttentionBlock from .decode_head import BaseDecodeHead class PPMConcat(nn.ModuleList): """Pyramid Pooling Module that only concat the features of each layer. Args: pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid Module. """ def __init__(self, pool_scales=(1, 3, 6, 8)): super(PPMConcat, self).__init__( [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales]) def forward(self, feats): """Forward function.""" ppm_outs = [] for ppm in self: ppm_out = ppm(feats) ppm_outs.append(ppm_out.view(*feats.shape[:2], -1)) concat_outs = torch.cat(ppm_outs, dim=2) return concat_outs class SelfAttentionBlock(_SelfAttentionBlock): """Make a ANN used SelfAttentionBlock. Args: low_in_channels (int): Input channels of lower level feature, which is the key feature for self-attention. high_in_channels (int): Input channels of higher level feature, which is the query feature for self-attention. channels (int): Output channels of key/query transform. out_channels (int): Output channels. share_key_query (bool): Whether share projection weight between key and query projection. query_scale (int): The scale of query feature map. key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid Module of key feature. conv_cfg (dict|None): Config of conv layers. norm_cfg (dict|None): Config of norm layers. act_cfg (dict|None): Config of activation layers. """ def __init__(self, low_in_channels, high_in_channels, channels, out_channels, share_key_query, query_scale, key_pool_scales, conv_cfg, norm_cfg, act_cfg): key_psp = PPMConcat(key_pool_scales) if query_scale > 1: query_downsample = nn.MaxPool2d(kernel_size=query_scale) else: query_downsample = None super(SelfAttentionBlock, self).__init__( key_in_channels=low_in_channels, query_in_channels=high_in_channels, channels=channels, out_channels=out_channels, share_key_query=share_key_query, query_downsample=query_downsample, key_downsample=key_psp, key_query_num_convs=1, key_query_norm=True, value_out_num_convs=1, value_out_norm=False, matmul_norm=True, with_out=True, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) class AFNB(nn.Module): """Asymmetric Fusion Non-local Block(AFNB) Args: low_in_channels (int): Input channels of lower level feature, which is the key feature for self-attention. high_in_channels (int): Input channels of higher level feature, which is the query feature for self-attention. channels (int): Output channels of key/query transform. out_channels (int): Output channels. and query projection. query_scales (tuple[int]): The scales of query feature map. Default: (1,) key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid Module of key feature. conv_cfg (dict|None): Config of conv layers. norm_cfg (dict|None): Config of norm layers. act_cfg (dict|None): Config of activation layers. 
""" def __init__(self, low_in_channels, high_in_channels, channels, out_channels, query_scales, key_pool_scales, conv_cfg, norm_cfg, act_cfg): super(AFNB, self).__init__() self.stages = nn.ModuleList() for query_scale in query_scales: self.stages.append( SelfAttentionBlock( low_in_channels=low_in_channels, high_in_channels=high_in_channels, channels=channels, out_channels=out_channels, share_key_query=False, query_scale=query_scale, key_pool_scales=key_pool_scales, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) self.bottleneck = ConvModule( out_channels + high_in_channels, out_channels, 1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=None) def forward(self, low_feats, high_feats): """Forward function.""" priors = [stage(high_feats, low_feats) for stage in self.stages] context = torch.stack(priors, dim=0).sum(dim=0) output = self.bottleneck(torch.cat([context, high_feats], 1)) return output class APNB(nn.Module): """Asymmetric Pyramid Non-local Block (APNB) Args: in_channels (int): Input channels of key/query feature, which is the key feature for self-attention. channels (int): Output channels of key/query transform. out_channels (int): Output channels. query_scales (tuple[int]): The scales of query feature map. Default: (1,) key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid Module of key feature. conv_cfg (dict|None): Config of conv layers. norm_cfg (dict|None): Config of norm layers. act_cfg (dict|None): Config of activation layers. """ def __init__(self, in_channels, channels, out_channels, query_scales, key_pool_scales, conv_cfg, norm_cfg, act_cfg): super(APNB, self).__init__() self.stages = nn.ModuleList() for query_scale in query_scales: self.stages.append( SelfAttentionBlock( low_in_channels=in_channels, high_in_channels=in_channels, channels=channels, out_channels=out_channels, share_key_query=True, query_scale=query_scale, key_pool_scales=key_pool_scales, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)) self.bottleneck = ConvModule( 2 * in_channels, out_channels, 1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) def forward(self, feats): """Forward function.""" priors = [stage(feats, feats) for stage in self.stages] context = torch.stack(priors, dim=0).sum(dim=0) output = self.bottleneck(torch.cat([context, feats], 1)) return output @HEADS.register_module() class ANNHead(BaseDecodeHead): """Asymmetric Non-local Neural Networks for Semantic Segmentation. This head is the implementation of `ANNNet <https://arxiv.org/abs/1908.07678>`_. Args: project_channels (int): Projection channels for Nonlocal. query_scales (tuple[int]): The scales of query feature map. Default: (1,) key_pool_scales (tuple[int]): The pooling scales of key feature map. Default: (1, 3, 6, 8). 
""" def __init__(self, project_channels, query_scales=(1, ), key_pool_scales=(1, 3, 6, 8), **kwargs): super(ANNHead, self).__init__( input_transform='multiple_select', **kwargs) assert len(self.in_channels) == 2 low_in_channels, high_in_channels = self.in_channels self.project_channels = project_channels self.fusion = AFNB( low_in_channels=low_in_channels, high_in_channels=high_in_channels, out_channels=high_in_channels, channels=project_channels, query_scales=query_scales, key_pool_scales=key_pool_scales, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.bottleneck = ConvModule( high_in_channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.context = APNB( in_channels=self.channels, out_channels=self.channels, channels=project_channels, query_scales=query_scales, key_pool_scales=key_pool_scales, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def forward(self, inputs): """Forward function.""" low_feats, high_feats = self._transform_inputs(inputs) output = self.fusion(low_feats, high_feats) output = self.dropout(output) output = self.bottleneck(output) output = self.context(output) output = self.cls_seg(output) return output
9,222
36.340081
77
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/apc_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import ConvModule from mmseg.ops import resize from ..builder import HEADS from .decode_head import BaseDecodeHead class ACM(nn.Module): """Adaptive Context Module used in APCNet. Args: pool_scale (int): Pooling scale used in Adaptive Context Module to extract region features. fusion (bool): Add one conv to fuse residual feature. in_channels (int): Input channels. channels (int): Channels after modules, before conv_seg. conv_cfg (dict | None): Config of conv layers. norm_cfg (dict | None): Config of norm layers. act_cfg (dict): Config of activation layers. """ def __init__(self, pool_scale, fusion, in_channels, channels, conv_cfg, norm_cfg, act_cfg): super(ACM, self).__init__() self.pool_scale = pool_scale self.fusion = fusion self.in_channels = in_channels self.channels = channels self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.pooled_redu_conv = ConvModule( self.in_channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.input_redu_conv = ConvModule( self.in_channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.global_info = ConvModule( self.channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.gla = nn.Conv2d(self.channels, self.pool_scale**2, 1, 1, 0) self.residual_conv = ConvModule( self.channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) if self.fusion: self.fusion_conv = ConvModule( self.channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def forward(self, x): """Forward function.""" pooled_x = F.adaptive_avg_pool2d(x, self.pool_scale) # [batch_size, channels, h, w] x = self.input_redu_conv(x) # [batch_size, channels, pool_scale, pool_scale] pooled_x = self.pooled_redu_conv(pooled_x) batch_size = x.size(0) # [batch_size, pool_scale * pool_scale, channels] pooled_x = pooled_x.view(batch_size, self.channels, -1).permute(0, 2, 1).contiguous() # [batch_size, h * w, pool_scale * pool_scale] affinity_matrix = self.gla(x + resize( self.global_info(F.adaptive_avg_pool2d(x, 1)), size=x.shape[2:]) ).permute(0, 2, 3, 1).reshape( batch_size, -1, self.pool_scale**2) affinity_matrix = F.sigmoid(affinity_matrix) # [batch_size, h * w, channels] z_out = torch.matmul(affinity_matrix, pooled_x) # [batch_size, channels, h * w] z_out = z_out.permute(0, 2, 1).contiguous() # [batch_size, channels, h, w] z_out = z_out.view(batch_size, self.channels, x.size(2), x.size(3)) z_out = self.residual_conv(z_out) z_out = F.relu(z_out + x) if self.fusion: z_out = self.fusion_conv(z_out) return z_out @HEADS.register_module() class APCHead(BaseDecodeHead): """Adaptive Pyramid Context Network for Semantic Segmentation. This head is the implementation of `APCNet <https://openaccess.thecvf.com/content_CVPR_2019/papers/\ He_Adaptive_Pyramid_Context_Network_for_Semantic_Segmentation_\ CVPR_2019_paper.pdf>`_. Args: pool_scales (tuple[int]): Pooling scales used in Adaptive Context Module. Default: (1, 2, 3, 6). fusion (bool): Add one conv to fuse residual feature. 
""" def __init__(self, pool_scales=(1, 2, 3, 6), fusion=True, **kwargs): super(APCHead, self).__init__(**kwargs) assert isinstance(pool_scales, (list, tuple)) self.pool_scales = pool_scales self.fusion = fusion acm_modules = [] for pool_scale in self.pool_scales: acm_modules.append( ACM(pool_scale, self.fusion, self.in_channels, self.channels, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg)) self.acm_modules = nn.ModuleList(acm_modules) self.bottleneck = ConvModule( self.in_channels + len(pool_scales) * self.channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def forward(self, inputs): """Forward function.""" x = self._transform_inputs(inputs) acm_outs = [x] for acm_module in self.acm_modules: acm_outs.append(acm_module(x)) acm_outs = torch.cat(acm_outs, dim=1) output = self.bottleneck(acm_outs) output = self.cls_seg(output) return output
5,580
33.88125
76
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/aspp_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn from mmcv.cnn import ConvModule from mmseg.ops import resize from ..builder import HEADS from .decode_head import BaseDecodeHead class ASPPModule(nn.ModuleList): """Atrous Spatial Pyramid Pooling (ASPP) Module. Args: dilations (tuple[int]): Dilation rate of each layer. in_channels (int): Input channels. channels (int): Channels after modules, before conv_seg. conv_cfg (dict|None): Config of conv layers. norm_cfg (dict|None): Config of norm layers. act_cfg (dict): Config of activation layers. """ def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg, act_cfg): super(ASPPModule, self).__init__() self.dilations = dilations self.in_channels = in_channels self.channels = channels self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg for dilation in dilations: self.append( ConvModule( self.in_channels, self.channels, 1 if dilation == 1 else 3, dilation=dilation, padding=0 if dilation == 1 else dilation, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg)) def forward(self, x): """Forward function.""" aspp_outs = [] for aspp_module in self: aspp_outs.append(aspp_module(x)) return aspp_outs @HEADS.register_module() class ASPPHead(BaseDecodeHead): """Rethinking Atrous Convolution for Semantic Image Segmentation. This head is the implementation of `DeepLabV3 <https://arxiv.org/abs/1706.05587>`_. Args: dilations (tuple[int]): Dilation rates for ASPP module. Default: (1, 6, 12, 18). """ def __init__(self, dilations=(1, 6, 12, 18), **kwargs): super(ASPPHead, self).__init__(**kwargs) assert isinstance(dilations, (list, tuple)) self.dilations = dilations self.image_pool = nn.Sequential( nn.AdaptiveAvgPool2d(1), ConvModule( self.in_channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg)) self.aspp_modules = ASPPModule( dilations, self.in_channels, self.channels, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.bottleneck = ConvModule( (len(dilations) + 1) * self.channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def _forward_feature(self, inputs): """Forward function for feature maps before classifying each pixel with ``self.cls_seg`` fc. Args: inputs (list[Tensor]): List of multi-level img features. Returns: feats (Tensor): A tensor of shape (batch_size, self.channels, H, W) which is feature map for last layer of decoder head. """ x = self._transform_inputs(inputs) aspp_outs = [ resize( self.image_pool(x), size=x.size()[2:], mode='bilinear', align_corners=self.align_corners) ] aspp_outs.extend(self.aspp_modules(x)) aspp_outs = torch.cat(aspp_outs, dim=1) feats = self.bottleneck(aspp_outs) return feats def forward(self, inputs): """Forward function.""" output = self._forward_feature(inputs) output = self.cls_seg(output) return output
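A minimal sketch of ASPPHead with illustrative sizes: an image-pool branch plus one branch per dilation rate, concatenated and fused before classification.

```python
import torch

from mmseg.models.decode_heads import ASPPHead

head = ASPPHead(
    in_channels=32, channels=16, num_classes=19, dilations=(1, 6, 12, 18))
print(head([torch.randn(1, 32, 32, 32)]).shape)  # expected: (1, 19, 32, 32)
```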
3,947
31.097561
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/cascade_decode_head.py
# Copyright (c) OpenMMLab. All rights reserved. from abc import ABCMeta, abstractmethod from .decode_head import BaseDecodeHead class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): """Base class for cascade decode head used in :class:`CascadeEncoderDecoder.""" def __init__(self, *args, **kwargs): super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) @abstractmethod def forward(self, inputs, prev_output): """Placeholder of forward function.""" pass def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, train_cfg): """Forward function for training. Args: inputs (list[Tensor]): List of multi-level img features. prev_output (Tensor): The output of previous decode head. img_metas (list[dict]): List of image info dict where each dict has: 'img_shape', 'scale_factor', 'flip', and may also contain 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. For details on the values of these keys see `mmseg/datasets/pipelines/formatting.py:Collect`. gt_semantic_seg (Tensor): Semantic segmentation masks used if the architecture supports semantic segmentation task. train_cfg (dict): The training config. Returns: dict[str, Tensor]: a dictionary of loss components """ seg_logits = self.forward(inputs, prev_output) losses = self.losses(seg_logits, gt_semantic_seg) return losses def forward_test(self, inputs, prev_output, img_metas, test_cfg): """Forward function for testing. Args: inputs (list[Tensor]): List of multi-level img features. prev_output (Tensor): The output of previous decode head. img_metas (list[dict]): List of image info dict where each dict has: 'img_shape', 'scale_factor', 'flip', and may also contain 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. For details on the values of these keys see `mmseg/datasets/pipelines/formatting.py:Collect`. test_cfg (dict): The testing config. Returns: Tensor: Output segmentation map. """ return self.forward(inputs, prev_output)
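A hypothetical minimal subclass, only to illustrate the cascade interface defined above: `forward` receives both the multi-level features and the previous head's output. The refinement below (a 1x1 projection added to the previous logits) is an assumption made for the demo, not a real mmseg head.

```python
import torch
import torch.nn as nn

from mmseg.models.decode_heads.cascade_decode_head import BaseCascadeDecodeHead


class TinyCascadeHead(BaseCascadeDecodeHead):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.proj = nn.Conv2d(self.in_channels, self.num_classes, kernel_size=1)

    def forward(self, inputs, prev_output):
        x = self._transform_inputs(inputs)
        # assumes prev_output has the same spatial size as the selected feature
        return self.proj(x) + prev_output


head = TinyCascadeHead(in_channels=32, channels=16, num_classes=19)
feats = [torch.randn(1, 32, 16, 16)]
prev = torch.randn(1, 19, 16, 16)
print(head(feats, prev).shape)  # expected: (1, 19, 16, 16)
```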
2,399
39.677966
78
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/cc_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch from ..builder import HEADS from .fcn_head import FCNHead try: from mmcv.ops import CrissCrossAttention except ModuleNotFoundError: CrissCrossAttention = None @HEADS.register_module() class CCHead(FCNHead): """CCNet: Criss-Cross Attention for Semantic Segmentation. This head is the implementation of `CCNet <https://arxiv.org/abs/1811.11721>`_. Args: recurrence (int): Number of recurrence of Criss Cross Attention module. Default: 2. """ def __init__(self, recurrence=2, **kwargs): if CrissCrossAttention is None: raise RuntimeError('Please install mmcv-full for ' 'CrissCrossAttention ops') super(CCHead, self).__init__(num_convs=2, **kwargs) self.recurrence = recurrence self.cca = CrissCrossAttention(self.channels) def forward(self, inputs): """Forward function.""" x = self._transform_inputs(inputs) output = self.convs[0](x) for _ in range(self.recurrence): output = self.cca(output) output = self.convs[1](output) if self.concat_input: output = self.conv_cat(torch.cat([x, output], dim=1)) output = self.cls_seg(output) return output
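Because CCHead depends on the CrissCrossAttention op shipped with mmcv-full, a config-style sketch is given instead of a direct forward pass; the field values mirror a typical CCNet setting and are illustrative only.

```python
# Config-style sketch: CCHead is normally declared in the model config and
# built through the HEADS registry.
decode_head = dict(
    type='CCHead',
    in_channels=2048,
    in_index=3,
    channels=512,
    recurrence=2,
    dropout_ratio=0.1,
    num_classes=19,
    norm_cfg=dict(type='SyncBN', requires_grad=True),
    align_corners=False,
    loss_decode=dict(
        type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
```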
1,331
29.272727
71
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/da_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn.functional as F from mmcv.cnn import ConvModule, Scale from torch import nn from mmseg.core import add_prefix from ..builder import HEADS from ..utils import SelfAttentionBlock as _SelfAttentionBlock from .decode_head import BaseDecodeHead class PAM(_SelfAttentionBlock): """Position Attention Module (PAM) Args: in_channels (int): Input channels of key/query feature. channels (int): Output channels of key/query transform. """ def __init__(self, in_channels, channels): super(PAM, self).__init__( key_in_channels=in_channels, query_in_channels=in_channels, channels=channels, out_channels=in_channels, share_key_query=False, query_downsample=None, key_downsample=None, key_query_num_convs=1, key_query_norm=False, value_out_num_convs=1, value_out_norm=False, matmul_norm=False, with_out=False, conv_cfg=None, norm_cfg=None, act_cfg=None) self.gamma = Scale(0) def forward(self, x): """Forward function.""" out = super(PAM, self).forward(x, x) out = self.gamma(out) + x return out class CAM(nn.Module): """Channel Attention Module (CAM)""" def __init__(self): super(CAM, self).__init__() self.gamma = Scale(0) def forward(self, x): """Forward function.""" batch_size, channels, height, width = x.size() proj_query = x.view(batch_size, channels, -1) proj_key = x.view(batch_size, channels, -1).permute(0, 2, 1) energy = torch.bmm(proj_query, proj_key) energy_new = torch.max( energy, -1, keepdim=True)[0].expand_as(energy) - energy attention = F.softmax(energy_new, dim=-1) proj_value = x.view(batch_size, channels, -1) out = torch.bmm(attention, proj_value) out = out.view(batch_size, channels, height, width) out = self.gamma(out) + x return out @HEADS.register_module() class DAHead(BaseDecodeHead): """Dual Attention Network for Scene Segmentation. This head is the implementation of `DANet <https://arxiv.org/abs/1809.02983>`_. Args: pam_channels (int): The channels of Position Attention Module(PAM). 
""" def __init__(self, pam_channels, **kwargs): super(DAHead, self).__init__(**kwargs) self.pam_channels = pam_channels self.pam_in_conv = ConvModule( self.in_channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.pam = PAM(self.channels, pam_channels) self.pam_out_conv = ConvModule( self.channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.pam_conv_seg = nn.Conv2d( self.channels, self.num_classes, kernel_size=1) self.cam_in_conv = ConvModule( self.in_channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.cam = CAM() self.cam_out_conv = ConvModule( self.channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.cam_conv_seg = nn.Conv2d( self.channels, self.num_classes, kernel_size=1) def pam_cls_seg(self, feat): """PAM feature classification.""" if self.dropout is not None: feat = self.dropout(feat) output = self.pam_conv_seg(feat) return output def cam_cls_seg(self, feat): """CAM feature classification.""" if self.dropout is not None: feat = self.dropout(feat) output = self.cam_conv_seg(feat) return output def forward(self, inputs): """Forward function.""" x = self._transform_inputs(inputs) pam_feat = self.pam_in_conv(x) pam_feat = self.pam(pam_feat) pam_feat = self.pam_out_conv(pam_feat) pam_out = self.pam_cls_seg(pam_feat) cam_feat = self.cam_in_conv(x) cam_feat = self.cam(cam_feat) cam_feat = self.cam_out_conv(cam_feat) cam_out = self.cam_cls_seg(cam_feat) feat_sum = pam_feat + cam_feat pam_cam_out = self.cls_seg(feat_sum) return pam_cam_out, pam_out, cam_out def forward_test(self, inputs, img_metas, test_cfg): """Forward function for testing, only ``pam_cam`` is used.""" return self.forward(inputs)[0] def losses(self, seg_logit, seg_label): """Compute ``pam_cam``, ``pam``, ``cam`` loss.""" pam_cam_seg_logit, pam_seg_logit, cam_seg_logit = seg_logit loss = dict() loss.update( add_prefix( super(DAHead, self).losses(pam_cam_seg_logit, seg_label), 'pam_cam')) loss.update( add_prefix( super(DAHead, self).losses(pam_seg_logit, seg_label), 'pam')) loss.update( add_prefix( super(DAHead, self).losses(cam_seg_logit, seg_label), 'cam')) return loss
5,593
30.077778
77
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/decode_head.py
# Copyright (c) OpenMMLab. All rights reserved. import warnings from abc import ABCMeta, abstractmethod import torch import torch.nn as nn from mmcv.runner import BaseModule, auto_fp16, force_fp32 from mmseg.core import build_pixel_sampler from mmseg.ops import resize from ..builder import build_loss from ..losses import accuracy class BaseDecodeHead(BaseModule, metaclass=ABCMeta): """Base class for BaseDecodeHead. Args: in_channels (int|Sequence[int]): Input channels. channels (int): Channels after modules, before conv_seg. num_classes (int): Number of classes. out_channels (int): Output channels of conv_seg. threshold (float): Threshold for binary segmentation in the case of `out_channels==1`. Default: None. dropout_ratio (float): Ratio of dropout layer. Default: 0.1. conv_cfg (dict|None): Config of conv layers. Default: None. norm_cfg (dict|None): Config of norm layers. Default: None. act_cfg (dict): Config of activation layers. Default: dict(type='ReLU') in_index (int|Sequence[int]): Input feature index. Default: -1 input_transform (str|None): Transformation type of input features. Options: 'resize_concat', 'multiple_select', None. 'resize_concat': Multiple feature maps will be resize to the same size as first one and than concat together. Usually used in FCN head of HRNet. 'multiple_select': Multiple feature maps will be bundle into a list and passed into decode head. None: Only one select feature map is allowed. Default: None. loss_decode (dict | Sequence[dict]): Config of decode loss. The `loss_name` is property of corresponding loss function which could be shown in training log. If you want this loss item to be included into the backward graph, `loss_` must be the prefix of the name. Defaults to 'loss_ce'. e.g. dict(type='CrossEntropyLoss'), [dict(type='CrossEntropyLoss', loss_name='loss_ce'), dict(type='DiceLoss', loss_name='loss_dice')] Default: dict(type='CrossEntropyLoss'). ignore_index (int | None): The label index to be ignored. When using masked BCE loss, ignore_index should be set to None. Default: 255. sampler (dict|None): The config of segmentation map sampler. Default: None. align_corners (bool): align_corners argument of F.interpolate. Default: False. downsample_label_ratio (int): The ratio to downsample seg_label in losses. downsample_label_ratio > 1 will reduce memory usage. Disabled if downsample_label_ratio = 0. Default: 0. init_cfg (dict or list[dict], optional): Initialization config dict. 
""" def __init__(self, in_channels, channels, *, num_classes, out_channels=None, threshold=None, dropout_ratio=0.1, conv_cfg=None, norm_cfg=None, act_cfg=dict(type='ReLU'), in_index=-1, input_transform=None, loss_decode=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), ignore_index=255, sampler=None, align_corners=False, downsample_label_ratio=0, init_cfg=dict( type='Normal', std=0.01, override=dict(name='conv_seg'))): super(BaseDecodeHead, self).__init__(init_cfg) self._init_inputs(in_channels, in_index, input_transform) self.channels = channels self.dropout_ratio = dropout_ratio self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.in_index = in_index self.ignore_index = ignore_index self.align_corners = align_corners self.downsample_label_ratio = downsample_label_ratio if not isinstance(self.downsample_label_ratio, int) or \ self.downsample_label_ratio < 0: warnings.warn('downsample_label_ratio should ' 'be set as an integer equal or larger than 0.') if out_channels is None: if num_classes == 2: warnings.warn('For binary segmentation, we suggest using' '`out_channels = 1` to define the output' 'channels of segmentor, and use `threshold`' 'to convert seg_logist into a prediction' 'applying a threshold') out_channels = num_classes if out_channels != num_classes and out_channels != 1: raise ValueError( 'out_channels should be equal to num_classes,' 'except binary segmentation set out_channels == 1 and' f'num_classes == 2, but got out_channels={out_channels}' f'and num_classes={num_classes}') if out_channels == 1 and threshold is None: threshold = 0.3 warnings.warn('threshold is not defined for binary, and defaults' 'to 0.3') self.num_classes = num_classes self.out_channels = out_channels self.threshold = threshold if isinstance(loss_decode, dict): self.loss_decode = build_loss(loss_decode) elif isinstance(loss_decode, (list, tuple)): self.loss_decode = nn.ModuleList() for loss in loss_decode: self.loss_decode.append(build_loss(loss)) else: raise TypeError(f'loss_decode must be a dict or sequence of dict,\ but got {type(loss_decode)}') if sampler is not None: self.sampler = build_pixel_sampler(sampler, context=self) else: self.sampler = None self.conv_seg = nn.Conv2d(channels, self.out_channels, kernel_size=1) if dropout_ratio > 0: self.dropout = nn.Dropout2d(dropout_ratio) else: self.dropout = None self.fp16_enabled = False def extra_repr(self): """Extra repr.""" s = f'input_transform={self.input_transform}, ' \ f'ignore_index={self.ignore_index}, ' \ f'align_corners={self.align_corners}' return s def _init_inputs(self, in_channels, in_index, input_transform): """Check and initialize input transforms. The in_channels, in_index and input_transform must match. Specifically, when input_transform is None, only single feature map will be selected. So in_channels and in_index must be of type int. When input_transform Args: in_channels (int|Sequence[int]): Input channels. in_index (int|Sequence[int]): Input feature index. input_transform (str|None): Transformation type of input features. Options: 'resize_concat', 'multiple_select', None. 'resize_concat': Multiple feature maps will be resize to the same size as first one and than concat together. Usually used in FCN head of HRNet. 'multiple_select': Multiple feature maps will be bundle into a list and passed into decode head. None: Only one select feature map is allowed. 
""" if input_transform is not None: assert input_transform in ['resize_concat', 'multiple_select'] self.input_transform = input_transform self.in_index = in_index if input_transform is not None: assert isinstance(in_channels, (list, tuple)) assert isinstance(in_index, (list, tuple)) assert len(in_channels) == len(in_index) if input_transform == 'resize_concat': self.in_channels = sum(in_channels) else: self.in_channels = in_channels else: assert isinstance(in_channels, int) assert isinstance(in_index, int) self.in_channels = in_channels def _transform_inputs(self, inputs): """Transform inputs for decoder. Args: inputs (list[Tensor]): List of multi-level img features. Returns: Tensor: The transformed inputs """ if self.input_transform == 'resize_concat': inputs = [inputs[i] for i in self.in_index] upsampled_inputs = [ resize( input=x, size=inputs[0].shape[2:], mode='bilinear', align_corners=self.align_corners) for x in inputs ] inputs = torch.cat(upsampled_inputs, dim=1) elif self.input_transform == 'multiple_select': inputs = [inputs[i] for i in self.in_index] else: inputs = inputs[self.in_index] return inputs @auto_fp16() @abstractmethod def forward(self, inputs): """Placeholder of forward function.""" pass def forward_train(self, inputs, img_metas, gt_semantic_seg, train_cfg): """Forward function for training. Args: inputs (list[Tensor]): List of multi-level img features. img_metas (list[dict]): List of image info dict where each dict has: 'img_shape', 'scale_factor', 'flip', and may also contain 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. For details on the values of these keys see `mmseg/datasets/pipelines/formatting.py:Collect`. gt_semantic_seg (Tensor): Semantic segmentation masks used if the architecture supports semantic segmentation task. train_cfg (dict): The training config. Returns: dict[str, Tensor]: a dictionary of loss components """ seg_logits = self(inputs) losses = self.losses(seg_logits, gt_semantic_seg) return losses def forward_test(self, inputs, img_metas, test_cfg): """Forward function for testing. Args: inputs (list[Tensor]): List of multi-level img features. img_metas (list[dict]): List of image info dict where each dict has: 'img_shape', 'scale_factor', 'flip', and may also contain 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. For details on the values of these keys see `mmseg/datasets/pipelines/formatting.py:Collect`. test_cfg (dict): The testing config. Returns: Tensor: Output segmentation map. 
""" return self.forward(inputs) def cls_seg(self, feat): """Classify each pixel.""" if self.dropout is not None: feat = self.dropout(feat) output = self.conv_seg(feat) return output @force_fp32(apply_to=('seg_logit', )) def losses(self, seg_logit, seg_label): """Compute segmentation loss.""" loss = dict() if self.downsample_label_ratio > 0: seg_label = seg_label.float() target_size = (seg_label.shape[2] // self.downsample_label_ratio, seg_label.shape[3] // self.downsample_label_ratio) seg_label = resize( input=seg_label, size=target_size, mode='nearest') seg_label = seg_label.long() seg_logit = resize( input=seg_logit, size=seg_label.shape[2:], mode='bilinear', align_corners=self.align_corners) if self.sampler is not None: seg_weight = self.sampler.sample(seg_logit, seg_label) else: seg_weight = None seg_label = seg_label.squeeze(1) if not isinstance(self.loss_decode, nn.ModuleList): losses_decode = [self.loss_decode] else: losses_decode = self.loss_decode for loss_decode in losses_decode: if loss_decode.loss_name not in loss: loss[loss_decode.loss_name] = loss_decode( seg_logit, seg_label, weight=seg_weight, ignore_index=self.ignore_index) else: loss[loss_decode.loss_name] += loss_decode( seg_logit, seg_label, weight=seg_weight, ignore_index=self.ignore_index) loss['acc_seg'] = accuracy( seg_logit, seg_label, ignore_index=self.ignore_index) return loss
12,965
40.42492
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/dm_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer from ..builder import HEADS from .decode_head import BaseDecodeHead class DCM(nn.Module): """Dynamic Convolutional Module used in DMNet. Args: filter_size (int): The filter size of generated convolution kernel used in Dynamic Convolutional Module. fusion (bool): Add one conv to fuse DCM output feature. in_channels (int): Input channels. channels (int): Channels after modules, before conv_seg. conv_cfg (dict | None): Config of conv layers. norm_cfg (dict | None): Config of norm layers. act_cfg (dict): Config of activation layers. """ def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg, norm_cfg, act_cfg): super(DCM, self).__init__() self.filter_size = filter_size self.fusion = fusion self.in_channels = in_channels self.channels = channels self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg self.act_cfg = act_cfg self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1, 0) self.input_redu_conv = ConvModule( self.in_channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) if self.norm_cfg is not None: self.norm = build_norm_layer(self.norm_cfg, self.channels)[1] else: self.norm = None self.activate = build_activation_layer(self.act_cfg) if self.fusion: self.fusion_conv = ConvModule( self.channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def forward(self, x): """Forward function.""" generated_filter = self.filter_gen_conv( F.adaptive_avg_pool2d(x, self.filter_size)) x = self.input_redu_conv(x) b, c, h, w = x.shape # [1, b * c, h, w], c = self.channels x = x.view(1, b * c, h, w) # [b * c, 1, filter_size, filter_size] generated_filter = generated_filter.view(b * c, 1, self.filter_size, self.filter_size) pad = (self.filter_size - 1) // 2 if (self.filter_size - 1) % 2 == 0: p2d = (pad, pad, pad, pad) else: p2d = (pad + 1, pad, pad + 1, pad) x = F.pad(input=x, pad=p2d, mode='constant', value=0) # [1, b * c, h, w] output = F.conv2d(input=x, weight=generated_filter, groups=b * c) # [b, c, h, w] output = output.view(b, c, h, w) if self.norm is not None: output = self.norm(output) output = self.activate(output) if self.fusion: output = self.fusion_conv(output) return output @HEADS.register_module() class DMHead(BaseDecodeHead): """Dynamic Multi-scale Filters for Semantic Segmentation. This head is the implementation of `DMNet <https://openaccess.thecvf.com/content_ICCV_2019/papers/\ He_Dynamic_Multi-Scale_Filters_for_Semantic_Segmentation_\ ICCV_2019_paper.pdf>`_. Args: filter_sizes (tuple[int]): The size of generated convolutional filters used in Dynamic Convolutional Module. Default: (1, 3, 5, 7). fusion (bool): Add one conv to fuse DCM output feature. 
""" def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs): super(DMHead, self).__init__(**kwargs) assert isinstance(filter_sizes, (list, tuple)) self.filter_sizes = filter_sizes self.fusion = fusion dcm_modules = [] for filter_size in self.filter_sizes: dcm_modules.append( DCM(filter_size, self.fusion, self.in_channels, self.channels, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg)) self.dcm_modules = nn.ModuleList(dcm_modules) self.bottleneck = ConvModule( self.in_channels + len(filter_sizes) * self.channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def forward(self, inputs): """Forward function.""" x = self._transform_inputs(inputs) dcm_outs = [x] for dcm_module in self.dcm_modules: dcm_outs.append(dcm_module(x)) dcm_outs = torch.cat(dcm_outs, dim=1) output = self.bottleneck(dcm_outs) output = self.cls_seg(output) return output
5,032
34.443662
79
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/dnl_head.py
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.cnn import NonLocal2d
from torch import nn

from ..builder import HEADS
from .fcn_head import FCNHead


class DisentangledNonLocal2d(NonLocal2d):
    """Disentangled Non-Local Blocks.

    Args:
        temperature (float): Temperature to adjust attention. Default: 0.05
    """

    def __init__(self, *arg, temperature, **kwargs):
        super().__init__(*arg, **kwargs)
        self.temperature = temperature
        self.conv_mask = nn.Conv2d(self.in_channels, 1, kernel_size=1)

    def embedded_gaussian(self, theta_x, phi_x):
        """Embedded gaussian with temperature."""

        # NonLocal2d pairwise_weight: [N, HxW, HxW]
        pairwise_weight = torch.matmul(theta_x, phi_x)
        if self.use_scale:
            # theta_x.shape[-1] is `self.inter_channels`
            pairwise_weight /= torch.tensor(
                theta_x.shape[-1],
                dtype=torch.float,
                device=pairwise_weight.device)**torch.tensor(
                    0.5, device=pairwise_weight.device)
        pairwise_weight /= torch.tensor(
            self.temperature, device=pairwise_weight.device)
        pairwise_weight = pairwise_weight.softmax(dim=-1)
        return pairwise_weight

    def forward(self, x):
        # x: [N, C, H, W]
        n = x.size(0)

        # g_x: [N, HxW, C]
        g_x = self.g(x).view(n, self.inter_channels, -1)
        g_x = g_x.permute(0, 2, 1)

        # theta_x: [N, HxW, C], phi_x: [N, C, HxW]
        if self.mode == 'gaussian':
            theta_x = x.view(n, self.in_channels, -1)
            theta_x = theta_x.permute(0, 2, 1)
            if self.sub_sample:
                phi_x = self.phi(x).view(n, self.in_channels, -1)
            else:
                phi_x = x.view(n, self.in_channels, -1)
        elif self.mode == 'concatenation':
            theta_x = self.theta(x).view(n, self.inter_channels, -1, 1)
            phi_x = self.phi(x).view(n, self.inter_channels, 1, -1)
        else:
            theta_x = self.theta(x).view(n, self.inter_channels, -1)
            theta_x = theta_x.permute(0, 2, 1)
            phi_x = self.phi(x).view(n, self.inter_channels, -1)

        # subtract mean
        theta_x -= theta_x.mean(dim=-2, keepdim=True)
        phi_x -= phi_x.mean(dim=-1, keepdim=True)

        pairwise_func = getattr(self, self.mode)
        # pairwise_weight: [N, HxW, HxW]
        pairwise_weight = pairwise_func(theta_x, phi_x)

        # y: [N, HxW, C]
        y = torch.matmul(pairwise_weight, g_x)
        # y: [N, C, H, W]
        y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels,
                                                    *x.size()[2:])

        # unary_mask: [N, 1, HxW]
        unary_mask = self.conv_mask(x)
        unary_mask = unary_mask.view(n, 1, -1)
        unary_mask = unary_mask.softmax(dim=-1)
        # unary_x: [N, 1, C]
        unary_x = torch.matmul(unary_mask, g_x)
        # unary_x: [N, C, 1, 1]
        unary_x = unary_x.permute(0, 2, 1).contiguous().reshape(
            n, self.inter_channels, 1, 1)

        output = x + self.conv_out(y + unary_x)

        return output


@HEADS.register_module()
class DNLHead(FCNHead):
    """Disentangled Non-Local Neural Networks.

    This head is the implementation of `DNLNet
    <https://arxiv.org/abs/2006.06668>`_.

    Args:
        reduction (int): Reduction factor of projection transform. Default: 2.
        use_scale (bool): Whether to scale pairwise_weight by
            sqrt(1/inter_channels). Default: True.
        mode (str): The nonlocal mode. Options are 'embedded_gaussian',
            'dot_product'. Default: 'embedded_gaussian'.
        temperature (float): Temperature to adjust attention.
Default: 0.05 """ def __init__(self, reduction=2, use_scale=True, mode='embedded_gaussian', temperature=0.05, **kwargs): super(DNLHead, self).__init__(num_convs=2, **kwargs) self.reduction = reduction self.use_scale = use_scale self.mode = mode self.temperature = temperature self.dnl_block = DisentangledNonLocal2d( in_channels=self.channels, reduction=self.reduction, use_scale=self.use_scale, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, mode=self.mode, temperature=self.temperature) def forward(self, inputs): """Forward function.""" x = self._transform_inputs(inputs) output = self.convs[0](x) output = self.dnl_block(output) output = self.convs[1](output) if self.concat_input: output = self.conv_cat(torch.cat([x, output], dim=1)) output = self.cls_seg(output) return output
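
# Hedged usage sketch (illustrative, not part of the original file). The
# shapes below are assumptions; real configs typically pair this head with a
# ResNet backbone (2048 input channels, 512 head channels).
if __name__ == '__main__':
    head = DNLHead(
        in_channels=64,
        channels=32,
        num_classes=19,
        reduction=2,
        use_scale=True,
        temperature=0.05)
    seg_logit = head([torch.randn(2, 64, 32, 32)])
    print(seg_logit.shape)  # expected: torch.Size([2, 19, 32, 32])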
4,856
34.195652
78
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/dpt_head.py
# Copyright (c) OpenMMLab. All rights reserved. import math import torch import torch.nn as nn from mmcv.cnn import ConvModule, Linear, build_activation_layer from mmcv.runner import BaseModule from mmseg.ops import resize from ..builder import HEADS from .decode_head import BaseDecodeHead class ReassembleBlocks(BaseModule): """ViTPostProcessBlock, process cls_token in ViT backbone output and rearrange the feature vector to feature map. Args: in_channels (int): ViT feature channels. Default: 768. out_channels (List): output channels of each stage. Default: [96, 192, 384, 768]. readout_type (str): Type of readout operation. Default: 'ignore'. patch_size (int): The patch size. Default: 16. init_cfg (dict, optional): Initialization config dict. Default: None. """ def __init__(self, in_channels=768, out_channels=[96, 192, 384, 768], readout_type='ignore', patch_size=16, init_cfg=None): super(ReassembleBlocks, self).__init__(init_cfg) assert readout_type in ['ignore', 'add', 'project'] self.readout_type = readout_type self.patch_size = patch_size self.projects = nn.ModuleList([ ConvModule( in_channels=in_channels, out_channels=out_channel, kernel_size=1, act_cfg=None, ) for out_channel in out_channels ]) self.resize_layers = nn.ModuleList([ nn.ConvTranspose2d( in_channels=out_channels[0], out_channels=out_channels[0], kernel_size=4, stride=4, padding=0), nn.ConvTranspose2d( in_channels=out_channels[1], out_channels=out_channels[1], kernel_size=2, stride=2, padding=0), nn.Identity(), nn.Conv2d( in_channels=out_channels[3], out_channels=out_channels[3], kernel_size=3, stride=2, padding=1) ]) if self.readout_type == 'project': self.readout_projects = nn.ModuleList() for _ in range(len(self.projects)): self.readout_projects.append( nn.Sequential( Linear(2 * in_channels, in_channels), build_activation_layer(dict(type='GELU')))) def forward(self, inputs): assert isinstance(inputs, list) out = [] for i, x in enumerate(inputs): assert len(x) == 2 x, cls_token = x[0], x[1] feature_shape = x.shape if self.readout_type == 'project': x = x.flatten(2).permute((0, 2, 1)) readout = cls_token.unsqueeze(1).expand_as(x) x = self.readout_projects[i](torch.cat((x, readout), -1)) x = x.permute(0, 2, 1).reshape(feature_shape) elif self.readout_type == 'add': x = x.flatten(2) + cls_token.unsqueeze(-1) x = x.reshape(feature_shape) else: pass x = self.projects[i](x) x = self.resize_layers[i](x) out.append(x) return out class PreActResidualConvUnit(BaseModule): """ResidualConvUnit, pre-activate residual unit. Args: in_channels (int): number of channels in the input feature map. act_cfg (dict): dictionary to construct and config activation layer. norm_cfg (dict): dictionary to construct and config norm layer. stride (int): stride of the first block. Default: 1 dilation (int): dilation rate for convs layers. Default: 1. init_cfg (dict, optional): Initialization config dict. Default: None. 
""" def __init__(self, in_channels, act_cfg, norm_cfg, stride=1, dilation=1, init_cfg=None): super(PreActResidualConvUnit, self).__init__(init_cfg) self.conv1 = ConvModule( in_channels, in_channels, 3, stride=stride, padding=dilation, dilation=dilation, norm_cfg=norm_cfg, act_cfg=act_cfg, bias=False, order=('act', 'conv', 'norm')) self.conv2 = ConvModule( in_channels, in_channels, 3, padding=1, norm_cfg=norm_cfg, act_cfg=act_cfg, bias=False, order=('act', 'conv', 'norm')) def forward(self, inputs): inputs_ = inputs.clone() x = self.conv1(inputs) x = self.conv2(x) return x + inputs_ class FeatureFusionBlock(BaseModule): """FeatureFusionBlock, merge feature map from different stages. Args: in_channels (int): Input channels. act_cfg (dict): The activation config for ResidualConvUnit. norm_cfg (dict): Config dict for normalization layer. expand (bool): Whether expand the channels in post process block. Default: False. align_corners (bool): align_corner setting for bilinear upsample. Default: True. init_cfg (dict, optional): Initialization config dict. Default: None. """ def __init__(self, in_channels, act_cfg, norm_cfg, expand=False, align_corners=True, init_cfg=None): super(FeatureFusionBlock, self).__init__(init_cfg) self.in_channels = in_channels self.expand = expand self.align_corners = align_corners self.out_channels = in_channels if self.expand: self.out_channels = in_channels // 2 self.project = ConvModule( self.in_channels, self.out_channels, kernel_size=1, act_cfg=None, bias=True) self.res_conv_unit1 = PreActResidualConvUnit( in_channels=self.in_channels, act_cfg=act_cfg, norm_cfg=norm_cfg) self.res_conv_unit2 = PreActResidualConvUnit( in_channels=self.in_channels, act_cfg=act_cfg, norm_cfg=norm_cfg) def forward(self, *inputs): x = inputs[0] if len(inputs) == 2: if x.shape != inputs[1].shape: res = resize( inputs[1], size=(x.shape[2], x.shape[3]), mode='bilinear', align_corners=False) else: res = inputs[1] x = x + self.res_conv_unit1(res) x = self.res_conv_unit2(x) x = resize( x, scale_factor=2, mode='bilinear', align_corners=self.align_corners) x = self.project(x) return x @HEADS.register_module() class DPTHead(BaseDecodeHead): """Vision Transformers for Dense Prediction. This head is implemented of `DPT <https://arxiv.org/abs/2103.13413>`_. Args: embed_dims (int): The embed dimension of the ViT backbone. Default: 768. post_process_channels (List): Out channels of post process conv layers. Default: [96, 192, 384, 768]. readout_type (str): Type of readout operation. Default: 'ignore'. patch_size (int): The patch size. Default: 16. expand_channels (bool): Whether expand the channels in post process block. Default: False. act_cfg (dict): The activation config for residual conv unit. Default dict(type='ReLU'). norm_cfg (dict): Config dict for normalization layer. Default: dict(type='BN'). 
""" def __init__(self, embed_dims=768, post_process_channels=[96, 192, 384, 768], readout_type='ignore', patch_size=16, expand_channels=False, act_cfg=dict(type='ReLU'), norm_cfg=dict(type='BN'), **kwargs): super(DPTHead, self).__init__(**kwargs) self.in_channels = self.in_channels self.expand_channels = expand_channels self.reassemble_blocks = ReassembleBlocks(embed_dims, post_process_channels, readout_type, patch_size) self.post_process_channels = [ channel * math.pow(2, i) if expand_channels else channel for i, channel in enumerate(post_process_channels) ] self.convs = nn.ModuleList() for channel in self.post_process_channels: self.convs.append( ConvModule( channel, self.channels, kernel_size=3, padding=1, act_cfg=None, bias=False)) self.fusion_blocks = nn.ModuleList() for _ in range(len(self.convs)): self.fusion_blocks.append( FeatureFusionBlock(self.channels, act_cfg, norm_cfg)) self.fusion_blocks[0].res_conv_unit1 = None self.project = ConvModule( self.channels, self.channels, kernel_size=3, padding=1, norm_cfg=norm_cfg) self.num_fusion_blocks = len(self.fusion_blocks) self.num_reassemble_blocks = len(self.reassemble_blocks.resize_layers) self.num_post_process_channels = len(self.post_process_channels) assert self.num_fusion_blocks == self.num_reassemble_blocks assert self.num_reassemble_blocks == self.num_post_process_channels def forward(self, inputs): assert len(inputs) == self.num_reassemble_blocks x = self._transform_inputs(inputs) x = self.reassemble_blocks(x) x = [self.convs[i](feature) for i, feature in enumerate(x)] out = self.fusion_blocks[0](x[-1]) for i in range(1, len(self.fusion_blocks)): out = self.fusion_blocks[i](out, x[-(i + 1)]) out = self.project(out) out = self.cls_seg(out) return out
10,399
34.254237
78
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/ema_head.py
# Copyright (c) OpenMMLab. All rights reserved. import math import torch import torch.distributed as dist import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import ConvModule from ..builder import HEADS from .decode_head import BaseDecodeHead def reduce_mean(tensor): """Reduce mean when distributed training.""" if not (dist.is_available() and dist.is_initialized()): return tensor tensor = tensor.clone() dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) return tensor class EMAModule(nn.Module): """Expectation Maximization Attention Module used in EMANet. Args: channels (int): Channels of the whole module. num_bases (int): Number of bases. num_stages (int): Number of the EM iterations. """ def __init__(self, channels, num_bases, num_stages, momentum): super(EMAModule, self).__init__() assert num_stages >= 1, 'num_stages must be at least 1!' self.num_bases = num_bases self.num_stages = num_stages self.momentum = momentum bases = torch.zeros(1, channels, self.num_bases) bases.normal_(0, math.sqrt(2. / self.num_bases)) # [1, channels, num_bases] bases = F.normalize(bases, dim=1, p=2) self.register_buffer('bases', bases) def forward(self, feats): """Forward function.""" batch_size, channels, height, width = feats.size() # [batch_size, channels, height*width] feats = feats.view(batch_size, channels, height * width) # [batch_size, channels, num_bases] bases = self.bases.repeat(batch_size, 1, 1) with torch.no_grad(): for i in range(self.num_stages): # [batch_size, height*width, num_bases] attention = torch.einsum('bcn,bck->bnk', feats, bases) attention = F.softmax(attention, dim=2) # l1 norm attention_normed = F.normalize(attention, dim=1, p=1) # [batch_size, channels, num_bases] bases = torch.einsum('bcn,bnk->bck', feats, attention_normed) # l2 norm bases = F.normalize(bases, dim=1, p=2) feats_recon = torch.einsum('bck,bnk->bcn', bases, attention) feats_recon = feats_recon.view(batch_size, channels, height, width) if self.training: bases = bases.mean(dim=0, keepdim=True) bases = reduce_mean(bases) # l2 norm bases = F.normalize(bases, dim=1, p=2) self.bases = (1 - self.momentum) * self.bases + self.momentum * bases return feats_recon @HEADS.register_module() class EMAHead(BaseDecodeHead): """Expectation Maximization Attention Networks for Semantic Segmentation. This head is the implementation of `EMANet <https://arxiv.org/abs/1907.13426>`_. Args: ema_channels (int): EMA module channels num_bases (int): Number of bases. num_stages (int): Number of the EM iterations. concat_input (bool): Whether concat the input and output of convs before classification layer. Default: True momentum (float): Momentum to update the base. Default: 0.1. 
""" def __init__(self, ema_channels, num_bases, num_stages, concat_input=True, momentum=0.1, **kwargs): super(EMAHead, self).__init__(**kwargs) self.ema_channels = ema_channels self.num_bases = num_bases self.num_stages = num_stages self.concat_input = concat_input self.momentum = momentum self.ema_module = EMAModule(self.ema_channels, self.num_bases, self.num_stages, self.momentum) self.ema_in_conv = ConvModule( self.in_channels, self.ema_channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) # project (0, inf) -> (-inf, inf) self.ema_mid_conv = ConvModule( self.ema_channels, self.ema_channels, 1, conv_cfg=self.conv_cfg, norm_cfg=None, act_cfg=None) for param in self.ema_mid_conv.parameters(): param.requires_grad = False self.ema_out_conv = ConvModule( self.ema_channels, self.ema_channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=None) self.bottleneck = ConvModule( self.ema_channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) if self.concat_input: self.conv_cat = ConvModule( self.in_channels + self.channels, self.channels, kernel_size=3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) def forward(self, inputs): """Forward function.""" x = self._transform_inputs(inputs) feats = self.ema_in_conv(x) identity = feats feats = self.ema_mid_conv(feats) recon = self.ema_module(feats) recon = F.relu(recon, inplace=True) recon = self.ema_out_conv(recon) output = F.relu(identity + recon, inplace=True) output = self.bottleneck(output) if self.concat_input: output = self.conv_cat(torch.cat([x, output], dim=1)) output = self.cls_seg(output) return output
5,824
33.264706
77
py
mmsegmentation
mmsegmentation-master/mmseg/models/decode_heads/enc_head.py
# Copyright (c) OpenMMLab. All rights reserved. import torch import torch.nn as nn import torch.nn.functional as F from mmcv.cnn import ConvModule, build_norm_layer from mmseg.ops import Encoding, resize from ..builder import HEADS, build_loss from .decode_head import BaseDecodeHead class EncModule(nn.Module): """Encoding Module used in EncNet. Args: in_channels (int): Input channels. num_codes (int): Number of code words. conv_cfg (dict|None): Config of conv layers. norm_cfg (dict|None): Config of norm layers. act_cfg (dict): Config of activation layers. """ def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg): super(EncModule, self).__init__() self.encoding_project = ConvModule( in_channels, in_channels, 1, conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) # TODO: resolve this hack # change to 1d if norm_cfg is not None: encoding_norm_cfg = norm_cfg.copy() if encoding_norm_cfg['type'] in ['BN', 'IN']: encoding_norm_cfg['type'] += '1d' else: encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace( '2d', '1d') else: # fallback to BN1d encoding_norm_cfg = dict(type='BN1d') self.encoding = nn.Sequential( Encoding(channels=in_channels, num_codes=num_codes), build_norm_layer(encoding_norm_cfg, num_codes)[1], nn.ReLU(inplace=True)) self.fc = nn.Sequential( nn.Linear(in_channels, in_channels), nn.Sigmoid()) def forward(self, x): """Forward function.""" encoding_projection = self.encoding_project(x) encoding_feat = self.encoding(encoding_projection).mean(dim=1) batch_size, channels, _, _ = x.size() gamma = self.fc(encoding_feat) y = gamma.view(batch_size, channels, 1, 1) output = F.relu_(x + x * y) return encoding_feat, output @HEADS.register_module() class EncHead(BaseDecodeHead): """Context Encoding for Semantic Segmentation. This head is the implementation of `EncNet <https://arxiv.org/abs/1803.08904>`_. Args: num_codes (int): Number of code words. Default: 32. use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to regularize the training. Default: True. add_lateral (bool): Whether use lateral connection to fuse features. Default: False. loss_se_decode (dict): Config of decode loss. Default: dict(type='CrossEntropyLoss', use_sigmoid=True). 
""" def __init__(self, num_codes=32, use_se_loss=True, add_lateral=False, loss_se_decode=dict( type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.2), **kwargs): super(EncHead, self).__init__( input_transform='multiple_select', **kwargs) self.use_se_loss = use_se_loss self.add_lateral = add_lateral self.num_codes = num_codes self.bottleneck = ConvModule( self.in_channels[-1], self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) if add_lateral: self.lateral_convs = nn.ModuleList() for in_channels in self.in_channels[:-1]: # skip the last one self.lateral_convs.append( ConvModule( in_channels, self.channels, 1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg)) self.fusion = ConvModule( len(self.in_channels) * self.channels, self.channels, 3, padding=1, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) self.enc_module = EncModule( self.channels, num_codes=num_codes, conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) if self.use_se_loss: self.loss_se_decode = build_loss(loss_se_decode) self.se_layer = nn.Linear(self.channels, self.num_classes) def forward(self, inputs): """Forward function.""" inputs = self._transform_inputs(inputs) feat = self.bottleneck(inputs[-1]) if self.add_lateral: laterals = [ resize( lateral_conv(inputs[i]), size=feat.shape[2:], mode='bilinear', align_corners=self.align_corners) for i, lateral_conv in enumerate(self.lateral_convs) ] feat = self.fusion(torch.cat([feat, *laterals], 1)) encode_feat, output = self.enc_module(feat) output = self.cls_seg(output) if self.use_se_loss: se_output = self.se_layer(encode_feat) return output, se_output else: return output def forward_test(self, inputs, img_metas, test_cfg): """Forward function for testing, ignore se_loss.""" if self.use_se_loss: return self.forward(inputs)[0] else: return self.forward(inputs) @staticmethod def _convert_to_onehot_labels(seg_label, num_classes): """Convert segmentation label to onehot. Args: seg_label (Tensor): Segmentation label of shape (N, H, W). num_classes (int): Number of classes. Returns: Tensor: Onehot labels of shape (N, num_classes). """ batch_size = seg_label.size(0) onehot_labels = seg_label.new_zeros((batch_size, num_classes)) for i in range(batch_size): hist = seg_label[i].float().histc( bins=num_classes, min=0, max=num_classes - 1) onehot_labels[i] = hist > 0 return onehot_labels def losses(self, seg_logit, seg_label): """Compute segmentation and semantic encoding loss.""" seg_logit, se_seg_logit = seg_logit loss = dict() loss.update(super(EncHead, self).losses(seg_logit, seg_label)) se_loss = self.loss_se_decode( se_seg_logit, self._convert_to_onehot_labels(seg_label, self.num_classes)) loss['loss_se'] = se_loss return loss
6,792
34.941799
78
py