- Thanks to the Google and UIUC researchers: a modified HRNet combined with semantic and instance multi-scale context achieves the state-of-the-art panoptic segmentation result on the Mapillary Vistas challenge. See [the paper](https://arxiv.org/pdf/1910.04751.pdf).
- Small HRNet models for Cityscapes segmentation. Superior to MobileNetV2Plus ....
- Rank \#1 (83.7) on the [Cityscapes leaderboard](https://www.cityscapes-dataset.com/benchmarks/): HRNet combined with an extension of [object context](https://arxiv.org/pdf/1809.00916.pdf).
- PyTorch v1.1 and the official Sync-BN are now supported. We have reproduced the Cityscapes results on the new codebase. Please check the pytorch-v1.1 branch.
- We have unified PyTorch 0.4.1 and 1.1.0 support into one codebase and reproduced the HRNet+OCR results on it. Please check the ocr branch.
- We have reproduced some of the main results of HRNetV2-W48 + OCR in this repo.
## Introduction
This is the official code of [High-Resolution Representations for Semantic Segmentation](https://arxiv.org/abs/1904.04514).
...
...
We augment the HRNet with a very simple segmentation head, shown in the figure below.
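As a rough illustration, here is a minimal PyTorch sketch of such a head (the module name, channel widths, and layer details are assumptions for illustration, not the repo's exact code): it upsamples the four resolution streams to the highest resolution, concatenates them, and applies a small 1x1 convolutional classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSegHead(nn.Module):
    """Hypothetical sketch of HRNetV2's concat-and-classify segmentation head."""

    def __init__(self, in_channels=(48, 96, 192, 384), num_classes=19):
        super().__init__()
        total = sum(in_channels)  # e.g. 720 channels for HRNetV2-W48
        self.classifier = nn.Sequential(
            nn.Conv2d(total, total, kernel_size=1),
            nn.BatchNorm2d(total),
            nn.ReLU(inplace=True),
            nn.Conv2d(total, num_classes, kernel_size=1),
        )

    def forward(self, feats):
        # feats: four feature maps at strides 4, 8, 16, 32.
        h, w = feats[0].shape[2:]
        upsampled = [feats[0]] + [
            F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
            for f in feats[1:]
        ]
        # Concatenate all streams and predict per-pixel class logits.
        return self.classifier(torch.cat(upsampled, dim=1))
```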

In addition, we combine HRNet with [Object-Contextual Representations](https://arxiv.org/pdf/1909.11065.pdf) (OCR) and achieve higher performance on the three datasets. The code for HRNet+OCR is contained in this branch.
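The core idea of OCR can be sketched in a few lines (a simplified, self-contained version with assumed dimensions; the actual implementation adds projection and transform layers omitted here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OCRSketch(nn.Module):
    """Simplified object-contextual representations (illustrative only)."""

    def __init__(self, channels=512):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x, coarse_logits):
        # x: (B, C, H, W) pixel features; coarse_logits: (B, K, H, W).
        b, c, h, w = x.shape
        # 1) Soft object regions: normalize each class map over all pixels.
        probs = F.softmax(coarse_logits.flatten(2), dim=2)            # (B, K, HW)
        feats = x.flatten(2).transpose(1, 2)                          # (B, HW, C)
        # 2) Region representations: class-weighted averages of pixel features.
        regions = torch.bmm(probs, feats)                             # (B, K, C)
        # 3) Pixel-region attention: every pixel attends to the K regions.
        attn = F.softmax(torch.bmm(feats, regions.transpose(1, 2)), dim=2)
        context = torch.bmm(attn, regions).transpose(1, 2).reshape(b, c, h, w)
        # Augment each pixel representation with its object context.
        return self.fuse(torch.cat([x, context], dim=1))
```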
## Segmentation models
HRNetV2 segmentation models are now available. All the results were reproduced using this repo!
...
...
1. Performance on the Cityscapes dataset. If multi-scale testing is used, we adopt scales 0.5, 0.75, 1.0, 1.25, 1.5, and 1.75 (see the inference sketch after the tables).
| model | Train Set | Test Set | #Params | GFLOPs | OHEM | Multi-scale | Flip | mIoU | Link |
| :---- | :-------- | :------- | :------ | :----- | :--- | :---------- | :--- | :--- | :--- |
| HRNetV2-W48 | Train | Val | 65.8M | 696.2 | No | No | No | 80.9 | [GoogleDrive](https://drive.google.com/file/d/15DCds5j95hI-nsjg4eBM1G3sIUWR9tmf/view?usp=sharing)/[BaiduYun(Access Code:pmix)](https://pan.baidu.com/s/1KyiOUOR0SYxKtJfIlD5o-w) |
| HRNetV2-W48 + OCR | Train | Val | 70.3M | 1206.4 | No | No | No | 81.6 | [GoogleDrive](https://drive.google.com/file/d/1QDxjWQhkBX_B3qVJykmtYUC3KkXVZIzT/view?usp=sharing)/[BaiduYun(Access Code:fa6i)](https://pan.baidu.com/s/1BGNt4Xmx3yfXUS8yjde0hQ) |
| HRNetV2-W48 + OCR | Train + Val | Test | 70.3M | 1206.4 | No | Yes | Yes | 82.3 | [GoogleDrive](https://drive.google.com/file/d/1HiB3pdFhhTtQnrM-zuKrNTmexz_7WmQa/view?usp=sharing)/[BaiduYun(Access Code:ycrk)](https://pan.baidu.com/s/16mD81UnGzjUBD-haDQfzIQ) |
2. Performance on the LIP dataset. The models are trained and tested with the input size of 473x473.
| model | #Params | GFLOPs | OHEM | Multi-scale | Flip | mIoU | Link |
| :---- | :------ | :----- | :--- | :---------- | :--- | :--- | :--- |
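For reference, the multi-scale and flip testing reported in the tables above can be sketched as follows (a hypothetical helper, not the repo's actual test script): logits are averaged over rescaled and horizontally flipped inputs.

```python
import torch
import torch.nn.functional as F

SCALES = (0.5, 0.75, 1.0, 1.25, 1.5, 1.75)

@torch.no_grad()
def multi_scale_flip_inference(model, image, scales=SCALES, flip=True):
    # image: (B, 3, H, W); model returns per-pixel class logits.
    _, _, h, w = image.shape
    total = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)
        total = total + F.interpolate(logits, size=(h, w), mode="bilinear",
                                      align_corners=False)
        if flip:
            # Flip the input, then flip the prediction back before averaging.
            flipped = model(torch.flip(scaled, dims=[3]))
            total = total + F.interpolate(torch.flip(flipped, dims=[3]),
                                          size=(h, w), mode="bilinear",
                                          align_corners=False)
    return total / (len(scales) * (2 if flip else 1))
```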
## Quick start
1. For the LIP dataset, install PyTorch 0.4.1 following the [official instructions](https://pytorch.org/). For the other datasets, either PyTorch 0.4.1 or 1.1.0 works.