Semantic Connectivity-aware Segmentation With a Large-Scale Teleconferencing Video Dataset

Official resource for the paper PP-HumanSeg: Connectivity-Aware Portrait Segmentation With a Large-Scale Teleconferencing Video Dataset. [Paper | Poster | YouTube]

Semantic Connectivity-aware Learning

We propose the SCL (Semantic Connectivity-aware Learning) framework, which introduces SC Loss (Semantic Connectivity-aware Loss) to improve the quality of segmentation results from the perspective of connectivity. SCL improves the integrity of segmented objects and increases segmentation accuracy, and it supports multi-class segmentation. [Source code]

Connected Components Calculation and Matching

(a) The prediction P and the ground truth G. (b) Connected components are generated for each via the CCL (connected-component labeling) algorithm. (c) Connected components are matched using their IoU values.
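As a minimal illustration of steps (b) and (c), the sketch below labels connected components in two binary masks and greedily matches each predicted component to the ground-truth component with the highest IoU. This is not the official implementation; the function names and the greedy matching strategy here are our own simplification.

```python
from collections import deque
import numpy as np

def connected_components(mask):
    """Label 4-connected components in a binary mask (simple BFS-based CCL)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

def match_components(pred, gt):
    """Greedily match each predicted component to the ground-truth
    component with the highest IoU; returns (pred_id, gt_id, iou) triples."""
    pred_labels, n_pred = connected_components(pred)
    gt_labels, n_gt = connected_components(gt)
    matches = []
    for p in range(1, n_pred + 1):
        p_mask = pred_labels == p
        best_iou, best_g = 0.0, None
        for g in range(1, n_gt + 1):
            g_mask = gt_labels == g
            union = np.logical_or(p_mask, g_mask).sum()
            iou = np.logical_and(p_mask, g_mask).sum() / union if union else 0.0
            if iou > best_iou:
                best_iou, best_g = iou, g
        matches.append((p, best_g, best_iou))
    return matches
```

In the actual SC Loss, the matched IoU values are aggregated into a connectivity measure that penalizes fragmented or missing components; see the paper and source code for the exact formulation.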

Segmentation Results

Performance on Cityscapes

The experimental results on our teleconferencing video dataset are reported in the paper; the experimental results on Cityscapes are as follows:

| Model | Backbone | Learning Strategy | GPUs × Batch Size (per card) | Training Iters | mIoU (%) | Config |
|---|---|---|---|---|---|---|
| OCRNet | HRNet-W48 | - | 2×2 | 40000 | 76.23 | config |
| OCRNet | HRNet-W48 | SCL | 2×2 | 40000 | 78.29 (+2.06) | config |
| FCN | HRNet-W18 | - | 2×4 | 80000 | 77.81 | config |
| FCN | HRNet-W18 | SCL | 2×4 | 80000 | 78.68 (+0.87) | config |
| Fast SCNN | - | - | 2×4 | 40000 | 56.41 | config |
| Fast SCNN | - | SCL | 2×4 | 40000 | 57.37 (+0.96) | config |

PP-HumanSeg14K: A Large-Scale Teleconferencing Video Dataset

A large-scale video portrait dataset containing 291 videos from 23 conference scenes, totaling 14K frames. The dataset covers diverse teleconferencing scenes, varied participant actions, interference from passers-by, and illumination changes. The data can be obtained by sending an email to [email protected] from an official email address (not qq, gmail, etc.) that includes your institution/company information and your intended use of the dataset.

Citation

If our project is useful in your research, please cite:

@InProceedings{Chu_2022_WACV,
    author    = {Chu, Lutao and Liu, Yi and Wu, Zewu and Tang, Shiyu and Chen, Guowei and Hao, Yuying and Peng, Juncai and Yu, Zhiliang and Chen, Zeyu and Lai, Baohua and Xiong, Haoyi},
    title     = {PP-HumanSeg: Connectivity-Aware Portrait Segmentation With a Large-Scale Teleconferencing Video Dataset},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2022},
    pages     = {202-209}
}