Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels
Abstract
The crux of semi-supervised semantic segmentation is to assign adequate pseudo-labels to the pixels of unlabeled images. A common practice is to select the highly confident predictions as the pseudo ground-truth, but this leads to a problem: most pixels may be left unused due to their unreliability. We argue that every pixel matters to model training, even if its prediction is ambiguous. Intuitively, an unreliable prediction may be confused among the top classes (i.e., those with the highest probabilities), but it should be confident that the pixel does not belong to the remaining classes. Hence, such a pixel can be convincingly treated as a negative sample for those most unlikely categories. Based on this insight, we develop an effective pipeline to make sufficient use of unlabeled data. Concretely, we separate reliable and unreliable pixels via the entropy of predictions, push each unreliable pixel into a category-wise queue that consists of negative samples, and thereby train the model with all candidate pixels. Considering the training evolution, where predictions become more and more accurate, we adaptively adjust the threshold for the reliable-unreliable partition. Experimental results on various benchmarks and training settings demonstrate the superiority of our approach over state-of-the-art alternatives.
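The core mechanism described above, splitting pixels into reliable and unreliable sets by prediction entropy and mining the least-likely classes of unreliable pixels as negatives, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the quantile-based threshold, function name, and parameters are assumptions standing in for the paper's adaptive, schedule-dependent threshold.

```python
import numpy as np

def partition_and_negatives(probs, entropy_quantile=0.7, num_negatives=2):
    """Split pixels by prediction entropy and mine negatives.

    probs: (N, C) array of per-pixel softmax probabilities.
    Returns (reliable_mask, negatives), where `reliable_mask` marks
    low-entropy pixels and `negatives` holds, for each unreliable
    pixel, the indices of its `num_negatives` least-likely classes.
    The fixed quantile here is a simplification of the paper's
    adaptive threshold, which tightens as training progresses.
    """
    eps = 1e-12
    # Per-pixel entropy of the predicted class distribution.
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)  # (N,)
    threshold = np.quantile(entropy, entropy_quantile)
    reliable = entropy <= threshold  # low entropy -> reliable pseudo-label
    # For unreliable pixels, the lowest-probability classes are taken
    # as convincing negative samples (the pixel almost surely does not
    # belong to them), e.g. to populate category-wise negative queues.
    negatives = np.argsort(probs, axis=1)[:, :num_negatives]
    return reliable, negatives[~reliable]
```

In a full pipeline, the reliable pixels would supply standard pseudo-labels for the cross-entropy loss, while the returned negative class indices would feed a contrastive or negative-learning term.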