Adaptive Fuzzy Positive Learning for Annotation-Scarce Semantic Segmentation.
- Author(s): Qiao, Pengchong; Wang, Yu; Liu, Chang; Shang, Lei; Sun, Baigui; Wang, Zhennan; Zheng, Xiawu; Ji, Rongrong; Chen, Jie
- Source:
International Journal of Computer Vision, Sep 2024, pp. 1–19.
- Abstract:
Annotation-scarce semantic segmentation aims to obtain meaningful pixel-level discrimination with scarce or even no manual annotations, and its crux is how to utilize unlabeled data via pseudo-label learning. Typical works focus on ameliorating error-prone pseudo-labeling, e.g., by utilizing only high-confidence pseudo labels and filtering out low-confidence ones. We think differently and instead exhaust informative semantics from multiple probably correct candidate labels. This gives our method the ability to learn accurately even when pseudo labels are unreliable. In this paper, we propose Adaptive Fuzzy Positive Learning (A-FPL) for correctly learning unlabeled data in a plug-and-play fashion, adaptively encouraging fuzzy positive predictions while suppressing highly probable negatives. Specifically, A-FPL comprises two main components: (1) fuzzy positive assignment (FPA), which adaptively assigns fuzzy positive labels to each pixel while ensuring their quality through a T-value adaption algorithm; and (2) fuzzy positive regularization (FPR), which restricts the predictions of fuzzy positive categories to be larger than those of negative categories. Being conceptually simple yet practically effective, A-FPL remarkably alleviates interference from wrong pseudo labels, progressively refining semantic discrimination. Theoretical analysis and extensive experiments on various training settings, with consistent performance gains, justify the superiority of our approach. Codes are at A-FPL. [ABSTRACT FROM AUTHOR]
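The abstract describes two components: FPA, which picks a small adaptive set of candidate ("fuzzy positive") labels per pixel, and FPR, which pushes their scores above all negative scores. The following is a minimal NumPy sketch of one plausible reading of this idea; the cumulative-probability threshold `tau`, the cap `t_max`, and the hinge form of the penalty are illustrative assumptions, not the paper's actual T-value adaption algorithm or loss.

```python
import numpy as np

def fuzzy_positive_assignment(probs, tau=0.9, t_max=3):
    """FPA sketch (assumed variant): take the smallest set of top-ranked
    categories whose cumulative probability exceeds tau, capped at t_max.
    Returns the indices of the selected fuzzy positive categories."""
    order = np.argsort(probs)[::-1]            # categories, most probable first
    cum = np.cumsum(probs[order])              # running probability mass
    t = int(np.searchsorted(cum, tau)) + 1     # smallest t with mass > tau
    return order[:min(t, t_max)]

def fuzzy_positive_regularization(logits, positives):
    """FPR sketch (assumed hinge form): penalize any fuzzy positive logit
    that falls below the largest negative logit."""
    mask = np.zeros(len(logits), dtype=bool)
    mask[positives] = True
    max_neg = logits[~mask].max()              # strongest competing negative
    return np.maximum(0.0, max_neg - logits[mask]).sum()
```

A quick usage example: for per-pixel probabilities `[0.5, 0.3, 0.15, 0.05]` with `tau=0.9`, the first three categories are selected as fuzzy positives; the regularizer is then zero whenever all three of their logits already exceed the remaining negative logit.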
- Copyright:
Copyright of International Journal of Computer Vision is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)