Context-Aware Feature Learning for Noise Robust Person Search.
- Abstract:
Person search aims to localize and identify specific pedestrians in numerous surveillance scene images. In this work, we focus on the noise in person search, which we categorize into scene-inherent noise and human-introduced noise. Scene-inherent noise comes from congestion, occlusion, and illumination changes; human-introduced noise originates from the labeling process. For scene-inherent noise, we propose a novel context contrastive loss that exploits the latent contextual information in scene images. Features from context regions are used to construct contrastive pairs that enforce feature discrimination among pedestrians in a scene image while maintaining feature consistency for the same identity. The network thus learns to distinguish congested and overlapping pedestrians and to extract more robust features. For human-introduced noise, we propose a noise-discovery and noise-suppression training process for mislabeling-robust person search. After the first training pass, the relations among the feature prototypes of different identities are analyzed and mislabeled pedestrians are discovered. During the second training pass, the label noise is suppressed to reduce the negative influence of mislabeled data. Experiments show that the proposed context-aware noise-robust (CANR) person search achieves competitive performance, and ablation studies confirm the effectiveness of CANR. [ABSTRACT FROM AUTHOR]
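
To make the context contrastive idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss in which features of the same identity form positive pairs and features of other context regions in the same scene image act as negatives. The function name, tensor shapes, and temperature value are illustrative assumptions; the paper's exact formulation may differ.

```python
# Hypothetical sketch of a context contrastive loss (not the paper's exact loss).
import torch
import torch.nn.functional as F


def context_contrastive_loss(query, positives, context_feats, temperature=0.1):
    """Pull a pedestrian feature toward same-identity features and push it
    away from other context-region features in the same scene image.

    query:         (D,)   feature of one labeled pedestrian
    positives:     (P, D) features of the same identity from other images
    context_feats: (N, D) features of other pedestrians / context regions
                          in the same scene image, treated as negatives
    """
    query = F.normalize(query, dim=0)
    positives = F.normalize(positives, dim=1)
    context_feats = F.normalize(context_feats, dim=1)

    # Cosine similarities scaled by temperature.
    pos_sim = positives @ query / temperature        # (P,)
    neg_sim = context_feats @ query / temperature    # (N,)

    # InfoNCE: each positive is contrasted against all context negatives.
    logits = torch.cat(
        [pos_sim.unsqueeze(1),
         neg_sim.unsqueeze(0).expand(pos_sim.size(0), -1)], dim=1)
    targets = torch.zeros(pos_sim.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    d = 256
    loss = context_contrastive_loss(torch.randn(d),
                                    torch.randn(3, d),
                                    torch.randn(8, d))
    print(loss.item())
```

Under this reading, minimizing the loss constrains discrimination among pedestrians within a scene while keeping same-identity features consistent across images, which is the behavior the abstract describes.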
Copyright of IEEE Transactions on Circuits & Systems for Video Technology is the property of IEEE and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)