Domain Adaptation


Updated 2023-01-16

Self-Training Guided Disentangled Adaptation for Cross-Domain Remote Sensing Image Semantic Segmentation

Authors: Qi Zhao, Shuchang Lyu, Binghao Liu, Lijiang Chen, Hongbo Zhao

Deep convolutional neural network (DCNN)-based remote sensing (RS) image semantic segmentation has achieved great success in many real-world applications such as geographic element analysis. However, a strong dependency on annotated data from a specific scene makes it hard for DCNNs to fit different RS scenes. To solve this problem, recent works have gradually focused on the cross-domain RS image semantic segmentation task. In this task, differences in ground sampling distance, remote sensing sensor variation, and different geographical landscapes are the three main factors causing dramatic domain shift between source and target images. To reduce the negative influence of domain shift, we propose a self-training guided disentangled adaptation network (ST-DASegNet). We first propose a source student backbone and a target student backbone to extract the source-style and target-style features, respectively, for both source and target images. For the intermediate output feature maps of each backbone, we adopt adversarial learning for alignment. Then, we propose a domain disentangled module to extract the universal feature and purify the distinct feature of the source-style and target-style features. Finally, these two features are fused and serve as input to the source student decoder and target student decoder to generate the final predictions. Based on our proposed domain disentangled module, we further propose an exponential moving average (EMA) based cross-domain separated self-training mechanism to ease the instability and disadvantageous effects of adversarial optimization. Extensive experiments and analysis on benchmark RS datasets show that ST-DASegNet outperforms previous methods on the cross-domain RS image semantic segmentation task and achieves state-of-the-art (SOTA) results. Our code is available at https://github.com/cv516Buaa/ST-DASegNet.
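The EMA-based self-training mentioned above keeps a teacher model whose weights are an exponential moving average of the student's weights, which smooths out the instability of adversarial optimization. A minimal sketch of that update rule (the function name and plain-dict parameters are illustrative, not from the ST-DASegNet code):

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """Update teacher weights as an EMA of student weights.

    Each teacher weight becomes: alpha * teacher + (1 - alpha) * student,
    so the teacher tracks the student slowly and changes smoothly.
    """
    return {
        name: alpha * teacher_params[name] + (1.0 - alpha) * student_params[name]
        for name in teacher_params
    }

# Toy example with a single scalar "weight":
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])  # 0.9
```

In practice the same update is applied tensor-by-tensor over a deep network's state dict after each training step, and the slowly-moving teacher generates the pseudo-labels for self-training.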
PDF 18 pages, 9 figures, 8 tables, 22 formulas


DINF: Dynamic Instance Noise Filter for Occluded Pedestrian Detection

Authors: Li Xiang, He Miao, Luo Haibo, Xiao Jiajie

Occlusion is the biggest challenge in pedestrian detection. RCNN-based detectors extract instance features by cropping rectangular regions of interest in the feature maps. However, the visible pixels of occluded objects are limited, so the rectangular instance feature is mixed with a lot of instance-irrelevant noise. Besides, by counting the number of instances with different degrees of overlap in the CrowdHuman dataset, we find that the numbers of severely overlapping objects and slightly overlapping objects are unbalanced, which may exacerbate the challenges posed by occlusion. Regarding the noise issue, from the perspective of denoising, an iterable dynamic instance noise filter (DINF) is proposed for RCNN-based pedestrian detectors to improve the signal-to-noise ratio of the instance feature. Simulating the wavelet denoising process, we use the instance feature vector to generate dynamic convolutional kernels that transform the RoI features into a domain in which near-zero values represent the noise information. Then, soft thresholding with channel-wise adaptive thresholds is applied to set the near-zero values to zero and filter out the noise. For the imbalance issue, we propose an IoU-Focal factor (IFF) to modulate the contributions of well-regressed and poorly regressed boxes to the loss during training, paying more attention to the minority of severely overlapping objects. Extensive experiments conducted on CrowdHuman and CityPersons demonstrate that our methods can help RCNN-based pedestrian detectors achieve state-of-the-art performance.
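The soft-thresholding step described above is the standard operation from wavelet denoising: shrink every value toward zero by a threshold and clip the remainder at zero, so small (noise-like) values vanish while large values survive. A minimal NumPy sketch with per-channel thresholds (the array shapes and threshold values are illustrative, not from the DINF paper):

```python
import numpy as np

def soft_threshold(x, tau):
    # sign(x) * max(|x| - tau, 0): values with |x| <= tau are suppressed to zero,
    # larger values are shrunk toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Toy "feature map": 2 channels x 2 positions, with a per-channel threshold.
feat = np.array([[0.05, -0.8],
                 [0.30, -0.02]])
tau = np.array([[0.1],
                [0.1]])  # channel-wise adaptive thresholds, broadcast over positions
print(soft_threshold(feat, tau))  # near-zero entries become exactly 0
```

In DINF the thresholds are not fixed constants as here but are predicted adaptively per channel from the instance feature; the shrinkage operation itself is the same.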
PDF 15 pages, 8 figures


Author: 木子已
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 when reposting!