Detection / Segmentation / Tracking


Updated 2022-05-20

Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection

Authors:Yuxin Fang, Shusheng Yang, Shijie Wang, Yixiao Ge, Ying Shan, Xinggang Wang

We present an approach to efficiently and effectively adapt a masked image modeling (MIM) pre-trained vanilla Vision Transformer (ViT) for object detection, which is based on our two novel observations: (i) A MIM pre-trained vanilla ViT encoder can work surprisingly well in the challenging object-level recognition scenario even with randomly sampled partial observations, e.g., only 25% $\sim$ 50% of the input embeddings. (ii) In order to construct multi-scale representations for object detection from a single-scale ViT, a randomly initialized compact convolutional stem supplants the pre-trained large-kernel patchify stem, and its intermediate features can naturally serve as the higher-resolution inputs of a feature pyramid network without further upsampling or other manipulations. Meanwhile, the pre-trained ViT is regarded only as the 3$^{rd}$ stage of our detector's backbone rather than the whole feature extractor, resulting in a ConvNet-ViT hybrid feature extractor. The proposed detector, named MIMDet, enables a MIM pre-trained vanilla ViT to outperform hierarchical Swin Transformer by 2.5 box AP and 2.6 mask AP on COCO, and achieves better results than the previous best adapted vanilla ViT detector using a more modest fine-tuning recipe while converging 2.8$\times$ faster. Code and pre-trained models are available at https://github.com/hustvl/MIMDet.
PDF v2: more analysis & stronger results. Preprint. Work in progress. Code and pre-trained models are available at https://github.com/hustvl/MIMDet
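To make observation (i) concrete, below is a minimal, hypothetical sketch of feeding a ViT encoder only a random subset (e.g., 25%–50%) of the patch embeddings produced by the stem. Function and variable names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: randomly keep a fraction of patch tokens before the ViT encoder.
import torch

def random_partial_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5):
    """tokens: (B, N, C) patch embeddings; returns the kept subset and indices."""
    B, N, C = tokens.shape
    num_keep = max(1, int(N * keep_ratio))
    # Independent random permutation per sample; keep the first num_keep tokens.
    noise = torch.rand(B, N, device=tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :num_keep]            # (B, num_keep)
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
    return kept, keep_idx

# Example: 196 patch tokens, keep 50% before passing them to the ViT encoder.
x = torch.randn(2, 196, 768)
kept, idx = random_partial_tokens(x, keep_ratio=0.5)
print(kept.shape)  # torch.Size([2, 98, 768])
```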

Paper screenshots

GroupViT: Semantic Segmentation Emerges from Text Supervision

Authors:Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang

Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring the grouping mechanism back into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively with state-of-the-art transfer-learning methods requiring greater levels of supervision. We open-source our code at https://github.com/NVlabs/GroupViT.
PDF CVPR 2022. Project page and code: https://jerryxu.net/GroupViT
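For readers unfamiliar with the training objective, the following is a generic sketch of the symmetric image-text contrastive (InfoNCE) loss used to train such models jointly with a text encoder; it is CLIP-style and not specific to GroupViT, and all names here are assumptions.

```python
# Generic image-text contrastive loss sketch (not the authors' exact code).
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07):
    """img_emb, txt_emb: (B, D) embeddings of matched image-text pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: match each image to its text and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```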

Paper screenshots

Achieving Domain Generalization in Underwater Object Detection by Domain Mixup and Contrastive Learning

Authors:Pinhao Song, Linhui Dai, Peipei Yuan, Hong Liu, Runwei Ding

The performance of existing underwater object detection methods degrades severely when facing domain shift caused by complicated underwater environments. Due to the limited number of domains in the dataset, deep detectors easily memorize a few seen domains, which leads to low generalization ability. There are two common ideas to improve domain generalization performance. First, a detector trained on as many domains as possible tends to be domain-invariant. Second, images with the same semantic content in different domains should have equivalent hidden features. This paper further explores these two ideas and proposes a domain generalization framework (named DMC) that learns how to generalize across domains via Domain Mixup and Contrastive Learning. First, based on the underwater image formation model, an image captured in one underwater environment can be expressed as a linear transformation of the corresponding image in another underwater environment. Thus, a style transfer model, which outputs a linear transformation matrix instead of the whole image, is proposed to transform images from one source domain to another, enriching the domain diversity of the training data. Second, a mixup operation interpolates features from different domains, sampling new domains on the domain manifold. Third, a contrastive loss is selectively applied to features from different domains to force the model to learn domain-invariant features while retaining discriminative capacity. With our method, detectors become robust to domain shift. In addition, a domain generalization benchmark for detection, S-UODAC2020, is set up to measure the performance of our method. Comprehensive experiments on S-UODAC2020 and two object recognition benchmarks (PACS and VLCS) demonstrate that the proposed method learns domain-invariant representations and outperforms other domain generalization methods.
PDF
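As an illustration of the feature-level mixup idea described above (interpolating between two domains to sample new points on the domain manifold), here is a minimal sketch; shapes, names, and the Beta-distributed mixing coefficient are assumptions for illustration, not the paper's exact recipe.

```python
# Sketch of feature-level domain mixup between two domains of the same images.
import torch

def domain_mixup(feat_a: torch.Tensor, feat_b: torch.Tensor, alpha: float = 1.0):
    """feat_a, feat_b: (B, C, H, W) features of the same images rendered in two domains."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Convex combination of the two domain features = a new "virtual" domain.
    return lam * feat_a + (1.0 - lam) * feat_b, lam

mixed, lam = domain_mixup(torch.randn(4, 256, 32, 32), torch.randn(4, 256, 32, 32))
```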

Paper screenshots
