Detection / Segmentation / Tracking


2022-10-12 Update

Improving Long-tailed Object Detection with Image-Level Supervision by Multi-Task Collaborative Learning

Authors:Bo Li, Yongqiang Yao, Jingru Tan, Xin Lu, Fengwei Yu, Ye Luo, Jianwei Lu

Data in real-world object detection often exhibits a long-tailed distribution. Existing solutions tackle this problem by mitigating the competition between the head and tail categories. However, due to the scarcity of training samples, tail categories are still unable to learn discriminative representations. Bringing more data into the training may alleviate the problem, but collecting instance-level annotations is an excruciating task. In contrast, image-level annotations are easily accessible but not fully exploited. In this paper, we propose a novel framework, CLIS (multi-task Collaborative Learning with Image-level Supervision), which leverages image-level supervision to enhance detection ability in a multi-task collaborative way. Specifically, our framework contains an object detection task (consisting of an instance-classification task and a localization task) and an image-classification task, responsible for utilizing the two types of supervision. The tasks are trained collaboratively through three key designs: (1) task-specialized sub-networks that learn specific representations of different tasks without feature entanglement; (2) a siamese sub-network for the image-classification task that shares its knowledge with the instance-classification task, resulting in feature enrichment of detectors; (3) a contrastive learning regularization that maintains representation consistency, bridging the feature gaps between the two types of supervision. Extensive experiments are conducted on the challenging LVIS dataset. Without sophisticated loss engineering, CLIS achieves an overall AP of 31.1 with a 10.1-point improvement on tail categories, establishing a new state-of-the-art. Code will be available at https://github.com/waveboo/CLIS.
PDF

Click here to view the paper screenshots
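The contrastive consistency regularization described in the abstract can be pictured with a minimal sketch (not the authors' code; the feature pairing, temperature, and function name are assumptions): instance-level features from the detection branch are pulled toward the matching image-level features from the siamese branch, and pushed away from features of other images in the batch, in an InfoNCE-style loss.

```python
# Hypothetical sketch of a contrastive consistency loss between the
# instance-classification branch and the siamese image-classification branch.
import torch
import torch.nn.functional as F

def contrastive_consistency_loss(inst_feats: torch.Tensor,
                                 img_feats: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """inst_feats, img_feats: (N, D) tensors; row i of each is assumed to
    come from the same image, forming a positive pair."""
    inst = F.normalize(inst_feats, dim=1)
    img = F.normalize(img_feats, dim=1)
    logits = inst @ img.t() / temperature                # (N, N) similarity matrix
    targets = torch.arange(inst.size(0), device=inst.device)
    return F.cross_entropy(logits, targets)              # diagonal entries are the positives

# Usage: loss = contrastive_consistency_loss(det_branch_feats, siamese_feats)
```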

Lightweight Transformer Backbone for Medical Object Detection

Authors:Yifan Zhang, Haoyu Dong, Nicholas Konz, Hanxue Gu, Maciej A. Mazurowski

Lesion detection in digital breast tomosynthesis (DBT) is an important and challenging problem characterized by a low prevalence of images containing tumors. Due to this label-scarcity problem, large deep learning models and computationally intensive algorithms are likely to fail when applied to this task. In this paper, we present a practical yet lightweight backbone to improve the accuracy of tumor detection. Specifically, we propose a novel modification of the vision transformer (ViT) on image feature patches that connects the feature patches of a tumor with the healthy background of breast images, forming a more robust backbone for tumor detection. To the best of our knowledge, ours is the first work to use a Transformer backbone for object detection in medical imaging. Our experiments show that this model can considerably improve the accuracy of lesion detection and reduce the amount of labeled data required by a typical ViT. We further show that, with additional augmented tumor data, our model significantly outperforms the Faster R-CNN model and the state-of-the-art Swin Transformer model.
PDF

Click here to view the paper screenshots
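The backbone idea (patch tokens from the whole breast image attending to one another, so tumor patches are linked with healthy background patches) can be sketched as below. This is an illustrative toy, not the paper's architecture: the class name, layer sizes, patch size, and the omission of positional embeddings are all assumptions.

```python
# Hypothetical lightweight ViT-style backbone that returns a 2-D feature map
# suitable for a detection head (positional embeddings omitted for brevity).
import torch
import torch.nn as nn

class LightweightViTBackbone(nn.Module):
    def __init__(self, in_ch=1, embed_dim=192, patch=16, depth=4, heads=3):
        super().__init__()
        # Non-overlapping patch embedding via a strided convolution.
        self.patch_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           dim_feedforward=embed_dim * 2,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.patch_embed(x)                    # (B, C, H/16, W/16)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) patch tokens
        tokens = self.encoder(tokens)              # global attention links tumor and background patches
        return tokens.transpose(1, 2).reshape(b, c, h, w)  # feature map for a detector head

# Usage: feats = LightweightViTBackbone()(torch.randn(1, 1, 512, 512))
```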

Hypergraph Convolutional Networks for Weakly-Supervised Semantic Segmentation

Authors:Jhony H. Giraldo, Vincenzo Scarrica, Antonino Staiano, Francesco Camastra, Thierry Bouwmans

Semantic segmentation is a fundamental topic in computer vision. Several deep learning methods have been proposed for semantic segmentation with outstanding results. However, these models require a large number of densely annotated images. To address this problem, we propose a new algorithm that uses HyperGraph Convolutional Networks for Weakly-supervised Semantic Segmentation (HyperGCN-WSS). Our algorithm constructs spatial and k-Nearest Neighbor (k-NN) graphs from the images in the dataset to generate the hypergraphs. We then train a specialized HyperGraph Convolutional Network (HyperGCN) architecture using weak signals. The outputs of the HyperGCN serve as pseudo-labels, which are later used to train a DeepLab model for semantic segmentation. HyperGCN-WSS is evaluated on the PASCAL VOC 2012 dataset for semantic segmentation, using scribbles or clicks as weak signals. Our algorithm shows competitive performance against previous methods.
PDF Accepted at the IEEE International Conference on Image Processing 2022

Click here to view the paper screenshots
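A rough sketch of the graph-construction step described above, assuming per-pixel (or per-superpixel) feature vectors are already available; the hyperedge grouping and the HyperGCN itself are omitted, and the helper name and neighbor counts are illustrative rather than taken from the paper.

```python
# Hypothetical construction of the spatial graph and the k-NN feature graph
# that feed the hypergraph in a HyperGCN-WSS-style pipeline.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_graphs(features: np.ndarray, height: int, width: int, k: int = 8):
    """features: (H*W, D) array of per-pixel (or per-superpixel) descriptors."""
    # Spatial graph: connect each pixel to its nearest neighbors in (row, col) space.
    coords = np.stack(np.meshgrid(np.arange(height), np.arange(width),
                                  indexing="ij"), axis=-1).reshape(-1, 2)
    spatial_adj = kneighbors_graph(coords, n_neighbors=4, mode="connectivity")
    # Feature graph: connect pixels with similar appearance/embedding.
    knn_adj = kneighbors_graph(features, n_neighbors=k, mode="connectivity")
    return spatial_adj, knn_adj   # sparse adjacency matrices, one per graph
```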
