Updated 2022-09-02
Supervised Contrastive Learning with Hard Negative Samples
Authors: Ruijie Jiang, Thuan Nguyen, Prakash Ishwar, Shuchin Aeron
Unsupervised contrastive learning (UCL) is a self-supervised learning technique that aims to learn a useful representation function by pulling positive samples close to each other while pushing negative samples far apart in the embedding space. To improve the performance of UCL, several works introduced hard-negative unsupervised contrastive learning (H-UCL), which selects "hard" negative samples instead of the random sampling strategy used in UCL. In another line of work, under the assumption that label information is available, supervised contrastive learning (SCL) was developed recently by extending UCL to a fully supervised setting. In this paper, motivated by the effectiveness of hard-negative sampling strategies in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL). Our numerical results demonstrate the effectiveness of H-SCL over both SCL and H-UCL on several image datasets. In addition, we theoretically prove that, under certain conditions, the objective function of H-SCL can be bounded by the objective function of H-UCL but not by the objective function of UCL. Thus, minimizing the H-UCL loss can act as a proxy for minimizing the H-SCL loss, while minimizing the UCL loss cannot. Combined with our numerical finding that H-SCL outperforms other contrastive learning methods, this theoretical result (bounding the H-SCL loss by the H-UCL loss) helps explain why H-UCL outperforms UCL in practice.
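To make the combination concrete, here is a minimal numpy sketch of a loss in the spirit of H-SCL: positives are the samples sharing the anchor's label (the supervised part), and negatives with a different label are reweighted toward the anchor in proportion to `exp(beta * similarity)` (the hard-negative part). The weighting scheme, `temperature`, and `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def h_scl_loss(embeddings, labels, temperature=0.5, beta=1.0):
    """Illustrative hard-negative supervised contrastive loss.

    Positives = same-label samples; negatives = different-label samples,
    reweighted so that negatives closer to the anchor ("harder" ones)
    contribute more to the denominator. NOTE: this is a sketch of the
    idea, not the paper's exact objective.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # cosine similarities, temperature-scaled
    n = len(labels)
    losses = []
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        neg = [j for j in range(n) if labels[j] != labels[i]]
        if not pos or not neg:
            continue
        # hard-negative weights: emphasize negatives near the anchor,
        # normalized to match the scale of uniform sampling
        w = np.exp(beta * sim[i, neg])
        w = w / w.sum() * len(neg)
        neg_term = np.sum(w * np.exp(sim[i, neg]))
        for p in pos:
            losses.append(
                -np.log(np.exp(sim[i, p]) / (np.exp(sim[i, p]) + neg_term))
            )
    return float(np.mean(losses))
```

With tight, well-separated class clusters the loss is small; interleaving the classes raises it, which is the behavior a contrastive objective should exhibit.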
PDF
Click here to view paper screenshots
ProCo: Prototype-aware Contrastive Learning for Long-tailed Medical Image Classification
Authors: Zhixiong Yang, Junwen Pan, Yanzhan Yang, Xiaozhou Shi, Hong-Yu Zhou, Zhicheng Zhang, Cheng Bian
Medical image classification has been widely adopted in medical image analysis. However, due to the difficulty of collecting and labeling data in the medical domain, medical image datasets are usually highly imbalanced. To address this problem, previous works used class sample counts as priors for re-weighting or re-sampling, but the resulting feature representations are usually still not discriminative enough. In this paper, we adopt contrastive learning to tackle the long-tailed medical imbalance problem. Specifically, we first propose the category prototype and the adversarial proto-instance to generate representative contrastive pairs. Then, a prototype recalibration strategy is proposed to address the highly imbalanced data distribution. Finally, a unified proto-loss is designed to train our framework. The overall framework, named Prototype-aware Contrastive learning (ProCo), is unified into a single-stage, end-to-end pipeline to alleviate the imbalance problem in medical image classification, a distinct advance over existing works, which follow the traditional two-stage pipeline. Extensive experiments on two highly imbalanced medical image classification datasets demonstrate that our method outperforms the existing state-of-the-art methods by a large margin.
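To illustrate the prototype idea at the core of such frameworks, here is a minimal numpy sketch of a prototype-based contrastive loss: each class prototype is the normalized mean of its members' embeddings, and every instance is pulled toward its own prototype and pushed away from the others via an InfoNCE-style objective. This is a generic sketch only; ProCo's adversarial proto-instances, recalibration strategy, and exact proto-loss are not reproduced here.

```python
import numpy as np

def prototype_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative prototype-based contrastive loss (not ProCo's exact loss).

    Builds one prototype per class as the normalized mean embedding,
    then scores each instance against all prototypes with an
    InfoNCE-style cross-entropy toward its own class prototype.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    classes = sorted(set(labels))
    protos = np.stack([
        z[[i for i, y in enumerate(labels) if y == c]].mean(axis=0)
        for c in classes
    ])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = z @ protos.T / temperature          # (n_samples, n_classes)
    logits = sim - sim.max(axis=1, keepdims=True)  # stabilize softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = [classes.index(y) for y in labels]
    return float(-np.mean(np.log(probs[np.arange(len(labels)), idx])))
```

One design point this sketch makes visible: because every class contributes exactly one prototype regardless of its sample count, prototype-level contrast is naturally less dominated by head classes than instance-level contrast, which is part of the appeal for long-tailed data.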
PDF Accepted to MICCAI 2022. Zhixiong Yang and Junwen Pan contributed equally to this work