Unsupervised / Semi-supervised / Contrastive Learning


Updated 2023-02-08

On the Role of Contrastive Representation Learning in Adversarial Robustness: An Empirical Study

Authors: Fatemeh Ghofrani, Mehdi Yaghouti, Pooyan Jamshidi

Self-supervised contrastive learning has alleviated one of the major obstacles in deep learning: the cost of annotation. This advantage comes at the price of selecting false negative pairs in the absence of label information. Supervised contrastive learning has emerged as an extension of contrastive learning that eliminates this issue. However, aside from accuracy, little is understood about how adversarial training affects the representations learned by these schemes. In this work, we use supervised learning as a baseline to comprehensively study the robustness of contrastive and supervised contrastive learning under different adversarial training scenarios. We begin by examining how adversarial training affects the representations learned in hidden layers, finding that it produces more redundant representations across the layers of the model. Our results on the CIFAR-10 and CIFAR-100 image classification benchmarks show that this redundancy is strongly reduced by adversarial fine-tuning applied to the contrastive learning scheme, leading to more robust representations. Adversarial fine-tuning, however, is much less effective for the supervised contrastive and supervised learning schemes. Our code is released at https://github.com/softsys4ai/CL-Robustness.
PDF
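The study above fine-tunes contrastively pre-trained encoders with adversarial training. As a rough illustration of that setting, the following is a minimal PyTorch sketch of PGD-based adversarial fine-tuning of a pre-trained encoder with a linear classifier head; the function names, hyperparameters (eps, alpha, steps), and training step are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Hedged sketch: PGD adversarial fine-tuning of a pre-trained encoder + linear head.
# `encoder` and `classifier` are assumed nn.Modules; hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-infinity PGD adversarial examples against the current model."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # gradient-sign step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project to eps-ball
    return x_adv.detach()

def adversarial_finetune_step(encoder, classifier, x, y, optimizer):
    """One adversarial fine-tuning step on top of a (contrastively) pre-trained encoder."""
    model = nn.Sequential(encoder, classifier)   # hypothetical backbone + linear head composition
    x_adv = pgd_attack(model, x, y)              # craft adversarial examples against current weights
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)      # train on adversarial examples
    loss.backward()
    optimizer.step()
    return loss.item()
```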


CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning

Authors: Xia Xu, Jochen Triesch

Self-supervised representation learning (SSRL) methods have shown great success in computer vision. In recent studies, augmentation-based contrastive learning methods have been proposed for learning representations that are invariant or equivariant to pre-defined data augmentation operations. However, invariant or equivariant features favor only specific downstream tasks, depending on the augmentations chosen. They may result in poor performance when a downstream task requires the counterpart of those features (e.g., when the task is to recognize hand-written digits while the model learns to be invariant to in-plane image rotations, rendering it incapable of distinguishing “9” from “6”). This work introduces Contrastive Invariant and Predictive Equivariant Representation learning (CIPER). CIPER comprises both invariant and equivariant learning objectives using one shared encoder and two different output heads on top of it. One output head is a projection head with a state-of-the-art contrastive objective to encourage invariance to augmentations. The other is a prediction head estimating the augmentation parameters, capturing equivariant features. Both heads are discarded after training and only the encoder is used for downstream tasks. We evaluate our method on static image tasks and time-augmented image datasets. Our results show that CIPER outperforms a baseline contrastive method on various tasks, especially when the downstream task requires the encoding of augmentation-related information.
PDF 10 pages, 4 figures, 3 tables
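The abstract describes a shared encoder with a contrastive projection head (for invariance) and an augmentation-parameter prediction head (for equivariance). Below is a minimal PyTorch sketch of such a two-head setup under stated assumptions: the head widths, the use of InfoNCE for the contrastive term, regressing augmentation parameters from the concatenated features of the two views, and the loss weight `lam` are all illustrative choices, not CIPER's exact design.

```python
# Hedged sketch of a CIPER-style two-head model; architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadModel(nn.Module):
    """Shared encoder with a contrastive projection head (invariance)
    and an augmentation-parameter prediction head (equivariance)."""
    def __init__(self, encoder, feat_dim, proj_dim=128, n_aug_params=4):
        super().__init__()
        self.encoder = encoder                                  # kept for downstream tasks
        self.proj_head = nn.Sequential(                         # discarded after training
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))
        self.pred_head = nn.Linear(2 * feat_dim, n_aug_params)  # discarded after training

    def forward(self, x1, x2):
        h1, h2 = self.encoder(x1), self.encoder(x2)
        z1 = F.normalize(self.proj_head(h1), dim=1)
        z2 = F.normalize(self.proj_head(h2), dim=1)
        aug_pred = self.pred_head(torch.cat([h1, h2], dim=1))   # predict augmentation parameters
        return z1, z2, aug_pred

def info_nce(z1, z2, temperature=0.5):
    """Symmetric InfoNCE loss; (z1[i], z2[i]) are positive pairs within the batch."""
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def ciper_loss(z1, z2, aug_pred, aug_target, lam=1.0):
    """Invariance (contrastive) plus equivariance (augmentation regression) objectives."""
    return info_nce(z1, z2) + lam * F.mse_loss(aug_pred, aug_target)
```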


Transform, Contrast and Tell: Coherent Entity-Aware Multi-Image Captioning

Authors: Jingqiang Chen

Coherent entity-aware multi-image captioning aims to generate coherent captions for multiple adjacent images in a news document. Coherence relationships exist among adjacent images because they often describe the same entities or events. These relationships are important for entity-aware multi-image captioning but are neglected in entity-aware single-image captioning. Most existing work focuses on single-image captioning, and multi-image captioning has not been explored before. Hence, this paper proposes a coherent entity-aware multi-image captioning model that makes use of these coherence relationships. The model consists of a Transformer-based caption generation model and two contrastive learning-based coherence mechanisms. The generation model produces a caption by attending to the image and its accompanying text. The horizontal coherence mechanism makes the caption coherent with the captions of adjacent images, while the vertical coherence mechanism makes the caption coherent with the image and the accompanying text. To evaluate coherence between captions, two coherence evaluation metrics are proposed. A new dataset, DM800K, is constructed; it contains more images per document than the two existing datasets GoodNews and NYT800K and is better suited to multi-image captioning. Experiments on the three datasets show that the proposed model outperforms six baselines on single-image captioning evaluations, and that its captions are more coherent than those of the baselines according to both the coherence metrics and human evaluations.
PDF 28 pages, 9 tables, 3 figures
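The abstract does not spell out the coherence mechanisms, so the following is only a hypothetical sketch of what a contrastive "horizontal coherence" term could look like: caption embeddings of adjacent images from the same document are treated as positive pairs, and other captions in the batch act as negatives. The embedding inputs, the temperature, and the pairing scheme are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: InfoNCE-style horizontal coherence term over caption embeddings.
import torch
import torch.nn.functional as F

def horizontal_coherence_loss(cap_emb, adj_cap_emb, temperature=0.1):
    """Row i of `cap_emb` and `adj_cap_emb` holds embeddings of captions for
    adjacent images in the same document (positive pair); other rows in the
    batch serve as negatives."""
    a = F.normalize(cap_emb, dim=1)
    b = F.normalize(adj_cap_emb, dim=1)
    logits = a @ b.t() / temperature                     # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```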


Author: 木子已
Copyright: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 when reposting!