I2I Translation


Updated 2022-09-27

Clustering-Based Representation Learning through Output Translation and Its Application to Remote-Sensing Images

Authors: Qinglin Li, Bin Li, Jonathan M Garibaldi, Guoping Qiu

In supervised deep learning, learning good representations for remote-sensing images (RSI) relies on manual annotations. However, in the area of remote sensing, it is hard to obtain huge amounts of labeled data. Recently, self-supervised learning has shown an outstanding capability to learn representations of images, especially via methods of instance discrimination. Compared with instance-discrimination methods, clustering-based methods treat not only the transformations of the same image but also similar images as "positive" samples. In this paper, we propose a new clustering-based method for representation learning. We first introduce a quantity to measure representations' discriminativeness, from which we show that an even distribution requires the most discriminative representations. This provides a theoretical insight into why evenly distributing the images works well. We notice that only the even distributions that preserve representations' neighborhood relations are desirable. Therefore, we develop an algorithm that translates the outputs of a neural network to achieve the goal of evenly distributing the samples while preserving outputs' neighborhood relations. Extensive experiments have demonstrated that our method can learn representations that are as good as or better than state-of-the-art approaches, and that our method performs computationally efficiently and robustly on various RSI datasets.
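The abstract does not give the paper's exact discriminativeness measure, but the intuition — representations spread evenly on the unit hypersphere are maximally discriminative, while collapsed clusters are not — can be sketched with a simple average-pairwise-cosine-distance proxy. The function name and the metric itself are illustrative assumptions, not the authors' formula:

```python
import numpy as np

def discriminativeness(embeddings: np.ndarray) -> float:
    """Average pairwise cosine distance of L2-normalized embeddings.

    Higher values mean the representations are more spread out on the
    unit hypersphere, i.e. easier to tell apart (more discriminative).
    This is a stand-in proxy, not the paper's actual quantity.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                             # pairwise cosine similarities
    n = len(z)
    off_diag = sim[~np.eye(n, dtype=bool)]    # drop self-similarities
    return float(1.0 - off_diag.mean())

rng = np.random.default_rng(0)
collapsed = rng.normal(loc=1.0, scale=0.01, size=(64, 32))  # near one point
spread = rng.normal(loc=0.0, scale=1.0, size=(64, 32))      # roughly uniform
assert discriminativeness(spread) > discriminativeness(collapsed)
```

An evenly distributed set of outputs scores near 1 under this proxy, while a collapsed set scores near 0, matching the abstract's claim that even distribution demands the most discriminative representations.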
PDF 14 pages

Click here to view paper screenshots

Mega: Moving Average Equipped Gated Attention

Authors: Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer

The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.
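The two ingredients named in the abstract — an exponential moving average that injects position-aware local dependencies before attention, and fixed-length chunking that makes the cost linear in sequence length — can be sketched as below. This is a simplified scalar-decay illustration, not Mega's actual multi-dimensional damped EMA with learned parameters; `alpha` and the function names are assumptions:

```python
import numpy as np

def ema_embed(x: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Exponential moving average over the time axis of x (seq_len, dim).

    Each position mixes the current input with the running average of the
    past, so the result carries position-aware local context even before
    any (position-agnostic) attention is applied.
    """
    y = np.zeros_like(x)
    h = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        h = alpha * x[t] + (1.0 - alpha) * h
        y[t] = h
    return y

def chunk_lengths(seq_len: int, chunk: int) -> list:
    """Fixed-length chunking: attention runs within each chunk, so the
    cost drops from O(seq_len**2) to O(seq_len * chunk)."""
    return [min(chunk, seq_len - s) for s in range(0, seq_len, chunk)]

smoothed = ema_embed(np.ones((5, 2)), alpha=0.5)
assert abs(smoothed[0, 0] - 0.5) < 1e-9       # first step: half input, half zero state
assert sum(chunk_lengths(10, 4)) == 10        # chunks cover the whole sequence
```

In the actual model the EMA output feeds a single-head gated attention unit; the chunked variant trades a small amount of quality for linear time and space, as the abstract notes.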
PDF 13 pages, 4 figures and 7 tables. Fixed typos and added source code link

Click here to view paper screenshots

Knowledge Distillation to Ensemble Global and Interpretable Prototype-Based Mammogram Classification Models

Authors: Chong Wang, Yuanhong Chen, Yuyuan Liu, Yu Tian, Fengbei Liu, Davis J. McCarthy, Michael Elliott, Helen Frazer, Gustavo Carneiro

State-of-the-art (SOTA) deep learning mammogram classifiers, trained with weakly-labelled images, often rely on global models that produce predictions with limited interpretability, which is a key barrier to their successful translation into clinical practice. On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity. We address these two issues with the proposal of BRAIxProtoPNet++, which adds interpretability to a global model by ensembling it with a prototype-based model. BRAIxProtoPNet++ distills the knowledge of the global model when training the prototype-based model with the goal of increasing the classification accuracy of the ensemble. Moreover, we propose an approach to increase prototype diversity by guaranteeing that all prototypes are associated with different training images. Experiments on weakly-labelled private and public datasets show that BRAIxProtoPNet++ has higher classification accuracy than SOTA global and prototype-based models. Using lesion localisation to assess model interpretability, we show BRAIxProtoPNet++ is more effective than other prototype-based models and post-hoc explanation of global models. Finally, we show that the diversity of the prototypes learned by BRAIxProtoPNet++ is superior to SOTA prototype-based approaches.
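The distillation step described above — the accurate global model acting as teacher for the prototype-based student — is commonly implemented as a temperature-scaled KL divergence between softened class distributions. The abstract does not spell out BRAIxProtoPNet++'s exact loss, so the following is a generic knowledge-distillation sketch with an assumed temperature `T`:

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits: np.ndarray, teacher_logits: np.ndarray,
            T: float = 2.0) -> float:
    """KL(teacher || student) on softened distributions, scaled by T**2.

    Driving this loss to zero pushes the prototype-based student toward
    the global teacher's predictive distribution.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[2.0, 0.5]])   # global model favours the first class
assert kd_loss(teacher, teacher) < 1e-9            # agreement costs nothing
assert kd_loss(np.array([[0.5, 2.0]]), teacher) > 0  # disagreement is penalised
```

In practice this term is combined with the usual classification loss on the student, so the ensemble gains the teacher's accuracy while keeping the prototype model's case-based interpretability.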
PDF 11 pages, 4 figures, conference. Springer, Cham, 2022

Click here to view paper screenshots

Author: 木子已
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 as the source when reposting!