2022-08-12 Update
Prototypical Contrastive Language Image Pretraining
Authors: Delong Chen, Zhao Wu, Fan Liu, Zaiquan Yang, Yixiang Huang, Yiping Bao, Erjin Zhou
Contrastive Language Image Pretraining (CLIP) has received widespread attention since its learned representations transfer well to various downstream tasks. During CLIP training, the InfoNCE objective aims to align positive image-text pairs and separate negative ones. In this paper, we show a representation grouping effect during this process: the InfoNCE objective indirectly groups semantically similar representations together via randomly emerged within-modal anchors. We introduce Prototypical Contrastive Language Image Pretraining (ProtoCLIP) to enhance such grouping by boosting its efficiency and increasing its robustness against the modality gap. Specifically, ProtoCLIP sets up prototype-level discrimination between the image and text spaces, which efficiently transfers higher-level structural knowledge. We further propose Prototypical Back Translation (PBT) to decouple representation grouping from representation alignment, resulting in effective learning of meaningful representations under a large modality gap. PBT also enables us to introduce additional external teachers with richer prior knowledge. ProtoCLIP is trained with an online episodic training strategy, which allows it to scale to unlimited amounts of data. Trained on Conceptual Captions, ProtoCLIP achieves a +5.81% improvement in ImageNet linear probing and a +2.01% improvement in ImageNet zero-shot classification. On the larger YFCC dataset, ProtoCLIP matches the performance of CLIP with 4$\times$ fewer pretraining epochs. Code is available at https://github.com/megvii-research/protoclip.
PDF Preprint
Click here to view paper screenshots
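To make the prototype-level discrimination described in the ProtoCLIP abstract above more concrete, here is a minimal sketch that clusters one modality's embeddings into prototypes and trains the other modality to predict those prototype assignments. This is an illustrative sketch only, not the authors' implementation (see the linked repository for that); the function names, the use of k-means, and all hyperparameters are assumptions.

```python
# Illustrative sketch (not the authors' code) of prototype-level cross-modal
# discrimination in the spirit of ProtoCLIP. Prototypes are built with k-means
# here purely for illustration; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def build_prototypes(embeddings: torch.Tensor, k: int = 64):
    """Cluster L2-normalized embeddings of one modality into k prototypes."""
    feats = F.normalize(embeddings, dim=-1).detach().cpu().numpy()
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    centers = torch.tensor(km.cluster_centers_, dtype=embeddings.dtype)
    labels = torch.tensor(km.labels_, dtype=torch.long)
    return F.normalize(centers, dim=-1), labels

def prototype_discrimination_loss(features: torch.Tensor,
                                  prototypes: torch.Tensor,
                                  targets: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """Features from one modality predict prototype assignments derived from
    the other modality (cross-entropy over prototype similarities)."""
    logits = F.normalize(features, dim=-1) @ prototypes.t() / temperature
    return F.cross_entropy(logits, targets)

# Usage sketch: image_feats, text_feats are [N, D] batch embeddings.
# text_protos, text_labels = build_prototypes(text_feats)
# loss_i2t = prototype_discrimination_loss(
#     image_feats, text_protos.to(image_feats.device), text_labels)
```

In the episodic setup described in the abstract, the prototypes would be recomputed for each training episode; the symmetric text-to-image-prototype term is omitted here for brevity.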
ProCST: Boosting Semantic Segmentation Using Progressive Cyclic Style-Transfer
Authors: Shahaf Ettedgui, Shady Abu-Hussein, Raja Giryes
Using synthetic data to train neural networks that perform well on real-world data is an important task, as it can reduce the need for costly data annotation. Yet synthetic and real-world data have a domain gap. Reducing this gap, also known as domain adaptation, has been widely studied in recent years. Closing the domain gap between the source (synthetic) and target (real) data by directly performing the adaptation between the two is challenging. In this work, we propose a novel two-stage framework for improving domain adaptation techniques on image data. In the first stage, we progressively train a multi-scale neural network to perform image translation from the source domain to the target domain. We denote the transformed data as “Source in Target” (SiT). Then, the generated SiT data is used as the input to any standard UDA approach. This new data has a reduced domain gap from the desired target domain, which helps the applied UDA approach close the gap further. We demonstrate the effectiveness of our method via a comparison to other leading UDA and image-to-image translation techniques when used as SiT generators. Moreover, we demonstrate the improvement our framework brings to three state-of-the-art UDA methods for semantic segmentation, HRDA, DAFormer, and ProDA, on two UDA tasks, GTA5 to Cityscapes and Synthia to Cityscapes.
PDF Code available at https://github.com/shahaf1313/ProCST
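As a rough illustration of the two-stage idea in the ProCST abstract above, the sketch below first maps source images into the target style ("Source in Target", SiT) and then hands the SiT data to an arbitrary UDA routine. The components style_transfer_net and run_uda are hypothetical placeholders, not the interfaces of the linked repository.

```python
# Illustrative sketch (not the ProCST code) of the two-stage pipeline:
# stage 1 translates source images toward the target style ("SiT"),
# stage 2 feeds the SiT images into any standard UDA method.
import torch

def make_sit_images(source_images: torch.Tensor,
                    style_transfer_net: torch.nn.Module) -> torch.Tensor:
    """Stage 1: translate source-domain images into target-domain style.
    style_transfer_net is a placeholder for a trained multi-scale translator."""
    style_transfer_net.eval()
    with torch.no_grad():
        return style_transfer_net(source_images)

def train_with_sit(source_images, source_labels, target_images,
                   style_transfer_net, run_uda):
    """Stage 2: plug SiT images (keeping the original source labels)
    into any UDA training routine (run_uda is a placeholder callable)."""
    sit_images = make_sit_images(source_images, style_transfer_net)
    return run_uda(sit_images, source_labels, target_images)
```

The point of the design is that the UDA method itself is untouched; only its source-domain input is replaced by SiT images whose gap to the target domain is already reduced.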