I2I Translation


Updated 2022-03-15

Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation

Authors: Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, Ning Xu, Kaisheng Ma

Remarkable achievements have been attained with Generative Adversarial Networks (GANs) in image-to-image translation. However, due to their tremendous number of parameters, state-of-the-art GANs usually suffer from low efficiency and bulky memory usage. To tackle this challenge, this paper first investigates GAN performance from a frequency perspective. The results show that GANs, especially small GANs, lack the ability to generate high-quality high-frequency information. To address this problem, we propose a novel knowledge distillation method referred to as wavelet knowledge distillation. Instead of directly distilling the images generated by the teacher, wavelet knowledge distillation first decomposes the images into different frequency bands with a discrete wavelet transform and then distills only the high-frequency bands. As a result, the student GAN can focus its learning on the high-frequency bands. Experiments demonstrate that our method achieves 7.08 times compression and 6.80 times acceleration on CycleGAN with almost no performance drop. Additionally, we study the relation between discriminators and generators, showing that compressing the discriminator can promote the performance of the compressed generator.
Accepted by CVPR 2022
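
The core idea is easy to sketch: decompose both the teacher's and the student's outputs with a discrete wavelet transform and penalize differences only in the high-frequency subbands. Below is a minimal PyTorch sketch using a multi-level Haar DWT; the function names, the L1 penalty, and the number of levels are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def haar_dwt2(x):
    """One level of 2D Haar DWT on a batch (N, C, H, W); H and W must be even.
    Returns the low-frequency band LL and the high-frequency bands (LH, HL, HH)."""
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, (lh, hl, hh)

def wavelet_distillation_loss(student_img, teacher_img, levels=2):
    """Distill only the high-frequency wavelet bands of the teacher's output.
    The L1 comparison and number of levels are assumptions for illustration."""
    loss = 0.0
    s, t = student_img, teacher_img
    for _ in range(levels):
        s_ll, s_high = haar_dwt2(s)
        t_ll, t_high = haar_dwt2(t)
        for sh, th in zip(s_high, t_high):
            loss = loss + F.l1_loss(sh, th)
        s, t = s_ll, t_ll  # descend into the low-frequency band for the next level
    return loss
```

In training, a term like this would be added to the student generator's usual adversarial and cycle-consistency losses, with the teacher's output detached so gradients flow only through the student.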

Paper screenshot

Image Translation using Texture Co-occurrence and Spatial Self-Similarity for Texture Debiasing

Authors: Myeongkyun Kang, Dongkyu Won, Miguel Luna, Philip Chikontwe, Kyung Soo Hong, June Hong Ahn, Sang Hyun Park

Classification models trained on datasets with texture bias usually perform poorly on out-of-distribution samples, since biased representations are embedded into the model. Recently, various debiasing methods have attempted to disentangle biased representations, but discarding texture-biased features without altering other relevant information remains a challenging task. In this paper, we propose a novel texture debiasing approach that generates additional training images using the content of a source image and the texture of a target image with a different semantic label, explicitly mitigating texture bias when training a classifier. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss, while preserving the content details of the source image with a spatial self-similarity loss. Both the generated and original training images are combined to train a classifier that is robust against inconsistent texture bias. We evaluate on five datasets with known texture biases to demonstrate our method's ability to mitigate texture bias; in all cases, it outperforms existing state-of-the-art methods.
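
Of the two objectives named in the abstract, the spatial self-similarity loss is the simpler to illustrate: compare the pairwise cosine-similarity structure of feature maps from the source and the generated image, so content layout is preserved while texture is free to change. The sketch below assumes features come from some shared encoder; the function names and the L1 comparison are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def self_similarity(feat):
    """Pairwise cosine similarity between all spatial positions of a feature map.
    feat: (N, C, H, W) -> similarity matrix of shape (N, H*W, H*W)."""
    n, c, h, w = feat.shape
    f = F.normalize(feat.view(n, c, h * w), dim=1)  # unit-norm feature per position
    return torch.bmm(f.transpose(1, 2), f)

def spatial_self_similarity_loss(feat_source, feat_generated):
    """Match the self-similarity structure of the generated image to the source,
    preserving spatial content while leaving texture unconstrained."""
    return F.l1_loss(self_similarity(feat_generated), self_similarity(feat_source))
```

In practice, feat_source and feat_generated would be features of the source and translated images from the same (often frozen) encoder, and this term would be combined with the texture co-occurrence loss computed against the target image's texture statistics.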

Paper screenshot

Author: Harvey
Copyright: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit Harvey when reposting!