GAN


Updated 2023-03-13

Self-Supervised One-Shot Learning for Automatic Segmentation of StyleGAN Images

Authors: Ankit Manerikar, Avinash C. Kak

We propose in this paper a framework for automatic one-shot segmentation of synthetic images generated using StyleGANs. By 'one-shot segmentation' we mean that the network carries out semantic segmentation of the images on the fly, that is, as they are being produced at inference time. The implementation of our framework is based on the observation that the multi-scale hidden features produced by a GAN during image synthesis hold useful semantic information that can be utilized for automatic segmentation. With these features, the proposed framework learns to segment synthetic images through a novel self-supervised, contrastive clustering algorithm that projects the generator's hidden features onto a compact feature space for per-pixel classification. This contrastive learner uses a swapped prediction loss for image segmentation that is computed using pixel-wise cluster assignments for the image and its transformed variants. Because the clustering operates on hidden features from an already pre-trained GAN, the pixel-wise feature vectors for one-shot segmentation are learned much faster. We have tested our implementation on a number of standard benchmarks (CelebA, LSUN, PASCAL-Part) for object and part segmentation. The results of our experiments show a segmentation performance that not only outperforms the semi-supervised baseline methods by an average wIoU margin of 1.02% but also improves inference speed by a peak factor of 4.5. Finally, we also show the results of using the proposed framework in the implementation of BagGAN, a GAN-based framework for the production of annotated synthetic baggage X-ray scans for threat detection. This one-shot learning framework was trained and tested on the PIDRay baggage screening benchmark for 5 different threat categories, yielding a segmentation performance close to that of its baseline segmenter.
PDF
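As a rough illustration of the mechanism the abstract describes, the sketch below projects per-pixel generator features into a compact space, scores them against learned cluster prototypes, and trains with a swapped prediction loss between an image and a transformed view. It is only a minimal reading of the abstract: the module and tensor names (`PixelClusterHead`, `hidden_feats`) are illustrative, and the softmax targets stand in for whatever assignment procedure the paper actually uses.

```python
# Minimal sketch, assuming per-pixel hidden features extracted from a pre-trained GAN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelClusterHead(nn.Module):
    """Projects per-pixel GAN features and scores them against K cluster prototypes."""
    def __init__(self, feat_dim: int, proj_dim: int = 64, num_clusters: int = 8):
        super().__init__()
        self.proj = nn.Conv2d(feat_dim, proj_dim, kernel_size=1)        # compact per-pixel embedding
        self.prototypes = nn.Linear(proj_dim, num_clusters, bias=False)  # learned cluster prototypes

    def forward(self, hidden_feats: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.proj(hidden_feats), dim=1)   # (B, D, H, W), unit-norm per pixel
        z = z.permute(0, 2, 3, 1)                         # (B, H, W, D)
        return self.prototypes(z)                         # (B, H, W, K) cluster scores

def swapped_prediction_loss(scores_a, scores_b, temperature: float = 0.1):
    """Swapped prediction: view A predicts view B's (detached) assignments, and vice versa."""
    log_p_a = F.log_softmax(scores_a / temperature, dim=-1)
    log_p_b = F.log_softmax(scores_b / temperature, dim=-1)
    with torch.no_grad():
        q_a = F.softmax(scores_a / temperature, dim=-1)
        q_b = F.softmax(scores_b / temperature, dim=-1)
    return -0.5 * ((q_b * log_p_a).sum(-1).mean() + (q_a * log_p_b).sum(-1).mean())
```

At inference, the per-pixel argmax over the K cluster scores would give the segmentation map, which is what allows labels to be produced on the fly as images are synthesized.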


EHRDiff: Exploring Realistic EHR Synthesis with Diffusion Models

Authors: Hongyi Yuan, Songchi Zhou, Sheng Yu

Electronic health records (EHRs) contain vast biomedical knowledge and are rich resources for developing precision medicine systems. However, due to privacy concerns, researchers have access to only limited high-quality EHR data, which hinders methodological progress. Recent research has explored using generative modelling methods to synthesize realistic EHR data, and most proposed methods are based on the generative adversarial network (GAN) and its variants. Although GAN-style methods achieve state-of-the-art performance in generating high-quality EHR data, they are hard to train and prone to mode collapse. Diffusion models are a recently proposed family of generative modelling methods that set the state of the art in image generation, but their performance in realistic EHR synthesis remains largely unexplored. In this work, we investigate whether the superior performance of diffusion models can translate to the domain of EHR synthesis and propose a novel EHR synthesis method named EHRDiff. Through comprehensive experiments, EHRDiff achieves new state-of-the-art performance in the quality of synthetic EHR data while better protecting the private information in the real training EHRs.
PDF (work in progress)
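The abstract only names the diffusion approach, so the following is a generic DDPM-style training step for vector-valued EHR data, not EHRDiff's actual architecture or noise schedule; `EHR_DIM`, the toy MLP denoiser, and the linear beta schedule are all assumptions made for illustration.

```python
# Hedged sketch of a denoising-diffusion training step on multi-hot EHR code vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

EHR_DIM, T = 1000, 1000                        # placeholder code-vector length and number of steps
betas = torch.linspace(1e-4, 0.02, T)          # simple linear noise schedule (assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                      # toy MLP that predicts the injected noise
    nn.Linear(EHR_DIM + 1, 512), nn.SiLU(),
    nn.Linear(512, EHR_DIM),
)

def diffusion_training_step(x0: torch.Tensor) -> torch.Tensor:
    """x0: (B, EHR_DIM) real EHR vectors; returns the noise-prediction loss."""
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(1)                    # (B, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise      # forward process q(x_t | x_0)
    t_embed = (t.float() / T).unsqueeze(1)                    # crude scalar timestep conditioning
    pred = denoiser(torch.cat([x_t, t_embed], dim=1))
    return F.mse_loss(pred, noise)                            # standard epsilon-prediction objective
```

Sampling would then run the learned reverse process from pure noise and threshold the result back into discrete EHR codes; the paper's exact parameterization may differ.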


Feature Unlearning for Generative Models via Implicit Feedback

Authors: Saemi Moon, Seunghyuk Cho, Dongwoo Kim

We tackle the problem of feature unlearning from a pretrained image generative model. Unlike the common unlearning task, where the unlearning target is a subset of the training set, we aim to unlearn a specific feature, such as the hairstyle in facial images, from a pretrained generative model. As the target feature is only present in a local region of an image, unlearning the entire image from the pretrained model may result in losing other details in the remaining regions of the image. To specify which features to unlearn, we develop an implicit feedback mechanism in which a user selects images containing the target feature. From this implicit feedback, we identify a latent representation corresponding to the target feature and then use that representation to unlearn the generative model. Our framework generalizes to the two well-known families of generative models: GANs and VAEs. Through experiments on the MNIST and CelebA datasets, we show that target features are successfully removed while preserving the fidelity of the original models.
PDF
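One plausible reading of the latent-space idea above, sketched for the GAN case only: estimate a target-feature direction from the user-selected latents, then fine-tune the generator so that latents containing the feature reproduce the output a frozen copy gives for the "feature-removed" latents. The names (`generator`, `selected_z`, `other_z`, `alpha`) and the difference-of-means direction are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of latent-direction feature unlearning for a GAN generator.
import copy
import torch
import torch.nn.functional as F

def estimate_feature_direction(selected_z: torch.Tensor, other_z: torch.Tensor) -> torch.Tensor:
    """Difference of latent means between feature-positive and feature-negative samples."""
    d = selected_z.mean(0) - other_z.mean(0)
    return d / d.norm()

def unlearning_step(generator, frozen_generator, z: torch.Tensor,
                    direction: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Push the trainable generator toward the frozen generator's output for
    latents shifted away from the target-feature direction."""
    z_removed = z - alpha * (z @ direction).unsqueeze(1) * direction  # project the feature component out
    with torch.no_grad():
        target = frozen_generator(z_removed)       # frozen copy renders the "feature-free" image
    return F.mse_loss(generator(z), target)        # the original latent should now yield that image

# Typical setup before fine-tuning (assumed):
# frozen_generator = copy.deepcopy(generator).eval()
```

Because the loss only asks the generator to change where the feature direction matters, details outside the edited region are largely preserved, which matches the locality concern raised in the abstract.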

