GAN


2022-06-09 update

Progressive GANomaly: Anomaly detection with progressively growing GANs

Authors: Djennifer K. Madzia-Madzou, Hugo J. Kuijf

In medical imaging, obtaining large amounts of labeled data is often a hurdle, because pathologies are scarce and annotations are costly. Anomaly detection addresses this by detecting unseen abnormal data while being trained only on normal (unannotated) data. Several algorithms based on generative adversarial networks (GANs) exist to perform this task, yet they are limited by the training instability of GANs. This paper proposes a new method that combines an existing method, GANomaly, with progressively growing GANs. The latter are known to train more stably, as reflected in their ability to generate high-resolution images. The method is tested on Fashion MNIST, the Medical Out-of-Distribution Analysis Challenge (MOOD) data, and in-house brain MRI, using patches of size 16x16 and 32x32. Progressive GANomaly outperforms a one-class SVM and regular GANomaly on Fashion MNIST. Artificial anomalies of varying intensity and diameter are inserted into the MOOD images, and progressive GANomaly detects the most anomalies across these intensities and sizes. Additionally, progressive GANomaly is shown to produce better intermediate reconstructions. On the in-house brain MRI dataset, regular GANomaly outperformed the other methods.
Published at the SPIE Medical Imaging 2022: Image Processing conference.
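A minimal sketch of the GANomaly idea the paper builds on (not the authors' code): an encoder-decoder-encoder generator trained only on normal patches, with the anomaly score taken as the distance between the two latent codes. The progressive growing of the networks, the discriminator, and the training losses used in the paper are omitted here; the patch size, layer widths, and latent dimension are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # assumes 1-channel 32x32 input patches
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),   # 32 -> 16
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),  # 16 -> 8
            nn.Conv2d(64, latent_dim, 8),                    # 8 -> 1
        )

    def forward(self, x):
        return self.net(x).flatten(1)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 64, 8), nn.ReLU(),    # 1 -> 8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),      # 8 -> 16
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),       # 16 -> 32
        )

    def forward(self, z):
        return self.net(z[:, :, None, None])

class GANomalyG(nn.Module):
    """Generator: encode -> reconstruct -> re-encode."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.enc1 = Encoder(latent_dim)
        self.dec = Decoder(latent_dim)
        self.enc2 = Encoder(latent_dim)

    def forward(self, x):
        z = self.enc1(x)          # latent code of the input patch
        x_hat = self.dec(z)       # reconstruction from normal-data manifold
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat

def anomaly_score(model: GANomalyG, x: torch.Tensor) -> torch.Tensor:
    """Higher score = more anomalous: distance between the two latent codes."""
    with torch.no_grad():
        _, z, z_hat = model(x)
    return (z - z_hat).pow(2).mean(dim=1)

# Example: score a batch of 32x32 patches
# scores = anomaly_score(GANomalyG(), torch.randn(8, 1, 32, 32))
```

Because the generator only learns to reconstruct normal patches, anomalous patches are mapped back onto the normal-data manifold, and the resulting latent mismatch drives the score up.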

Paper screenshots

Progressive Distillation for Fast Sampling of Diffusion Models

Authors: Tim Salimans, Jonathan Ho

Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.
Published as a conference paper at ICLR 2022.
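A minimal sketch of the halving loop described above, not the authors' implementation: a student is trained so that one of its sampling steps matches two steps of the deterministic teacher sampler, and the procedure is repeated to halve the step count each round. The toy `Denoiser`, the cosine signal/noise schedule, the simplified DDIM-style update, and the `data_loader` iterator of clean batches are all assumptions for illustration.

```python
import copy
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy x-prediction network conditioned on the timestep t in (0, 1]."""
    def __init__(self, dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, dim))

    def forward(self, z, t):
        return self.net(torch.cat([z, t[:, None]], dim=-1))

def alpha_sigma(t):
    # simple cosine signal/noise schedule (an assumption, not the paper's exact choice)
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)

def ddim_step(model, z, t, s):
    """One deterministic (DDIM-like) step from time t to earlier time s."""
    a_t, s_t = alpha_sigma(t)
    a_s, s_s = alpha_sigma(s)
    x_pred = model(z, t)                               # predicted clean sample
    eps = (z - a_t[:, None] * x_pred) / s_t[:, None]   # implied noise
    return a_s[:, None] * x_pred + s_s[:, None] * eps

def distill_round(teacher, data_loader, n_teacher_steps, iters=1000, lr=1e-4):
    """Train a student whose single step matches two teacher steps."""
    student = copy.deepcopy(teacher)
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    n_student = n_teacher_steps // 2
    for _ in range(iters):
        x = next(data_loader)                          # batch of clean samples (assumed iterator)
        b = x.shape[0]
        # pick a random student step: times t -> s, with a teacher midpoint in between
        i = torch.randint(1, n_student + 1, (b,))
        t = i.float() / n_student
        s = t - 1.0 / n_student
        mid = t - 0.5 / n_student
        a_t, s_t = alpha_sigma(t)
        z_t = a_t[:, None] * x + s_t[:, None] * torch.randn_like(x)
        with torch.no_grad():                          # two teacher steps define the target
            z_mid = ddim_step(teacher, z_t, t, mid)
            z_target = ddim_step(teacher, z_mid, mid, s)
            # convert the target back into an x-prediction target for the student
            a_s, s_s = alpha_sigma(s)
            x_target = (z_target - (s_s / s_t)[:, None] * z_t) \
                       / (a_s - s_s * a_t / s_t)[:, None]
        loss = ((student(z_t, t) - x_target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

# Progressive halving, e.g. 8192 -> 4096 -> ... -> 4 steps:
# teacher, steps = trained_denoiser, 8192
# while steps > 4:
#     teacher = distill_round(teacher, data_loader, steps)
#     steps //= 2
```

Each round costs roughly one training run on the smaller step budget, which is why the full procedure stays comparable in cost to training the original model.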

Paper screenshots

Author: 木子已
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit 木子已 when reposting!