Updated 2022-08-27
AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models
Authors: Nithin Gopalakrishnan Nair, Kangfu Mei, Vishal M Patel
Although many long-range imaging systems are designed to support extended vision applications, a natural obstacle to their operation is degradation due to atmospheric turbulence. Atmospheric turbulence causes significant degradation to image quality by introducing blur and geometric distortion. In recent years, various deep learning-based single image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based, have been proposed in the literature that attempt to remove the distortion in the image. However, some of these methods are difficult to train and often fail to reconstruct facial features, producing unrealistic results especially in the case of high turbulence. Denoising Diffusion Probabilistic Models (DDPMs) have recently gained traction because of their stable training process and their ability to generate high-quality images. In this paper, we propose the first DDPM-based solution for the problem of atmospheric turbulence mitigation. We also propose a fast sampling technique for reducing the inference times of conditional DDPMs. Extensive experiments are conducted on synthetic and real-world data to show the significance of our model. To facilitate further research, all code and pretrained models will be made public after the review process.
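For readers unfamiliar with DDPM sampling, the reverse step the abstract refers to can be sketched as follows. This is a minimal, generic DDPM reverse-diffusion step in NumPy, not the paper's architecture or its fast-sampling technique; `eps_pred` stands in for the trained noise-prediction network, which in a conditional setup would also receive the degraded observation as input.

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_pred, betas, rng):
    """One generic reverse-diffusion step x_t -> x_{t-1}.

    x_t      : current noisy sample
    t        : timestep index (0-based; t == 0 is the final step)
    eps_pred : noise predicted by the (hypothetical) trained network
    betas    : noise schedule, shape (T,)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    # Posterior mean of x_{t-1} given x_t and the predicted noise
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        # Add stochasticity on all but the last step
        sigma = np.sqrt(betas[t])
        return mean + sigma * rng.standard_normal(x_t.shape)
    return mean  # final step is deterministic
```

Sampling would iterate this from `t = T-1` down to `t = 0`, starting from pure Gaussian noise and calling the network for `eps_pred` at each step; fast-sampling schemes like the one the paper proposes reduce the number of such steps.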
PDF Accepted to WACV 2023
Click here to view paper screenshots
Generative Adversarial Network (GAN) based Image-Deblurring
Authors: Yuhong Lu, Nicholas Polydorides
This thesis analyzes the challenging problem of image deblurring based on classical theorems and state-of-the-art methods proposed in recent years. Through spectral analysis, we mathematically show the effectiveness of spectral regularization methods and point out the link between the spectral filtering result and the solution of the regularization optimization objective. For ill-posed problems like image deblurring, the optimization objective contains a regularization term (also called the regularization functional) that encodes our prior knowledge into the solution. We demonstrate how to craft a regularization term by hand using the idea of maximum a posteriori estimation. Then, we point out the limitations of such regularization-based methods and turn to neural-network-based methods. Based on the idea of Wasserstein generative adversarial models, we can train a CNN to learn the regularization functional. Such data-driven approaches are able to capture complexity that may not be analytically modellable. Moreover, with the architectural improvements of recent years, networks have become able to output an image that closely approximates the ground truth given the blurry observation. The Generative Adversarial Network (GAN) builds on this image-to-image translation idea. We analyze the DeblurGAN-v2 method proposed by Orest Kupyn et al. [14] in 2019 through numerical tests and, based on the experimental results and our knowledge, put forward suggestions for improving this method.
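The link between spectral filtering and the regularized objective that the abstract describes can be illustrated with classical Tikhonov deconvolution, where the minimizer of ||Kx - y||² + λ||x||² has a closed form per frequency. This is a minimal sketch of that classical baseline under a periodic-convolution assumption, not the thesis's learned (GAN-based) regularizer.

```python
import numpy as np

def tikhonov_deblur(blurred, kernel_fft, lam):
    """Tikhonov-regularized deconvolution in the Fourier domain.

    Solves argmin_x ||K x - y||^2 + lam * ||x||^2 frequency-by-frequency:
        X(w) = conj(K(w)) * Y(w) / (|K(w)|^2 + lam)
    blurred    : observed image y
    kernel_fft : FFT of the blur kernel K (same shape as the image)
    lam        : regularization weight (larger = smoother solution)
    """
    Y = np.fft.fft2(blurred)
    X = np.conj(kernel_fft) * Y / (np.abs(kernel_fft) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

Setting `lam = 0` recovers naive inverse filtering, which amplifies noise at frequencies where |K| is small; the regularization term damps exactly those frequencies, which is the spectral-filtering view of the optimization objective.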
PDF 90 pages, 35 figures, MS Thesis at the University of Edinburgh
Click here to view paper screenshots
GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation
Authors: Yu Deng, Jiaolong Yang, Jianfeng Xiang, Xin Tong
3D-aware image generative modeling aims to generate 3D-consistent images with explicitly controllable camera poses. Recent works have shown promising results by training neural radiance field (NeRF) generators on unstructured 2D images, but still cannot generate highly realistic images with fine details. A critical reason is that the high memory and computation cost of volumetric representation learning greatly restricts the number of point samples for radiance integration during training. Deficient sampling not only limits the expressive power of the generator to handle fine details but also impedes effective GAN training due to the noise caused by unstable Monte Carlo sampling. We propose a novel approach that regulates point sampling and radiance field learning on 2D manifolds, embodied as a set of learned implicit surfaces in the 3D volume. For each viewing ray, we calculate ray-surface intersections and accumulate their radiance generated by the network. By training and rendering such radiance manifolds, our generator can produce high-quality images with realistic fine details and strong visual 3D consistency.
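The accumulation step the abstract describes, once the ray-surface intersections are found, reduces to standard front-to-back alpha compositing over a small, ordered set of samples (one per manifold) rather than a dense Monte Carlo integral. This is a minimal sketch of that compositing under the assumption that colors and opacities at the intersections have already been produced by the generator; it is not GRAM's intersection search or network.

```python
import numpy as np

def composite_manifold_samples(colors, alphas):
    """Front-to-back alpha compositing over ray-surface intersections.

    colors : (N, 3) radiance at each intersection, ordered near-to-far
    alphas : (N,)   opacity at each intersection, in [0, 1]
    Returns the composited RGB value for the ray.
    """
    # Transmittance reaching each sample: product of (1 - alpha) in front of it
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

Because N is just the number of learned surfaces, the cost per ray is fixed and small, which is how restricting sampling to manifolds sidesteps the memory and noise problems of dense volumetric sampling.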
PDF CVPR 2022 Oral. Project page: https://yudeng.github.io/GRAM/