2023-03-04 Update
Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using Pixel-aligned Reconstruction Priors
Authors: Zhangyang Xiong, Di Kang, Derong Jin, Weikai Chen, Linchao Bao, Xiaoguang Han
Fast generation of high-quality 3D digital humans is important to a vast number of applications ranging from entertainment to professional concerns. Recent advances in differentiable rendering have enabled the training of 3D generative models without requiring 3D ground truths. However, the quality of the generated 3D humans still has much room to improve in terms of both fidelity and diversity. In this paper, we present Get3DHuman, a novel 3D human framework that can significantly boost the realism and diversity of the generated outcomes by only using a limited budget of 3D ground-truth data. Our key observation is that the 3D generator can profit from human-related priors learned through 2D human generators and 3D reconstructors. Specifically, we bridge the latent space of Get3DHuman with that of StyleGAN-Human via a specially-designed prior network, where the input latent code is mapped to the shape and texture feature volumes spanned by the pixel-aligned 3D reconstructor. The outcomes of the prior network are then leveraged as the supervisory signals for the main generator network. To ensure effective training, we further propose three tailored losses applied to the generated feature volumes and the intermediate feature maps. Extensive experiments demonstrate that Get3DHuman greatly outperforms the other state-of-the-art approaches and can support a wide range of applications including shape interpolation, shape re-texturing, and single-view reconstruction through latent inversion.
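A conceptual sketch of the prior-guided training described in the abstract: a 2D human image generated by StyleGAN-Human is lifted to shape/texture feature volumes by a pixel-aligned reconstructor, and those volumes supervise the 3D generator. All interfaces and the simple L1 losses below are assumptions for illustration, not the paper's three tailored losses.

```python
import torch
import torch.nn.functional as F

def prior_guided_step(z, stylegan_human, reconstructor, generator, optimizer):
    """One training step of the sketched pipeline (hypothetical interfaces)."""
    with torch.no_grad():
        image = stylegan_human(z)                     # 2D human prior image
        shape_gt, tex_gt = reconstructor(image)       # pixel-aligned feature volumes
    shape_pred, tex_pred = generator(z)               # 3D generator being trained
    loss = F.l1_loss(shape_pred, shape_gt) + F.l1_loss(tex_pred, tex_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```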
PDF
Click here to view paper screenshots
One-Shot Face Video Re-enactment using Hybrid Latent Spaces of StyleGAN2
Authors: Trevine Oorloff, Yaser Yacoob
While recent research has progressively overcome the low-resolution constraint of one-shot face video re-enactment with the help of StyleGAN's high-fidelity portrait generation, these approaches rely on at least one of the following: explicit 2D/3D priors, optical flow based warping as motion descriptors, off-the-shelf encoders, etc., which constrain their performance (e.g., inconsistent predictions, inability to capture fine facial details and accessories, poor generalization, artifacts). We propose an end-to-end framework for simultaneously supporting face attribute edits, facial motions and deformations, and facial identity control for video generation. It employs a hybrid latent space that encodes a given frame into a pair of latents: an identity latent, $\mathcal{W}_{ID}$, and a facial deformation latent, $\mathcal{S}_F$, which reside in the $W+$ and $SS$ spaces of StyleGAN2, respectively. This combines the impressive editability-distortion trade-off of $W+$ with the high disentanglement of $SS$. These hybrid latents are fed to the StyleGAN2 generator to achieve high-fidelity face video re-enactment at $1024^2$. Furthermore, the model supports the generation of realistic re-enactment videos with other latent-based semantic edits (e.g., beard, age, make-up, etc.). Qualitative and quantitative analyses performed against state-of-the-art methods demonstrate the superiority of the proposed approach.
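A minimal sketch of the hybrid-latent re-enactment loop described above: one identity latent from the source frame, one deformation latent per driving frame. The encoder names and the generator's style-space interface are assumptions, not the authors' released code.

```python
import torch

@torch.no_grad()
def reenact(source_frame, driving_frames, enc_id, enc_def, generator):
    """One-shot re-enactment sketch (hypothetical interfaces)."""
    w_id = enc_id(source_frame)                  # identity latent in W+
    outputs = []
    for frame in driving_frames:
        s_f = enc_def(frame)                     # facial deformation latent in SS
        outputs.append(generator(w_id, style_offsets=s_f))
    return outputs
```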
PDF The project page is located at https://trevineoorloff.github.io/FaceVideoReenactment_HybridLatents.io/
Click here to view paper screenshots
Attribute-Specific Manipulation Based on Layer-Wise Channels
Authors: Yuanjie Yan, Jian Zhao, Furao Shen
Image manipulation in the latent space of a pre-trained StyleGAN can control the semantic attributes of the generated images. Recently, some studies have focused on detecting channels with specific properties to directly manipulate the latent code, but this is limited by the entanglement of the latent space. To detect attribute-specific channels, we propose a novel detection method based on pre-trained classifiers. We analyse the gradients layer by layer in the style space; the magnitudes of the gradients indicate the channels' responses to specific attributes, and the style codes of these channels control separate attributes in their layers. We choose the channels with the top-$k$ gradients in the maximum-response layer to control a specific attribute, and implement single-channel and multi-channel manipulations for a given attribute. Our methods can accurately detect relevant channels for a large number of face attributes. Extensive qualitative and quantitative results demonstrate that the proposed methods outperform state-of-the-art methods in generalization and scalability.
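A minimal sketch of the gradient-based channel detection outlined above. `synthesize` (per-layer style vectors to image) and `classifier` (scalar attribute logit) are assumed interfaces; the paper's exact procedure may differ.

```python
import torch

def detect_attribute_channels(synthesize, classifier, styles, k=5):
    """Rank style-space channels by classifier-gradient magnitude, layer by
    layer, and pick the top-k channels in the most responsive layer (sketch)."""
    styles = [s.detach().clone().requires_grad_(True) for s in styles]
    score = classifier(synthesize(styles))          # scalar attribute response
    grads = torch.autograd.grad(score, styles)      # one gradient tensor per layer
    # Layer whose channels respond most strongly to the attribute.
    layer = max(range(len(grads)), key=lambda i: grads[i].abs().max().item())
    channels = grads[layer].abs().flatten().topk(k).indices
    return layer, channels

# Manipulation then shifts only the selected channels, e.g.:
#   styles[layer].view(-1)[channels] += step * grads[layer].view(-1)[channels].sign()
```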
PDF
Click here to view paper screenshots
MorphGANFormer: Transformer-based Face Morphing and De-Morphing
Authors: Na Zhang, Xudong Liu, Xin Li, Guo-Jun Qi
Semantic face image manipulation has received increasing attention in recent years. StyleGAN-based approaches to face morphing are among the leading techniques; however, they often suffer from noticeable blurring and artifacts as a result of the uniform attention in the latent feature space. In this paper, we propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods. Our contributions are threefold. First, inspired by GANformer, we introduce a bipartite structure to exploit long-range interactions in face images for iterative propagation of information from latent variables to salient facial features. Special loss functions are designed to support the optimization of face morphing. Second, we extend the study of transformer-based face morphing to de-morphing by presenting an effective defense strategy that uses a reference image with the same generator as MorphGANFormer. Such de-morphing is conceptually similar to unmixing of hyperspectral images, but operates in the latent (instead of pixel) space. Third, for the first time, we address the fundamental vulnerability-detectability trade-off in face morphing studies. We argue that neither doppelganger nor random pair selection is optimal, and that a Lagrangian multiplier-based approach should be used to achieve an improved trade-off between recognition vulnerability and attack detectability.
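As a toy illustration of the unmixing analogy mentioned above (a linear assumption for exposition only, not the paper's actual generator math): if a morph latent were approximately a convex combination of the two contributing latents, then knowing the reference latent and the mixing weight $\alpha$ would let the unknown accomplice latent be recovered by inverting the combination,

$$w_{\text{morph}} \approx \alpha\, w_{\text{accomplice}} + (1-\alpha)\, w_{\text{reference}} \;\;\Longrightarrow\;\; \hat{w}_{\text{accomplice}} \approx \frac{w_{\text{morph}} - (1-\alpha)\, w_{\text{reference}}}{\alpha}.$$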
PDF 13 pages, 13 figures
Click here to view paper screenshots
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network
Authors: Xiaojian Yuan, Kejiang Chen, Jie Zhang, Weiming Zhang, Nenghai Yu, Yang Zhang
Model inversion (MI) attacks, which reconstruct training data from public models, have raised increasing privacy concerns. MI attacks can be formalized as an optimization problem that searches for private data in a certain space. Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space, and can successfully reconstruct even high-dimensional data (e.g., face images). However, these generative MI attacks do not fully exploit the potential capabilities of the target model, still leading to a vague and coupled search space, i.e., different classes of images are coupled in the search space. Besides, the widely used cross-entropy loss in these attacks suffers from gradient vanishing. To address these problems, we propose the Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN). First, a top-n selection strategy is proposed to provide pseudo-labels for public data, which are used to guide the training of the cGAN. In this way, the search space is decoupled for different classes of images. Then a max-margin loss is introduced to improve the search process on the subspace of a target class. Extensive experiments demonstrate that our PLG-MI attack significantly improves the attack success rate and visual quality for various datasets and models; notably, it is 2-3$\times$ better than state-of-the-art attacks under large distributional shifts. Our code is available at: https://github.com/LetheSec/PLG-MI-Attack.
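A rough sketch of the two ingredients named in the abstract: top-n pseudo-labelling of public images with the target model, and a max-margin loss in place of cross-entropy. The data handling and `target_model` interface are assumptions, not the released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def top_n_pseudo_labels(target_model, public_images, n=30):
    """For each class, keep the n public images the target model scores highest
    and assign them that class as a pseudo-label (sketch; batch in practice)."""
    probs = F.softmax(target_model(public_images), dim=1)   # (N, C)
    selected = {}
    for c in range(probs.shape[1]):
        top_idx = probs[:, c].topk(n).indices
        selected[c] = public_images[top_idx]
    return selected  # class -> images used to train the conditional GAN

def max_margin_loss(logits, target_class):
    """Push the target logit above the largest competing logit, avoiding the
    vanishing gradients of cross-entropy."""
    target = logits[:, target_class]
    others = logits.clone()
    others[:, target_class] = float('-inf')
    return (others.max(dim=1).values - target).mean()
```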
PDF Accepted by AAAI 2023
Click here to view paper screenshots
Towards Universal Fake Image Detectors that Generalize Across Generative Models
Authors: Utkarsh Ojha, Yuheng Li, Yong Jae Lee
With generative models proliferating at a rapid rate, there is a growing need for general purpose fake image detectors. In this work, we first show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models when trained to detect GAN fake images. Upon analysis, we find that the resulting classifier is asymmetrically tuned to detect patterns that make an image fake. The real class becomes a sink class holding anything that is not fake, including generated images from models not accessible during training. Building upon this discovery, we propose to perform real-vs-fake classification without learning; i.e., using a feature space not explicitly trained to distinguish real from fake images. We use nearest neighbor and linear probing as instantiations of this idea. When given access to the feature space of a large pretrained vision-language model, the very simple baseline of nearest neighbor classification has surprisingly good generalization ability in detecting fake images from a wide variety of generative models; e.g., it improves upon the SoTA by +15.07 mAP and +25.90% acc when tested on unseen diffusion and autoregressive models.
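A minimal sketch of the nearest-neighbour baseline described above, using CLIP image features as the untrained feature space; the exact backbone, banks, and distance used in the paper may differ.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

@torch.no_grad()
def embed(images):
    """L2-normalised CLIP image features for a batch of preprocessed images."""
    feats = model.encode_image(images.to(device))
    return feats / feats.norm(dim=-1, keepdim=True)

@torch.no_grad()
def is_fake(query, real_bank, fake_bank):
    """Label a query image by whichever bank holds its nearest neighbour
    (cosine similarity). real_bank/fake_bank are (N, D) matrices built with
    embed(); the feature space itself is never trained for this task."""
    q = embed(query)                       # (1, D)
    sim_real = (q @ real_bank.T).max()
    sim_fake = (q @ fake_bank.T).max()
    return bool(sim_fake > sim_real)
```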
PDF
Click here to view paper screenshots
Gradient Adjusting Networks for Domain Inversion
Authors: Erez Sheffi, Michael Rotman, Lior Wolf
StyleGAN2 was demonstrated to be a powerful image generation engine that supports semantic editing. However, in order to manipulate a real-world image, one first needs to retrieve its corresponding latent representation in StyleGAN's latent space, which is decoded to an image as close as possible to the desired image. For many real-world images, such a latent representation does not exist, which necessitates tuning of the generator network. We present a per-image optimization method that tunes a StyleGAN2 generator by applying a local edit to the generator's weights, resulting in almost perfect inversion while still allowing image editing, by keeping the rest of the mapping between an input latent representation tensor and an output image relatively intact. The method is based on one-shot training of a set of shallow update networks (a.k.a. Gradient Modification Modules) that modify the layers of the generator. After training the Gradient Modification Modules, a modified generator is obtained by a single application of these networks to the original parameters, and the previous editing capabilities of the generator are maintained. Our experiments show a sizable gap in performance over the current state of the art in this very active domain. Our code is available at \url{https://github.com/sheffier/gani}.
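A simplified stand-in for the idea of a shallow module that produces a small, local update to one generator weight tensor; the paper's actual module design and training objective differ, so treat this only as a sketch of the mechanism.

```python
import torch
import torch.nn as nn

class GradientModificationModule(nn.Module):
    """Produces a small low-rank additive update for one generator weight
    tensor (a simplified stand-in for the paper's shallow update networks)."""
    def __init__(self, weight_shape, rank=4):
        super().__init__()
        out_ch = weight_shape[0]
        rest = int(torch.tensor(weight_shape[1:]).prod())
        self.u = nn.Parameter(torch.zeros(out_ch, rank))        # zero-init: no change at start
        self.v = nn.Parameter(torch.randn(rest, rank) * 0.01)
        self.shape = tuple(weight_shape)

    def forward(self, weight):
        return weight + (self.u @ self.v.T).view(self.shape)

# Per-image inversion: optimise the modules so the edited generator reproduces
# the target image, then apply each module once to bake in the new weights:
#   new_weight = module(old_weight)   # editing behaviour elsewhere stays largely intact
```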
PDF
Click here to view paper screenshots
RGI: robust GAN-inversion for mask-free image inpainting and unsupervised pixel-wise anomaly detection
Authors: Shancong Mou, Xiaoyi Gu, Meng Cao, Haoping Bai, Ping Huang, Jiulong Shan, Jianjun Shi
Generative adversarial networks (GANs), trained on a large-scale image dataset, can be a good approximator of the natural image manifold. GAN-inversion, using a pre-trained generator as a deep generative prior, is a promising tool for image restoration under corruptions. However, the performance of GAN-inversion can be limited by a lack of robustness to unknown gross corruptions, i.e., the restored image might easily deviate from the ground truth. In this paper, we propose a Robust GAN-inversion (RGI) method with a provable robustness guarantee to achieve image restoration under unknown \textit{gross} corruptions, where a small fraction of pixels are completely corrupted. Under mild assumptions, we show that the restored image and the identified corrupted region mask converge asymptotically to the ground truth. Moreover, we extend RGI to Relaxed-RGI (R-RGI) for generator fine-tuning to mitigate the gap between the GAN-learned manifold and the true image manifold while avoiding trivial overfitting to the corrupted input image, which further improves image restoration and corrupted-region mask identification. The proposed RGI/R-RGI method unifies two important applications with state-of-the-art (SOTA) performance: (i) mask-free semantic inpainting, where the corruptions are unknown missing regions and the restored background can be used to restore the missing content; and (ii) unsupervised pixel-wise anomaly detection, where the corruptions are unknown anomalous regions and the retrieved mask can be used as the anomalous regions' segmentation mask.
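A plausible shape for the robust inversion objective, stated as an assumption consistent with the abstract rather than the paper's exact formulation: the latent code and a sparse corruption mask are optimized jointly, so the reconstruction only needs to match the observation outside the masked region,

$$\min_{z,\,M}\ \big\|(\mathbf{1}-M)\odot\big(G(z)-y\big)\big\|_2^2 \;+\; \lambda\,\|M\|_1, \qquad M\in[0,1]^{H\times W},$$

where $y$ is the corrupted observation, $G$ the pretrained generator, and $M$ the estimated corruption mask. R-RGI would additionally fine-tune the weights of $G$ under a regularized version of the same objective to narrow the gap to the true image manifold.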
PDF
Click here to view paper screenshots
Learning Input-agnostic Manipulation Directions in StyleGAN with Text Guidance
Authors: Yoonjeon Kim, Hyunsu Kim, Junho Kim, Yunjey Choi, Eunho Yang
With the advantages of fast inference and human-friendly flexible manipulation, image-agnostic style manipulation via text guidance enables new applications that were not previously available. The state-of-the-art text-guided image-agnostic manipulation method embeds the representation of each channel of StyleGAN independently in the Contrastive Language-Image Pre-training (CLIP) space, and provides it in the form of a Dictionary so that the channel-wise manipulation direction can be found quickly at inference time. However, in this paper we argue that this Dictionary, constructed by controlling each channel individually, cannot accommodate the versatility of text guidance, since the collective and interactive relations among multiple channels are not considered. Indeed, we show that it fails to discover a large portion of the manipulation directions that can be found by existing methods, which manually manipulate the latent space without text. To alleviate this issue, we propose a novel method that learns a Dictionary, whose entries correspond to the representations of single channels, by taking into account the manipulation effect arising from the interaction with multiple other channels. We demonstrate that our strategy resolves the inability of previous methods to find diverse known directions from unsupervised methods and unknown directions from random text, while maintaining real-time inference speed and disentanglement ability.
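A minimal sketch of the inference-time dictionary lookup that this line of work relies on: a text prompt is embedded with CLIP and scored against per-channel entries to obtain a channel-wise manipulation direction. The shape and normalization of `dictionary` are assumptions; the paper's learned dictionary additionally accounts for multi-channel interactions.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def direction_from_text(prompt, dictionary):
    """Look up a style-space manipulation direction for a text prompt.
    `dictionary` is an (n_channels, clip_dim) float matrix whose rows (assumed
    L2-normalised) represent each channel's image-space effect in CLIP space."""
    tokens = clip.tokenize([prompt]).to(device)
    t = model.encode_text(tokens).float()
    t = t / t.norm(dim=-1, keepdim=True)             # (1, clip_dim)
    scores = dictionary @ t.squeeze(0)               # cosine similarity per channel
    return scores / scores.abs().max()               # relative per-channel strength
# The returned vector is scaled and added to the style code to apply the edit.
```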
PDF Accepted to ICLR 2023
Click here to view paper screenshots
Adversarial Attack with Raindrops
Authors: Jiyuan Liu, Bingyi Lu, Mingkang Xiong, Tao Zhang, Huilin Xiong
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples, which are usually designed artificially to fool DNNs but rarely exist in real-world scenarios. In this paper, we study adversarial examples caused by raindrops, to demonstrate that plenty of natural phenomena can act as adversarial attackers on DNNs. Moreover, we present a new approach to generate adversarial raindrops, denoted AdvRD, using a generative adversarial network (GAN) to simulate natural raindrops. The images crafted by AdvRD look very similar to real-world raindrop images, are statistically close to the distribution of true raindrop images and, more importantly, can mount strong adversarial attacks on state-of-the-art DNN models. On the other hand, we show that adversarial training using our AdvRD images can significantly improve the robustness of DNNs to real-world raindrop attacks. Extensive experiments demonstrate that the images crafted by AdvRD are visually and statistically close to natural raindrop images, can work as strong attackers against DNN models, and also help improve the robustness of DNNs to raindrop attacks.
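A minimal sketch of the adversarial-training side of the abstract: each batch is augmented with GAN-simulated raindrop versions of the same images, under the assumption that rain should not change the label. `raindrop_gan` is an assumed image-to-image interface, not the authors' released model.

```python
import torch

def adversarial_training_step(model, images, labels, raindrop_gan, optimizer, criterion):
    """One training step mixing clean images with synthetic-raindrop images (sketch)."""
    with torch.no_grad():
        rainy = raindrop_gan(images)                 # add simulated raindrops
    batch = torch.cat([images, rainy], dim=0)
    targets = torch.cat([labels, labels], dim=0)     # raindrops should not change the label
    optimizer.zero_grad()
    loss = criterion(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```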
PDF 10 pages, 7 figures, CVPR 2023
Click here to view paper screenshots
Synthesizing Mixed-type Electronic Health Records using Diffusion Models
Authors: Taha Ceritli, Ghadeer O. Ghosheh, Vinod Kumar Chauhan, Tingting Zhu, Andrew P. Creagh, David A. Clifton
Electronic Health Records (EHRs) contain sensitive patient information, which presents privacy concerns when sharing such data. Synthetic data generation is a promising solution to mitigate these risks, often relying on deep generative models such as Generative Adversarial Networks (GANs). However, recent studies have shown that diffusion models offer several advantages over GANs, such as generating more realistic synthetic data and exhibiting more stable training across data modalities, including images, text, and sound. In this work, we investigate the potential of diffusion models for generating realistic mixed-type tabular EHRs, comparing the TabDDPM model with existing methods on four datasets in terms of data quality, utility, privacy, and augmentation. Our experiments demonstrate that TabDDPM outperforms the state-of-the-art models across all evaluation metrics except privacy, confirming the trade-off between privacy and utility.
PDF
Click here to view paper screenshots
Level Up the Deepfake Detection: a Method to Effectively Discriminate Images Generated by GAN Architectures and Diffusion Models
Authors: Luca Guarnera, Oliver Giudice, Sebastiano Battiato
The image deepfake detection task has been widely addressed by the scientific community in order to discriminate real images from those generated by Artificial Intelligence (AI) models: a binary classification task. In this work, the deepfake detection and recognition task was investigated by collecting a dedicated dataset of pristine images and fake ones generated by 9 different Generative Adversarial Network (GAN) architectures and by 4 additional Diffusion Models (DMs). A hierarchical multi-level approach was then introduced to solve three different deepfake detection and recognition tasks: (i) real vs. AI-generated; (ii) GANs vs. DMs; (iii) recognition of the specific AI architecture. Experimental results demonstrated, in each case, more than 97% classification accuracy, outperforming state-of-the-art methods.
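A minimal sketch of the decision cascade implied by the three levels above; the four classifiers are assumed to be trained separately, and only the hierarchy itself is shown.

```python
def recognize(image, real_vs_ai, gan_vs_dm, gan_architecture, dm_architecture):
    """Hierarchical deepfake recognition cascade (sketch, hypothetical classifiers)."""
    if real_vs_ai(image) == "real":          # level (i): real vs AI-generated
        return "real"
    if gan_vs_dm(image) == "GAN":            # level (ii): GAN vs diffusion model
        return gan_architecture(image)       # level (iii): which GAN architecture
    return dm_architecture(image)            # level (iii): which diffusion model
```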
PDF