2022-09-21 Update
HiMFR: A Hybrid Masked Face Recognition Through Face Inpainting
Authors: Md Imran Hosen, Md Baharul Islam
To recognize a masked face, one possible solution is to first restore the occluded part of the face and then apply a face recognition method. Inspired by recent image inpainting methods, we propose an end-to-end hybrid masked face recognition system, namely HiMFR, consisting of three significant parts: a masked face detector, face inpainting, and face recognition. The masked face detector module applies a pretrained Vision Transformer (ViT_b32) to detect whether a face is covered with a mask or not. The inpainting module uses a fine-tuned image inpainting model based on a Generative Adversarial Network (GAN) to restore faces. Finally, the hybrid face recognition module, based on ViT with an EfficientNetB3 backbone, recognizes the faces. We have implemented and evaluated our proposed method on four publicly available datasets (CelebA, SSDMNV2, MAFA, and Pubfig83), together with our locally collected small dataset, namely Face5. Comprehensive experimental results show the efficacy of the proposed HiMFR method with competitive performance. Code is available at https://github.com/mdhosen/HiMFR
PDF 7 pages, 6 figures, International Conference on Pattern Recognition Workshop: Deep Learning for Visual Detection and Recognition
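As a rough illustration of the pipeline described in the abstract (mask detection, GAN-based inpainting, then recognition), here is a minimal PyTorch sketch using timm backbones. The model names, the simple gating between inpainted and original faces, and the plain linear identity head are assumptions for illustration; this is not the authors' released implementation (see the linked GitHub repository for that).

```python
import torch
import torch.nn as nn
import timm


class MaskedFacePipeline(nn.Module):
    """Toy three-stage pipeline: masked-face detection -> inpainting -> recognition."""

    def __init__(self, num_identities: int, inpainter: nn.Module = None):
        super().__init__()
        # Stage 1: binary masked/unmasked classifier on a pretrained ViT-B/32.
        self.mask_detector = timm.create_model(
            "vit_base_patch32_224", pretrained=True, num_classes=2)
        # Stage 2: face restoration; HiMFR uses a fine-tuned GAN-based inpainting
        # model, replaced here by an identity mapping placeholder.
        self.inpainter = inpainter if inpainter is not None else nn.Identity()
        # Stage 3: recognition; approximated by EfficientNet-B3 features plus a
        # linear identity head (the paper's hybrid module also involves a ViT).
        self.backbone = timm.create_model(
            "efficientnet_b3", pretrained=True, num_classes=0)
        self.id_head = nn.Linear(self.backbone.num_features, num_identities)

    def forward(self, faces: torch.Tensor) -> torch.Tensor:
        # faces: (B, 3, 224, 224) normalized face crops
        is_masked = self.mask_detector(faces).argmax(dim=1) == 1
        restored = torch.where(
            is_masked.view(-1, 1, 1, 1), self.inpainter(faces), faces)
        return self.id_head(self.backbone(restored))  # identity logits
```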
StyleGAN Encoder-Based Attack for Block Scrambled Face Images
Authors: AprilPyone MaungMaung, Hitoshi Kiya
In this paper, we propose an attack method against block-scrambled face images, particularly Encryption-then-Compression (EtC) applied images, by utilizing the existing powerful StyleGAN encoder and decoder for the first time. Instead of reconstructing images identical to the plain ones from encrypted images, we focus on recovering styles that can reveal identifiable information from the encrypted images. The proposed method trains an encoder by using plain and encrypted image pairs with a particular training strategy. While state-of-the-art attack methods cannot recover any perceptual information from EtC images, the proposed method discloses personally identifiable information such as hair color, skin color, eyeglasses, gender, etc. Experiments were carried out on the CelebA dataset, and the results show that the reconstructed images have some perceptual similarity to the plain images.
PDF To appear in APSIPA ASC 2022
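To make the attack idea more concrete, below is a hedged sketch of the encoder training step: an encoder maps an EtC-scrambled face to latent codes, a frozen StyleGAN-like generator decodes them, and the loss pulls the decoded image toward the corresponding plain face. The `encoder`, `generator`, and `perceptual` modules are placeholders (an LPIPS-style distance could serve as `perceptual`); this is not the paper's exact training strategy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def attack_encoder_step(encoder: nn.Module,
                        generator: nn.Module,            # frozen StyleGAN-like decoder
                        perceptual: nn.Module,           # e.g. an LPIPS-style distance
                        plain: torch.Tensor,             # (B, 3, H, W) original faces
                        encrypted: torch.Tensor,         # (B, 3, H, W) EtC-scrambled faces
                        optimizer: torch.optim.Optimizer) -> float:
    """One training step on a batch of (plain, encrypted) pairs; only the encoder is updated."""
    optimizer.zero_grad()
    latents = encoder(encrypted)          # predict latent codes from the scrambled input
    recon = generator(latents)            # generator weights are assumed frozen
    # The pixel term anchors the reconstruction; the perceptual term is what recovers
    # identifiable style cues (hair colour, skin colour, eyeglasses, ...).
    loss = F.mse_loss(recon, plain) + perceptual(recon, plain).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```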
One-Shot Synthesis of Images and Segmentation Masks
Authors: Vadim Sushko, Dan Zhang, Juergen Gall, Anna Khoreva
Joint synthesis of images and segmentation masks with generative adversarial networks (GANs) promises to reduce the effort needed for collecting image data with pixel-wise annotations. However, to learn high-fidelity image-mask synthesis, existing GAN approaches first need a pre-training phase requiring large amounts of image data, which limits their utilization in restricted image domains. In this work, we take a step to reduce this limitation, introducing the task of one-shot image-mask synthesis. We aim to generate diverse images and their segmentation masks given only a single labelled example, and, contrary to previous models, assume no access to any pre-training data. To this end, inspired by the recent architectural developments of single-image GANs, we introduce our OSMIS model, which enables the synthesis of segmentation masks that are precisely aligned to the generated images in the one-shot regime. Besides achieving high fidelity in the generated masks, OSMIS outperforms state-of-the-art single-image GAN models in image synthesis quality and diversity. In addition, despite not using any additional data, OSMIS demonstrates an impressive ability to serve as a source of useful data augmentation for one-shot segmentation applications, providing performance gains that are complementary to standard data augmentation techniques. Code is available at https://github.com/boschresearch/one-shot-synthesis
PDF Accepted as a conference paper at IEEE Winter Conference on Applications of Computer Vision (WACV) 2023
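The core idea of one-shot image-mask synthesis, a single generator producing an image and a pixel-aligned segmentation from the same features, can be illustrated with the toy PyTorch generator below. The architecture (a small DCGAN-style trunk with separate RGB and mask heads) is purely illustrative and is unrelated to the actual OSMIS design beyond the shared-feature idea.

```python
import torch
import torch.nn as nn


class JointGenerator(nn.Module):
    """Toy generator with shared trunk and two aligned heads: RGB image and segmentation."""

    def __init__(self, z_dim: int = 64, num_classes: int = 3, base: int = 64):
        super().__init__()
        self.z_dim = z_dim
        self.trunk = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 4, 4, 1, 0), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),
        )
        self.to_rgb = nn.ConvTranspose2d(base, 3, 4, 2, 1)             # image head
        self.to_mask = nn.ConvTranspose2d(base, num_classes, 4, 2, 1)  # mask head, same features

    def forward(self, z: torch.Tensor):
        h = self.trunk(z.view(z.size(0), self.z_dim, 1, 1))
        image = torch.tanh(self.to_rgb(h))             # (B, 3, 32, 32)
        mask = torch.softmax(self.to_mask(h), dim=1)   # (B, C, 32, 32), aligned with the image
        return image, mask


img, mask = JointGenerator()(torch.randn(4, 64))
print(img.shape, mask.shape)  # torch.Size([4, 3, 32, 32]) torch.Size([4, 3, 32, 32])
```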
AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models
Authors: Nithin Gopalakrishnan Nair, Kangfu Mei, Vishal M. Patel
Although many long-range imaging systems are designed to support extended vision applications, a natural obstacle to their operation is degradation due to atmospheric turbulence. Atmospheric turbulence causes significant degradation of image quality by introducing blur and geometric distortion. In recent years, various deep learning-based single-image atmospheric turbulence mitigation methods, both CNN-based and GAN-inversion-based, have been proposed in the literature to remove the distortion in the image. However, some of these methods are difficult to train and often fail to reconstruct facial features, producing unrealistic results, especially in the case of high turbulence. Denoising Diffusion Probabilistic Models (DDPMs) have recently gained traction because of their stable training process and their ability to generate high-quality images. In this paper, we propose the first DDPM-based solution for the problem of atmospheric turbulence mitigation. We also propose a fast sampling technique that reduces inference times for conditional DDPMs. Extensive experiments are conducted on synthetic and real-world data to show the significance of our model. To facilitate further research, all codes and pretrained models are publicly available at http://github.com/Nithin-GK/AT-DDPM
PDF Accepted to IEEE WACV 2023
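As a rough illustration of conditional diffusion-based restoration with fewer inference steps, here is a hedged, DDIM-style sampling sketch in which the turbulence-degraded image is concatenated to the current estimate as conditioning and the 1000 training timesteps are strided down to 25 at inference. The `denoiser` network, the noise schedule, and the deterministic update rule are assumptions; the paper's actual fast sampling technique may differ.

```python
import torch


@torch.no_grad()
def fast_conditional_sample(denoiser, degraded, train_steps=1000, infer_steps=25):
    """Restore a turbulence-degraded batch with a strided, deterministic reverse process."""
    betas = torch.linspace(1e-4, 0.02, train_steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    ts = torch.linspace(train_steps - 1, 0, infer_steps).long()   # strided timesteps
    x = torch.randn_like(degraded)                                # start from pure noise
    for i, t in enumerate(ts):
        t_batch = torch.full((x.size(0),), int(t), dtype=torch.long)
        # Conditioning by channel concatenation with the degraded observation.
        eps = denoiser(torch.cat([x, degraded], dim=1), t_batch)
        a_t = alpha_bar[t]
        a_prev = alpha_bar[ts[i + 1]] if i + 1 < len(ts) else torch.tensor(1.0)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()            # predicted clean image
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps        # deterministic DDIM step
    return x.clamp(-1, 1)
```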
Continuously Controllable Facial Expression Editing in Talking Face Videos
Authors: Zhiyao Sun, Yu-Hui Wen, Tian Lv, Yanan Sun, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu
Recently, audio-driven talking face video generation has attracted considerable attention. However, very few studies address the issue of emotional editing of these talking face videos with continuously controllable expressions, which is in strong demand in the industry. The challenge is that speech-related expressions and emotion-related expressions are often highly coupled. Meanwhile, traditional image-to-image translation methods do not work well in our application due to the coupling of expressions with other attributes such as pose; i.e., translating the expression of the character in each frame may simultaneously change the head pose due to the bias of the training data distribution. In this paper, we propose a high-quality facial expression editing method for talking face videos, allowing the user to continuously control the target emotion in the edited video. We present a new perspective on this task as a special case of motion information editing, where we use a 3DMM to capture major facial movements and an associated texture map modeled by a StyleGAN to capture appearance details. Both representations (3DMM and texture map) contain emotional information and can be continuously modified by neural networks and easily smoothed by averaging in coefficient/latent spaces, making our method simple yet effective. We also introduce a mouth shape preservation loss to control the trade-off between lip synchronization and the degree of exaggeration of the edited expression. Extensive experiments and a user study show that our method achieves state-of-the-art performance across various evaluation criteria.
PDF Demo video: https://youtu.be/WD-bNVya6kM
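The continuous-control idea, shifting per-frame coefficients along an emotion direction by a user-chosen intensity and then smoothing temporally by averaging, can be sketched as follows. The function operates on hypothetical 3DMM expression coefficients; in the paper the same kind of continuous modification and averaging is also applied to a StyleGAN texture latent, and a mouth shape preservation loss additionally constrains the edit, neither of which is shown here.

```python
import torch
import torch.nn.functional as F


def edit_expression(expr_coeffs: torch.Tensor,   # (T, D) per-frame 3DMM expression coefficients
                    emotion_dir: torch.Tensor,   # (D,) direction toward the target emotion
                    intensity: float,            # continuous user control, e.g. in [0, 1]
                    window: int = 5) -> torch.Tensor:
    """Shift coefficients toward an emotion and smooth them over time (toy sketch)."""
    edited = expr_coeffs + intensity * emotion_dir            # continuous emotional shift
    # Temporal smoothing: a moving average in coefficient space keeps the motion stable.
    smoothed = F.avg_pool1d(edited.t().unsqueeze(0),          # (1, D, T)
                            kernel_size=window, stride=1,
                            padding=window // 2, count_include_pad=False)
    return smoothed.squeeze(0).t()                            # back to (T, D)
```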
Enhancing vehicle detection accuracy in thermal infrared images using multiple GANs
Authors: Shivom Bhargava, Pranamesh Chakraborty
Vehicle detection is fairly accurate under good illumination but suffers from poor accuracy under low-light conditions. The combined effect of low light and glare from vehicle headlights or tail-lights makes state-of-the-art object detection models more likely to miss vehicles. Thermal infrared images, however, are robust to illumination changes because they are based on thermal radiation. Recently, Generative Adversarial Networks (GANs) have been extensively used in image domain transfer tasks. State-of-the-art GAN models have attempted to improve vehicle detection accuracy at night-time by converting infrared images to day-time RGB images. However, these models have been found to under-perform during night-time conditions compared to day-time conditions. Therefore, this study attempts to alleviate this shortcoming by proposing three different approaches based on combinations of GAN models at two different levels that try to reduce the feature distribution gap between day-time and night-time infrared images. A quantitative analysis comparing the performance of the proposed models with the state-of-the-art models has been carried out by testing the models with state-of-the-art object detection models. Both the quantitative and qualitative analyses show that the proposed models outperform the state-of-the-art GAN models for vehicle detection in night-time conditions, demonstrating the efficacy of the proposed models.
PDF
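Because the abstract describes the GAN combinations only at a high level, the sketch below just illustrates one plausible arrangement: two placeholder translation generators applied in sequence (night-time IR to day-like IR, then to day-like RGB) before an off-the-shelf detector scores the result. The generator modules are hypothetical, and torchvision's Faster R-CNN stands in for whichever detection models the study actually used.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn


@torch.no_grad()
def detect_from_night_ir(night_ir: torch.Tensor,          # (B, 3, H, W), values in [0, 1]
                         g_night2day: nn.Module,          # GAN level 1: night IR -> day-like IR
                         g_ir2rgb: nn.Module):            # GAN level 2: day IR -> day-like RGB
    """Translate night-time IR frames through two GANs, then run a detector on the result."""
    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    day_ir = g_night2day(night_ir)        # first close the day/night feature gap
    day_rgb = g_ir2rgb(day_ir)            # then move to the RGB domain the detector expects
    return detector(list(day_rgb))        # per-image dicts of boxes, labels, scores
```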