2022-05-06 Update
YOLOPose: Transformer-based Multi-Object 6D Pose Estimation using Keypoint Regression
Authors: Arash Amini, Arul Selvam Periyasamy, Sven Behnke
6D object pose estimation is a crucial prerequisite for autonomous robot manipulation applications. The state-of-the-art models for pose estimation are convolutional neural network (CNN)-based. Lately, Transformers, an architecture originally proposed for natural language processing, have been achieving state-of-the-art results in many computer vision tasks as well. Equipped with the multi-head self-attention mechanism, Transformers enable simple single-stage end-to-end architectures for learning object detection and 6D object pose estimation jointly. In this work, we propose YOLOPose (short for You Only Look Once Pose estimation), a Transformer-based multi-object 6D pose estimation method based on keypoint regression. In contrast to the standard heatmap-based approach to predicting keypoints in an image, we regress the keypoints directly. Additionally, we employ a learnable orientation estimation module to predict the orientation from the keypoints. Together with a separate translation estimation module, our model is end-to-end differentiable. Our method is suitable for real-time applications and achieves results comparable to state-of-the-art methods.
PDF
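As a concrete illustration of the design described in the abstract, here is a minimal PyTorch sketch of a pose head that regresses keypoints directly from Transformer object queries and feeds them to a learnable orientation module. All names (`YOLOPoseHead`, `keypoint_mlp`, `rotation_mlp`, `translation_mlp`), the layer widths, and the 6D rotation parameterization are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the YOLOPose idea: each Transformer decoder query
# is decoded into 2D keypoints (no heatmaps), and a learnable module then
# regresses orientation from those keypoints, keeping everything differentiable.
import torch
import torch.nn as nn

class YOLOPoseHead(nn.Module):
    def __init__(self, d_model=256, num_keypoints=9):
        super().__init__()
        # Direct keypoint regression instead of heatmaps: (x, y) per keypoint.
        self.keypoint_mlp = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, num_keypoints * 2),
        )
        # Learnable orientation estimator mapping keypoints to a rotation;
        # a continuous 6D rotation representation is assumed here.
        self.rotation_mlp = nn.Sequential(
            nn.Linear(num_keypoints * 2, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )
        # Separate translation head operating on the query embedding.
        self.translation_mlp = nn.Linear(d_model, 3)

    def forward(self, queries):            # queries: (batch, num_objects, d_model)
        kpts = self.keypoint_mlp(queries)  # (B, N, 2K) regressed keypoints
        rot6d = self.rotation_mlp(kpts)    # (B, N, 6) orientation from keypoints
        trans = self.translation_mlp(queries)  # (B, N, 3) translation
        return kpts, rot6d, trans

head = YOLOPoseHead()
queries = torch.rand(2, 10, 256)  # e.g. 10 object queries from a decoder
kpts, rot6d, trans = head(queries)
```

Because the orientation is computed from the regressed keypoints rather than recovered with a non-differentiable PnP step, gradients flow through the whole pipeline, which is what makes the single-stage end-to-end training possible.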
StyleAlign: Analysis and Applications of Aligned StyleGAN Models
Authors: Zongze Wu, Yotam Nitzan, Eli Shechtman, Dani Lischinski
In this paper, we perform an in-depth study of the properties and applications of aligned generative models. We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. Several works already utilize some basic properties of aligned StyleGAN models to perform image-to-image translation. Here, we perform the first detailed exploration of model alignment, also focusing on StyleGAN. First, we empirically analyze aligned models and provide answers to important questions regarding their nature. In particular, we find that the child model’s latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing. We further show that zero-shot vision tasks may be performed in the child domain, while relying exclusively on supervision in the parent domain. We demonstrate qualitatively and quantitatively that our approach yields state-of-the-art results, while requiring only simple fine-tuning and inversion.
PDF 44 pages, 37 figures
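The core mechanism exploited here can be sketched in a few lines: invert an image into the parent generator's latent space, then feed the same latent code to the fine-tuned child generator. Because the latent spaces of aligned models are semantically aligned, this performs image-to-image translation. The `ToyGenerator`, the optimization-based `invert`, and all sizes below are stand-in assumptions; a real setup would use StyleGAN checkpoints and an off-the-shelf inversion method:

```python
# Hedged sketch of cross-domain translation with aligned generators.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    """Stand-in for a StyleGAN synthesis network: latent w -> image."""
    def __init__(self, w_dim=512):
        super().__init__()
        self.synthesis = nn.Sequential(nn.Linear(w_dim, 3 * 64 * 64), nn.Tanh())

    def forward(self, w):
        return self.synthesis(w).view(-1, 3, 64, 64)

def invert(generator, image, steps=200, lr=0.05):
    """Optimization-based inversion: find w whose reconstruction matches image."""
    w = torch.zeros(image.shape[0], 512, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(w), image)
        loss.backward()
        opt.step()
    return w.detach()

parent = ToyGenerator()                      # trained on the parent domain
child = ToyGenerator()
child.load_state_dict(parent.state_dict())   # child starts as a copy of the parent;
                                              # fine-tuning to the child domain goes here
image = torch.rand(1, 3, 64, 64) * 2 - 1
w = invert(parent, image)   # embed the image in the parent's latent space
translated = child(w)       # the same code through the child = translated image
```

The paper's other applications (cross-domain morphing, zero-shot tasks supervised only in the parent domain) follow the same pattern: all semantic manipulation happens in the shared latent space, and only the choice of generator decides the output domain.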
Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID
Authors: Yixiao Ge, Feng Zhu, Dapeng Chen, Rui Zhao, Xiaogang Wang, Hongsheng Li
Unsupervised domain adaptation (UDA) aims at adapting a model trained on a labeled source-domain dataset to an unlabeled target-domain dataset. UDA for open-set person re-identification (re-ID) is even more challenging, as the identities (classes) have no overlap between the two domains. One major research direction is based on domain translation, which, however, has fallen out of favor in recent years due to inferior performance compared to pseudo-label-based methods. We argue that domain translation has great potential for exploiting the valuable source-domain data, but existing methods do not provide proper regularization of the translation process. Specifically, previous methods focus only on maintaining the identities of the translated images while ignoring the inter-sample relations during translation. To tackle these challenges, we propose an end-to-end structured domain adaptation framework with an online relation-consistency regularization term. During training, the person feature encoder is optimized to model inter-sample relations on-the-fly for supervising relation-consistent domain translation, which in turn improves the encoder with informative translated images. The encoder can be further improved with pseudo labels, where the source-to-target translated images with ground-truth identities and target-domain images with pseudo identities are jointly used for training. In the experiments, our proposed framework is shown to achieve state-of-the-art performance on multiple UDA tasks of person re-ID. With the synthetic-to-real translated images from our structured domain-translation network, we achieved second place in the Visual Domain Adaptation Challenge (VisDA) in 2020.
PDF Accepted by IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
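The online relation-consistency term can be sketched as follows: the encoder's pairwise similarities among a batch of source images serve as an on-the-fly target for the similarities among their source-to-target translations. The cosine-similarity relation, the MSE penalty, and the toy encoder below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of an online relation-consistency regularizer.
import torch
import torch.nn.functional as F

def pairwise_relations(features):
    """Cosine-similarity matrix over a batch of feature vectors."""
    f = F.normalize(features, dim=1)
    return f @ f.t()

def relation_consistency_loss(encoder, source_imgs, translated_imgs):
    # Relations among the original source images are taken from the
    # current (online) encoder and treated as a fixed target here.
    with torch.no_grad():
        r_src = pairwise_relations(encoder(source_imgs))
    # The translated images should preserve those inter-sample relations,
    # so the gradient supervises the translation network.
    r_trans = pairwise_relations(encoder(translated_imgs))
    return F.mse_loss(r_trans, r_src)

# Toy usage with a stand-in encoder (any feature extractor would do).
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
src = torch.rand(8, 3, 64, 64)
trans = torch.rand(8, 3, 64, 64)  # would come from the translation network
loss = relation_consistency_loss(encoder, src, trans)
```

This captures the paper's key contrast with identity-preservation-only objectives: the loss constrains how samples relate to one another across translation, not just what each sample is.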