I2I Translation


Updated 2022-10-09

Transferring dense object detection models to event-based data

Authors: Vincenz Mechler, Pavel Rojtberg

Event-based image representations are fundamentally different from traditional dense images. This makes it challenging to apply current state-of-the-art object detection models, as they are designed for dense images. In this work we evaluate the YOLO object detection model on event data. To this end, we replace dense convolution layers with either sparse convolutions or asynchronous sparse convolutions, which enables direct processing of event-based images, and compare the performance and runtime to feeding event histograms into dense convolutions. Hyper-parameters are shared across all variants to isolate the effect the sparse representation has on detection performance. In doing so, we show that current sparse-convolution implementations cannot translate their theoretically lower computation requirements into an improved runtime.
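As a minimal sketch of the dense baseline the abstract describes (raw events accumulated into a histogram, then fed to an ordinary dense convolution), here is an illustrative PyTorch snippet; the event layout and function names are assumptions, not from the paper:

import torch

# Hypothetical event tensor: one row per event, columns (x, y, timestamp,
# polarity). The layout is an illustrative assumption.
def events_to_histogram(events, height, width):
    # Accumulate events into a 2-channel (per-polarity) count histogram.
    hist = torch.zeros(2, height, width)
    x = events[:, 0].long()
    y = events[:, 1].long()
    p = events[:, 3].long()  # polarity: 0 (off) or 1 (on)
    hist.index_put_((p, y, x), torch.ones(len(events)), accumulate=True)
    return hist

# Dense baseline: feed the histogram into a standard dense convolution.
# A sparse-convolution variant would consume the raw coordinates instead.
events = torch.tensor([[10.0, 20.0, 0.001, 1.0],
                       [11.0, 20.0, 0.002, 0.0]])
conv = torch.nn.Conv2d(in_channels=2, out_channels=16, kernel_size=3, padding=1)
out = conv(events_to_histogram(events, 64, 64).unsqueeze(0))
print(out.shape)  # torch.Size([1, 16, 64, 64])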
PDF

Click here to view paper screenshots

TartanCalib: Iterative Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags

Authors: Bardienus P Duisterhof, Yaoyu Hu, Si Heng Teng, Michael Kaess, Sebastian Scherer

Wide-angle cameras are uniquely suited to mobile robots, by virtue of the rich information they provide in a small, light, and cost-effective form factor. Accurate calibration of the intrinsics and extrinsics is a critical prerequisite for using the edge of a wide-angle lens for depth perception and odometry. Calibrating wide-angle lenses with current state-of-the-art techniques yields poor results due to extreme distortion at the edge, as most algorithms assume a lens with low to medium distortion close to a pinhole projection. In this work we present our methodology for accurate wide-angle calibration. Our pipeline generates an intermediate camera model and leverages it to iteratively improve feature detection and, eventually, the camera parameters. We test three key methods for utilizing intermediate camera models: (1) undistorting the image into virtual pinhole cameras, (2) reprojecting the target into the image frame, and (3) adaptive subpixel refinement. Combining adaptive subpixel refinement and feature reprojection reduces reprojection errors by up to 26.59%, helps us detect up to 42.01% more features, and improves performance in the downstream task of dense depth mapping. Finally, TartanCalib is open-source and implemented as an easy-to-use calibration toolbox. We also provide a translation layer to other state-of-the-art works, which allows for regressing generic models with thousands of parameters or using a more robust solver. This makes TartanCalib the tool of choice for wide-angle calibration. Project website and code: http://tartancalib.com.
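To illustrate the combination of ideas (2) and (3) above, here is a hedged sketch using OpenCV: an intermediate camera model predicts where target corners should appear, and those predictions are pulled to subpixel accuracy. The model parameters (K, dist, rvec, tvec) and all names are illustrative assumptions; this is not TartanCalib's actual adaptive refinement.

import cv2
import numpy as np

def reproject_and_refine(gray, obj_pts, rvec, tvec, K, dist):
    # Predict pixel locations of the known 3D target points under the
    # intermediate (pinhole + distortion) camera model.
    img_pts, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    img_pts = img_pts.reshape(-1, 1, 2).astype(np.float32)
    # Refine each predicted location onto the nearest true corner with a
    # subpixel search in a small window around the prediction.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, img_pts, (5, 5), (-1, -1), criteria)

Seeding the subpixel search from reprojected predictions, rather than from raw detections, is what lets features survive at the heavily distorted edge of the lens.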
PDF

Click here to view paper screenshots

Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation

Authors: Hao Tang, Philip H. S. Torr, Nicu Sebe

We propose a novel model, the Multi-Channel Attention Selection Generative Adversarial Network (SelectionGAN), for guided image-to-image translation, where an input image is translated into another while respecting external semantic guidance. The proposed SelectionGAN explicitly utilizes the semantic guidance information and consists of two stages. In the first stage, the input image and the conditional semantic guidance are fed into a cycled semantic-guided generation network to produce initial coarse results. In the second stage, we refine the initial results using the proposed multi-scale spatial pooling & channel selection module and the multi-channel attention selection module. Moreover, uncertainty maps automatically learned from the attention maps are used to guide the pixel loss for better network optimization. Extensive experiments on four challenging guided image-to-image translation tasks (face, hand, body, and street view) demonstrate that SelectionGAN generates significantly better results than state-of-the-art methods. Meanwhile, the proposed framework and modules are unified solutions and can be applied to other generation tasks such as semantic image synthesis. The code is available at https://github.com/Ha0Tang/SelectionGAN.
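A minimal sketch of the multi-channel attention selection idea: the generator emits N candidate images plus N attention logits, and the final output is the per-pixel softmax-weighted combination of the candidates. Shapes and names are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn.functional as F

def attention_select(candidates, attn_logits):
    # candidates:  (B, N, 3, H, W) intermediate generations
    # attn_logits: (B, N, H, W)    one attention map per candidate
    attn = F.softmax(attn_logits, dim=1).unsqueeze(2)  # (B, N, 1, H, W)
    return (candidates * attn).sum(dim=1)              # (B, 3, H, W)

out = attention_select(torch.randn(2, 10, 3, 64, 64),
                       torch.randn(2, 10, 64, 64))
print(out.shape)  # torch.Size([2, 3, 64, 64])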
PDF. Accepted to TPAMI; an extended version of a paper published in CVPR 2019. arXiv admin note: substantial text overlap with arXiv:1904.06807

Click here to view paper screenshots

Author: 木子已
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 as the source when reposting!