NeRF


Updated 2022-09-11

Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations

Authors: Vadim Tschernezki, Iro Laina, Diane Larlus, Andrea Vedaldi

We present Neural Feature Fusion Fields (N3F), a method that improves dense 2D image feature extractors when the latter are applied to the analysis of multiple images reconstructible as a 3D scene. Given an image feature extractor, for example pre-trained using self-supervision, N3F uses it as a teacher to learn a student network defined in 3D space. The 3D student network is similar to a neural radiance field that distills said features and can be trained with the usual differentiable rendering machinery. As a consequence, N3F is readily applicable to most neural rendering formulations, including vanilla NeRF and its extensions to complex dynamic scenes. We show that our method not only enables semantic understanding in the context of scene-specific neural fields without the use of manual labels, but also consistently improves over the self-supervised 2D baselines. This is demonstrated by considering various tasks, such as 2D object retrieval, 3D segmentation, and scene editing, in diverse sequences, including long egocentric videos in the EPIC-KITCHENS benchmark.
PDF. 3DV 2022, Oral. Project page: https://www.robots.ox.ac.uk/~vadim/n3f/
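The core idea of N3F, rendering features from a 3D student field with the usual NeRF quadrature and supervising them with frozen 2D teacher features, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and shapes are assumptions, and the teacher feature would come from a pre-trained self-supervised extractor such as DINO.

```python
import numpy as np

def render_feature(sigmas, features, deltas):
    """Volume-render a D-dim feature along one ray (NeRF quadrature).

    sigmas:   (N,) densities at the N samples along the ray
    features: (N, D) feature vectors predicted by the 3D student field
    deltas:   (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance T_i
    weights = trans * alphas                                         # w_i = T_i * alpha_i
    return weights @ features                                        # (D,) rendered feature

def distillation_loss(rendered_feat, teacher_feat):
    """L2 distillation loss against the frozen 2D teacher feature
    sampled at the ray's pixel (e.g. a DINO feature map)."""
    return np.sum((rendered_feat - teacher_feat) ** 2)
```

Because the feature channels are rendered with exactly the same weights as the color channels, this drops into any differentiable-rendering pipeline (vanilla NeRF or its dynamic-scene extensions) as extra output channels plus one extra loss term.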


Volume Rendering Digest (for NeRF)

Authors: Andrea Tagliasacchi, Ben Mildenhall

Neural Radiance Fields employ simple volume rendering as a way to overcome the challenges of differentiating through ray-triangle intersections by leveraging a probabilistic notion of visibility. This is achieved by assuming the scene is composed of a cloud of light-emitting particles whose density changes in space. This technical report summarizes the derivations for differentiable volume rendering. It is a condensed version of previous reports, but rewritten in the context of NeRF, and adopting its commonly used notation.
PDF Overleaf: https://www.overleaf.com/read/fkhpkzxhnyws
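The report's end point is the standard NeRF volume-rendering equation and its quadrature approximation, which in the commonly used notation read:

```latex
% Continuous volume rendering along a ray r(t) = o + t d:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big)

% Quadrature with samples t_i and spacings \delta_i = t_{i+1} - t_i:
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \big(1 - e^{-\sigma_i \delta_i}\big)\,\mathbf{c}_i,
\qquad
T_i = \exp\!\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big)
```

Here $\sigma$ is the particle density, $\mathbf{c}$ the emitted radiance, and $T$ the transmittance (the probability that the ray reaches $t$ without hitting a particle); every term is differentiable, which is what makes the formulation trainable end to end.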


PixTrack: Precise 6DoF Object Pose Tracking using NeRF Templates and Feature-metric Alignment

Authors: Prajwal Chidananda, Saurabh Nair, Douglas Lee, Adrian Kaehler

We present PixTrack, a vision-based object pose tracking framework using novel view synthesis and deep feature-metric alignment. Our evaluations demonstrate that our method produces highly accurate, robust, and jitter-free 6DoF pose estimates of objects in RGB images without the need for any data annotation or trajectory smoothing. Our method is also computationally efficient, making it easy to perform multi-object tracking with no alteration to our method, using only CPU multiprocessing.
PDF
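The feature-metric alignment objective, minimizing the residual between deep features rendered from the NeRF template at a candidate pose and the current frame's features, can be illustrated with a toy numerical-gradient refinement. This is only a sketch of the objective, not PixTrack's solver; `render_feats` stands in for the novel-view feature renderer, and the finite-difference descent is an assumption made for simplicity.

```python
import numpy as np

def feature_metric_error(pose, render_feats, target_feats):
    """Squared feature residual between the NeRF template rendered
    at `pose` and the observed frame's deep features."""
    return np.sum((render_feats(pose) - target_feats) ** 2)

def refine_pose(pose, render_feats, target_feats, lr=0.1, steps=50, eps=1e-4):
    """Toy first-order refinement: numerically differentiate the
    feature-metric error with respect to the 6-DoF pose vector and
    take gradient-descent steps (a real tracker would use a more
    efficient second-order alignment; this only illustrates the idea)."""
    pose = np.asarray(pose, dtype=float).copy()
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for k in range(pose.size):
            d = np.zeros_like(pose)
            d[k] = eps
            grad[k] = (feature_metric_error(pose + d, render_feats, target_feats)
                       - feature_metric_error(pose - d, render_feats, target_feats)) / (2 * eps)
        pose -= lr * grad
    return pose
```

Because the loss is defined on learned features rather than raw pixels, the alignment is robust to lighting changes and yields the jitter-free estimates the abstract describes.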


Author: 木子已
Copyright: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 as the source when reposting!