Factored Neural Representation for Scene Understanding.
- Author(s): Wong, Yu‐Shiang (AUTHOR); Mitra, Niloy J. (AUTHOR)
- Source:
Computer Graphics Forum. Aug 2023, Vol. 42 Issue 5, p1-14. 14p. 6 Color Photographs, 4 Diagrams, 3 Charts.
- Abstract:
A long‐standing goal in scene understanding is to obtain interpretable and editable representations that can be constructed directly from a raw monocular RGB‐D video, without requiring specialized hardware setups or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached the setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end‐to‐end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce a global scene encoding, assume multiview capture with limited or no motion in the scenes, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can be learned directly from a monocular RGB‐D video to produce object‐level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformations (e.g., nonrigid movement). We evaluate our representation against a set of neural approaches on both synthetic and real data to demonstrate that it is efficient, interpretable, and editable (e.g., change object trajectory). Code and data are available at: http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf/. [ABSTRACT FROM AUTHOR]
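The factoring described in the abstract composes per-object neural fields with explicit, editable motion. As a rough illustration only, here is a minimal PyTorch sketch of that idea; all names here (ObjectField, FactoredScene) and the translation-only trajectories are hypothetical simplifications for exposition, not the authors' code, which is available at the project URL above.

```python
# Minimal sketch of a factored scene: per-object canonical fields plus an
# explicit per-frame motion, so motion can be edited without retraining.
import torch
import torch.nn as nn

class ObjectField(nn.Module):
    """Per-object neural field: maps a 3D point in the object's canonical
    frame to (density, rgb). Queried after undoing the object's motion."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 rgb
        )

    def forward(self, x_canonical):
        out = self.mlp(x_canonical)
        density = torch.relu(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return density, rgb

class FactoredScene(nn.Module):
    """Scene factored into K object fields, each paired with an explicit
    per-frame trajectory (translation-only here for brevity; the paper
    also handles rigid rotations and nonrigid deformation)."""
    def __init__(self, num_objects, num_frames):
        super().__init__()
        self.objects = nn.ModuleList(ObjectField() for _ in range(num_objects))
        # Explicit, editable motion: one translation per object per frame.
        self.trajectories = nn.Parameter(torch.zeros(num_objects, num_frames, 3))

    def forward(self, x_world, frame):
        densities, rgbs = [], []
        for k, obj in enumerate(self.objects):
            # Undo object k's motion to query its canonical-frame field.
            x_canon = x_world - self.trajectories[k, frame]
            d, c = obj(x_canon)
            densities.append(d)
            rgbs.append(c)
        density = torch.stack(densities).sum(0)            # composite density
        weights = torch.stack(densities) / density.clamp_min(1e-8)
        rgb = (weights * torch.stack(rgbs)).sum(0)         # density-weighted color
        return density, rgb

# Editability: changing one object's trajectory re-poses it in the scene
# without touching the learned fields.
scene = FactoredScene(num_objects=2, num_frames=10)
with torch.no_grad():
    scene.trajectories[1, :, 0] = torch.linspace(0.0, 1.0, 10)  # slide object 1 along x
density, rgb = scene(torch.rand(4096, 3), frame=5)
```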
- Copyright:
Copyright of Computer Graphics Forum is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)