College - Author 1
College of Engineering
Department - Author 1
Computer Science Department
College - Author 2
College of Engineering
Department - Author 2
Computer Science Department
College - Author 3
College of Engineering
Department - Author 3
Computer Engineering Department
College - Author 4
College of Engineering
Department - Author 4
Computer Science Department
Advisor
Jonathan Ventura, College of Engineering, Computer Science
Funding Source
This material is based upon work supported by the National Science Foundation Grant No. 2144822
Date
10-2024
Abstract/Summary
We introduce a novel method to convert a single input panorama into a 3D colored mesh representation of the scene. Unlike recent methods based on neural rendering, which are limited to low-resolution inputs and offline rendering, our approach supports 4K-resolution inputs and real-time rendering in a virtual reality headset. We first estimate a depth map and produce an initial layered depth image (LDI) representation. We fill unseen regions behind objects by iteratively cutting and inpainting the LDI. We then convert the LDI into an optimized, texture-mapped mesh to achieve a compact representation.
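The pipeline described in the abstract (depth estimation, LDI construction, iterative inpainting of hidden layers, conversion to a colored mesh) can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation: all function names are hypothetical, the depth estimator is a placeholder for a learned network, and inpainting is reduced to duplicating the frontmost sample at a larger depth.

```python
import numpy as np

def estimate_depth(panorama):
    # Placeholder for a monocular depth estimator (the real method would
    # use a learned network); returns a constant depth map for illustration.
    h, w, _ = panorama.shape
    return np.ones((h, w), dtype=np.float32)

def build_ldi(panorama, depth):
    # A layered depth image (LDI) stores, per pixel, a list of
    # (color, depth) samples ordered front to back.
    h, w, _ = panorama.shape
    return [[[(panorama[y, x], depth[y, x])] for x in range(w)]
            for y in range(h)]

def inpaint_hidden_layers(ldi, n_iters=2):
    # Stand-in for iteratively cutting the LDI at depth discontinuities
    # and inpainting the occluded background: here we simply append a
    # duplicate sample behind the last layer of every pixel.
    for _ in range(n_iters):
        for row in ldi:
            for samples in row:
                color, d = samples[-1]
                samples.append((color, d + 1.0))
    return ldi

def ldi_to_mesh(ldi):
    # Turn every LDI sample into a colored 3D vertex; triangulation and
    # texture atlasing of the final mesh are omitted for brevity.
    verts, colors = [], []
    for y, row in enumerate(ldi):
        for x, samples in enumerate(row):
            for color, d in samples:
                verts.append((float(x), float(y), float(d)))
                colors.append(tuple(int(c) for c in color))
    return np.array(verts, dtype=np.float32), np.array(colors)
```

A tiny panorama of height 2 and width 3, run through two inpainting iterations, yields three samples per pixel and hence 18 colored vertices.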
URL: https://digitalcommons.calpoly.edu/ceng_surp/80