Available at: https://digitalcommons.calpoly.edu/theses/2995
Date of Award
6-2025
Degree Name
MS in Computer Science
Department/Program
Computer Science
College
College of Engineering
Advisor
Jonathan Ventura
Advisor Department
Computer Science
Advisor College
College of Engineering
Abstract
Advances in neural field representations have led to significant improvements in view synthesis quality. However, many current novel view synthesis methods rely on a dense set of input views, which can be impractical and inefficient in real-world applications. We propose DeepPanoRF, a novel method for 360° scene reconstruction from a sparse set of input equirectangular panoramas. Built upon K-Planes, a radiance field representation that encodes explicit features on orthogonal feature planes, our method does not learn the feature grids directly. Instead, we parameterize the feature grids to enable sparse-view reconstruction without pretraining or additional regularization. We implement a custom U-Net architecture to take advantage of its encoder-decoder structure and skip connections. We evaluate our method's effectiveness on the Habitat-Matterport 3D (HM3D) dataset, which consists of diverse, high-quality indoor environments. Our results demonstrate that DeepPanoRF outperforms K-Planes in both reconstruction quality and structural coherence when given sparse input views.
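To make the core idea concrete, below is a minimal PyTorch-style sketch of parameterizing K-Planes-style feature planes with a small U-Net, rather than optimizing the planes directly as free parameters. This is not the thesis's actual DeepPanoRF implementation; all module names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Small encoder-decoder with a skip connection that outputs a 2D feature plane."""
    def __init__(self, in_ch=16, feat_ch=32, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.bottleneck = nn.Sequential(nn.Conv2d(base * 2, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        # Decoder sees upsampled features concatenated with the encoder's output (skip connection).
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(base, feat_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        b = self.bottleneck(e2)
        d = self.up(b)
        d = self.dec1(torch.cat([d, e1], dim=1))  # skip connection from enc1
        return self.head(d)

class ParameterizedPlanes(nn.Module):
    """K-Planes-style tri-plane features, generated by a U-Net from learned
    latents instead of being optimized directly as explicit feature grids."""
    def __init__(self, res=128, latent_ch=16, feat_ch=32):
        super().__init__()
        # One learned latent per axis-aligned plane (xy, xz, yz).
        self.latents = nn.Parameter(torch.randn(3, latent_ch, res, res) * 0.1)
        self.unet = TinyUNet(latent_ch, feat_ch)

    def forward(self, pts):
        # pts: (N, 3) points in [-1, 1]^3
        planes = self.unet(self.latents)  # (3, feat_ch, res, res)
        coords = [pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]]
        feats = []
        for i, c in enumerate(coords):
            grid = c.view(1, -1, 1, 2)  # grid_sample expects a (N, H, W, 2) grid
            f = F.grid_sample(planes[i : i + 1], grid, align_corners=True)
            feats.append(f.view(-1, c.shape[0]).t())  # (N, feat_ch)
        # K-Planes combines per-plane features multiplicatively (Hadamard product).
        return feats[0] * feats[1] * feats[2]

# Usage: sample features for a batch of random 3D points.
model = ParameterizedPlanes()
pts = torch.rand(1024, 3) * 2 - 1
features = model(pts)  # (1024, 32), fed to a downstream color/density decoder
```

Because the planes are the output of a shared network rather than independent per-cell parameters, the convolutional structure imposes spatial coherence on the feature grids, which is the property the abstract credits for sparse-view reconstruction without pretraining or extra regularization.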