Available at: https://digitalcommons.calpoly.edu/theses/2769
Date of Award
3-2024
Degree Name
MS in Computer Science
Department/Program
Computer Science
College
College of Engineering
Advisor
Jonathan Ventura
Advisor Department
Computer Science
Advisor College
College of Engineering
Abstract
Creating 360-degree 3D content for virtual reality environments has gained traction in recent years. However, producing such content is challenging because it typically requires a multi-camera rig or a collection of images captured from different perspectives. This paper proposes 3D Pano Inpainting, a pipeline that transforms a single equirectangular panoramic RGBD image into a complete 360° 3D virtual reality scene represented as a textured mesh. Our methodology is as follows: we estimate a consistent depth map for the input panorama; we use a pre-built framework to convert the image and its depth map into a textured mesh with inpainted background edges; and we wrap the resulting mesh around the viewer's position for better immersion in VR headsets. Additionally, we evaluate our method's ability to produce consistent novel views by computing the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS) between renderings produced from the ground-truth image and depth map and those produced by our model. Furthermore, we compare our model's scores with those of a non-inpainted textured mesh.
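
As a rough illustration of the evaluation protocol described above, the sketch below computes PSNR, SSIM, and LPIPS between two renderings of the same novel view. It is a minimal sketch under assumed conditions, not the thesis's actual evaluation code: it assumes the renderings are available as 8-bit RGB arrays, and the function names and the choice of the scikit-image and lpips packages are illustrative.

```python
# Minimal sketch of the PSNR/SSIM/LPIPS comparison (assumed setup, not the
# thesis's implementation). Requires: numpy, torch, scikit-image, lpips.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_views(gt: np.ndarray, pred: np.ndarray) -> dict:
    """Compare a ground-truth rendering against a model rendering.

    Both inputs are (H, W, 3) uint8 arrays depicting the same novel view:
    `gt` rendered from the ground-truth image and depth map, `pred` rendered
    from the inpainted textured mesh.
    """
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)

    # LPIPS expects float tensors in [-1, 1] with shape (N, 3, H, W).
    def to_tensor(im: np.ndarray) -> torch.Tensor:
        return torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    loss_fn = lpips.LPIPS(net="alex")  # AlexNet-based perceptual metric
    with torch.no_grad():
        lp = loss_fn(to_tensor(gt), to_tensor(pred)).item()

    return {"psnr": psnr, "ssim": ssim, "lpips": lp}
```

In this setup, the same comparison could be run twice per view, once for the inpainted mesh and once for the non-inpainted baseline mesh, to reproduce the head-to-head scoring the abstract describes.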