Available at: https://digitalcommons.calpoly.edu/theses/2799
Date of Award
5-2024
Degree Name
MS in Computer Science
Department/Program
Computer Science
College
College of Engineering
Advisor
John Seng
Advisor Department
Computer Science
Advisor College
College of Engineering
Abstract
As the field of mobile robotics rapidly expands, precise knowledge of a robot's position and orientation becomes critical for autonomous navigation and efficient task performance. In this thesis, we present a snapshot-based global localization machine learning model for a mobile robot, the e-puck, in a simulated environment. Our model uses multimodal data from the robot's on-board cameras and LiDAR sensor to predict both position and orientation. To minimize localization error, we explore different sensor configurations by varying the number of cameras and LiDAR layers used. We also investigate the performance benefits of different multimodal fusion strategies, leveraging the EfficientNet CNN architecture as our model's backbone. Data collection and testing are conducted using the Webots simulation software, and our results show that, when tested in a 12 m × 12 m simulated apartment environment, our model achieves positional accuracy within 0.2 m in each of the x and y coordinates and orientation accuracy within 2°, all without the need for sequential data history. These results demonstrate the potential for accurate global localization of mobile robots in simulated environments without the need for existing maps or temporal data.
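To make the described architecture concrete, the sketch below shows one plausible shape for a snapshot-based camera+LiDAR pose regressor with an EfficientNet backbone. The thesis abstract does not specify the fusion layout, feature dimensions, or LiDAR encoding, so everything here (the single-camera late-fusion design, the 360-point scan, the sin/cos orientation encoding, and all module names) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class SnapshotLocalizer(nn.Module):
    """Hypothetical late-fusion pose regressor; layout is an assumption."""

    def __init__(self, lidar_points: int = 360):
        super().__init__()
        # EfficientNet-B0 backbone for the camera image; stripping the
        # classification head leaves a 1280-d image embedding.
        self.cam_encoder = efficientnet_b0(weights=None)
        self.cam_encoder.classifier = nn.Identity()
        # Small MLP for a flattened single-layer LiDAR scan
        # (assumed 360 range readings; the thesis varies layer count).
        self.lidar_encoder = nn.Sequential(
            nn.Linear(lidar_points, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # Late fusion: concatenate both embeddings, then regress the pose.
        # Predicting (sin, cos) of the heading avoids the 0/360 wraparound
        # that a raw angle target would suffer from.
        self.head = nn.Sequential(
            nn.Linear(1280 + 128, 256), nn.ReLU(),
            nn.Linear(256, 4),  # x, y, sin(theta), cos(theta)
        )

    def forward(self, image: torch.Tensor, scan: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.cam_encoder(image), self.lidar_encoder(scan)], dim=1
        )
        return self.head(fused)

model = SnapshotLocalizer()
pose = model(torch.randn(1, 3, 224, 224), torch.randn(1, 360))
print(pose.shape)  # torch.Size([1, 4])

A mid-level (feature-map) or early (input-stacking) fusion strategy would move the concatenation earlier in the network; comparing such variants is the kind of trade-off the abstract describes.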