Abstract

While computers assist humans with tasks such as navigation that involve spatial aspects, agents that can interact meaningfully in this context are still in their infancy. One core issue is the mismatch between the representation of spatial information a computer-based system is likely to use and the one a human is likely to use. Computers are better suited to quantitative schemes such as maps or diagrams that rely on measurable distances between entities. Humans frequently use higher-level, domain-specific conceptual representations such as buildings, rooms, or streets for orientation purposes. Combined with the person-centric world view we often assume when referring to spatial information, this makes it challenging for agents to convert statements using spatial references into assertions that match their own internal representation. In this paper, we discuss an approach that uses natural language processing and information extraction toolkits to identify entities and statements about their spatial relations. These extractions are then processed by a spatial reasoner to convert them from the human conceptual space into the quantitative space used by the computer-based agent.
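To make the described pipeline concrete, the sketch below shows one way the extraction stage could look: pulling simple (subject, relation, object) triples for spatial statements out of English text. It is a minimal, hypothetical example assuming the spaCy library; the paper does not specify which NLP or information-extraction toolkit, relation vocabulary, or spatial reasoner is used, and the names extract_spatial_relations and SPATIAL_TERMS are illustrative only.

```python
# Minimal sketch (not the paper's implementation): extract simple spatial
# relations such as "X is left of Y" from English text with spaCy, as a
# stand-in for the NLP / information-extraction stage described above.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")

# Hypothetical relation vocabulary; the paper's actual vocabulary is not given.
SPATIAL_TERMS = ["left", "right", "north", "south", "east", "west", "front"]

matcher = Matcher(nlp.vocab)
# Match two-token phrases like "left of" or "north of" between noun phrases.
matcher.add("SPATIAL_REL", [[{"LOWER": {"IN": SPATIAL_TERMS}}, {"LOWER": "of"}]])

def extract_spatial_relations(text):
    """Return (subject, relation, object) triples for simple spatial statements."""
    doc = nlp(text)
    chunks = list(doc.noun_chunks)
    triples = []
    for _, start, end in matcher(doc):
        relation = doc[start].lower_
        # The nearest noun phrase before the match is treated as the subject,
        # the nearest noun phrase after it as the object.
        subject = next((c for c in reversed(chunks) if c.end <= start), None)
        obj = next((c for c in chunks if c.start >= end), None)
        if subject is not None and obj is not None:
            triples.append((subject.text, relation, obj.text))
    return triples

print(extract_spatial_relations("The server room is left of the main lobby."))
# Expected: [('The server room', 'left', 'the main lobby')]
# A downstream spatial reasoner could then map such a triple onto a quantitative
# constraint, e.g. x(server room) < x(lobby) in a shared coordinate frame.
```

The triple output is only the human-conceptual half of the problem; converting it into the agent's quantitative representation is the role the abstract assigns to the spatial reasoner.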

Disciplines

Computer Sciences

URL: https://digitalcommons.calpoly.edu/csse_fac/180