Date of Award

6-2021

Degree Name

MS in Computer Science

Department/Program

Computer Science

College

College of Engineering

Advisor

Franz Kurfess

Advisor Department

Computer Science

Advisor College

College of Engineering

Abstract

Usability testing, or user experience (UX) testing, is increasingly recognized as an important part of the user interface design process. However, evaluating usability tests can be expensive in terms of time and resources and can lack consistency between human evaluators. This makes automation an appealing expansion or alternative to conventional usability techniques.

Early usability automation focused on evaluating human behavior through quantitative metrics, but the explosion of opinion mining and sentiment analysis applications in recent decades has led to exciting new possibilities for usability evaluation methods.

This paper presents a survey of modern, open-source sentiment analyzers' usefulness in extracting and correctly identifying moments of semantic significance in the context of recorded mock usability evaluations. Though our results did not find a text-based sentiment analyzer that could correctly parse moments as well as human evaluators could, one analyzer was able to parse positive moments identified through audio-only cues as well as human evaluators could. Further research into tuning current sentiment analyzers for usability evaluations and into using multimodal tools instead of text-based analyzers could produce valuable aids for usability evaluations when used in conjunction with human evaluators.
