Available at: https://digitalcommons.calpoly.edu/theses/1014
MS in Computer Science
We live in an age when consumers can shop and browse the web using hand-held devices. This means that competitive companies need a website to represent their brand and conduct business. E-commerce sites must pay special attention to the usability of their sites, since usability strongly shapes how potential customers view their brand.
Jakob Nielsen defines usability as a "quality attribute that assesses how easy user interfaces are to use"; he separates usability into five quality components: learnability, efficiency, memorability, errors, and satisfaction. The current standard for testing usability involves having a number of users physically use a site in order to determine where they have trouble. This kind of usability testing can be time-consuming and costly.
In order to mitigate some of these costs, many tools are being developed to help automate the process. However, many automated tools evaluate only one of the five components, or simply look for errors. In an attempt to increase the reliability and scope of such testing, this paper investigates the effectiveness of automated usability evaluators and proposes methods for future researchers to test them. Specifically, this paper details an experiment performed to test some freely available usability evaluators against more traditional usability evaluations. The experiment attempts to determine whether automatic usability evaluations might be used as a cheaper alternative to more traditional usability evaluations.