DOI: https://doi.org/10.15368/theses.2010.64
Available at: https://digitalcommons.calpoly.edu/theses/317
Date of Award
5-2010
Degree Name
MS in Computer Science
Department/Program
Computer Science
Advisor
Alex Dekhtyar
Abstract
The requirements traceability matrix (RTM) supports many software engineering and software verification and validation (V&V) activities such as change impact analysis, reverse engineering, reuse, and regression testing. Generating RTMs manually, however, is tedious and error-prone, so RTMs are often not generated or maintained at all. Automated techniques have been developed to generate candidate RTMs with some success; automating the process can save time and potentially improve the quality of the results. When using RTMs to support the V&V of mission- or safety-critical systems, however, a human analyst is required to vet the candidate RTMs, so the focus becomes the quality of the final RTM. This thesis introduces an experimental framework for studying human interactions with decision support software and reports on the results of a study that applied the framework to investigate how human analysts perform when vetting candidate RTMs generated by automated methods. The study was conducted at two universities, where 33 participants analyzed candidate RTMs of varying accuracy for a Java code formatter program. It found that analyst behavior differs depending on the initial candidate RTM given to the analyst, but that all analysts tend to converge their final RTMs toward a hot spot in the recall-precision space.
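Since the findings are framed in terms of the recall-precision space, a minimal sketch (not from the thesis itself) may help fix ideas: a candidate RTM can be modeled as a set of trace links between high-level requirements and low-level elements, and scored against a known-correct answer set. The function name and example links below are purely illustrative.

```python
def rtm_metrics(candidate: set[tuple[str, str]],
                answer_set: set[tuple[str, str]]) -> tuple[float, float]:
    """Return (recall, precision) of a candidate RTM against an answer set.

    Each RTM is a set of (requirement, element) trace links.
    """
    true_links = candidate & answer_set  # links the candidate recovered correctly
    recall = len(true_links) / len(answer_set) if answer_set else 0.0
    precision = len(true_links) / len(candidate) if candidate else 0.0
    return recall, precision


# Hypothetical example: requirement IDs traced to source files of a formatter.
answer = {("R1", "Formatter.java"), ("R2", "Indenter.java"), ("R3", "Main.java")}
candidate = {("R1", "Formatter.java"), ("R2", "Indenter.java"), ("R2", "Main.java")}

recall, precision = rtm_metrics(candidate, answer)
print(f"recall={recall:.2f} precision={precision:.2f}")  # recall=0.67 precision=0.67
```

In these terms, an analyst's final RTM occupies a point in the recall-precision plane; the study's "hot spot" is the region toward which participants' final RTMs converged regardless of the accuracy of the candidate RTM they started from.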