Preprint version. Published in International Workshop on Predictor Models in Software Engineering Proceedings (PROMISE'07: ICSE Workshops 2007): Minneapolis, MN, May 20, 2007, pages 4-4.
NOTE: At the time of publication, the author Alex Dekhtyar was not yet affiliated with Cal Poly.
The definitive version is available at https://doi.org/10.1109/PROMISE.2007.8.
Several recent studies have employed traditional information retrieval (IR) methods to assist in mapping elements of software engineering artifacts to each other. This activity is referred to as candidate link generation because the final decision on the mapping belongs to the human analyst. Feedback techniques that utilize information from the analyst (on whether or not the candidate links are correct) have been shown to improve the quality of the mappings. Yet the analyst invests time in providing this feedback, which raises the question of whether the analyst can be guided in how to best utilize that time. This paper simulates a number of approaches an analyst might take to evaluating the same candidate link list, and finds that more structured and organized approaches appear to save the analyst time and effort.
This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution.