Postprint version. Published in International Journal on Software Engineering and Knowledge Engineering (IJSEKE), Volume 15, Issue 5, October 1, 2005, pages 751-782.
NOTE: At the time of publication, the author Alex Dekhtyar was not yet affiliated with Cal Poly.
The definitive version is available at https://doi.org/10.1142/S021819400500252X.
The construction of traceability matrices by anyone other than the original developers is an arduous, error-prone, prolonged, and labor-intensive task. Thus, after-the-fact requirements tracing is a process where the right kind of automation can definitely assist an analyst. Recently, a number of researchers have studied the application of various methods, often based on information retrieval, to after-the-fact tracing. The studies are diverse enough to warrant a means for comparing them easily, as well as for determining areas that require further investigation. To that end, we present here an experimental framework for evaluating requirements tracing and traceability studies. Common methods, metrics, and measures are described. Recent experimental requirements tracing journal and conference papers are catalogued using the framework. We compare these studies and identify areas for future research. Finally, we provide suggestions on how the field of tracing and traceability research may move to a more mature level.