Date of Award

12-2017

Degree Name

MS in Biomedical Engineering

Department/Program

Biomedical and General Engineering

Advisor

Lanny Griffin and Xiao-Hua (Helen) Yu

Abstract

Alzheimer’s disease (AD) is a chronic, progressive, and irreversible syndrome that deteriorates cognitive function. Official death certificates for 2013 reported 84,767 deaths from Alzheimer’s disease, making it the 6th leading cause of death in the United States. The rate of AD is estimated to double by 2050. The neurodegeneration of AD begins decades before symptoms of dementia are evident; therefore, an efficient methodology for early and accurate diagnosis can lead to more effective treatments.

Neuroimaging techniques such as magnetic resonance imaging (MRI) can detect changes in the brains of living subjects, and medical imaging is the best diagnostic tool for determining brain atrophy; however, a significant limitation is the level of training, methodology, and experience of the diagnostician. Thus, computer-aided diagnosis (CAD) systems are a promising tool for improving diagnostic outcomes. No publications were found that address the use of feedforward artificial neural networks (ANNs) with MRI image attributes for the classification of AD.

Consequently, the focus of this study is to investigate whether MRI images, specifically their texture and frequency attributes, can be combined with a feedforward ANN model to classify individuals with AD. The study also compared the performance of a single MRI view against combinations of views. The frequency attributes, texture attributes, and MRI views, in combination with the feedforward artificial neural network, were tested to determine whether they were comparable to clinicians' performance. The clinician benchmark, taken from a study of 1,073 individuals, was 78 percent accuracy, 87 percent sensitivity, 71 percent specificity, and 78 percent precision.
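The classifier described above, a one-hidden-layer feedforward network trained with backpropagation on MRI-derived feature vectors, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the layer size, learning rate, and the synthetic two-class feature data are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, lr=0.5, epochs=2000):
    """Train a one-hidden-layer sigmoid network with backpropagation."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass: hidden activations, then output probability.
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2).ravel()
        # Backpropagate cross-entropy error (gradients averaged over samples).
        d2 = (p - y)[:, None] / n
        grad_W2 = h.T @ d2
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= lr * grad_W2
        b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1
        b1 -= lr * d1.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)

# Synthetic stand-in for texture/frequency feature vectors of two groups
# (e.g., AD vs. control); real inputs would come from MRI preprocessing.
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.concatenate([np.zeros(50), np.ones(50)])
params = train(X, y)
accuracy = (predict(params, X) == y).mean()
```

On these well-separated synthetic classes the network converges easily; the study's reported metrics come from real MRI feature sets, not from a toy problem like this.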

The study found that the low-frequency components of the Discrete Wavelet Transform (DWT) and the Fourier Transform (FT) give results comparable to the clinicians'; moreover, the FT outperformed the clinicians with an accuracy of 85 percent, precision of 87 percent, sensitivity of 90 percent, and specificity of 75 percent. In the case of texture, both single texture features and combinations of two or more features gave results comparable to the clinicians. The Gray Level Co-occurrence Matrix (GLCOM), which combines several texture features, was the highest-performing texture method, with 82 percent accuracy, 86 percent sensitivity, 76 percent specificity, and 86 percent precision. Among feature combinations, Combination CII (energy and entropy) outperformed all others, with 78 percent accuracy, 88 percent sensitivity, 72 percent specificity, and 78 percent precision. Additionally, combining views can increase performance for certain texture attributes, whereas for frequency attributes the axial view outperformed the sagittal and coronal views. In conclusion, this study found that both texture and frequency characteristics, in combination with a feedforward backpropagation neural network, can perform at the level of the clinician, and even higher, depending on the attribute and the view or combination of views used.
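The texture attributes named above derive from a gray-level co-occurrence matrix: a normalized count of how often pairs of gray levels appear at a fixed spatial offset. A minimal sketch, assuming a single horizontal offset and a toy 4-level image (not MRI data or the thesis code), shows how the energy and entropy features of Combination CII are computed from that matrix:

```python
import numpy as np

def glcm(image, levels):
    """Normalized co-occurrence counts for pixel pairs one step to the right."""
    P = np.zeros((levels, levels))
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        P[i, j] += 1
    return P / P.sum()

def energy(P):
    # Angular second moment: sum of squared co-occurrence probabilities.
    return np.sum(P ** 2)

def entropy(P):
    # Shannon entropy over nonzero co-occurrence probabilities (bits).
    nz = P[P > 0]
    return -np.sum(nz * np.log2(nz))

# Toy 4x4 image quantized to 4 gray levels, standing in for an MRI slice.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
e, h = energy(P), entropy(P)
```

In practice libraries such as scikit-image build the matrix over several offsets and angles; here a single offset keeps the arithmetic small enough to check by hand.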
