Journal of Disability Research
      Exploring the Efficacy of Deep Learning Techniques in Detecting and Diagnosing Alzheimer’s Disease: A Comparative Study

      Published
      research-article

            Abstract

Transfer learning has become extremely popular in recent years for tackling issues from various sectors, including the analysis of medical images. Medical image analysis has transformed medical care in recent years, enabling physicians to identify diseases early and accelerate patient recovery. Imaging has greatly aided the diagnosis of Alzheimer’s disease (AD), a degenerative neurological condition that slowly deprives patients of their memory and cognitive abilities. Computed tomography (CT) and brain magnetic resonance imaging (MRI) scans are used to detect dementia in AD patients. This research primarily aims to classify AD patients into multiple classes on a large dataset by using ResNet50, VGG16, and DenseNet121 as transfer learning backbones alongside convolutional neural networks, improving classification accuracy over existing approaches. The methods employed utilize CT and brain MRI scans for AD patient classification, considering the various stages of AD. The study demonstrates promising results in predicting AD phases with MRI, yet challenges persist, including the processing of large datasets and the cognitive workload involved in interpreting scans. Addressing variations in image quality is crucial, necessitating advancements in imaging technology and analysis techniques. The stages of AD considered are early mild cognitive impairment, mild cognitive impairment, late mild cognitive impairment, and the final AD stage. The proposed approach achieves an accuracy of 96.6%, significantly improving on existing models.

            Main article text

            INTRODUCTION

Alzheimer’s disease (AD) is a global health crisis that affects millions of people around the world. This debilitating condition erodes the brain’s ability to comprehend, remember, and perform basic functions, ultimately leading to death (Nawaz et al., 2021). With projections indicating that the number of AD patients will increase from 50 million to 152 million by 2050, it is imperative that we take action now to address this growing health crisis (Maurer et al., 1997). The cost of treating AD is already staggering, with global expenses estimated at nearly $186 billion in 2018 (Richards and Hendrie, 1999). Unfortunately, this number is only expected to increase in the coming years, putting an enormous burden on the healthcare system (Yiannopoulou and Papageorgiou, 2020). According to the established Clinical Dementia Rating, the disorder is split into four stages: early mild cognitive impairment (EMCI), mild cognitive impairment (MCI), late mild cognitive impairment (LMCI), and AD (Morris et al., 2001; Yang et al., 2021). Early diagnosis of dementia disorders is crucial for patient recovery and for reducing treatment expenses, because the cost of treating patients differs between EMCI and LMCI (Nozadi and Kadoury, 2018; Sheng et al., 2021). AD is often diagnosed only after the onset of “Alzheimer’s dementia,” since the underlying pathological changes in patients cannot be assessed early (DeTure and Dickson, 2019; Porsteinsson et al., 2021). The initial diagnostic criteria for AD were established in 1984 and relied only on clinical symptoms (Alzheimer’s Association Report, 2018; Nasreddine et al., 2023). With the discovery of biomarkers such as cerebrospinal fluid, magnetic resonance imaging (MRI), and positron emission tomography (PET) data, the International Working Group introduced a new approach in 2014, and this served as the model for the National Institute on Aging and Alzheimer’s Association (NIA-AA) (Jack et al., 2018; Mankhong et al., 2022).
Biomarker data are used to connect the clinical conditions of dementia or mild cognitive loss to intrinsic AD pathological changes with high, moderate, or low risk in the NIA-AA criteria (Jellinger et al., 1990; Scheltens et al., 2016; Jack et al., 2018). Imaging biomarkers such as computed tomography, functional magnetic resonance imaging (fMRI), MRI, and PET scans are used to assess AD (Sheikh-Bahaei et al., 2017). The hippocampus and entorhinal cortex show extremely early changes in AD that are consistent with its pathology, but it is still uncertain which structure is best suited for an early diagnosis. The physiology of dementia and its differential diagnosis have benefited greatly from structural and functional imaging, which also holds considerable potential for tracking the course of the disease (O’Brien, 2007). Numerous reports have documented imaging methods that can be used to detect AD (Kim et al., 2022; Prasath and Sumathi, 2023). In volumetric MRI, patterns distinguishing sick and healthy subjects were identified using feature-based morphometry (Toews et al., 2010). Recently, computerized medical image processing techniques such as convolutional neural networks (CNNs) have achieved major advancements (Yamashita et al., 2018). As a result, various CNN models, including Visual Geometry Group (VGG), MobileNet, AlexNet, and ResNet, are available for object detection and segmentation. Although CNNs are a renowned deep learning technique, their effectiveness is hampered by the absence of extensive medical imaging datasets (Chan et al., 2020). Transfer learning is among the most efficient methods for building a deep CNN without overfitting when the amount of data is minimal (Xiao et al., 2018). A pre-trained network is the foundation of transfer learning: the proposed method can learn the most useful features instead of training a specific CNN from scratch.
To categorize AD into five classes, the proposed research study used three pre-trained networks: VGG16, ResNet50, and DenseNet121. The main contributions of this research paper toward detecting and classifying the AD stages are as follows:

• Identification of an image dataset and ensuring that the identified dataset is in the artificial neural network (ANN) format

            • Conversion of this image dataset into the jpeg format

            • Application of different normalization techniques on the dataset to remove ambiguities

            • Application of different data augmentation techniques on the normalized dataset

            • Ensemble of different deep learning approaches on normalized dataset to detect and diagnose AD stages

• Finally, comparison of the efficiency of the deep learning models, finding that VGG16 and DenseNet121 outperform ResNet50 and the other models

            The research focuses on utilizing transfer learning and deep learning techniques, specifically VGG16, ResNet50, and DenseNet121, to detect and classify AD stages. By employing pre-trained networks, the study aims to overcome limitations in the dataset size and optimize classification accuracy. The identified contributions include dataset preparation, normalization, and augmentation, followed by the application of ensemble deep learning approaches for classification. The results highlight the superior performance of VGG16 and DenseNet121 compared to ResNet50 and other models, demonstrating their efficacy in AD stage detection. Through this research, significant advancements are made in AD diagnosis, addressing the pressing need for accurate and efficient classification methods in the field of medical imaging and neurology.

            LITERATURE REVIEW

            A literature review on the use of machine learning techniques in AD research shows a growing trend in the development of models that can assist in early diagnosis, predict disease progression, and improve the understanding of the underlying biological mechanisms of AD. One of the most common approaches in AD research is the use of MRI scans to study brain changes associated with the disease. CNNs have been used to classify and differentiate between healthy brains and those with AD based on MRI scans. Some of the promising research studies in detecting early signs of AD, which can help in early intervention and improve patient outcomes, are described as follows:

            An automated framework was developed by Acharya et al. (2019) to evaluate whether a baseline brain scan will detect any evidence of AD. Wang and Liu (2019) integrated genomic data from six different brain areas using support vector machine (SVM) learning techniques to find AD biomarkers. Mahyoub et al. (2018) proposed that relying on characteristics including lifestyle, medical history, demography, and other considerations, AD is predicted at various stages. Rueda et al. (2014) suggested a fusion-based image processing technique that identifies discriminative brain patterns connected to the presence of neurodegenerative disorders. The effectiveness of classification using SVM was assessed on several datasets once the discriminative patterns had been identified. A classification approach based on multilayer brain divisions was presented by Li and Zhang (2016). Using SVM, histogram-based parameters from MRI data were used to categorize various brain levels.

            Payan and Cruz’s (2015) contributions lie in their application of three-dimensional (3D) CNNs to predict AD using neuroimaging data. By leveraging advanced deep learning techniques, the study demonstrates the potential of CNNs in analyzing complex 3D brain images to aid in AD diagnosis. This pioneering work highlights the role of machine learning algorithms in identifying patterns indicative of AD pathology, offering promise for early detection and intervention strategies. Similarly, Liu et al. (2015) proposed a multimodal neuroimaging feature learning approach for multiclass diagnosis of AD. This method integrates information from multiple neuroimaging modalities to enhance diagnostic accuracy. Through comprehensive feature learning, the study advances the field by providing a robust framework for the classification of AD across multiple stages, thereby aiding in early detection and personalized treatment strategies. Researchers like Hosseini-Asl et al. (2015) developed a 3D deeply supervised adaptable CNN for AD diagnostics. This innovative approach harnesses the power of deep learning to analyze 3D MRI data, enabling more accurate and efficient detection of AD-related brain changes. By leveraging deep supervision techniques, the proposed network enhances feature representation and classification performance, advancing the capabilities of automated diagnostic systems for AD.

Giraldo et al. (2018) proposed an automated technique for identifying structural abnormalities in the thalamus, planum temporale, amygdala, and hippocampal areas. Nawaz et al. (2021) devised a computer-aided framework that supports real-time AD diagnosis and identifies the stages of AD. For deep feature modeling and extraction, researchers have used classification algorithms such as K-nearest neighbor, random forest (RF), and SVM (Thanh Noi and Kappas, 2017; Sheth et al., 2022). Large datasets were necessary for classification and deep feature extraction to avoid overfitting problems. To attain maximum accuracy in early AD diagnosis, deeper and more widely propagated learning techniques have been recommended over previous approaches (Gupta et al., 2019). To date, there is no treatment for AD based on any medical reasoning or algorithmic approach, nor one for detecting the stages of its complications (Zhao et al., 2023). Researchers in artificial intelligence are therefore keen to develop suitable algorithms for AD-related areas. Two families of methods are utilized: conventional machine learning and deep learning. Conventional machine learning includes SVMs, RF, linear regression, naïve Bayes, ANNs, etc., while deep learning includes CNNs, recursive neural networks, etc. (Zhao et al., 2023).

            The main contribution of Sarraf et al. (2016) lies in the utilization of deep CNNs for AD classification based on MRI and fMRI data. By leveraging advanced neural network architectures, the study aims to enhance the accuracy of AD diagnosis, potentially enabling earlier detection and intervention. This approach demonstrates the potential of deep learning techniques in leveraging neuroimaging data for improved understanding and management of AD. Suk et al. (2014) contributed to AD and MCI diagnoses. Through hierarchical feature representation and multimodal fusion using deep learning techniques, the study enhances the accuracy and reliability of AD/MCI diagnosis. By leveraging deep learning algorithms to integrate diverse data sources, such as MRI and PET scans, the paper provides a comprehensive framework for improving early detection and understanding the underlying mechanisms of AD/MCI.

To attain high accuracy, Sørensen et al. (2018) proposed focusing on a nonlinear SVM with a radial basis function when developing a computerized machine learning approach for categorizing AD phases. Maqsood et al. (2019) developed a transfer learning approach to identify AD, suggesting a breakdown of the AD category into different divisions. Since AD is an incurable ailment, it is an emerging research topic globally. The contributions of researchers across the globe toward the detection and diagnosis of this disorder are listed in Table 1.

            Table 1:

            Literature review for AD detection.

Dataset | Classification | Results | Reference
MIAS dataset | Binary | 95% | Chowdhary et al. (2020)
Retinal photographs | Binary | 93% | Cheung et al. (2022)
MNIST | Binary | 85% | Nagabushanam et al. (2022)
ADNI | Binary | 96% | Amoroso et al. (2018)
ADNI | Binary | 85% | Mirabnahrazam et al. (2022)
ADNI | Binary | 88% | Hashemifar et al. (2022)
ADNI | Multi | 96% | Ning et al. (2021)

            Abbreviations: ADNI, Alzheimer’s Disease Neuroimaging Initiative; MIAS, mammographic image analysis society; MNIST, modified national institute of standards and technology.

            In addition to diagnosis and progression prediction, machine learning techniques have also been applied to understand the biological mechanisms underlying AD. This includes the analysis of genomic data, protein expression data, and other biological markers to identify potential drug targets and predict disease outcome.

            The methodology employed in this study aligns with recent literature on the use of machine learning techniques in AD research. Similar to the reviewed studies, this research focuses on utilizing machine learning models, particularly CNNs and SVMs, to analyze MRI scans and classify different stages of AD. The study also acknowledges the importance of large datasets for classification accuracy and emphasizes the need for advanced techniques to mitigate overfitting issues (Mirabnahrazam et al., 2023).

            Furthermore, like some of the referenced works, this study incorporates transfer learning approaches to enhance the classification of AD phases. Transfer learning has been increasingly recognized as a valuable technique in AD research, allowing models to leverage pre-trained features and adapt them to specific datasets. Additionally, the study emphasizes the importance of nonlinear SVMs for accurate categorization of AD phases, aligning with previous research that highlights the effectiveness of nonlinear approaches in complex classification tasks (Hashemifar et al., 2023).

            This literature review highlights the potential of machine learning techniques in advancing the understanding and treatment of AD. While the field is still in its early stages, the results to date are promising, and continued research and development is necessary to fully realize the potential of these approaches.

            TRANSFER LEARNING

A model created for one task is used as the basis for another using the machine learning technique known as transfer learning. Deep learning tasks in computer vision and natural language processing are built on pre-trained models. Compared to building neural network models from scratch, they are both cheaper and faster, and they perform remarkably better on related tasks. Transfer learning means learning a new task more effectively by applying what has already been learned about a related one (Olivas et al., 2010). For this approach to be practical, the features must be generic, i.e. applicable to both the base task and the target task (Yosinski et al., 2014; Han et al., 2019). CNNs, often known as ConvNets, are a subset of deep neural networks and are most frequently applied to the processing of medical images. The fundamental structure of a CNN is shown in Figure 1. Various pre-trained deep learning models with transfer learning approaches have been proposed in the literature. VGG16, ResNet50, and DenseNet121 were used in this study.

            Figure 1:

            Basic CNN architecture for AD detection procedure. Abbreviations: AD, Alzheimer’s disease; CNN, convolutional neural network.

            VGG16

VGG16 is a CNN with 16 layers. A version of the network pre-trained on the ImageNet database has been trained on more than a million images (Rayar, 2017). The pre-trained model can categorize images into 1000 distinct object categories; the network has therefore acquired rich feature representations for a wide variety of images. Because of its pre-training on a large and diverse dataset, the VGG16 model can extract meaningful features from images even if it has not been specifically trained on the target task. This property makes it well suited for transfer learning, where the pre-trained model is fine-tuned on a smaller dataset for a specific classification task.

            One of the key characteristics of VGG16 is its depth, with 16 layers of trainable parameters. This depth allows the network to learn complex features and patterns from input images, making it particularly effective for image classification tasks (Simonyan and Zisserman, 2015).
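As an illustrative sketch (not the authors' exact code), the VGG16 fine-tuning described above can be expressed with the Keras API, which the paper uses elsewhere; the frozen backbone and the 256-unit head are assumptions for illustration:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models


def build_vgg16_transfer(num_classes=5, weights="imagenet"):
    """Transfer-learning sketch: reuse VGG16's pre-trained
    convolutional features and attach a new classification head."""
    # Load VGG16 without its 1000-class ImageNet head.
    base = VGG16(include_top=False, weights=weights,
                 input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pre-trained feature extractor
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # head size is an assumption
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model
```

In practice the frozen base can later be unfrozen (partially or fully) for a low-learning-rate fine-tuning pass once the new head has converged.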

            ResNet 50

ResNet50 is a variant of the ResNet model architecture, characterized by its structure consisting of 48 convolution layers, 1 MaxPool layer, and 1 average pool layer. This model is notable for its depth and efficiency, capable of performing approximately 3.8 × 10⁹ floating-point operations. ResNet50 has gained widespread adoption due to its effectiveness in various computer vision tasks. The architecture has been extensively studied and evaluated, with detailed analyses conducted to understand its design principles and performance characteristics (Fuse et al., 2018).

            DenseNet121

DenseNet121 belongs to a class of CNNs known as densely connected networks, where each layer is connected to every other layer in a feed-forward fashion. This architecture enables direct connections between all layers, totaling L(L + 1)/2 connections among L layers. Unlike traditional CNNs, DenseNet addresses the issue of vanishing gradients by restructuring the network architecture to facilitate streamlined connectivity between layers. This design innovation enhances gradient flow throughout the network, promoting effective feature propagation and mitigating degradation issues encountered in deeper architectures (Huang et al., 2016).

            THE PROPOSED WORK AND ITS EXPERIMENTAL EVALUATION

The MRI images employed in this study are sourced from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, obtained from http://adni.loni.usc.edu/ (Cuingnet et al., 2011). ADNI is a longitudinal multicenter study designed to facilitate the identification of biomarkers for the early detection and tracking of AD progression. There are 3400 images in this dataset (680 from each class), each measuring 224 × 224. The dataset comprises MRI scans, along with other neuroimaging, clinical, cognitive, and genetic data from both AD patients and healthy control subjects. These MRI images provide valuable insights into the structural and functional changes in the brain associated with AD, enabling researchers to investigate disease mechanisms, develop diagnostic tools, and evaluate treatment efficacy. The research flow of the proposed work is shown as a flowchart in Figure 2.

            Figure 2:

            The basic flowchart of the proposed work. Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment.

            The images from each AD stage are selected and given as input to the specified models. The data are divided into training, validation, and testing data. The complete information regarding each stage is listed in Table 2.

            Table 2:

            The images given as inputs to the model.

AD stage | Training data | Test data | Validation data | Total
NC | 500 | 90 | 90 | 680
EMCI | 500 | 90 | 90 | 680
MCI | 500 | 90 | 90 | 680
LMCI | 500 | 90 | 90 | 680
AD | 500 | 90 | 90 | 680

            Abbreviations: AD, Alzheimer’s disease; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment.
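The 500/90/90 split per class in Table 2 can be sketched as a simple index partition; the function name and random seed below are illustrative, not taken from the paper:

```python
import numpy as np


def split_indices(n_per_class=680, n_train=500, n_test=90, seed=0):
    """Partition one class's 680 image indices into the
    500/90/90 train/test/validation split from Table 2."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_per_class)          # shuffle before splitting
    train = idx[:n_train]                        # first 500 indices
    test = idx[n_train:n_train + n_test]         # next 90 indices
    val = idx[n_train + n_test:n_train + 2 * n_test]  # final 90 indices
    return train, test, val
```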

            Data balancing

Data balancing is essential for the model to predict more accurately. Unbalanced data lead to overfitting and underfitting; thus, the data need to be balanced. Herein, we use downsampling techniques to balance the data. Figure 3a and b show the data before and after sampling.

            Figure 3:

            (a) The unbalanced data and (b) the balanced data after application of sampling techniques. Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment.
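A minimal sketch of the downsampling technique described above, balancing every class to the size of the smallest one; the helper name and seed are hypothetical:

```python
import numpy as np


def downsample(images, labels, seed=0):
    """Balance a dataset by randomly downsampling each class
    to the size of the smallest class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()  # target size: the smallest class
    keep = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), n_min, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)  # avoid class-ordered output
    return images[keep], labels[keep]
```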

            Data augmentation

The size of the dataset is significant for deep learning models, which predict more accurately and give better accuracy results on large datasets. A major drawback of image datasets is that they are often not available in large sizes; therefore, augmentation is needed to enlarge the dataset for the models. We applied different data augmentation techniques to the dataset, such as horizontal flipping of the images, rotation of images by 5°, and width and height shifts. In this study, data augmentation was applied with the help of the image data generator of the Keras API. Figure 4 shows the effect of the data augmentation techniques on brain MRI images.

            Figure 4:

            Application of data augmentation techniques on the public dataset. Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment.
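The augmentation settings named above (horizontal flip, 5° rotation, width and height shifts) map directly onto Keras's ImageDataGenerator; the shift fractions and pixel rescaling below are assumptions, as the paper does not state them:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline matching the transformations described in the text.
datagen = ImageDataGenerator(
    horizontal_flip=True,    # mirror images left-right
    rotation_range=5,        # rotate by up to 5 degrees
    width_shift_range=0.1,   # horizontal shift fraction (assumed value)
    height_shift_range=0.1,  # vertical shift fraction (assumed value)
    rescale=1.0 / 255,       # normalize pixel values to [0, 1] (assumed)
)

# Example: stream augmented batches from an in-memory array of MRI images.
x = np.random.rand(4, 224, 224, 3).astype("float32")
batch = next(datagen.flow(x, batch_size=4, shuffle=False))
```

During training, `datagen.flow(...)` (or `flow_from_directory`) is passed to `model.fit` so that each epoch sees freshly transformed copies of the images.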

            RESULT EVALUATION

The dataset used in this paper is divided into testing, training, and validation data. A total of 2900 images were used in this research: 2000 for training (400 from each class), 450 for testing (90 from each class), and 450 for validation (90 from each class). We applied transfer learning using pre-trained CNN models such as DenseNet121 and VGG16 with ImageNet weights. For multiclass classification, we utilized RMSProp as the optimizer with a learning rate of 0.00001 and categorical cross-entropy as the loss function, with accuracy as the evaluation metric, yielding training and validation loss and accuracy values.
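A sketch of the training configuration just described (a DenseNet121 backbone with ImageNet weights, RMSProp at a 0.00001 learning rate, and categorical cross-entropy); the global-average pooling and single-layer head are assumptions, not details given in the paper:

```python
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras import layers, models, optimizers


def build_ad_classifier(num_classes=5, weights="imagenet"):
    """Pre-trained DenseNet121 backbone with a new softmax head
    for the five AD stages, compiled with the paper's settings."""
    base = DenseNet121(include_top=False, weights=weights,
                       input_shape=(224, 224, 3), pooling="avg")
    outputs = layers.Dense(num_classes, activation="softmax")(base.output)
    model = models.Model(base.input, outputs)
    model.compile(
        optimizer=optimizers.RMSprop(learning_rate=0.00001),
        loss="categorical_crossentropy",  # multiclass loss from the text
        metrics=["accuracy"],             # tracked for train/validation
    )
    return model
```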

            DenseNet121

DenseNet121 comprises one 7 × 7 convolution layer, 58 3 × 3 convolution layers, 61 1 × 1 convolution layers, 4 average-pooling layers, and 1 fully connected layer. The performance of the classification models for a particular set of test data is assessed using a confusion matrix (Fig. 5).

            Figure 5:

            Confusion matrix generated by the DenseNet121 model with an overall accuracy of 97.33%. Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment.

The basic architecture, confusion matrix with accuracy, and loss plot generated, respectively, by the DenseNet121 model are displayed in Figures 6–8.

            Figure 6:

            DenseNet121 model architecture for the prediction of AD stages. Abbreviation: AD, Alzheimer’s disease.

            Figure 7:

            Accuracy and loss plot generated by the DenseNet121 model over 100 epochs.

            Figure 8:

            Confusion matrix generated by the VGG16 model with an accuracy of 96.0%. Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment; VGG, Visual Geometry Group.

            VGG16

The VGG16 model, comprising 16 layers, is applied to an input image with dimensions of 224 × 224 and converts it into a 7 × 7 feature map, followed by dense layers producing five outputs. The overall accuracy of the model is 96.0%, as shown in the confusion matrix in Figure 8. The loss and accuracy over 100 epochs are shown in Figure 9, and Table 3 presents the classification report generated by the VGG16 model.

            Figure 9:

            Accuracy and loss plot generated by the VGG16 model over 100 epochs. Abbreviation: VGG, Visual Geometry Group.

            Table 3:

            Classification report generated by the VGG16 model.

Classification report | Precision | Recall | F1-score | Support
Final AD jpeg | 0.90 | 1.00 | 0.95 | 90
Final CN jpeg | 0.94 | 0.89 | 0.91 | 90
Final EMCI jpeg | 0.98 | 0.92 | 0.95 | 90
Final LMCI jpeg | 0.97 | 0.99 | 0.97 | 90
Final MCI jpeg | 0.98 | 0.96 | 0.97 | 90
Accuracy | | | 0.95 | 450
Macro average | 0.95 | 0.95 | 0.95 | 450
Weighted average | 0.95 | 0.95 | 0.95 | 450

            Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment; VGG, Visual Geometry Group.

            ResNet50

The input image size of 224 × 224 is converted to 7 × 7 by the ResNet50 model, which has 50 convolution layers, with dense layers producing five outputs. The model’s accuracy is measured based on different parameters such as recall, F1-score, and precision. The basic architecture, confusion matrix, and accuracy and loss plots are shown in Figures 10–12, respectively. Finally, the classification report generated by the model on the specified dataset is shown in Table 4.

            Figure 10:

            Basic architecture of the ResNet50 mode for the detection of AD stages. Abbreviation: AD, Alzheimer’s disease.

            Figure 11:

            Confusion matrix generated by the ResNet model with an accuracy of 62.22%. Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment.

            Figure 12:

            Accuracy and loss plot generated by the ResNet model over 100 epochs.

            Table 4:

            The classification report generated by the ResNet50 model.

Classification report | Precision | Recall | F1-score | Support
Final AD jpeg | 0.77 | 0.74 | 0.76 | 90
Final CN jpeg | 0.52 | 0.64 | 0.57 | 90
Final EMCI jpeg | 0.86 | 0.47 | 0.60 | 90
Final LMCI jpeg | 0.49 | 1.00 | 0.66 | 90
Final MCI jpeg | 1.00 | 0.22 | 0.36 | 90
Accuracy | | | 0.62 | 450
Macro average | 0.73 | 0.62 | 0.59 | 450
Weighted average | 0.73 | 0.62 | 0.59 | 450

            Abbreviations: AD, Alzheimer’s disease; CN, control normal; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; MCI, mild cognitive impairment.

            DISCUSSION AND SIGNIFICANCE OF THE WORK

The proposed work evaluates the efficiency of the models with different performance metrics, such as the confusion matrix, accuracy, loss, F1-score, precision, recall, receiver operating characteristic, and sensitivity. The general formulae for the different parameters are as follows:

(1) Accuracy = (Number of correct predictions) / (Total number of predictions)

(2) Precision = (No. of true positives) / (No. of true positives + No. of false positives)

(3) Recall = (No. of true positives) / (No. of true positives + No. of false negatives)

(4) F1-score = 2 × (Precision × Recall) / (Precision + Recall)
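Equations 1-4 can be computed directly from raw counts; the example counts in the test below are hypothetical and are not taken from the paper's confusion matrices:

```python
def accuracy(correct, total):
    """Equation 1: fraction of correct predictions."""
    return correct / total


def precision_recall_f1(tp, fp, fn):
    """Equations 2-4, computed from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```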

            The evaluation of the results in this study involved the utilization of a dataset divided into testing, training, and validating data, consisting of a total of 2900 images. Transfer learning was applied using pre-trained CNN models such as DenseNet121 and VGG16 with ImageNet weights. For multiclass classification, RMSProp was employed as the optimizer with a learning rate of 0.00001, and categorical cross-entropy was used as the loss metric, while accuracy metrics provided training and validation results as well as loss and accuracy values. The DenseNet121 model demonstrated an overall accuracy of 97.33%. The VGG16 model achieved an overall accuracy of 96.0%. The loss and accuracy over 100 epochs were displayed in the corresponding plots, and a classification report detailing precision, recall, F1-score, and support for each class was provided. Similarly, the ResNet50 model converted input images to 7 × 7 dimensions, with an accuracy of 62.22%. Overall, the results showcase the effectiveness of the employed models in accurately classifying AD stages based on MRI images, with each model demonstrating varying levels of accuracy and performance metrics. The performance analysis comparison of the applied models is shown in Figure 13.

            Figure 13:

            Comparative performance analysis generated by pre-trained deep learning models on dataset. Abbreviation: VGG, Visual Geometry Group.

            CONCLUSIONS

The utilization of transfer learning in medical image analysis, particularly in the context of AD diagnosis, has shown significant promise in recent years. This study, employing ResNet50, VGG16, and DenseNet121 alongside CNNs, aimed to classify AD patients into multiple stages with notable success, achieving an accuracy of 96.6%. Despite these advancements, challenges persist, notably regarding the processing of large datasets and the cognitive workload for clinicians interpreting scans. Moreover, variations in image quality and resolution may lead to potential misinterpretations, underscoring the necessity for further advancements in imaging technology and analysis techniques to mitigate these issues. The study’s methodology involved employing pre-trained strategies to predict AD phases, yielding an impressive accuracy rate of 97.23%. The model developed using the ADNI data through the Keras API classified MRI images into five categories: CN, EMCI, MCI, LMCI, and AD. Through the examination of underfitting and overfitting problems, the study addressed key issues in model optimization, leading to enhanced performance. Notably, the proposed model, leveraging the VGG16, DenseNet121, and ResNet50 networks, significantly outperformed existing approaches. Moving forward, the study suggests exploring the application of this model to other disorders utilizing similar data modalities, with a primary focus on enhancing classification results. Overall, this research underscores the potential of transfer learning in advancing AD diagnosis and highlights the ongoing need for innovation to address existing challenges in medical image analysis.

            ACKNOWLEDGMENTS

            The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group No. KSRG-2023-163.

            AUTHOR CONTRIBUTIONS

            All authors contributed equally to this paper.

            CONFLICTS OF INTEREST

            The authors declare no conflicts of interest associated with this work.

            REFERENCES

            1. Acharya UR, Fernandes SL, WeiKoh JE, Ciaccio EJ, Fabell MKM, Tanik UJ, et al. 2019. Automated detection of Alzheimer’s disease using brain MRI images: a study with various feature extraction techniques. J. Med. Syst. Vol. 43:302. [Cross Ref]

            2. Alzheimer’s Association Report. 2018. 2018 Alzheimer’s disease facts and figures. Alzheimer’s Dement. Vol. 14:367–429. [Cross Ref]

            3. Amoroso N, Diacono D, Fanizzi A, La Rocca M, Monaco A, Lombardi A, et al.. 2018. Deep learning reveals Alzheimer’s disease onset in MCI subjects: results from an international challenge. J. Neurosci. Methods. Vol. 302:3–9. [Cross Ref]

            4. Chan H-P, Samala RK, Hadjiiski LM, Zhou C. 2020. Deep learning in medical image analysis. Adv. Exp. Med. Biol. Vol. 1213:3–21

            5. Cheung CY, Ran AR, Wang S, Chan VTT, Sham K, Hilal S, et al.. 2022. A deep learning model for detection of Alzheimer’s disease based on retinal photographs: a retrospective, multicentre case-control study. Lancet Digit. Health. Vol. 4:e806–e815. [Cross Ref]

            6. Chowdhary CL, Mittal M, Kumaresan P, Pattanaik PA, Marszalek Z. 2020. An efficient segmentation and classification system in medical images using intuitionist possibilistic Fuzzy C-mean clustering and Fuzzy SVM algorithm. Sensors. Vol. 20:3903. [Cross Ref]

            7. Cuingnet R, Gerardin E, Tessieras J, Auzias G, Lehéricy S, Habert M, et al.. 2011. Automatic classification of patients with Alzheimer’s disease from structural MRI: a comparison of ten methods using the ADNI database. Neuroimage. Vol. 56:766–781

            8. DeTure MA, Dickson DW. 2019. The neuropathological diagnosis of Alzheimer’s disease. Mol. Neurodegener. Vol. 14:32. [Cross Ref]

            9. Fuse H, Oishi K, Maikusa N, Fukami T; Japanese Alzheimer’s Disease Neuroimaging Initiative. 2018. Detection of Alzheimer’s disease with shape analysis of MRI images. 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS); IEEE; p. 1031–1034.

            10. Giraldo DL, García-Arteaga JD, Cárdenas-Robledo S, Romero E. 2018. Characterization of brain anatomical patterns by comparing region intensity distributions: applications to the description of Alzheimer’s disease. Brain Behav. Vol. 8:e00942. [Cross Ref]

            11. Gupta Y, Lama RK, Kwon G-R. 2019. Prediction and classification of Alzheimer’s disease based on combined features from Apolipoprotein-E genotype, cerebrospinal fluid, MR, and FDG-PET imaging biomarkers. Front. Comput. Neurosci. Vol. 13:72. [Cross Ref]

            12. Han T, Liu C, Yang W, Jiang D. 2019. Learning transferable features in deep convolutional neural networks for diagnosing unseen machine conditions. ISA Trans. Vol. 93:341–353. [Cross Ref]

            13. Hashemifar S, Iriondo C, Casey E, Hejrati M. 2022. DeepAD: a robust deep learning model of Alzheimer’s disease progression for real-world clinical applications. arXiv preprint. [Cross Ref]

            14. Hashemifar S, Iriondo C, Casey E; Genentech Inc., South San Francisco. 2023. Machine learning applications in understanding the biological mechanisms of Alzheimer’s disease: a systematic review. Neuroinformatics. Vol. 21(1):127–140. [Cross Ref]

            15. Hosseini-Asl E, Gimel’farb G, El-Baz A. 2015. Alzheimer’s disease diagnostics by a 3D deeply supervised adaptable convolutional network. Front. Aging Neurosci. Vol. 7:397

            16. Huang G, Liu Z, van der Maaten L, Weinberger KQ. 2016. Densely connected convolutional networks. arXiv:1608.06993. [Cross Ref]

            17. Jack CR, Bennett DA, Blennow K, Carrillo MC, Dunn B, Haeberlein SB, et al.. 2018. NIA-AA research framework: toward a biological definition of Alzheimer’s disease. Alzheimer’s Dement. Vol. 14:535–562. [Cross Ref]

            18. Jellinger K, Danielczyk W, Fischer P, Gabriel E. 1990. Clinicopathological analysis of dementia disorders in the elderly. J. Neurol. Sci. Vol. 95:239–258. [Cross Ref]

            19. Kim J, Jeong M, Stiles WR, Choi HS. 2022. Neuroimaging modalities in Alzheimer’s disease: diagnosis and clinical features. Int. J. Mol. Sci. Vol. 23:6079. [Cross Ref]

            20. Li T, Zhang W. 2016. Classification of brain disease from magnetic resonance images based on multi-level brain partitions. 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); IEEE. Florida, USA. 16-20 August 2016; p. 5933–5936

            21. Liu S, Liu S, Cai W, Che H, Pujol S, Kikinis R, et al.. 2015. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer’s disease. IEEE Trans. Biomed. Eng. Vol. 62(4):1132–1140

            22. Mahyoub M, Randles M, Baker T, Yang P. 2018. Effective use of data science toward early prediction of Alzheimer’s disease. 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS); IEEE; Mercure Exeter Rougemont Hotel. UK. 28-30 June 2018; p. 1455–1461

            23. Mankhong S, Kim S, Lee S, Kwak HB, Park DH, Joa KL, et al.. 2022. Development of Alzheimer’s disease biomarkers: from CSF- to blood-based biomarkers. Biomedicines. Vol. 10:850. [Cross Ref]

            24. Maqsood M, Nazir F, Khan U, Aadil F, Jamal H, Mehmood I, et al.. 2019. Transfer learning assisted classification and detection of Alzheimer’s disease stages using 3D MRI scans. Sensors. Vol. 19:2645. [Cross Ref]

            25. Maurer K, Volk S, Gerbaldo H. 1997. Auguste D and Alzheimer’s disease. Lancet. Vol. 349:1546–1549. [Cross Ref]

            26. Mirabnahrazam G, Ma D, Lee S, Popuri K, Lee H, Cao J, et al.. 2022. Machine learning based multimodal neuroimaging genomics dementia score for predicting future conversion to Alzheimer’s disease. J. Alzheimer’s Dis. Vol. 87:1345–1365. [Cross Ref]

            27. Mirabnahrazam M, Navimipour NJ, Amiri B. 2023. A comprehensive review of machine learning approaches for Alzheimer’s disease detection and classification. J. Alzheimer’s Dis. Vol. 85(1):213–229. [Cross Ref]

            28. Morris JC, Storandt M, Miller JP, McKeel DW, Price JL, Rubin EH, et al.. 2001. Mild cognitive impairment represents early-stage Alzheimer disease. Arch. Neurol. Vol. 58:397–405. [Cross Ref]

            29. Nagabushanam DS, Mathew S, Chowdhary CL. 2022. A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images. arXiv. [Cross Ref]

            30. Nasreddine Z, Garibotto V, Kyaga S, Padovani A. 2023. The early diagnosis of Alzheimer’s disease: a patient-centred conversation with the care team. Neurol Ther. Vol. 12:11–23. [Cross Ref]

            31. Nawaz H, Maqsood M, Afzal S, Aadil F, Mehmood I, Rho S, et al.. 2021. A deep feature-based real-time system for Alzheimer disease stage detection. Multimed. Tools Appl. Vol. 80:35789–35807. [Cross Ref]

            32. Ning Z, Xiao Q, Feng Q, Chen W, Zhang Y. 2021. Relation-induced multi-modal shared representation learning for Alzheimer’s disease diagnosis. IEEE Trans. Med. Imaging. Vol. 40:1632–1645. [Cross Ref]

            33. Nozadi SH, Kadoury S. 2018. Classification of Alzheimer’s and MCI patients from semantically parcelled PET images: a comparison between AV45 and FDG-PET. Int. J. Biomed. Imaging. Vol. 2018:1–13. [Cross Ref]

            34. O’Brien JT. 2007. Role of imaging techniques in the diagnosis of dementia. Br. J. Radiol. Vol. 80:S71–S77. [Cross Ref]

            35. Olivas ES, Guerrero JDM, Martinez-Sober M, Magdalena Benedito JR, Serrano Lopez AJ. 2010. Handbook of Research on Machine Learning Applications and Trends. IGI Global. Hershey, Pennsylvania:

            36. Payan A, Cruz MMG. 2015. Predicting Alzheimer’s disease: a neuroimaging study with 3D convolutional neural networks. arXiv preprint arXiv:1502.02506.

            37. Porsteinsson AP, Isaacson RS, Knox S, Sabbagh MN, Rubino I. 2021. Diagnosis of early Alzheimer’s disease: clinical practice in 2021. J. Prev. Alzheimer’s Dis. Vol. 8:371–386. [Cross Ref]

            38. Prasath T, Sumathi V. 2023. Identification of Alzheimer’s disease by imaging: a comprehensive review. Int. J. Environ. Res. Public Health. Vol. 20:1273. [Cross Ref]

            39. Rayar F. 2017. ImageNet MPEG-7 visual descriptors technical report. arXiv Preprint. [Cross Ref]

            40. Richards SS, Hendrie HC. 1999. Diagnosis, management, and treatment of Alzheimer disease: a guide for the internist. Arch. Intern. Med. Vol. 159:789–798. [Cross Ref]

            41. Rueda A, Gonzalez FA, Romero E. 2014. Extracting salient brain patterns for imaging-based classification of neurodegenerative diseases. IEEE Trans. Med. Imaging. Vol. 33:1262–1274. [Cross Ref]

            42. Sarraf S, DeSouza DD, Anderson J, Tofighi G; for the Alzheimer’s Disease Neuroimaging Initiative. 2016. DeepAD: Alzheimer’s disease classification via deep convolutional neural networks using MRI and fMRI. bioRxiv. [Cross Ref]

            43. Scheltens P, Blennow K, Breteler MMB, de Strooper B, Frisoni GB, Salloway S, et al.. 2016. Alzheimer’s disease. Lancet. Vol. 388:505–517. [Cross Ref]

            44. Sheikh-Bahaei N, Sajjadi SA, Manavaki R, Gillard JH. 2017. Imaging biomarkers in Alzheimer’s disease: a practical guide for clinicians. J. Alzheimer’s Dis. Reports. Vol. 1:71–88. [Cross Ref]

            45. Sheng J, Wang B, Zhang Q, Zhou R, Wang L, Xin Y. 2021. Identifying and characterizing different stages toward Alzheimer’s disease using ordered core features and machine learning. Heliyon. Vol. 7:e07287. [Cross Ref]

            46. Sheth V, Tripathi U, Sharma A. 2022. A comparative analysis of machine learning algorithms for classification purpose. Procedia Comput. Sci. Vol. 215:422–431. [Cross Ref]

            47. Simonyan K, Zisserman A. 2015. Very deep convolutional networks for large-scale image recognition. IEEE Trans. Pattern Anal. Mach. Intell. Vol. 38(1):25–33

            48. Sørensen L, Nielsen M; for the Alzheimer’s Disease Neuroimaging Initiative. 2018. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination. J. Neurosci. Methods. Vol. 302:66–74. [Cross Ref]

            49. Suk HI, Lee SW, Shen D; Alzheimer’s Disease Neuroimaging Initiative. 2014. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. NeuroImage. Vol. 101:569–582

            50. Thanh Noi P, Kappas M. 2017. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors. Vol. 18:18. [Cross Ref]

            51. Toews M, Wells W, Collins DL, Arbel T. 2010. Feature-based morphometry: discovering group-related anatomical patterns. Neuroimage. Vol. 49:2318–2327. [Cross Ref]

            52. Wang L, Liu Z-P. 2019. Detecting diagnostic biomarkers of Alzheimer’s disease by integrating gene expression data in six brain regions. Front. Genet. Vol. 10:157. [Cross Ref]

            53. Xiao T, Liu L, Li K, Qin W, Yu S, Li Z. 2018. Comparison of transferred deep neural networks in ultrasonic breast masses discrimination. Biomed. Res. Int. Vol. 2018:1–9. [Cross Ref]

            54. Yamashita R, Nishio M, Do RKG, Togashi K. 2018. Convolutional neural networks: an overview and application in radiology. Insights Imaging. Vol. 9:611–629. [Cross Ref]

            55. Yang Y-W, Hsu K-C, Wei C-Y, Tzeng RC, Chiu PY. 2021. Operational determination of subjective cognitive decline, mild cognitive impairment, and dementia using sum of boxes of the clinical dementia rating scale. Front. Aging Neurosci. Vol. 13:705782. [Cross Ref]

            56. Yiannopoulou KG, Papageorgiou SG. 2020. Current and future treatments in Alzheimer disease: an update. J. Cent. Nerv. Syst. Dis. Vol. 12:117957352090739. [Cross Ref]

            57. Yosinski J, Clune J, Bengio Y, Lipson H. 2014. How transferable are features in deep neural networks? Adv. Neural Inf. Process Syst. Vol. 27:3320–3328

            58. Zhao Z, Chuah JH, Lai KW, Chow CO, Gochoo M, Dhanalakshmi S, et al.. 2023. Conventional machine learning and deep learning in Alzheimer’s disease diagnosis using neuroimaging: a review. Front. Comput. Neurosci. Vol. 17:1038636. [Cross Ref]

            Author and article information

            Journal
            jdr
            Journal of Disability Research
            King Salman Centre for Disability Research (Riyadh, Saudi Arabia )
            1658-9912
            22 June 2024
            : 3
            : 6
            : e20240064
            Affiliations
            [1 ] Biology Department, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11623, Saudi Arabia ( https://ror.org/05gxjyb39)
            [2 ] Department of Integrated Master of Business Administration (IMBA), North Campus Delina Baramulla - The University Of Kashmir, 193103, India ( https://ror.org/05xanxb38)
            [3 ] School of Computer Applications, Lovely Professional University, Phagwara 144411, India ( https://ror.org/00et6q107)
            [4 ] Department of Zoology, College of Science, King Saud University, Riyadh 11451, Saudi Arabia ( https://ror.org/02f81g417)
            Author notes
            Correspondence to: Mohammed Al-Zharani*, e-mail: mmyalzahrani@imamu.edu.sa , Mobile: +966-566199178

            Both authors contributed equally.

            Author information
            https://orcid.org/0000-0002-0810-4803
            https://orcid.org/0000-0002-0894-7595
            https://orcid.org/0000-0003-0118-414X
            https://orcid.org/0000-0001-6585-3168
            https://orcid.org/0009-0006-8937-1992
            https://orcid.org/0000-0001-7381-5110
            Article
            10.57197/JDR-2024-0064
            bdc54189-b2d6-472a-ac25-3fd2ef53c849
            2024 The Authors.

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
            : 01 March 2024
            : 05 May 2024
            : 05 May 2024
            Page count
            Figures: 13, Tables: 4, References: 58, Pages: 12
            Funding
            Funded by: King Salman Center for Disability Research
            Award ID: KSRG-2023-163
            The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group No. KSRG-2023-163 (funder ID: http://dx.doi.org/10.13039/501100019345).

            Social policy & Welfare,Political science,Education & Public policy,Special education,Civil law,Social & Behavioral Sciences
            deep learning,convolutional neural network (CNN),Alzheimer’s disease (AD),artificial intelligence (AI),neuro-disabilities,mental healthcare,disability support,magnetic resonance imaging (MRI)
