
      Computer Vision with Optimal Deep Stacked Autoencoder-based Fall Activity Recognition for Disabled Persons in the IoT Environment


            Abstract

Remote monitoring of fall conditions or actions in the daily life of disabled persons is one of the indispensable purposes of contemporary telemedicine. Artificial intelligence and Internet of Things (IoT) techniques, including deep learning and machine learning methods, are now implemented in the field of medicine for automating the detection of diseased and abnormal cases. Many other applications exist, including the real-time detection of fall accidents in older patients. Owing to the articulated nature of human motion, it is not trivial to recognize human actions with a high level of accuracy for every application. Likewise, recognizing human activity is required to automate systems that monitor and detect suspicious activities during surveillance. In this study, a new Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition (CVDSAE-FAR) technique for disabled persons is designed. The presented CVDSAE-FAR technique aims to determine the occurrence of fall activity among disabled persons in the IoT environment. In this work, the densely connected networks model is exploited for feature extraction purposes. Besides, the DSAE model receives the feature vectors and classifies the activities effectually. Lastly, the fruitfly optimization method is used for the automated parameter tuning of the DSAE method, which leads to enhanced recognition performance. The simulation results of the CVDSAE-FAR approach are tested on a benchmark dataset. The extensive experimental results emphasize the superiority of the CVDSAE-FAR method compared to recent approaches.


            INTRODUCTION

Automatic detection of physical actions, defined as human activity recognition (HAR), has been a research topic for years, and several problems hinder further progress (Qian et al., 2021). HAR aims to detect the physical actions of an individual from video and/or sensor datasets (Islam et al., 2023). It therefore refers to a system that offers data regarding user behavior, which might prevent situations of risk or forecast events that could occur. The HAR topic presents various degrees of freedom in terms of system design and application (Park et al., 2023). First, there is no common description or definition of human actions that explains how a particular action should be characterized (Xu et al., 2020). The second, related aspect is that human actions are diverse, and the detection of particular actions thus necessitates a precise choice of sensors and their placement. Further challenges are the collection of data and the selection of sensor measurements under realistic conditions (Tang et al., 2022).

The HAR problem cannot be resolved deterministically, as the number of combinations of sensor measurements and actions can be vast (Islam et al., 2023). Thus, machine learning (ML) methods are generally utilized in the development of HAR systems for detecting patterns of human action in sensor data (Anagnostis et al., 2021). Generally, ML approaches are adopted to obtain knowledge from data with statistical techniques that aim to ascertain patterns and relations between the features or variables of the data. Like other ML approaches, HAR entails a training and a test stage (Zhang et al., 2022). In the training stage, a model is developed from training datasets, whereas the test phase serves to assess the model performance. The model performance on a test dataset becomes an indicator of how well the method may perform in the future on previously unseen data (Dahou et al., 2022). In general, developing these systems is carried out in four basic steps: data collection, windowing, feature extraction, and classification. Feature extraction can be considered the critical step, since it decides the overall performance of the method (Mekruksavanich et al., 2020). This step can be established using either conventional ML or deep learning (DL) methods.
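As a concrete illustration of the windowing step described above, the sketch below segments a raw 1-D sensor stream into fixed-length, overlapping windows before feature extraction. This is a minimal sketch and not the authors' implementation; the window and step sizes are arbitrary choices for illustration.

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Segment a 1-D sensor stream into fixed-length, possibly overlapping windows."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[i:i + win] for i in starts])

# A toy 10-sample stream, windows of 4 samples with 50% overlap
stream = np.arange(10)
windows = sliding_windows(stream, win=4, step=2)
print(windows.shape)  # (4, 4): four windows, each ready for feature extraction
```

Each resulting window would then be passed to the feature-extraction stage, whether hand-crafted or learned.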

In Shuaieb et al. (2020), the authors suggest a cost-efficient, radio frequency identification (RFID)-based indoor positioning scheme that uses the received signal strength of passive RFID tags. This system employs RFID tags positioned at diverse locations on the target body. Mapping the measured information against a set of reference location datasets is employed to precisely locate the horizontal and vertical position of a patient inside a restricted space in real time. Xia et al. (2020) present a deep neural network that combines convolutional layers with long short-term memory (LSTM). This technique can extract action features automatically and categorize them with few model parameters. LSTM is a modified recurrent neural network (RNN) model that is more appropriate for temporal sequence processing. In this model, the raw data gathered by smartphone sensors are passed to a multi-layer LSTM followed by convolutional layers. Additionally, a global average pooling (GAP) layer was applied in place of the fully connected layer after convolution to reduce the model parameters.

In Oguntala et al. (2019), the authors present a new ambient HAR model employing the multivariate Gaussian distribution. This model uses prior data from passive RFID tags to attain an elaborate activity summary. The model is based on the multivariate Gaussian distribution with maximum-likelihood estimation used for learning the characteristics of human actions. Gumaei et al. (2019) suggest an efficient multi-sensor-based model for human action detection employing a hybrid DL method that combines the simple recurrent unit (SRU) with the gated recurrent unit (GRU). This model employs deep SRUs to process the multi-modal input data sequences using the capacity of their internal memory states. The model also utilizes deep GRUs to store and learn how much of the previous data is fed to the future state, in order to resolve accuracy fluctuations and vanishing gradient issues. In Islam et al. (2019), the authors present an action detection and monitoring approach based on a multi-class accommodating classification process for enhancing activity categorization precision in video frames, supported by a cloud- or fog-computing-based blockchain construction. In this approach, frame-based salient features are extracted from video frames comprising diverse human actions and further processed into an activity vocabulary for precision and effectiveness. The activity categorization is accomplished by employing a support vector machine (SVM) based on the error-correcting output codes (ECOC) model.

Mihoub (2021) proposed a DL-based model for action detection in smart homes. This model is designed to ensure deep utilization of the feature space, as three major schemes are examined: reduction, selection, and all-features. Additionally, this work presents the combination and analysis of several well-selected DL methods such as RNNs, autoencoders, and a few derivative methods. Alsarhan et al. (2022) suggest a new enhanced discriminative graph convolutional network (GCN) based on the attention mechanism for skeleton-based activity detection. Discriminative channel-wise features are attained through the fusion of the GCN and the squeeze-and-excitation component, to specifically enhance the crucial features and suppress the insignificant ones. The adaptively enhanced feature map is later combined with the graph convolutional layer to improve the capacity for learning better representations.

            In this study, a new Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition (CVDSAE-FAR) for disabled persons is designed. The presented CVDSAE-FAR technique aims to determine the occurrence of fall activity among disabled persons in the Internet of Things (IoT) environment. In this work, the densely connected networks (DenseNet) model is exploited for feature extraction purposes. Besides, the DSAE model receives the feature vectors and classifies the activities effectually. Lastly, the fruitfly optimization (FFO) method is used for the automated parameter tuning of the DSAE approach which leads to enhanced recognition performance. The simulation analysis of the CVDSAE-FAR approach is tested on a benchmark dataset.

            THE PROPOSED FALL ACTIVITY RECOGNITION MODEL

            In this study, a new CVDSAE-FAR for the identification and classification of fall activities among disabled persons is designed. The presented CVDSAE-FAR technique aims to determine the occurrence of fall activity among disabled persons in the IoT environment. In this work, the CVDSAE-FAR technique comprises a three-stage process including DenseNet feature extraction, DSAE classification, and FFO-based parameter optimization. Figure 1 shows the workflow of the CVDSAE-FAR algorithm.

            Figure 1:

            Workflow of the CVDSAE-FAR algorithm. Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

            Feature extraction

Initially, the DenseNet model is exploited for feature extraction purposes. The fundamental concept of the densely connected convolutional network (DenseNet) technique is similar to that of ResNet, in that it establishes connections between prior and subsequent layers (Huang et al., 2023). However, in ResNet the outputs of the identity function and H are summed, which can impede the flow of information through the network. To improve the flow of information among the various layers, DenseNet directly links every layer's input to its output. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature maps to every subsequent layer. In DenseNet, the feature maps of all preceding layers are concatenated along the channel dimension and used as the input of the next layer. These features allow DenseNet to achieve better efficiency than ResNet with fewer parameters and lower computational cost. Besides parameter efficiency, one major benefit of DenseNets is the enhanced flow of data and gradients throughout the network, which makes them simple to train. The two kinds of networks differ in their main propagation procedures. The nonlinear transformation formulation of ResNet is as follows:

(1) $x_l = H_l(x_{l-1}) + x_{l-1}$

            The nonlinear transformation formula of DenseNet is as follows:

(2) $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$

The dense block is the fundamental element of DenseNet, and DenseNet is divided into several dense blocks. At every node of a dense block, the input is the concatenated feature maps of the preceding nodes, and the feature-map dimensions within a dense block are the same. A transition element is utilized to perform the down-sampling transition connections between dense blocks. The nonlinear composite function in the dense block refers to the combination BN + ReLU + Conv.

A complete DenseNet architecture contains three dense blocks and two transition layers. The transition layers, composed of convolution and pooling, connect the dense blocks and downsample and compress the model. All layers of DenseNet are kept very narrow to reduce redundancy. Concatenating the feature maps learned by distinct layers increases the variation in the input of subsequent layers and improves efficiency. The network improves information and gradient flow, making it simpler to train, and the dense connectivity has a regularizing effect that reduces over-fitting on smaller training sets.
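The difference between Eqs. (1) and (2) can be made concrete with a toy sketch: a residual block adds H's output to its input, while a dense block concatenates the feature maps of all preceding layers. The transformation H below is only a stand-in (a random linear map with ReLU) for the actual BN + ReLU + Conv composite, and the layer counts and growth rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def H(x, out_dim):
    # Stand-in for the BN + ReLU + Conv composite function
    W = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.maximum(x @ W, 0.0)

def resnet_block(x, layers=3):
    # Eq. (1): x_l = H_l(x_{l-1}) + x_{l-1}; identity and H are summed
    for _ in range(layers):
        x = H(x, x.shape[-1]) + x
    return x

def dense_block(x, layers=3, growth=4):
    # Eq. (2): x_l = H_l([x_0, x_1, ..., x_{l-1}]); all prior feature maps are concatenated
    features = [x]
    for _ in range(layers):
        features.append(H(np.concatenate(features, axis=-1), growth))
    return np.concatenate(features, axis=-1)

x = rng.standard_normal((1, 8))
print(resnet_block(x).shape)  # (1, 8): channel width is unchanged
print(dense_block(x).shape)   # (1, 20): width grows to 8 + 3 x 4 channels
```

The growing channel count in the dense block is exactly why DenseNet layers can stay narrow while still exposing every earlier feature map to later layers.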

            Activity recognition using DSAE

Here, the DSAE model receives the feature vectors and classifies the activities effectually. An autoencoder (AE) is an unsupervised learning configuration with three layers: an input, a hidden, and an output layer (Balasubramaniam et al., 2023). The input provided to the DSAE is F_aug. The training procedure is applied in two parts: encoding and decoding. The encoder maps the input data into a latent representation, and the decoder reconstructs the input data from the developed latent representation. For the given unlabeled input data $\{l_\Delta\}_{\Delta=1}^{D}$, where $l_\Delta \in Q^{I \times J}$, the vector of hidden encodings is denoted by $\beta_\Delta$ and the vector of the decoded output layer is denoted by $\hat{l}_\Delta$. The encoding procedure is expressed as follows:

(3) $\beta_\Delta = \alpha(E_1 l_\Delta + H_1)$

where the encoding function is represented by $\alpha$, the encoder weight matrix is $E_1$, and $H_1$ is the bias vector. The decoding procedure is expressed as follows:

(4) $\hat{l}_\Delta = P(E_2 \beta_\Delta + H_2)$

in which the decoding function is signified by $P$, the decoder weight matrix is $E_2$, and the bias vector is $H_2$.

To minimize the reconstruction error, the AE parameter set is optimized as follows:

(5) $\varepsilon(O) = \min_{\varphi, \varphi'} \frac{1}{\Delta} \sum_{r=1}^{\Delta} M(l_r, \hat{l}_r)$

where $M$ denotes the loss function, $M(l, \hat{l}) = \| l - \hat{l} \|^2$.

Therefore, the SAE is trained in three phases. First, an AE is trained on the input data to obtain the learned feature vector. Second, the feature vector of the preceding layer is used as the input of the next layer, and this iteration is maintained until training is complete. Lastly, the hidden layers are trained and the backpropagation (BP) algorithm is used to minimize the cost function; the weights are updated using a labeled fine-tuning set to attain optimal training. The output obtained from the DSAE is $Z_d$.
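The greedy layer-wise procedure above can be sketched as follows. This is a minimal sketch under simplifying assumptions (sigmoid activations, plain batch gradient descent, no supervised fine-tuning pass), not the authors' implementation: each autoencoder of Eqs. (3)-(5) is trained on the codes produced by the previous layer.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_ae(X, hidden, epochs=100, lr=0.5):
    """Train one AE: encode (Eq. 3), decode (Eq. 4), minimize ||l - l_hat||^2 (Eq. 5)."""
    n, d = X.shape
    E1 = rng.standard_normal((d, hidden)) * 0.1; H1 = np.zeros(hidden)
    E2 = rng.standard_normal((hidden, d)) * 0.1; H2 = np.zeros(d)
    for _ in range(epochs):
        B = sigmoid(X @ E1 + H1)         # hidden codes beta
        Xh = sigmoid(B @ E2 + H2)        # reconstruction l_hat
        dXh = (Xh - X) * Xh * (1 - Xh)   # gradient through decoder sigmoid
        dB = (dXh @ E2.T) * B * (1 - B)  # gradient through encoder sigmoid
        E2 -= lr * B.T @ dXh / n; H2 -= lr * dXh.mean(axis=0)
        E1 -= lr * X.T @ dB / n;  H1 -= lr * dB.mean(axis=0)
    return E1, H1

def stack_encode(X, layer_sizes):
    """Greedy layer-wise pretraining: each AE trains on the previous layer's codes."""
    feats = X
    for h in layer_sizes:
        E1, H1 = train_ae(feats, h)
        feats = sigmoid(feats @ E1 + H1)
    return feats  # the deep representation Z_d, passed on to classification

X = rng.random((32, 10))
Z = stack_encode(X, [8, 4])
print(Z.shape)  # (32, 4)
```

In the full method, a labeled fine-tuning set and backpropagation through the whole stack would follow this unsupervised pretraining.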

            FFO-based parameter optimization

Finally, the FFO algorithm is used for the automated parameter tuning of the DSAE model, which leads to enhanced recognition performance. The FFO technique is an optimization technique that simulates the foraging behavior of fruitfly swarms (Zhang, 2023). In the FFO system, a fruitfly swarm searches for food by continually updating the swarm position. The FFO technique has a simple structure and its parameters are easy to adjust. When the count of fruitflies is $N_f$, the position of the swarm is $(X_{axis}, Y_{axis})$. The basic FFO iteration update equation is defined as follows:

(6) $x_{j,t_f} = X_{axis,t_f} + R_{t_f} \times rand, \quad y_{j,t_f} = Y_{axis,t_f} + R_{t_f} \times rand$

where $j$ refers to the fruitfly serial number, $j \in \{1, 2, \ldots, N_f\}$; $t_f$ stands for the fruitfly dimension, $t_f \in \{1, 2, \ldots, d\}$; $rand$ refers to an arbitrary number, $rand \in [0, 1]$; and $R_{t_f}$ represents the search radius in the $t_f$th dimension of the fruitflies.

As the location of the food is unknown, it is essential to compute the distance $Dist_j$ between the present location of the fruitfly with serial number $j$ and the origin, and then compute the taste concentration judgment value $Sm_j$. $Sm_j$ is the reciprocal of the distance $Dist_j$, and the calculation formula can be defined using the following equation:

(7) $Dist_j = \sqrt{x_j^2 + y_j^2}, \quad Sm_j = \frac{1}{Dist_j}$

The FFO method determines merit by the taste concentration values, which are computed as described in the following equation:

(8) $Smell_j = f_s(Sm_j)$

where $Smell_j$ denotes the taste concentration value of the $j$th individual fruitfly and $f_s$ indicates the formula to compute the taste concentration values.
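Equations (6)-(8) can be combined into a basic FFO loop, sketched below under stated assumptions: the smell function peaking at $Sm = 0.3$ is purely hypothetical (standing in for a fitness landscape over, say, one hyperparameter), and the fly count, radius, and iteration budget are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def ffo(f_s, n_flies=20, radius=1.0, iters=200):
    """Basic FFO loop: scatter flies around the swarm position (Eq. 6), score the
    smell concentration Sm_j = 1/Dist_j (Eq. 7) with f_s (Eq. 8), keep the best."""
    axis = rng.uniform(0.0, 1.0, 2)          # swarm position (X_axis, Y_axis)
    best_sm, best_smell = None, -np.inf
    for _ in range(iters):
        flies = axis + radius * rng.uniform(-1.0, 1.0, (n_flies, 2))  # Eq. (6)
        dist = np.linalg.norm(flies, axis=1) + 1e-12                  # Dist_j to origin
        sm = 1.0 / dist                                               # Sm_j, Eq. (7)
        smell = f_s(sm)                                               # Smell_j, Eq. (8)
        j = int(np.argmax(smell))
        if smell[j] > best_smell:            # swarm flies to the best individual
            best_smell, best_sm, axis = smell[j], sm[j], flies[j]
    return best_sm

# Hypothetical smell function with its maximum at Sm = 0.3
best = ffo(lambda s: -(s - 0.3) ** 2)
```

The swarm position only moves when a fly improves the best smell so far, which is what drives the iterative convergence toward the food source.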

Fitness selection is a pivotal factor in the FFO approach. Solution encoding is leveraged to evaluate the goodness of candidate solutions. Here, to design the fitness function, the precision value is the main criterion used.

(9) $Fitness = \max(P)$

(10) $P = \frac{TP}{TP + FP}$

            From the expression, FP designates the false-positive value and TP signifies the true-positive value.
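As a quick numeric illustration of Eq. (10), the snippet below computes the precision-based fitness for one candidate configuration; the counts are hypothetical, not from the paper's experiments.

```python
def precision_fitness(tp, fp):
    """Eq. (10): P = TP / (TP + FP); FFO keeps the candidate maximizing this (Eq. 9)."""
    return tp / (tp + fp)

# Hypothetical counts: 95 true positives, 5 false positives
print(precision_fitness(95, 5))  # 0.95
```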

            RESULTS AND DISCUSSION

The HAR outcomes of the CVDSAE-FAR approach are tested on the Multiple Cameras Fall dataset (Auvinet et al., 2010), encompassing 192 instances and 2 classes, as shown in Table 1. Figure 2 shows the sample images.

            Figure 2:

            Sample images.

            Table 1:

            Details on database.

Class (MCF)                No. of samples
Fall                       96
Nonfall                    96
Total number of samples    192

            Abbreviation: MCF, Multiple Cameras Fall.

The suggested technique is simulated using the Python 3.6.5 tool on a PC with an i5-8600K CPU, GeForce GTX 1050 Ti 4GB GPU, 16GB RAM, 250GB SSD, and 1TB HDD. The parameter settings are as follows: learning rate: 0.01, activation: ReLU, epoch count: 50, dropout: 0.5, and batch size: 5.

Figure 3 portrays the classifier outcomes of the CVDSAE-FAR approach on the test dataset. Figure 3a depicts the confusion matrix produced by the CVDSAE-FAR model on 80% of the training phase (TRP). The result signifies that the CVDSAE-FAR approach recognized 77 samples under Fall and 75 samples under Nonfall. Besides, Figure 3b depicts the confusion matrix offered by the CVDSAE-FAR model on 20% of the testing phase (TSP). The result highlights that the CVDSAE-FAR method identified 18 samples under Fall and 20 samples under Nonfall. Similarly, Figure 3c shows the precision-recall (PR) curve of the CVDSAE-FAR method, which indicates that the CVDSAE-FAR model gained higher PR performance under both classes. Eventually, Figure 3d exemplifies the ROC curve of the CVDSAE-FAR method. The result portrays that the CVDSAE-FAR model produces maximal ROC values under both class labels.

            Figure 3:

            Classifier outcome of the CVDSAE-FAR system: (a,b) confusion matrices, (c) PR-curve, and (d) ROC-curve. Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

The overall HAR outcomes of the CVDSAE-FAR approach are revealed in Table 2 and Figure 4. The results show that the CVDSAE-FAR method attains an effectual recognition rate under every class. For example, with 80% of TRP, the CVDSAE-FAR approach provides average accuracy, recall, specificity, F-score, and G-measure of 99.36, 99.36, 99.36, 99.35, and 99.35%, respectively. Meanwhile, with 20% of TSP, the CVDSAE-FAR approach provides average accuracy, recall, specificity, F-score, and G-measure of 97.62, 97.62, 97.62, 97.43, and 97.46%, respectively.

            Figure 4:

            Average outcome of the CVDSAE-FAR approach on 80:20 of TRP/TSP. Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

            Table 2:

            HAR outcome of the CVDSAE-FAR approach on 80:20 of TRP/TSP.

Class      Accuracy   Recall   Specificity   F-score   G-measure
Training phase (80%)
 Fall        98.72     98.72     100.00       99.35      99.36
 Nonfall    100.00    100.00      98.72       99.34      99.34
 Average     99.36     99.36      99.36       99.35      99.35
Testing phase (20%)
 Fall       100.00    100.00      95.24       97.30      97.33
 Nonfall     95.24     95.24     100.00       97.56      97.59
 Average     97.62     97.62      97.62       97.43      97.46

            Abbreviations: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition; HAR, human activity recognition.
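The per-class figures in Table 2 can be reproduced from a 2-class confusion matrix. The sketch below assumes the Fall-class counts implied by Figure 3a and Table 2 (77 of 78 Fall samples and all 75 Nonfall samples recognized at 80% TRP); these counts are an inference, not stated explicitly in the paper.

```python
import math

def class_metrics(tp, fn, fp, tn):
    """Recall, specificity, F-score, and G-measure for one class of a binary matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity of the positive class
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    g_measure = math.sqrt(precision * recall)
    return {"recall": recall, "specificity": specificity,
            "f_score": f_score, "g_measure": g_measure}

m = class_metrics(tp=77, fn=1, fp=0, tn=75)
print({k: round(v * 100, 2) for k, v in m.items()})
# {'recall': 98.72, 'specificity': 100.0, 'f_score': 99.35, 'g_measure': 99.36}
```

These values match the Fall row of the training phase in Table 2, which supports the inferred counts.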

Figure 5 inspects the accuracy of the CVDSAE-FAR method during training and validation on the test database. The results highlight that the CVDSAE-FAR technique attains higher accuracy values over increasing epochs. Also, the validation accuracy remaining above the training accuracy shows that the CVDSAE-FAR method learns productively on the test database.

            Figure 5:

            Accuracy curve of the CVDSAE-FAR approach. Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

The loss investigation of the CVDSAE-FAR technique during training and validation on the test database is shown in Figure 6. The result specifies that the CVDSAE-FAR approach attains adjacent values of training and validation loss, indicating that the CVDSAE-FAR technique learns productively on the test database.

            Figure 6:

            Loss curve of the CVDSAE-FAR approach. Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

In Table 3 and Figure 7, the overall HAR results of the CVDSAE-FAR approach are compared with current methods (Almalki et al., 2023). The figure shows that the 1D-CNN, 2D-CNN, and ResNet-50 methods accomplish worse results. Simultaneously, the ResNet-101, VGG-19, and IMEFD-ODCNN approaches have moderately improved performance. Meanwhile, the WOADTL-AFD method attains considerable performance, with accuracy, recall, specificity, and F-score of 99.08, 97.98, 98.95, and 98.93%, respectively. However, the CVDSAE-FAR method gains higher outcomes, with accuracy, recall, specificity, and F-score of 99.36, 99.36, 99.36, and 99.35%, respectively.

            Figure 7:

            Comparative outcome of the CVDSAE-FAR approach with other systems. Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

            Table 3:

            Comparative outcome of the CVDSAE-FAR approach with other systems.

Methods                 Accuracy   Recall   Specificity   F-score
CVDSAE-FAR                99.36     99.36      99.36       99.35
WOADTL-AFD                99.08     97.98      98.95       98.93
VGG-19 model              98.12     97.31      98.28       97.26
1D-CNN technique          94.53     98.16      98.81       96.71
2D-CNN technique          95.63     96.82      97.31       97.31
ResNet-50 algorithm       96.21     97.74      97.49       97.22
ResNet-101 algorithm      96.58     96.62      97.24       98.52
IMEFD-ODCNN               99.06     97.91      98.37       98.31

            Abbreviation: CVDSAE-FAR, Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition.

            CONCLUSION

In this article, a new CVDSAE-FAR technique for the classification and identification of fall activities among disabled persons is devised. The presented CVDSAE-FAR technique aims to determine the occurrence of fall activity among disabled persons in the IoT environment. The CVDSAE-FAR technique comprises a three-stage process: DenseNet feature extraction, DSAE classification, and FFO-based parameter optimization. Initially, the DenseNet model is exploited for feature extraction purposes. Besides, the DSAE model receives the feature vectors and classifies the activities effectually. Finally, the FFO algorithm is used for the automated parameter tuning of the DSAE method, which leads to enhanced recognition performance. The simulation results of the CVDSAE-FAR method are tested on a benchmark dataset. The extensive experimental results highlight the superiority of the CVDSAE-FAR technique compared to recent approaches. In the future, the computational complexity of the proposed model will be examined. In addition, the proposed model can be tested on large-scale datasets.

            References

            1. Almalki N, Alnfiai MM, Al-Wesabi FN, Alduhayyem M, Hilal AM, Hamza MA. 2023. Deep transfer learning driven automated fall detection for quality of living of disabled persons. Comput. Mater. Contin. Vol. 74(3):6719–6736

            2. Alsarhan T, Ali U, Lu H. 2022. Enhanced discriminative graph convolutional network with adaptive temporal modelling for skeleton-based action recognition. Comput. Vis. Image Underst. Vol. 216:103348

            3. Anagnostis A, Benos L, Tsaopoulos D, Tagarakis A, Tsolakis N, Bochtis D. 2021. Human activity recognition through recurrent neural networks for human–robot interaction in agriculture. Appl. Sci. Vol. 11(5):2188

            4. Auvinet E, Rougier C, Meunier J, St-Arnaud A, Rousseau J. 2010. Multiple cameras fall dataset. http://www.iro.umontreal.ca/∼labimage/Dataset/

5. Balasubramaniam S, Vijesh Joe C, Sivakumar TA, Prasanth A, Satheesh Kumar K, Kavitha V, et al. 2023. Optimization enabled deep learning-based DDoS attack detection in cloud computing. Int. J. Intell. Syst. Vol. 2023:1–16

            6. Dahou A, Al-qaness MA, Abd Elaziz M, Helmi A. 2022. Human activity recognition in IoHT applications using arithmetic optimization algorithm and deep learning. Measurement. Vol. 199:111445

            7. Gumaei A, Hassan MM, Alelaiwi A, Alsalman H. 2019. A hybrid deep learning model for human activity recognition using multimodal body sensing data. IEEE Access. Vol. 7:99152–99160

            8. Huang T, Gao Y, Li Z, Hu Y, Xuan F. 2023. A hybrid deep learning framework based on diffusion model and deep residual neural network for defect detection in composite plates. Appl. Sci. Vol. 13(10):5843

            9. Islam MS, Jannat M.K.A, Hossain MN, Kim WS, Lee SW, Yang SH. 2023. STC-NLSTMNet: an improved human activity recognition method using convolutional neural network with NLSTM from WiFi CSI. Sensors. Vol. 23(1):356

            10. Islam N, Faheem Y, Din IU, Talha M, Guizani M, Khalil M. 2019. A blockchain-based fog computing framework for activity recognition as an application to e-Healthcare services. Future Gener. Comput. Syst. Vol. 100:569–578

11. Mekruksavanich S, Jitpattanakul A, Youplao P, Yupapin P. 2020. Enhanced hand-oriented activity recognition based on smartwatch sensor data using LSTMs. Symmetry. Vol. 12(9):1570

            12. Mihoub A. 2021. A deep learning-based framework for human activity recognition in smart homes. Mob. Inf. Syst. Vol. 2021:1–11

13. Oguntala GA, Abd-Alhameed RA, Ali NT, Hu YF, Noras JM, Eya NN, et al. 2019. SmartWall: novel RFID-enabled ambient human activity recognition using machine learning for unobtrusive health monitoring. IEEE Access. Vol. 7:68022–68033

            14. Park H, Kim N, Lee GH, Choi JK. 2023. MultiCNN-FilterLSTM: resource-efficient sensor-based human activity recognition in IoT applications. Future Gener. Comput. Syst. Vol. 139:196–209

15. Qian H, Pan SJ, Miao C. 2021. Latent independent excitation for generalizable sensor-based cross-person activity recognition. Proceedings of the AAAI Conference on Artificial Intelligence; 2-9 February 2021; p. 11921–11929

16. Shuaieb W, Oguntala G, AlAbdullah A, Obeidat H, Asif R, Abd-Alhameed RA, et al. 2020. RFID RSS fingerprinting system for wearable human activity recognition. Future Internet. Vol. 12(2):33

            17. Tang Y, Zhang L, Min F, He J. 2022. Multiscale deep feature learning for human activity recognition using wearable sensors. IEEE Trans. Ind. Electron. Vol. 70(2):2106–2116

            18. Xia K, Huang J, Wang H. 2020. LSTM-CNN architecture for human activity recognition. IEEE Access. Vol. 8:56855–56866

19. Xu H, Li J, Yuan H, Liu Q, Fan S, Li T, et al. 2020. Human activity recognition based on Gramian angular field and deep convolutional neural network. IEEE Access. Vol. 8:199393–199405

20. Zhang S, Li Y, Zhang S, Shahabi F, Xia S, Deng Y, et al. 2022. Deep learning in human activity recognition with wearable sensors: a review on advances. Sensors. Vol. 22(4):1476

            21. Zhang Y. 2023. Large data oriented to image information fusion spark and improved fruit fly optimization based on the density clustering algorithm. Adv. Multimed. Vol. 2023:1–14

Author and article information

Journal: Journal of Disability Research (jdr), King Salman Centre for Disability Research (Riyadh, Saudi Arabia)
Published: 26 October 2023; Volume 2, Issue 3, pp. 120-128

Affiliations
[1] Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdul Rahman University, Riyadh 11671, Saudi Arabia (https://ror.org/05b0cyh02)
[2] Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdul Rahman University, Riyadh, Saudi Arabia (https://ror.org/05b0cyh02)
[3] Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt (https://ror.org/03q21mh05)
[4] Department of Computer Science, College of Sciences and Humanities—Aflaj, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia (https://ror.org/04jt46d36)
[5] Department of Computer Science, College of Computer, Qassim University, Buraydah, Saudi Arabia (https://ror.org/01wsfe280)
[6] Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia (https://ror.org/04jt46d36)

ORCID: https://orcid.org/0000-0002-3001-6818
DOI: 10.57197/JDR-2023-0044
Copyright © 2023 The Authors.

This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

History: 22 May 2023; 14 September 2023; 07 October 2023
Page count: Figures: 7, Tables: 3, References: 21, Pages: 9
Funding: King Salman Center for Disability Research (funder-id: http://dx.doi.org/10.13039/501100019345)
Categories: Computer science
Keywords: fall activity, human activity recognition, computer vision, disabled persons, Internet of Things
