

       
      Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection on Surveillance Videos for Visually Challenged People


            Abstract

Deep learning techniques have been efficiently used to assist visually impaired people in different tasks and to enhance overall accessibility. Designing a vision-based anomaly detection method on surveillance video specially developed for visually challenged people could considerably improve their awareness and safety. While this is a complex process, there is potential to construct such a system by leveraging machine learning and computer vision algorithms. Anomaly detection in surveillance video is a tedious process because of the ambiguous definition of abnormality. In complicated surveillance scenarios, numerous types of abnormal events might co-exist, such as long-term abnormal activities and motion and appearance anomalies of objects. Conventional video anomaly detection techniques cannot identify these kinds of abnormal actions. This study designs an Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection (ICSO-VBAD) technique on surveillance videos for visually challenged people. The purpose of the ICSO-VBAD technique is to identify and classify the occurrence of anomalies to assist visually challenged people. To achieve this, the ICSO-VBAD technique utilizes the EfficientNet model to produce a collection of feature vectors. In the ICSO-VBAD technique, the ICSO algorithm is exploited for the hyperparameter tuning of the EfficientNet model. For the identification and classification of anomalies, the adaptive neuro fuzzy inference system (ANFIS) model is utilized. The simulation outcome of the ICSO-VBAD system was tested on benchmark datasets and the results pointed out the improvements of the ICSO-VBAD technique compared to recent approaches with respect to different measures.


            INTRODUCTION

Visual defects severely obstruct the daily lives of blind people. To help these people walk safely in complicated road conditions, it is necessary to guide them along a walkable path ( Nagarajan and Gopinath, 2023). Warning of unforeseen objects is imperative for visually impaired people (VIP), as unforeseen objects frequently become obstacles that hinder the mobility of blind people. Identifying abnormalities in surveillance videos is vital to maintaining security in different applications like abandoned object detection, crime recognition, parking area monitoring, accident detection, and illegal activity detection ( Abraham et al., 2020). However, manual recognition of anomalies in surveillance videos is a labor-intensive and tedious task for individuals. Owing to the vast quantity of data produced by surveillance schemes in security applications, manual analysis is an unreasonable solution ( Abdul-Ameer et al., 2022). Currently, there is a rise in the demand for automatic systems to find video anomalies. Such systems include biometric detection of individuals, video-based detection of abnormal behavior, alarm-based monitoring of closed-circuit television scenes, and automatic recognition of traffic violations ( Abdusalomov et al., 2022). Automatic systems decrease human time and labor, making anomaly detection in surveillance videos more cost-effective and efficient ( Mukhiddinov et al., 2022). Researchers have modeled many approaches using machine learning methods and image processing techniques. Many common vision-related guiding assistance mechanisms cannot manage obstacle detection well, as they forecast every pixel as one of a set of predefined simple classes ( Bhalekar and Bedekar, 2022). The unlimited number of unexpected on-road objects makes it difficult to guarantee that human-driven assistive interventions occur in time to ensure safety.

Many computer vision (CV)-based works have been devised by focusing on processes like activity learning, data acquisition, scene learning, behavioral learning, and feature extraction ( Iqbal et al., 2022). The main intention of such research is to examine processes like anomaly prediction approaches, video processing methods, vehicle prediction and observation, activity examination, scene detection, traffic observation, human behavior learning, and multi-camera-based schemes and their challenges ( Busaeed et al., 2022). Anomaly forecasting is a sub-domain of behavior learning from captured visual scenes. The availability of video from public places has stimulated video analysis in addition to anomaly prediction ( Dhou et al., 2022). Likewise, anomaly estimation methods learn the typical behavior from the training data; any deviation from normal behavior is treated as irregular ( Choi et al., 2019). The presence of vehicles on pathways, unforeseen dispersal of people from a crowd, individuals fainting while walking, signal evasion at a traffic junction, jaywalking, and U-turns of automobiles at red signals are typical examples of anomalies.

Akilandeswari et al. (2022) introduced a denoising autoencoder with a CNN (DAECNN) to recognize the current position of the user. The DAECNN model exploits the denoising autoencoder to rebuild the noisy image and the CNN to categorize the present location of the user. The authors in Jiang et al. (2018) inventively leveraged image quality assessment to choose the images captured by the vision sensor, ensuring the input quality of the scene for the final detection method. First, a binocular vision sensor captures images at a fixed frequency, and the helpful ones are selected according to a stereo image quality calculation. Next, the captured images are transferred to the cloud for additional computing. In particular, the automated detection is performed on the selected images; in this step, a CNN trained with big data is utilized.

The authors in Al-Madani et al. (2019) explored indoor localization techniques through Bluetooth Low Energy (BLE) beacons. The study fed the BLE beacons’ RSSI and the geometric distance from the present beacons to the fingerprint point into a fuzzy logic (FL) architecture for evaluating the Euclidean distance to determine the succeeding position. Based on the outcomes, the fingerprinting model using FL type-2 (hesitant fuzzy set) was suitable for indoor localization with BLE beacons. Dimas et al. (2021) developed a self-supervised technique with a CNN that learns an obstacle detection model and mimics it, with considerably low computation requirements, for safer direction finding by VIP. The CNN input is RGB images, and its outputs are saliency maps softly assessing the image regions that correspond to potentially higher-risk obstacles.

Cheng et al. (2021) developed a new hierarchical visual localization pipeline using wearable assistive navigation devices for VIP. The presented method includes deep descriptor networks, online sequence matching, and 2D-3D geometric verification. Images in different modalities (infrared, depth, and RGB) are fed into the Dual Desc network for generating local features and robust attentive global descriptors. The global descriptor is leveraged for retrieving the coarse candidates of the query image. Jasman et al. (2022) developed an IoT-based technique that assists in detecting water puddles and obstacles. The proposed method comprises an Android app and a walking stick. The walking stick integrates an ultrasonic sensor and an ESP32 microcontroller, paired with a mobile phone application. The communication between the smartphone and the ESP32 microcontroller is conducted through an application built with MIT App Inventor.

This study designs an Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection (ICSO-VBAD) technique on surveillance videos for visually challenged people. The purpose of the ICSO-VBAD technique is to identify and classify the occurrence of anomalies to assist visually challenged people. To achieve this, the ICSO-VBAD technique utilizes the EfficientNet model to produce a collection of feature vectors. In the ICSO-VBAD technique, the ICSO algorithm is exploited for the hyperparameter tuning of the EfficientNet model. For the identification and classification of anomalies, the adaptive neuro fuzzy inference system (ANFIS) model is utilized. The simulation outcome of the ICSO-VBAD algorithm was tested on benchmark datasets.

The rest of the paper is organized as follows. The next section presents the proposed model, followed by the result analysis; the conclusion of the study is then provided.

            THE PROPOSED MODEL

In this manuscript, we propose an automated anomaly detection approach, named the ICSO-VBAD system, for visually challenged people. The purpose of the ICSO-VBAD technique is to identify and classify the occurrence of anomalies to assist visually challenged people. It involves several subprocesses such as EfficientNet-based feature extraction, ICSO-based parameter tuning, and ANFIS-based anomaly detection. Figure 1 illustrates the overall flow of the ICSO-VBAD algorithm.

            Figure 1:

            Overall flow of ICSO-VBAD approach. Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

            EfficientNet model

The ICSO-VBAD technique utilizes the EfficientNet model to produce a collection of feature vectors. EfficientNet is a CNN model that accomplishes remarkable outcomes on image classification tasks while maintaining computational efficiency ( Ab Wahab et al., 2021). EfficientNet exploits a compound scaling technique that uniformly scales the depth, width, and resolution of the CNN to enhance its efficiency while improving or maintaining its accuracy. The model is based on the observation that scaling up the network dimensions can result in better performance, but uniformly scaling every dimension increases computational costs and diminishes the accuracy returns. EfficientNet therefore presents a new scaling technique that balances network resolution, depth, and width: a compound scaling method that uniformly scales the three dimensions with a fixed set of scaling coefficients. It has been shown that this scaling technique results in better performance than conventional techniques that scale each dimension independently. EfficientNet has accomplished outstanding performance on benchmark datasets, including CIFAR-10, CIFAR-100, and ImageNet, while needing less computation and fewer parameters compared with other CNN models.
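The compound scaling rule described above can be sketched as follows. The coefficient values α = 1.2, β = 1.1, γ = 1.15 are those reported for the original EfficientNet-B0 baseline; the function name is ours:

```python
import math

def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers for compound
    coefficient phi: depth = alpha**phi, width = beta**phi,
    resolution = gamma**phi, with alpha * beta**2 * gamma**2 ≈ 2
    so that FLOPs grow roughly by a factor of 2**phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

# Baseline (phi = 0) keeps every dimension unchanged;
# phi = 1 deepens, widens, and enlarges the input jointly.
print(compound_scale(0))
print(compound_scale(1))
```

Scaling all three dimensions with the single coefficient φ is what keeps accuracy and cost in balance, rather than tuning depth, width, and resolution independently.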

            Parameter tuning using ICSO algorithm

In the ICSO-VBAD technique, the ICSO algorithm is exploited for the hyperparameter tuning of the EfficientNet model. CSO is a stochastic optimization technique that mimics the behavior of chickens and the hierarchy in the swarm ( Li et al., 2023). The fundamental concept of CSO is to categorize chickens into roosters, hens, and chicks as per their fitness values. Under a set of rules, the various types of chickens follow dissimilar movement rules and compete with one another to search for food. CSO is a global optimization technique.

The CSO applies the following rules. There exist several groups in the chicken swarm; each group has some chicks and hens and a dominant rooster. Fitness defines the status of the chickens: the best-adapted chickens serve as roosters, with each rooster leading its group. Some of the least-adapted chickens are designated as chicks, and the remaining ones are hens. The hens arbitrarily select the group to live in. The mother–child relationships, hierarchical order, and dominance within a group remain the same and are renewed every G generations. The groups are updated differently: the hens in every group follow the rooster to forage for food or randomly steal food from others, and the chicks in every group correspondingly follow their mother hen to find food.

The overall number of groups in the population is N_op, the number of hens is N_h, the number of roosters is N_r, the number of mother hens is N_m, and the number of chicks is N_c. Roosters dominate the foraging procedure and forage through a large space:

(1) x_j^i(t+1) = x_j^i(t) · (1 + Randn(0, σ²))

(2) σ² = 1, if f_i ≤ f_k; σ² = exp((f_k − f_i) / (|f_i| + ε)), otherwise,

where ε denotes a small constant introduced to prevent the denominator from becoming zero; k is another individual from the rooster population different from the current individual i, with k ∈ [1, N_r], k ≠ i, and f_k its fitness; and Randn(0, σ²) denotes a Gaussian distribution with mean 0 and variance σ².

The hen position is updated by the following expressions:

(3) x_j^i(t+1) = x_j^i(t) + S_1 · Rand · (x_j^{r1}(t) − x_j^i(t)) + S_2 · Rand · (x_j^{r2}(t) − x_j^i(t))

(4) S_1 = exp((f_i − f_{r1}) / (|f_i| + ε))

(5) S_2 = exp(f_{r2} − f_i),

where Rand refers to a random value between zero and one, r_1 denotes the rooster of the group to which hen i belongs, and r_2 is an individual different from r_1 selected randomly from the rooster and hen populations.

The chick position is updated using Eq. (6):

(6) x_j^i(t+1) = x_j^i(t) + FL · (x_j^m(t) − x_j^i(t)),

where FL represents the coefficient of the chick following its mother hen, whose value ranges from 0 to 2, and m indicates the mother hen of chick i. The rooster occupies the dominant location in the entire population and guides the chicks and hens toward the optimum solutions. The update formula in the typical CSO technique is well suited to maintaining population diversity; however, it results in weaker local search capability and lower solution accuracy. A reasonable X-best bootstrap model has the benefit of enhancing solution accuracy, but its unreasonable usage makes the individuals of the population depend excessively on the X-best individual, which in turn increases the probability of getting trapped in a local optimum and reduces population diversity. Many researchers have incorporated the global optimal location into the rooster update rule to resolve these problems.
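The movement rules of Eqs. (1)-(6) can be sketched in Python as follows. This is a minimal illustration assuming a minimization problem; the function names and dimension-wise list handling are ours, not from the paper:

```python
import math
import random

def rooster_update(x_i, f_i, f_k, eps=1e-9):
    # Eq. (2): unit variance if rival rooster k is no fitter,
    # otherwise a variance shrunk by the fitness gap
    if f_i <= f_k:
        sigma2 = 1.0
    else:
        sigma2 = math.exp((f_k - f_i) / (abs(f_i) + eps))
    # Eq. (1): multiplicative Gaussian perturbation per dimension
    return [x * (1.0 + random.gauss(0.0, math.sqrt(sigma2))) for x in x_i]

def hen_update(x_i, x_r1, x_r2, f_i, f_r1, f_r2, eps=1e-9):
    s1 = math.exp((f_i - f_r1) / (abs(f_i) + eps))  # Eq. (4)
    s2 = math.exp(f_r2 - f_i)                       # Eq. (5)
    # Eq. (3): follow the group rooster r1 and a random individual r2
    return [x + s1 * random.random() * (a - x) + s2 * random.random() * (b - x)
            for x, a, b in zip(x_i, x_r1, x_r2)]

def chick_update(x_i, x_m, fl=1.0):
    # Eq. (6): trail the mother hen m with coefficient FL in [0, 2]
    return [x + fl * (m - x) for x, m in zip(x_i, x_m)]
```

Note how the three roles explore at different scales: roosters take large multiplicative steps, hens are attracted to better individuals, and chicks simply shadow their mother hen.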

This study presents a rooster update model based on a parallel approach, including Levy flight and X-best guidance, to resolve the poor results, prematurity, and accuracy loss caused by the introduction of the X-best bootstrap process.

First, the global best individual x_gbest is introduced into the rooster updating formula; to avoid overdependence on x_gbest, which can cause the method to get trapped in local optima, the adjustment coefficient ω is applied to the x_gbest term:

(7) x_j^i(t+1) = x_j^i(t) + Randn(0, σ²) · (ω · x_j^{gbest}(t) − x_j^i(t)).
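Under the same assumptions as above, the improved rooster update of Eq. (7) replaces the pure random walk with a step directed toward the damped global best; the value of ω and the function name here are illustrative:

```python
import math
import random

def icso_rooster_update(x_i, x_gbest, f_i, f_k, omega=0.6, eps=1e-9):
    # Variance chosen exactly as in Eq. (2)
    sigma2 = 1.0 if f_i <= f_k else math.exp((f_k - f_i) / (abs(f_i) + eps))
    # Eq. (7): the Gaussian step now points at omega * x_gbest, so a
    # rooster is pulled toward (a damped copy of) the global best
    # instead of wandering around its own position.
    return [x + random.gauss(0.0, math.sqrt(sigma2)) * (omega * g - x)
            for x, g in zip(x_i, x_gbest)]
```

With ω < 1 the attractor ω·x_gbest sits short of the global best itself, which limits over-dependence on a single individual and helps preserve population diversity.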

The fitness function is a fundamental aspect of the ICSO algorithm. An encoded result is used to evaluate the goodness of candidate solutions. Here, the precision value is the major criterion exploited for designing the fitness function:

(8) Fitness = max(P)

(9) P = TP / (TP + FP),

            where TP stands for the true-positive value and FP represents the false-positive value.
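As a quick sketch of Eqs. (8) and (9), each candidate hyperparameter set is scored by the precision of the resulting classifier, and the ICSO search keeps the maximum; the helper names are ours:

```python
def precision(tp, fp):
    # Eq. (9): P = TP / (TP + FP)
    return tp / (tp + fp)

def fitness(confusions):
    # Eq. (8): Fitness = max(P) over the candidate solutions,
    # each summarized here by its (TP, FP) counts
    return max(precision(tp, fp) for tp, fp in confusions)

# e.g. two EfficientNet configurations with (TP, FP) = (9, 1) and (5, 5)
best = fitness([(9, 1), (5, 5)])
```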

            ANFIS-based anomaly detection

For the identification and classification of anomalies, the ANFIS model is used. The ANFIS is a robust modeling tool that concurrently exploits the inference properties of fuzzy logic and the learning abilities of artificial neural networks ( Alibak et al., 2022). A Takagi-Sugeno system with five interrelated consecutive layers (fuzzification, inference, normalization, target computation, and interpolation) and the following fuzzy if-then rules, as in Eqs. (10) and (11), is built to model these problems.

(10) Rule 1: If X is A_1 and Y is B_1, then F_1 = C_1·X + D_1·Y + E_1

(11) Rule 2: If X is A_2 and Y is B_2, then F_2 = C_2·X + D_2·Y + E_2

Here, A_1, A_2, B_1, and B_2 represent the premise parameters, while the adjustable consequent parameters are represented as C_1, C_2, D_1, D_2, E_1, and E_2. Figure 2 depicts the structure of the ANFIS.

            Figure 2:

            ANFIS structure. Abbreviation: ANFIS, adaptive neuro fuzzy inference system.

The first layer allocates a membership function (η) to each node j and evaluates the output signal O_j^1 using the following expressions:

(12) O_j^1 = η_{A_j}(X), j = 1, 2

(13) O_j^1 = η_{B_{j−2}}(γ), j = 3, 4.

The output of the second layer, O_j^2, is the product of the incoming signals:

(14) O_j^2 = η_{A_j}(X) · η_{B_j}(γ) = ω_j, j = 1, 2.

The output of the third layer, O_j^3, computed by normalizing the incoming signals, is as follows:

(15) O_j^3 = ω̄_j = ω_j / (ω_1 + ω_2), j = 1, 2.

Next, Eq. (16) evaluates the output of the fourth layer, O_j^4:

(16) O_j^4 = ω̄_j · F_j = ω̄_j · (C_j·X + D_j·γ + E_j), j = 1, 2.

Lastly, the ANFIS prediction for the target, O_ANFIS, is obtained using the following expression:

(17) O_ANFIS = Σ_{j=1}^{2} ω̄_j · F_j
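A minimal forward pass through the five layers of Eqs. (12)-(17) might look as follows. Gaussian membership functions are an assumption on our part (the paper does not specify the membership shape), and all names are illustrative:

```python
import math

def gauss_mf(x, c, s):
    # Gaussian membership function (one common choice; the paper
    # does not fix a particular membership shape)
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(X, Y, premise, consequent):
    """Two-rule Sugeno ANFIS of Eqs. (10)-(17).
    premise: ((cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2))
    consequent: ((C1, D1, E1), (C2, D2, E2))"""
    A1, A2, B1, B2 = premise
    # Layer 1: fuzzification, Eqs. (12)-(13)
    muA = [gauss_mf(X, *A1), gauss_mf(X, *A2)]
    muB = [gauss_mf(Y, *B1), gauss_mf(Y, *B2)]
    # Layer 2: rule firing strengths, Eq. (14)
    w = [muA[j] * muB[j] for j in range(2)]
    # Layer 3: normalization, Eq. (15)
    wbar = [wj / (w[0] + w[1]) for wj in w]
    # Layer 4: rule consequents, Eq. (16)
    F = [C * X + D * Y + E for (C, D, E) in consequent]
    # Layer 5: weighted sum, Eq. (17)
    return sum(wbar[j] * F[j] for j in range(2))
```

In training, the premise and consequent parameters are the adjustable quantities; the forward pass itself is just this chain of five layers.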

            RESULT ANALYSIS

            The proposed model is simulated using the Python tool. The results of the ICSO-VBAD methodology on UCSD anomaly detection database ( http://www.svcl.ucsd.edu/projects/anomaly/dataset.htm) are studied here.

Figure 3 inspects the accuracy of the ICSO-VBAD approach during training and validation on the UCSDPed1 database. The results show that the ICSO-VBAD approach obtains higher accuracy values over increasing epochs. Additionally, the validation accuracy closely tracking the training accuracy demonstrates that the ICSO-VBAD algorithm learns capably on the UCSDPed1 database.

            Figure 3:

            Accuracy curve of ICSO-VBAD approach on UCSD Ped1 dataset. Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

The loss of the ICSO-VBAD approach during training and validation on the UCSDPed1 database is illustrated in Figure 4. The outcome shows that the ICSO-VBAD approach attains close values of training and validation loss, making it clear that the ICSO-VBAD algorithm learns effectively on the UCSDPed1 database.

            Figure 4:

            Loss curve of ICSO-VBAD approach on UCSD Ped1 dataset. Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

            The results of the ICSO-VBAD technique with recent approaches on the UCSDPed1 dataset in terms of CT are given in Table 1. The results indicate that the ICSO-VBAD technique reaches a CT of 1.50 ms. On the other hand, the ACVD-SFN, AND-CS, ST-CNN ADLCS, DA-FCNN FADCS, and GPR-VAD and LHFR models offer higher CT values of 3.17, 3.33, 53.33, 26.67, and 3 ms, respectively.

            Table 1:

            CT outcome of the ICSO-VBAD approach with other algorithms on UCSDPed1 dataset.

UCSD Ped1 dataset

Methods               Computational time (ms)
ACVD-SFN              3.17
AND-CS                3.33
ST-CNN ADLCS          53.33
DA-FCNN FADCS         26.67
GPR-VAD and LHFR      3.00
ICSO-VBAD             1.50

            Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

The results of the ICSO-VBAD system with recent approaches on the UCSDPed1 dataset in terms of AUC and EER are provided in Table 2. In terms of AUC, the ICSO-VBAD technique reaches a superior AUC of 87.87% while the ACVD-SFN, AND-CS, ST-CNN ADLCS, DA-FCNN FADCS, and GPR-VAD and LHFR approaches yield lower AUC values of 67.50, 81.80, 85, 84.43, and 75%, respectively. Moreover, with respect to EER, the ICSO-VBAD system attains a lower EER of 16.05% whereas the ACVD-SFN, AND-CS, ST-CNN ADLCS, DA-FCNN FADCS, and GPR-VAD and LHFR approaches show higher EER values of 31, 25, 24, 23.01, and 31%, respectively.
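EER, as reported here and in Tables 2 and 4, is the operating point at which the false-positive and false-negative rates coincide. A rough way to estimate it from frame-level anomaly scores is sketched below; this is an illustrative sketch only, as the paper does not describe its exact computation:

```python
def eer_from_scores(scores, labels):
    """Approximate equal error rate by a threshold sweep.
    scores: anomaly scores, higher = more anomalous;
    labels: 1 = anomalous frame, 0 = normal frame.
    Assumes both classes are present."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = (1.0, None)  # (smallest |FPR - FNR| gap, EER estimate)
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fpr, fnr = fp / neg, fn / pos
        gap = abs(fpr - fnr)
        if gap < best[0]:
            best = (gap, (fpr + fnr) / 2)
    return best[1]
```

On a finite score grid the two rates rarely cross exactly, so the estimate is taken at the threshold where their gap is smallest.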

            Table 2:

            AUC and EER outcome of the ICSO-VBAD approach with other algorithms on UCSDPed1 dataset.

UCSD Ped1 dataset

Methods               AUC (%)    EER (%)
ACVD-SFN              67.50      31.00
AND-CS                81.80      25.00
ST-CNN ADLCS          85.00      24.00
DA-FCNN FADCS         84.43      23.01
GPR-VAD and LHFR      75.00      31.00
ICSO-VBAD             87.87      16.05

            Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

Figure 5 examines the accuracy of the ICSO-VBAD approach during training and validation on the UCSDPed2 database. The result implies that the ICSO-VBAD method achieves superior accuracy values over increasing epochs. Also, the validation accuracy closely following the training accuracy shows that the ICSO-VBAD algorithm learns effectively on the UCSDPed2 database.

            Figure 5:

            Accuracy curve of ICSO-VBAD approach on UCSD Ped2 dataset. Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

The loss curve of the ICSO-VBAD system during training and validation on the UCSDPed2 database is shown in Figure 6. The result indicates that the ICSO-VBAD approach attains close values of training and validation loss, making it evident that the ICSO-VBAD system learns capably on the UCSDPed2 database.

            Figure 6:

            Loss curve of ICSO-VBAD approach on UCSD Ped2 dataset. Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

The results of the ICSO-VBAD approach with existing methods on the UCSDPed2 dataset with respect to CT are shown in Table 3. The results state that the ICSO-VBAD technique reaches a CT of 0.18 ms, while the UACAD-SSS, AND-CS, ACBD-SFM, AED 150FPS-MATLAB, and ADCS-MEM approaches show higher CT values of 3.33, 2.67, 3.17, 1.10, and 3 ms, respectively.

            Table 3:

            CT outcome of the ICSO-VBAD approach with other algorithms on UCSDPed2 dataset.

UCSD Ped2 dataset

Methods               Computational time (ms)
UACAD-SSS             3.33
AND-CS                2.67
ACBD-SFM              3.17
AED 150FPS-MATLAB     1.10
ADCS-MEM              3.00
ICSO-VBAD             0.18

            Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

The outcomes of the ICSO-VBAD approach with other systems on the UCSDPed2 dataset in terms of AUC and EER are shown in Table 4. With respect to AUC, the ICSO-VBAD method attains a superior AUC of 88.90% whereas the UACAD-SSS, AND-CS, ACBD-SFM, AED 150FPS-MATLAB, and ADCS-MEM algorithms yield lower AUC values of 69.04, 82.90, 55.60, 85.54, and 81%, respectively. Furthermore, based on EER, the ICSO-VBAD technique reaches a lower EER of 15.02% while the UACAD-SSS, AND-CS, ACBD-SFM, AED 150FPS-MATLAB, and ADCS-MEM systems show higher EER values of 25, 25, 42, 22.30, and 22%, respectively.

            Table 4:

            AUC and EER outcome of the ICSO-VBAD approach with other algorithms on UCSDPed2 dataset.

UCSD Ped2 dataset

Methods               AUC (%)    EER (%)
UACAD-SSS             69.04      25.00
AND-CS                82.90      25.00
ACBD-SFM              55.60      42.00
AED 150FPS-MATLAB     85.54      22.30
ADCS-MEM              81.00      22.00
ICSO-VBAD             88.90      15.02

            Abbreviation: ICSO-VBAD, Improved Chicken Swarm Optimizer with Vision-based Anomaly Detection.

            CONCLUSION

In this manuscript, we have proposed an automated anomaly detection system, named the ICSO-VBAD approach, for visually challenged people. The purpose of the ICSO-VBAD technique is to identify and classify the occurrence of anomalies to assist visually challenged people. To achieve this, the ICSO-VBAD technique utilizes the EfficientNet model to produce a collection of feature vectors. In the ICSO-VBAD technique, the ICSO algorithm was exploited for the hyperparameter tuning of the EfficientNet model. For the identification and classification of anomalies, the ANFIS model was utilized. The simulation outcome of the ICSO-VBAD algorithm was tested on benchmark datasets and the results pointed out the advantages of the ICSO-VBAD technique compared to recent approaches in terms of different measures. In the future, the performance of the proposed model can be improved using hybrid deep learning classifiers.

            CONFLICTS OF INTEREST

            The authors declare no conflicts of interest in association with the present study.

            REFERENCES

            1. Ab Wahab MN, Nazir A, Ren ATZ, Noor MHM, Akbar MF, Mohamed ASA. 2021. Efficientnet-lite and hybrid CNN-KNN implementation for facial expression recognition on Raspberry Pi. IEEE Access. Vol. 9:134065–134080

            2. Abdul-Ameer HS, Hassan HJ, Abdullah SH. 2022. Development smart eyeglasses for visually impaired people based on you only look once. Telkomnika. Vol. 20(1):109–117

            3. Abdusalomov AB, Mukhiddinov M, Kutlimuratov A, Whangbo TK. 2022. Improved real-time fire warning system based on advanced technologies for visually impaired people. Sensors. Vol. 22(19):7305

4. Abraham L, Mathew NS, George L, Sajan SS. 2020. VISION-wearable speech based feedback system for the visually impaired using computer vision. 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI) (48184); IEEE. Tirunelveli, India. June 15-17; p. 972–976

            5. Akilandeswari J, Jothi G, Naveenkumar A, Sabeenian RS, Iyyanar P, Paramasivam ME. 2022. Design and development of an indoor navigation system using denoising autoencoder based convolutional neural network for visually impaired people. Multimed. Tools Appl. Vol. 81(3):3483–3514

            6. Alibak AH, Alizadeh SM, Davodi Monjezi S, Alizadeh AA, Alobaid F, Aghel B. 2022. Developing a hybrid neuro-fuzzy method to predict carbon dioxide (CO 2) permeability in mixed matrix membranes containing SAPO-34 zeolite. Membranes. Vol. 12(11):1147

            7. Al-Madani B, Orujov F, Maskeliūnas R, Damaševičius R, Venčkauskas A. 2019. Fuzzy logic type-2 based wireless indoor localization system for navigation of visually impaired people in buildings. Sensors. Vol. 19(9):2114

            8. Bhalekar M, Bedekar M. 2022. D-CNN: a new model for generating image captions with text extraction using deep learning for visually challenged individuals. Eng. Technol. Appl. Sci. Res. Vol. 12(2):8366–8373

            9. Busaeed S, Mehmood R, Katib I, Corchado JM. 2022. LidSonic for visually impaired: green machine learning-based assistive smart glasses with smart app and Arduino. Electronics. Vol. 11(7):1076

            10. Cheng R, Hu W, Chen H, Fang Y, Wang K, Xu Z, et al.. 2021. Hierarchical visual localization for visually impaired people using multimodal images. Expert Syst. Appl. Vol. 165:113743

11. Choi J, Jung S, Park DG, Choo J, Elmqvist N. 2019. Visualizing for the non-visual: enabling the visually impaired to use visualization. Computer Graphics Forum. Vol. 38(3):249–260

            12. Dhou S, Alnabulsi A, Al-Ali AR, Arshi M, Darwish F, Almaazmi S, Alameeri R. 2022. An IoT machine learning-based mobile sensors unit for visually impaired people. Sensors. Vol. 22(14):5202

13. Dimas G, Cholopoulou E, Iakovidis DK. 2021. Self-supervised soft obstacle detection for safe navigation of visually impaired people. 2021 IEEE International Conference on Imaging Systems and Techniques (IST); IEEE. Taiwan. August 24-26; p. 1–6

14. Iqbal A, Akram F, Haq MIU, Ahmad I. 2022. A comprehensive assistive solution for visually impaired persons. 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH); IEEE. KSA. May 9-11; p. 60–65

            15. Jasman NA, Jalil MFIM, Mukhtar A, Sahari KSM, Rusli ME. 2022. IoT-based obstacle detection system for visually impaired person with smartphone module. J. Adv. Inf. Technol. Vol. 13(4):368–373

            16. Jiang B, Yang J, Lv Z, Song H. 2018. Wearable vision assistance system based on binocular sensors for visually impaired users. IEEE Internet Things J. Vol. 6(2):1375–1383

            17. Li Y, Lu Y, Li D, Zhou M, Xu C, Gao X, et al.. 2023. Trajectory optimization of high-speed robotic positioning with suppressed motion jerk via improved chicken swarm algorithm. Appl. Sci. Vol. 13(7):4439

            18. Mukhiddinov M, Abdusalomov AB, Cho J. 2022. Automatic fire detection and notification system based on improved YOLOv4 for the blind and visually impaired. Sensors. Vol. 22(9):3307

            19. Nagarajan A, Gopinath MP. 2023. Hybrid optimization-enabled deep learning for indoor object detection and distance estimation to assist visually impaired persons. Adv. Eng. Softw. Vol. 176:103362

            Author and article information

Journal
Journal of Disability Research (jdr)
King Salman Centre for Disability Research (Riyadh, Saudi Arabia)
12 August 2023
Volume 2, Issue 2: 71-78
            Affiliations
            [1 ] Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdul Rahman University, Riyadh 11671, Saudi Arabia ( https://ror.org/05b0cyh02)
            [2 ] Department of Computer Science, College of Science and Art at Mahayil, King Khalid University, Saudi Arabia ( https://ror.org/052kwzs30)
            [3 ] Department of Information Systems, College of business administration, Hawtat Bani Tamim, Prince Sattam bin Abdulaziz University, Saudi Arabia ( https://ror.org/04jt46d36)
            Author notes
            Author information
            https://orcid.org/0000-0002-4389-4927
            Article
            10.57197/JDR-2023-0024
            d75bdbd2-bc47-4d8e-b6b3-ffd77e683ff6
            Copyright © 2023 The Authors.

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
            : 05 June 2023
            : 27 July 2023
            : 03 August 2023
            Page count
            Figures: 6, Tables: 4, References: 19, Pages: 8
            Funding
            Funded by: King Salman Center for Disability Research
            Award ID: KSRG-2023-334
            The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no KSRG-2023-334.
            Categories

            Computer science
visually challenged people, vision-based model, deep learning, chicken swarm optimization, anomaly detection
