      Development of a Smart Hospital Bed Based on Deep Learning to Monitor Patient Conditions

      Published
      research-article

            Abstract

            An Internet of Things-based automated patient condition monitoring and detection system is designed and built in this work. The smart-bed system is underpinned by a proposed deep learning algorithm. The movement and posture of the patient’s body can be determined with the help of wearable sensor-based devices. In this work, an internet protocol camera device is used for monitoring the smart bed, and sensor data from five key points of the smart bed are the core components of our approach. The Mask Region Convolutional Neural Network approach is used to extract data from important areas of the patient’s body by collecting data from sensors. Distance and time thresholds are used to classify motions as associated with either normal or uncomfortable conditions. The information from these key locations is also utilised to establish the postures in which the patient is lying while being treated on the bed. The patient’s body motion and bodily expression are constantly monitored for any sign of discomfort. Experimental results demonstrate the value of the suggested system: it achieves a true-positive rate of 95% while yielding a false-positive rate of only 4%.

            Main article text

            INTRODUCTION

            When integrated with computer vision, machine learning, and deep learning methods, the Internet of Things (IoT) delivers rapid and precise patient monitoring and detection systems. This is one of the many ways in which the IoT is helping to advance the area of smart healthcare ( Pace et al., 2018). Traditional patient monitoring systems often rely on sensors that patients wear on their body. Providing the best medical care is a challenging and time-consuming task. Patients are often admitted when they are experiencing significant health issues, but they may also enter for conditions that are not life-threatening yet need frequent monitoring. Also, numerous therapy practices may not be administered to a patient until after he or she has been admitted to a private room at a medical facility for a certain amount of time. Approximately 40% of these newly admitted patients are members of a population that is mostly old, with an average age of >65 years ( Mattison et al., 2013). Additionally, there are people with disabilities who are admitted to hospitals ( Carpenter et al., 2007). The activities and relationships between hospitalised patients and the community in which they are housed are becoming restricted because of major health concerns and factors such as age, infirmity, and pre-established medical directives. Several recent studies concluded that the healthcare sector suffers from a serious lack of available personnel, and healthcare experts carry a heavy burden of labour throughout the whole process ( Akter et al., 2019; Qureshi et al., 2019). This contributes to a further decline in the overall quality of medical attention for people who are ill ( Farid et al., 2020). Additionally, research demonstrates that the healthcare industry is under a significant amount of strain in nations that are already developed, in addition to those that are still developing ( Akter et al., 2019). 
The situation has been made worse by the widespread COVID-19 pandemic, which has pushed healthcare systems to a new level of decline ( McGarry et al., 2020; Sterpetti, 2020). Developed nations such as Finland, Canada, England, Ireland, New Zealand, and Japan have put a lot of effort into developing innovative approaches to patient care with the aim of enhancing the standard of medical treatment ( Reinhold et al., 2019). However, despite the fact that scholars have concentrated on building assistance programmes for patients who are hospitalised, bedridden, or unable to move about and have limited capacity for engagement (for example, monitoring systems based on brainwave activity interfaces, interaction systems based on hand gestures, etc.) ( Gao et al., 2003; Raheja et al., 2014; Saha et al., 2018), only a small amount of study has been carried out thus far to determine the needs of patients in this category and then devise a plan to meet their specific needs. To mitigate the effects of this emergency on the healthcare system, the necessities that a patient in a hospital smart bed is expected to have must be clearly perceived and well understood. A viable solution may then be developed and built to assist hospitalised patients alongside professionals in the healthcare industry.

            Recent research suggests that systems based on deep learning may improve the effectiveness, precision, and adaptability of healthcare systems ( LeCun et al., 2015; Shrestha and Mahmood, 2019). Several systems created for use in the medical industry have been put into use by using technology based on deep learning (for instance, medical image analysis, illness prognostication, and other similar topics) ( Shen et al., 2017; Bakator and Radosav, 2018). Recently, researchers have focused on utilising deep learning for smart beds.

            Consequently, the goals of this work are as follows:

            1. To gain an understanding of the smart-bed control mechanism for patients who are bedridden based on the graphical user interface (GUI), Node microcontroller unit (MCU), and 12V 6-channel relay.

            2. To design and construct the control system for the smart-bed system in accordance with the five key points targeted, which correspond to the heart rate, blood pressure, body temperature, motion detector, and bed occupancy for patient body monitoring and an internet protocol (IP) camera device for monitoring the smart bed by using sensory data and deep learning technologies.

            3. To analyse the system using the proposed deep learning-based model through empirical research and analysis.

            According to the research that was carried out, quite a few different studies have been conducted to help the healthcare industry. In addition, most of the interface systems created in earlier studies concentrated on implementing a single mode of operation without considering patients with varying degrees of impairment. It was discovered that only a few patient care systems had been established using various forms of deep learning technology. Because of this, the study focuses on developing a solution for the smart-bed system as well as the difficulties that confront it.

            This paper develops an IoT-based automated non-invasive patient detection and monitoring system using a deep learning-based algorithm together with line-of-sight cameras and wearable sensing devices. The suggested system makes use of a pre-trained Mask Region Convolutional Neural Network (Mask-RCNN) model, further trained for this task, for most of its detection work, particularly for key points that are identified and connected to a particular sensor.

            The remainder of the paper is laid out as follows: after this Introduction, the paper presents the Literature Review, the Proposed Technique, the Experimental Findings and Discussion, and the Conclusion.

            LITERATURE REVIEW

            The underlying foundation for the algorithms that are employed in this system is deep learning, and hence the effectiveness and precision of the functioning of the system in real time have been improved. There have been several different digital systems created to provide medical treatment to those who are unable to care for themselves due to illness or disability and those in advanced years ( Lakkis and Elshakankiri, 2017; Khan et al., 2018; Hossain et al., 2019; Hasan et al., 2021; Islam et al., 2021). In Kanase and Gaikwad (2016), the authors suggested using a cloud-based approach that uses a variety of sensory data. To get information from the patient’s room, sensors were used. To exhibit this, both an application and a website were built, and readings from the sensors and functions for regulating the patient’s environment were used by the relevant medical professionals, nurses, and staff members.

            In addition, the research demonstrated how the various types of data that are essential may be transported from the perspective of the sick to that of the carer. But in this planned system, the patient was not given any control over what was happening to them.

            Some system proposals were made by Khan et al. (2018), Saha et al. (2018), and Kamruzzaman (2020) for the monitoring and care of patients with emergency alarm generation. In Aadeeb et al. (2020), the authors constructed a brain–computer interface (BCI)- and deep learning-based management system for hospital patient rooms, aimed at assisting individuals who are immobile, ill, or incapacitated in exercising control over their immediate environment.

            Tam et al. (2019) created a controller based on hand gestures. Electromyographic sensors worn by the operator assisted in the detection of muscular motions, and the electromyography signals were utilised as features for training a convolutional neural network (CNN).

            There are some alternative methods for computer vision and face feature analysis for individual patients. In Khan et al. (2017), the authors developed and conducted tests on a control system for people who are unable to use their hands because of their condition.

            The developments that have been made in the IoT in recent years have made it possible for us to create healthcare systems that are more intelligent and predictable. It is necessary to have a gateway (also known as a bridge point) to link the internet and the sensor network architecture in most of the technology based on the IoT in the medical field, notably in smart hospitals. The gateway is often located on the periphery of the network and is responsible for performing critical tasks utilised by the internet and networks of sensors ( Alam et al., 2019; Chen et al., 2019). These gateways provide for streamlined management of the sensor network and access to relevant data over the internet, which is the medium via which data are being transferred ( Casadei et al., 2019).

            The IoT is well positioned to play an important role in trends in healthcare due to technology on a variety of different levels. Unlocking the potential of the IoT to improve healthcare facilities is made possible when a hospital is equipped with a real-time smart healthcare system ( Savaglio et al., 2020). It makes the experience better for patients, enhances the experience for carers, has a positive impact on health and clinical outcomes, and reduces costs.

            The IoT-based systems, in conjunction with data mining methods ( Amato et al., 2018; Piccialli et al., 2019), monitor and synthesise patient information, crosscheck these records against registered patterns, and analyse illness indications. The IoT delivers patient monitoring solutions that are efficient and quick, playing a crucial role in the process of remote patient monitoring in the absence of medical personnel. Currently, it offers a supporting monitoring system that makes use of non-intrusive equipment (IoT-based sensors or cameras), in addition to computer vision and deep learning algorithms ( Piccialli et al., 2020; Qureshi et al., 2020; Piccialli et al., 2021). Monitoring patients becomes more effective when it is based on the ability to detect their symptoms from visual signals, such as how they stand, how they move, and how they seem. In recent years, scientists have experimented with systems for tracking patients that include specialist equipment, pressure beds, and sensors ( Birku and Agrawal, 2018), but this has incurred extra costs. According to the available research, most of the developed approaches make use of several different camera devices or sensors for monitoring patients.

            In the published study, most researchers investigated a variety of sensor-based approaches to fall detection. In addition, several scholars resorted to relying on vision in their investigation of fall detection ( Birku and Agrawal, 2018).

            Numerous researchers, such as Merrouche and Baha (2016), extract visual information from scenes that only include a single room by using numerous camera systems. Researchers monitored the pace, depth, and consistency of patients’ breathing using sensors and information based on signals. The analysis was based on the inhalation and exhalation duration and ratio, for example Sathyanarayana et al. (2018). Very few of them created vision-based systems that used information about patients’ facial expressions to assess pain, such as Jan et al. (2018), which requires the patients’ faces to be aligned directly with the camera. Monitoring measures for respiration or breathing include keeping an eye on things like the patient’s breathing rate, stability, and depth, as well as their exhalation/inhalation time ratio and any indications of sleep apnoea ( Uysal and Filik, 2018). In Dhillon (2017), a method was proposed that is responsible for the development of a monitoring and warning system for the diagnosis of epileptic seizures. Researchers from many institutions looked at a patient’s behaviour and utilised the data they gathered to conduct an analysis of the patient’s medical problems ( Sathyanarayana et al., 2018). In Liu and Ostadabbas (2017), the authors suggested an in-bed patient posture monitoring system that does not involve any intrusive procedures. Techniques for posture-based monitoring using many cameras have been researched by Liu and Ostadabbas (2017), but they mostly concentrate on the patient’s upper body region. In Deng et al. (2018), the authors presented the idea of computerised eye tracking for sleep analysis that makes use of motion sensors and infrared cameras. In the published study, most researchers concentrated their attention on a single patient and/or bed, in addition to certain contact-based devices. 
For these methods to work, it is necessary to connect sensors to patient bodies or beds, which is not only inconvenient but also expensive to do and often something the patient does not want. Several studies included visual methods, although most of the time different types of cameras were used for monitoring breathing and detecting falls. Several researchers have created techniques that are based on patients’ facial expressions; however, these techniques require the patient to have his face oriented squarely towards the camera.

            This research work develops an automated system for patient monitoring and detection. The system makes use of specialised hardware, sensors, and line-of-sight cameras in order to monitor patient conditions. The suggested method identifies the important spots of the patient’s body using a deep learning-based model, Mask-RCNN. The information from the key sites is then utilised for further investigation of the data from the smart bed, which corresponds to the heart rate, blood pressure, body temperature, motion detector, and bed occupancy. As a performance metric, we use the average distance between two successive identified critical points.

            In this study, in contrast to earlier similar efforts, a technique based on deep learning is developed to classify and investigate the IP-based top-view camera for the smart bed and five different key points of the patient’s body connected to sensors.

            DEEP LEARNING-BASED SMART-BED MONITORING SYSTEM

            Proposed methodology

            Deep learning, an artificial intelligence technique, uses deep neural networks to analyse medical data and provide accurate diagnoses and prognoses for patients. Deep learning contributes to this domain in several ways. Image analysis: deep neural networks analyse magnetic resonance imaging (MRI) scans, X-rays, and histopathological samples; the deep model recognises complex characteristics and illness patterns to help diagnose cancer and neurological ailments early. Prediction and prognosis: deep learning processes massive amounts of patient data, including medical records, clinical reports, and diagnostic findings ( Arshad et al., 2023); these data help the deep model anticipate illness progression and therapy responses, and the predictions may enhance treatment planning and identify outcome-affecting factors. Deep learning may also uncover subtle patterns and changes in medical data that healthcare practitioners overlook, identifying health issues and disease development by finding complicated data linkages ( Islam et al., 2022).

            Deep learning might improve medical diagnostics, especially in terms of accuracy. Advanced models can learn disease patterns and indicators from medical data, improving diagnosis and treatment. Deep learning may also anticipate and diagnose illnesses: advanced models may use prior data to predict a patient’s response to therapy and the progression of sickness, and these predictions can refine and customise treatment regimens. By studying data and risk, deep models may identify problems and suggest solutions, reducing medical mistakes and improving patient safety. Deep learning algorithms may discover anomalies early, improving patient monitoring and health outcomes.

            The utilisation of deep learning enables the provision of tailored treatment to individual patients. The utilisation of deep models has the potential to facilitate individualised treatment plans and therapies through the analysis of medical history, genetic information, and lifestyle factors. The implementation of a tailored approach has the potential to enhance both the physical well-being and overall satisfaction of patients ( Bhardwaj et al., 2022; Sujith et al., 2022).

            The potential of deep learning is substantial; however, it is imperative that it is employed in conjunction with medical expertise and clinical discernment to optimise patient care.

            Design specifications

            We begin by surveying the data sets currently available and the in-lab trials reported in the published research. The data set was compiled with the assistance of a top-view IP-based camera positioned at a height of 4 m. The images from the IP camera are taken at a resolution of 640 × 480. The system keeps a record of the patient’s smart-bed image using an IP camera together with sensory data from five key points of the patient’s body, that is, heart rate, blood pressure, body temperature, motion detector, and bed occupancy. The findings of this investigation reveal that the detection and monitoring of smart beds can be paired with a GUI to help improve healthcare. A camera attached to the patient’s smart bed streams images to a personal computer for further monitoring.

            The proposed Internet of Things-based system

            An IoT-deep learning-based smart-bed system has been proposed in this study. The system uses a deep learning algorithm to monitor patients’ activity. An IP-based camera is used to capture images from the top view of the patient’s smart bed, and a deep learning-based model is employed to recognise information about the patient’s key points connected to body sensors, which correspond to the heart rate, blood pressure, body temperature, motion detector, and bed occupancy. Together, these two components make up the total system. The proposed system’s block diagram is shown in Figure 1. The workflow of the proposed model with hardware specifications is shown in Figure 2.
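As a rough illustration of the two input branches in this workflow, the following sketch combines sensor readings and the camera-analysis result into a single normal/take-action decision. The sensor names and the normal ranges are purely illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the two-branch monitoring loop described above.
# NORMAL_RANGES values are illustrative assumptions only.

NORMAL_RANGES = {
    "heart_rate": (60, 100),           # beats per minute
    "blood_pressure": (90, 140),       # systolic, mmHg
    "body_temperature": (36.1, 37.5),  # degrees Celsius
}

def check_sensors(readings):
    """Return the list of sensor keys whose values fall outside range."""
    alerts = []
    for key, (lo, hi) in NORMAL_RANGES.items():
        value = readings.get(key)
        if value is not None and not (lo <= value <= hi):
            alerts.append(key)
    return alerts

def monitor_step(readings, frame_is_abnormal):
    """One iteration of the loop: branch 1 checks the sensor values,
    branch 2 checks the camera-image analysis result."""
    alerts = check_sensors(readings)
    if frame_is_abnormal:
        alerts.append("camera")
    return ("take_action", alerts) if alerts else ("normal", [])
```

In a real deployment each branch would of course run against live sensor streams and the deep model's image output; this sketch only captures the decision structure of the block diagram.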

            [Figure 1 flowchart: two input types — (1) sensor data from five key points of the patient’s body (heart rate, blood pressure, body temperature, motion detector, bed occupancy) and (2) an image from the IP camera placed above the smart bed — are analysed; if the signal values or the image are not normal, appropriate action is taken.]
            Figure 1:

            Overall block diagram of the proposed model.

            [Figure 2 diagram: a GUI, Node MCU, and 12V 6-channel relay drive the proposed deep learning model, whose inputs are the image from the patient bed and the sensor signals for heart rate, blood pressure, body temperature, motion detection, and bed occupancy.]
            Figure 2:

            Workflow of the proposed model with hardware specification.

            Development of the CNN model

            The preparation of the dataset was the first step before the CNN models could be trained. This required several stages, such as gathering the image data, tagging it, and resizing, cropping, and scaling the images, among other procedures. The image data were then split into two groups, one used to train the CNN models and the other to validate them. In addition, the images were shuffled and divided into batches before being used for training the CNNs. The Results section provides a more in-depth description of the final image collection produced by these steps. The CNN classifier comprises one input layer ( Wang, 2003), several convolutional layers ( Albawi et al., 2017), and several pooling layers ( Sun et al., 2017). The input layer is where an image is fed into the CNN before further processing; this layer provides an input feature map to the succeeding CNN layers ( Wu and Lin, 2018). With a three-dimensional feature map as input, the convolutional layer applies the convolution operation ( Ludwig, 2013) defined by Eq. 1. F stands for the convolution kernel, I for the input feature map, x and y for the location on the feature map where the kernel’s centre lies, and i and j for the offsets within the kernel window.

            (1) (F∘I)(x, y) = ∑_{i=−N}^{N} ∑_{j=−N}^{N} F(i, j) I(x + i, y + j).

            The pooling layer shrinks the spatial size of a feature map. Each feature map supplied to it is processed by a single kernel. Because of its ability to reduce the data size, the pooling layer may likewise be used to convert feature maps into one-dimensional feature maps.
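To make the windowed sum of Eq. 1 and the pooling step concrete, here is a minimal NumPy sketch (an illustration under the equation's notation, not the authors' implementation), with F a (2N+1)×(2N+1) kernel and I the input feature map:

```python
import numpy as np

def conv2d(F, I, N):
    """Discrete 2-D convolution per Eq. 1: for each valid output position
    (x, y), sum F(i, j) * I(x+i, y+j) over the kernel window i, j in [-N, N].
    F is the (2N+1)x(2N+1) kernel; I is the input feature map."""
    H, W = I.shape
    out = np.zeros((H - 2 * N, W - 2 * N))
    for x in range(N, H - N):
        for y in range(N, W - N):
            s = 0.0
            for i in range(-N, N + 1):
                for j in range(-N, N + 1):
                    # kernel index (i, j) is stored at offset (+N, +N)
                    s += F[i + N, j + N] * I[x + i, y + j]
            out[x - N, y - N] = s
    return out

def max_pool2d(I, k=2):
    """Non-overlapping k x k max pooling, shrinking each spatial dim by k."""
    H, W = I.shape
    return I[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).max(axis=(1, 3))
```

With an identity kernel (a single 1 at the centre), `conv2d` returns the interior of the input unchanged, which is a quick sanity check on the index arithmetic.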

            The dense layer takes a one-dimensional feature map x as input and multiplies it by the layer’s weight values w. In addition, the dense layer has a bias value b, which is added to the weighted product such that y = wx T + b. The dense layer thus performs the weight multiplication together with the bias addition on the prescribed feature inputs. An activation function ( Sharma et al., 2017) is included in each layer of a CNN; this function modifies the output produced by that layer. The rectified linear unit activation (Eq. 2a) and the SoftMax activation, which is utilised in CNNs to analyse feature maps from convolutional layers (Eq. 2b), are written as

            (2a) y=max(x,0)

            (2b) σ(x_i) = e^{x_i} / ∑_{j=1}^{k} e^{x_j}.
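A minimal NumPy sketch of the two activation functions in Eqs. 2a and 2b (illustrative only; the max-subtraction in the SoftMax is the usual numerical-stability trick, added here rather than taken from the paper):

```python
import numpy as np

def relu(x):
    """Eq. 2a: rectified linear unit, y = max(x, 0)."""
    return np.maximum(x, 0)

def softmax(x):
    """Eq. 2b: exponentiate each score and normalise by the sum.
    Subtracting the max beforehand avoids overflow without changing
    the result."""
    e = np.exp(x - np.max(x))
    return e / e.sum()
```

The SoftMax output is a probability distribution: its entries are non-negative and sum to one, which is why it is used over the class scores of the final layer.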

            An activation function (the rectified linear unit activation function and the SoftMax type of activation function) with these parameters has been implemented in the final dense layer for the purposes of classification. The CNNs suggested for executing the proposed algorithm feature pooling layers as well as convolutional layers. The proposed algorithm-based CNNs employ a convolutional layer rather than a dense layer as their final layer; this is the key distinction between the two types of CNNs. This final convolutional layer produces a three-dimensional feature map as its output. The value of each vector z_i, for images in which the patient’s expression is not normal, may be determined by z_i = [ p_i, w_i, h_i, x_i, y_i ]. Here, the variable p_i denotes the likelihood that an item belongs to the “not normal” class. A rectangular area that begins at the point ( x_i, y_i ) and has height and width dimensions of h_i and w_i, respectively, defines the boundaries of the detected item (the portion of the patient, i.e. face, legs, or arms, that is behaving abnormally).
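As a small hedged illustration of how an output vector z_i = [p_i, w_i, h_i, x_i, y_i] might be decoded into an abnormal-region box (the 0.5 probability threshold is an assumption for this sketch, not a value from the paper):

```python
def decode_detection(z, p_threshold=0.5):
    """Split a z vector into the 'not normal' probability and the
    bounding rectangle. Returns None when the region's probability is
    below the (assumed) threshold."""
    p, w, h, x, y = z
    if p < p_threshold:
        return None
    # The rectangle starts at (x, y) with width w and height h.
    return {"prob": p, "box": (x, y, x + w, y + h)}
```

Regions that decode to None would simply be ignored downstream, while surviving boxes mark the body parts flagged as behaving abnormally.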

            The proposed Mask-RCNN model

            The images from the IP camera are sent to the monitoring model, which is based on deep learning, for further processing. Mask-RCNN ( Islam et al., 2021) is employed to monitor and identify any signs of patient discomfort, as shown in Figure 1. The Mask-RCNN model receives the input images and can identify the patient as well as critical places on their body by using sensors. The identified critical spots are then used for further investigation. Figure 3 presents the block diagram for the proposed Mask-RCNN network for predicting patient conditions.

            [Figure 3 diagram: inputs T1–T6 feed the proposed Mask-RCNN, which predicts segmentation masks (R-CNN classification) and bounding boxes (regression loss); this is followed by patient expression prediction, five-key-point detection through sensor data, patient motion detection via Euclidean distance measurement, a threshold/component-based evaluation measure Mov_body, and patient condition evaluation using Cond_Patient.]
            Figure 3:

            Block diagram for the proposed Mask-RCNN network in predicting patient conditions. Abbreviation: Mask-RCNN, Mask Region Convolutional Neural Network.

            The data mining technique of association rules has been used to investigate sensors. Finally, a threshold that is dependent on distance is used to detect the patient’s data from different sensors that need attention. The completed processes are then sent to the monitoring unit, where they are evaluated further and may be used for emergency calls or alerts.

            The fundamental design is additionally expanded so that human posture may be estimated by leveraging information from critical spots ( He et al., 2017). The process consists of two phases. In the first step, a CNN extracts a feature map from the input image using the region proposal network ( Lin et al., 2017), and bounding-box candidates are selected from locations on the feature map. Because the extracted bounding-box candidates vary in size, in the second step a layer known as RoI (Region of Interest) Align reduces the dimensions of the extracted features so that their sizes are all comparable to one another, as shown in Figure 3. The collected features are then supplied to parallel CNN branches to predict the segmentation masks and the bounding boxes. The loss function for the Mask-RCNN is obtained by adding the component losses: L = L_cls + L_loc + L_mask. Here, L_cls stands for the classification loss ( He et al., 2017), while L_loc denotes the regression loss for the predicted bounding box. The L_cls of an RoI is determined as L_cls( p, u) = −log p_u, where u stands for the actual class of the item and p = ( p_0, …, p_k ) is the projected probability distribution over k + 1 categories of objects. The bounding-box regression loss L_loc for an RoI is determined as

            L_loc( t^u, v) = ∑_{i∈{x,y,w,h}} smooth_L1( t_i^u − v_i ),

            in which v = ( v_x, v_y, v_w, v_h ) represents the true RoI box regression and t^u = ( t_x^u, t_y^u, t_w^u, t_h^u ) represents the anticipated regression for the bounding box of class u. The formula for smooth_L1 is written as in Ren et al. (2015):

            smooth_L1( x) = { 0.5 x², if | x| < 1; | x| − 0.5, otherwise }.

            L_mask is the binary cross-entropy loss, found by calculating the average binary cross-entropy in the mask branch. The mask branch outputs K m² values for each RoI, encoding K binary masks of resolution m × m, where K denotes the number of classes. L_mask is determined as in He et al. (2017), which is written as follows:

            (3) L_mask = −(1/ m²) ∑_{i,j} [ Q_ij log P_ij^u + (1 − Q_ij ) log(1 − P_ij^u ) ].

            In this equation, Q represents the actual mask, while P^u represents the anticipated mask for the class u of the RoI. These two masks satisfy Q_ij ∈ [0, 1] and P_ij^u ∈ [0, 1],

            respectively. The framework is enhanced so that facial expression estimation may take place.
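The loss components L_cls, L_loc, and L_mask described above can be sketched in NumPy as follows (an illustrative reading of the formulas, not the authors' implementation; the clipping constant in the mask loss is an added numerical guard):

```python
import numpy as np

def smooth_l1(x):
    """smooth_L1(x): 0.5*x**2 when |x| < 1, |x| - 0.5 otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def cls_loss(p, u):
    """L_cls(p, u) = -log p_u for predicted distribution p and true class u."""
    return float(-np.log(p[u]))

def loc_loss(t_u, v):
    """L_loc: sum of smooth-L1 terms over the four box coordinates
    (x, y, w, h)."""
    return float(smooth_l1(np.asarray(t_u) - np.asarray(v)).sum())

def mask_loss(Q, P_u, eps=1e-12):
    """L_mask (Eq. 3): average per-pixel binary cross-entropy over an
    m x m mask; Q is the ground truth, P_u the prediction for class u."""
    Q = np.asarray(Q, dtype=float)
    P_u = np.clip(np.asarray(P_u, dtype=float), eps, 1 - eps)  # guard log(0)
    return float(-(Q * np.log(P_u) + (1 - Q) * np.log(1 - P_u)).mean())
```

The total Mask-RCNN loss is then the sum `cls_loss(...) + loc_loss(...) + mask_loss(...)`, mirroring L = L_cls + L_loc + L_mask.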

            Next, the values of the sensory data from each identified key point are represented as a one-hot mask in the model. The model forecasts a total of K masks, one for each of the K key point types. Meanwhile, the object detection algorithm is trained to determine whether the patient’s key point is normal or needs attention. The two steps, “patient expression detection” and “key point detection,” may be completed independently of one another. The pre-trained model shown in Figure 3 first locates and segments patients in the input picture. In addition, the model can identify five distinct important data records from the patient’s body.

            The information about the patient’s expression is utilised, and other sensory data are used to determine whether the patient is experiencing any discomfort and needs some assistance.

            The patient’s motion is characterised by such a lack of control and a high frequency that it is regarded as a marker of pain. These motions and the detection of pain may be different for different illnesses, as other aspects may be linked to them. In this context, it is presumed that patients are experiencing standard discomfort. It is thought that there is discomfort present when there is a greater frequency of unknown situations and frequent moves over a longer period. The continuous motions of a specific body part are used as the basis for conducting an analysis of pain in that body part of the patient.

            This technique monitors the movement associated with a motion detection sensor over time by utilising key point coordinates ( x, y), and it determines whether the patient’s state is normal or whether he is experiencing pain. Using information about the patient’s distance from the sensor, every movement that may have happened in any organ of the patient’s body can be quantified. The distances of the related key points of the concerned organ are determined by calculating the Euclidean distance between consecutive images of the video sequence. The distance between the ( x, y) coordinates of all critical locations in two subsequent images is computed as D = √((k_x^j − k_x^{j−1})² + (k_y^j − k_y^{j−1})²).

            The term D denotes the Euclidean distance travelled by key point k between subsequent images j and j − 1 in the equation above. The notations (k_x^j, k_y^j) and (k_x^{j−1}, k_y^{j−1}) indicate the ( x, y) pixel coordinates of key point k in images j and j − 1, respectively. A threshold T is then applied such that T = 1 if D ≥ d_p, and T = 0 otherwise.
            Measured in pixels of the picture or frame, the threshold T compares the distance between consecutive key point positions, i.e. the movement of the key point, against d_p; in this work, the d_p value is set to 25 pixels. The distance threshold is used to assess whether any of the identified key points of the patient’s body associated with motion detection has moved, which allows doctors to determine whether a body part has been moved. For instance, movement of any identified key point on the patient’s body changes its ( x, y) coordinates between consecutive frames. The Euclidean distances are then analysed for rapid movement of the patient’s body ( Mov body ), represented as follows:

            (4) Mov_body = { 1, if ⋁_{i=1}^{n} T_i = 1; 0, otherwise }.

            The variable i may take on any value between 1 and n, where n is the number of key points of the patient’s body B. Finally, a time-based threshold known as T t is applied to the movement detected in each image to determine whether a patient is experiencing normal sensations or some degree of discomfort. The time threshold may change depending on the size of the dataset as well as its kind. The value of Cond patient reflects the current state of the patient P, written as follows:

            (5) Cond_patient = { Discomfort, if Mov_body ≥ T_t; Normal, otherwise }.

            The time threshold, denoted by the symbol T t , is the amount of time that divides motions considered normal from movements produced by an uncomfortable situation. In this work, T t corresponds to movement sustained over 10 or more consecutive images.
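            The whole thresholding pipeline described above (the Euclidean distance D, the per-key-point flag T, and Eqs. 4 and 5) can be sketched in a few lines of Python. This is an illustrative sketch under the assumptions that D ≥ d_p triggers the movement flag and that discomfort requires movement across T_t or more consecutive frames; the function names and synthetic key points are not the authors' implementation.

```python
import math

D_P = 25   # distance threshold d_p, in pixels
T_T = 10   # time threshold T_t, in consecutive frames

def keypoint_moved(curr, prev, d_p=D_P):
    """T = 1 if the key point travelled at least d_p pixels between
    consecutive frames j-1 and j (Euclidean distance D)."""
    d = math.hypot(curr[0] - prev[0], curr[1] - prev[1])
    return 1 if d >= d_p else 0

def mov_body(curr_kps, prev_kps):
    """Eq. (4): the body moved if any of the n key points moved."""
    return 1 if any(keypoint_moved(c, p) for c, p in zip(curr_kps, prev_kps)) else 0

def patient_condition(frames, t_t=T_T):
    """Eq. (5): 'Discomfort' if Mov_body = 1 for t_t or more consecutive
    frames, otherwise 'Normal'. `frames` is a sequence of key point lists."""
    run = 0
    for prev_kps, curr_kps in zip(frames, frames[1:]):
        run = run + 1 if mov_body(curr_kps, prev_kps) else 0
        if run >= t_t:
            return "Discomfort"
    return "Normal"

# Synthetic example: five key points; in the second sequence, one key
# point oscillates by 40 px every frame, exceeding d_p each time.
still = [[(10, 10), (50, 10), (30, 40), (20, 80), (40, 80)]] * 15
agitated = [[(10 + (40 if j % 2 else 0), 10), (50, 10), (30, 40), (20, 80), (40, 80)]
            for j in range(15)]

print(patient_condition(still))     # Normal
print(patient_condition(agitated))  # Discomfort
```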

            RESULTS AND DISCUSSION

            The Mask-RCNN is examined statistically by calculating its accuracy, precision, F1-score, Cohen’s kappa, recall, receiver operating characteristics area under the curve (ROC AUC), true-positive rate (TPR), false-negative rate (FNR), true-negative rate (TNR), and false-positive rate (FPR). This statistical analysis covers six key points: T1: image from smart bed; T2: heart rate; T3: blood pressure; T4: body temperature; T5: motion detector; and T6: bed occupancy, classified on the basis of the prediction values and the label values. A selection of the statistical measures is shown in Table 1 for the classifications of T1 to T6. In addition, the images used for training were pre-processed using the same procedure and examined with the same metrics for the sake of comparison.
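            All of the rates above can be derived from the four confusion-matrix counts. As a minimal sketch with illustrative counts (not the paper's data):

```python
def rates(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)          # also the true-positive rate (TPR)
    f1        = 2 * precision * recall / (precision + recall)
    fpr       = fp / (fp + tn)          # false-positive rate
    tnr       = tn / (tn + fp)          # true-negative rate
    fnr       = fn / (fn + tp)          # false-negative rate
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, fpr=fpr, tnr=tnr, fnr=fnr)

# Illustrative counts only (not taken from the paper's dataset).
m = rates(tp=95, fp=4, tn=96, fn=5)
print(round(m["recall"], 2), round(m["fpr"], 2))  # 0.95 0.04
```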

            Table 1:

            Statistical analysis of classification using the proposed method.

            Parameter | Classification for T1 | Classification for T2 | Classification for T3 | Classification for T4 | Classification for T5 | Classification for T6
            Accuracy | 0.971/0.963 | 0.969/0.953 | 0.971/0.953 | 0.964/0.944 | 0.986/0.956 | 0.976/0.943
            Precision | 0.966/0.953 | 0.953/0.946 | 0.963/0.941 | 0.953/0.956 | 0.963/0.944 | 0.968/0.922
            Recall | 0.987/0.973 | 0.973/0.964 | 0.952/0.931 | 0.969/0.954 | 0.958/0.923 | 0.956/0.921
            F1-Score | 0.985/0.965 | 0.963/0.911 | 0.952/0.921 | 0.974/0.955 | 0.966/0.948 | 0.955/0.934
            Cohen’s Kappa | 0.936/0.904 | 0.921/0.911 | 0.911/0.901 | 0.933/0.932 | 0.943/0.922 | 0.962/0.932
            ROC_AUC | 0.974/0.954 | 0.974/0.932 | 0.964/0.941 | 0.983/0.945 | 0.974/0.952 | 0.956/0.939

            Abbreviation: ROC_AUC, receiver operating characteristics area under the curve.

            The metrics presented are all positive metrics; hence, the higher their values, the better the performance of the model being assessed. The images used in this work went through a series of pre-processing steps, which included grayscaling, enlarging the image resolution, and rescaling the pixel values. The images were then fed to the Mask-RCNN model, and the prediction values were saved to a file. The predicted values comprised the category of the items that were found, and the patient is classified as either “normal” or “abnormal.”

            Data augmentation can improve the performance of a CNN model by expanding and diversifying the training dataset. Adding random transformations such as rotation, scaling, and flipping to the input images can make the model more resilient and help it generalise to unseen data.
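            A lightweight numpy sketch of such augmentation, using only flips and 90° rotations as stand-ins for the full rotation/scaling/flipping pipeline described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Apply random flips and a random 90-degree rotation to one image,
    producing a label-preserving variant for training."""
    if rng.random() < 0.5:
        image = np.fliplr(image)  # horizontal flip
    if rng.random() < 0.5:
        image = np.flipud(image)  # vertical flip
    return np.rot90(image, k=rng.integers(0, 4))

base = np.arange(16, dtype=float).reshape(4, 4)
batch = [augment(base) for _ in range(8)]  # 8 randomised variants
print(len(batch), batch[0].shape)
```

            Every variant keeps the same pixel values and shape, so the ground-truth label still applies; only the spatial arrangement changes.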

            Hyperparameter tuning

            You can try multiple hyperparameter settings to obtain the best CNN model. These include the learning rate, batch size, number of layers, filter size, and activation function. Hyperparameters can be tuned with grid search or random search.
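            Grid search over a hypothetical search space can be sketched as follows; `train_and_score` is a placeholder for actual model training and validation, and the grid values are illustrative:

```python
from itertools import product

# Hypothetical search space.
grid = {
    "learning_rate": [1e-2, 1e-3],
    "batch_size": [16, 32],
    "n_filters": [32, 64],
}

def train_and_score(cfg):
    # Placeholder objective: a real run would train the CNN with this
    # configuration and return its validation accuracy.
    return -abs(cfg["learning_rate"] - 1e-3) - abs(cfg["batch_size"] - 32) / 100

best_cfg, best_score = None, float("-inf")
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    score = train_and_score(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_cfg)
```

            Random search replaces the exhaustive `product` loop with a fixed number of randomly sampled configurations, which scales better when the grid is large.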

            Transfer learning

            You can use pre-trained models as a starting point for CNN training. Transfer learning reuses knowledge learned on large datasets such as ImageNet. Fine-tuning the pre-trained model for your task and dataset may improve performance with less training data.

            Regularisation

            Regularisation prevents overfitting and improves generalisation. Dropout and L2 regularisation may reduce model complexity and dependence on individual features. The CNN model can also be tuned with several optimisation strategies: Adam, RMSprop, and momentum may speed up convergence and improve training efficiency.
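            Dropout and the L2 penalty can each be expressed in a couple of numpy lines; this is an illustrative sketch, not the training code used in this work:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p=0.5):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors so the expected activation is unchanged."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def l2_penalty(weights, lam=1e-3):
    """L2 regularisation term added to the training loss."""
    return lam * np.sum(weights ** 2)

a = np.ones(1000)
d = dropout(a)
print(d.mean())                          # close to 1.0 in expectation
print(l2_penalty(np.array([3.0, 4.0])))  # 0.025
```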

            Ensemble methods

            Combine the predictions of several CNN models. Averaging or voting may increase robustness and accuracy by integrating the varied perspectives of the different models.
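            Soft (probability-averaging) and hard (majority-vote) ensembling can be sketched as follows, with made-up model outputs:

```python
import numpy as np

# Probabilities from three hypothetical models for four samples
# (probability of the "discomfort" class).
p1 = np.array([0.9, 0.2, 0.6, 0.4])
p2 = np.array([0.8, 0.3, 0.4, 0.45])
p3 = np.array([0.7, 0.1, 0.55, 0.6])

# Soft voting: average the probabilities, then threshold.
avg = (p1 + p2 + p3) / 3
soft = (avg >= 0.5).astype(int)

# Hard voting: majority of per-model binary decisions.
votes = np.stack([(p >= 0.5).astype(int) for p in (p1, p2, p3)])
hard = (votes.sum(axis=0) >= 2).astype(int)

print(soft.tolist(), hard.tolist())
```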

            Architecture changes

            Change the CNN model architecture. This may include adding or deleting layers; modifying the number of filters or neurons; changing the network depth; or changing the types of convolutional or pooling layers. A model architecture that fits the task and dataset should be used. Early stopping prevents overtraining: training can be stopped when the model’s performance degrades on a validation set. This helps locate the ideal point at which the model has learned meaningful patterns without overfitting the training data.
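            The early-stopping rule described above can be sketched as a patience counter over validation losses; the function name and the loss values are illustrative:

```python
def early_stopping(val_losses, patience=3):
    """Return (stop_epoch, best_epoch): stop at the first epoch after
    which the validation loss has failed to improve for `patience`
    consecutive epochs."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# Validation loss improves, then plateaus and rises.
losses = [0.9, 0.7, 0.6, 0.62, 0.63, 0.65, 0.7]
stop_at, best_at = early_stopping(losses)
print(stop_at, best_at)  # 5 2
```

            In practice the weights saved at `best_at` (epoch 2 here) would be restored, discarding the overfitted later epochs.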

            Regular model evaluation

            Assess the model’s performance using a distinct validation or test dataset. This monitors model progress, identifies faults, and adjusts. If possible, use multiple data sources. Integrating appropriate sensor data or patient information into the model may improve its performance.

            The statistical analysis was carried out to measure the effectiveness of the proposed system in terms of the standard deviation for the training and testing data. The standard deviation is measured as the deviation of predicted values from the “normal” class. As a result, the usefulness of the proposed model is assessed on the basis of the key points considered.
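            As a sketch, the standard deviation of such per-sample deviations can be computed directly with numpy; the values below are illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical per-sample deviations from the "normal" class score.
train_dev = np.array([1.2, 3.1, 0.4, 2.2, 1.8, 2.6])
test_dev  = np.array([1.0, 2.4, 0.8, 1.9])

print(round(float(np.std(train_dev)), 2), round(float(np.std(test_dev)), 2))
```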

            The findings of the statistical analysis are reported in Table 2 for both the train dataset and the test dataset.

            Table 2:

            Measure of the effectiveness of the proposed system in terms of standard deviation for training and testing data.

            Task | T1: training, testing | T2: training, testing | T3: training, testing | T4: training, testing | T5: training, testing | T6: training, testing
            T1 | 1.63, 1.14 | 2.54, 1.34 | 1.54, 1.13 | 1.58, 1.16 | 1.79, 1.36 | 1.83, 1.26
            T2 | 1.72, 1.68 | 2.83, 2.41 | 1.65, 1.21 | 1.58, 1.45 | 1.63, 1.31 | 1.54, 1.21
            T3 | 1.72, 1.41 | 2.54, 2.13 | 1.83, 1.01 | 1.84, 1.32 | 1.78, 1.42 | 1.56, 1.32
            T4 | 1.52, 1.23 | 2.47, 2.24 | 1.92, 1.43 | 1.77, 1.29 | 1.65, 1.23 | 1.83, 1.38
            T5 | 1.63, 1.21 | 2.65, 2.74 | 1.83, 1.21 | 1.56, 1.32 | 1.47, 1.27 | 1.45, 1.03
            T6 | 1.45, 1.01 | 2.78, 2.81 | 1.45, 1.04 | 1.55, 1.17 | 1.66, 1.21 | 1.59, 1.15

            According to the statistical investigation, every stage of the Mask-RCNN performs better on the training dataset than on the testing dataset. Since the metrics obtained for the testing datasets are not vastly different from those obtained for the training datasets, we can conclude that the models did not suffer from overfitting. The proposed model had the highest value of precision on the training dataset but the lowest value of precision on the testing dataset, and it achieved its best overall performance and the highest recall on the training dataset.

            Both qualitative and quantitative data were gathered using the strategy for assessing the system described in the methodology section. The information generated by the system has been used to assess its usefulness, efficiency, and user friendliness.

            The time it took to complete the network run was used to determine how efficient the suggested method was. Table 3 summarises the results of the network runtime analysis for both the train dataset and the test dataset.

            Table 3:

            Evaluation of task completion time using the proposed method.

            Task | T1: training, testing | T2: training, testing | T3: training, testing | T4: training, testing | T5: training, testing | T6: training, testing
            T1 | 10.16, 9.16 | 8.46, 9.66 | 8.14, 7.15 | 8.99, 8.34 | 8.93, 8.26 | 8.34, 7.29
            T2 | 8.42, 7.65 | 7.48, 8.58 | 9.46, 8.62 | 9.85, 8.11 | 9.67, 8.39 | 8.67, 7.38
            T3 | 9.76, 8.45 | 8.96, 7.75 | 8.71, 9.41 | 8.78, 7.46 | 8.59, 7.45 | 7.92, 6.74
            T4 | 7.85, 7.28 | 8.84, 7.69 | 8.86, 8.26 | 9.34, 8.24 | 8.92, 7.49 | 8.28, 7.83
            T5 | 7.68, 8.54 | 8.61, 9.52 | 8.63, 7.59 | 8.59, 7.34 | 7.65, 6.45 | 7.92, 6.59
            T6 | 8.94, 7.45 | 7.34, 6.54 | 7.45, 6.34 | 7.48, 6.22 | 8.45, 7.93 | 8.91, 7.48

            Table 4 illustrates the average performance of various metrics, namely accuracy, recall, precision, FNR, TPR, FPR, and TNR, using the proposed method. After the tests concluded, the results were gathered for further consideration, and a decision is made about whether the patient needs assistance or whether the condition is normal.

            Table 4:

            Average performance of performance metrics using the proposed method.

            Average performance | Accuracy (%) | Recall (%) | Precision (%) | FNR (%) | TPR (%) | FPR (%) | TNR (%)
            T1 | 94 | 90 | 83 | 6 | 95 | 8 | 91
            T2 | 93 | 91 | 89 | 5 | 91 | 7 | 89
            T3 | 95 | 89 | 87 | 4 | 92 | 9 | 93
            T4 | 94 | 92 | 83 | 7 | 93 | 6 | 90
            T5 | 93 | 88 | 84 | 6 | 94 | 7 | 93
            T6 | 92 | 89 | 83 | 6 | 95 | 6 | 93

            Abbreviations: FNR, false-negative rate; FPR, false-positive rate; TNR, true-negative rate; TPR, true-positive rate.

            CONCLUSION

            Most patients who are confined to hospital beds need special care from hospital staff. Although many types of systems are designed to help these patients, most concentrate on specific duties, such as calling for emergencies or monitoring the patient’s health and activities. This work develops a hospital smart-bed control system with the help of computer vision, smart sensors, and deep learning technologies. The results of the assessment demonstrated that the system is successful and efficient. Deep learning has been incorporated into the system with the purpose of increasing accessibility. The smart-bed system that has been designed, combined with the results of its assessment and the needs that have been specified, therefore presents a viable answer to the crisis now occurring in the healthcare sector.

            The implementation of smart hospital technology has the potential to enhance patient care through various means, including the improvement of communication and cooperation among healthcare teams. Instant messaging and smartphone apps provide a rapid means of communication for medical professionals such as doctors, nurses, and pharmacists. This simplifies care coordination and the exchange of crucial information.

            The implementation of smart hospital technology enables precise patient monitoring through intelligent monitoring systems. Intelligent monitoring devices and sensors are capable of quantifying vital physiological parameters such as blood pressure, heart rate, and oxygen saturation levels. The utilisation of deep learning algorithms facilitates the analysis of data to detect anomalies or emergencies in a timely manner, enabling the healthcare team to promptly intervene and administer suitable medical interventions.

            Smart hospital technology employs data derived from electronic medical records, monitoring devices, and other medical sources to conduct data analysis and generate predictive insights. The utilisation of deep learning and artificial intelligence enables the identification of patterns, trends, and forecasts within the given dataset. The insights have the potential to enhance care planning, facilitate evidence-based decision-making, and optimise treatment outcomes.

            The findings of this study shed further light on the importance of using a GUI for smart beds. The proposed framework demonstrated how computer vision can support efficient patient care without the need for additional hardware sensors. It also shows in greater detail how approaches such as deep learning may greatly improve the performance of a proposed system by using computer vision and several sensors as components.

            FUTURE TRENDS FOR SMART HOSPITAL BEDS

            Smart hospital beds will improve patient care and efficiency. Smart hospital bed development may include:

            1. Enhanced communication: mobile apps and smart gadgets may help patients, physicians, and nurses communicate. Virtual and augmented reality may also improve patient communication by providing visual information and instructions.

            2. Intelligent data analysis: smart hospital beds will need intelligent data analysis. Medical gadgets, computerised medical records, and sophisticated monitoring systems generate massive volumes of data. This analysis improves patient care and allows evidence-based decision-making.

            3. Robotics and AI: smart hospital beds may use robotics and AI to simplify operations and help patients. Robots can transport medications and provide basic care, while AI systems can help physicians make data-driven judgements.

            AUTHOR CONTRIBUTIONS

            M.M. and S.A. conceptualised this study, M.B.A. did the methodology, S.Al.O. was responsible for the preparation of software, N.Al.T. and S.A. carried out the validation, M.M. conducted formal analysis, S.A. and F.H. conducted investigation, M.B.A. was responsible for resources management, S.Al.O. was responsible for data curation, N.Al.T. drafted the original manuscript, and S.A. wrote, reviewed, and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

            CONFLICTS OF INTEREST

            The authors declare no conflicts of interest in association with the present study.

            ACKNOWLEDGEMENTS

            The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-042.

            DATA AVAILABILITY STATEMENT

            The data used to support the findings of this study are included within the article.

            REFERENCES

            1. Aadeeb MS, Hassan Munna M, Rahman MR, Islam MN. 2020. Towards developing a hospital cabin management system using brain computer interaction. International Conference on Intelligent Systems Design and Applications; p. 212–224. New York: Springer.

            2. Akter N, Akter M, Turale S. 2019. Barriers to quality of work life among Bangladeshi nurses: a qualitative study. Int. Nurs. Rev. Vol. 66(3):396–403

            3. Alam MGR, Hassan MM, Uddin MZ, Almogren A, Fortino G. 2019. Autonomic computation offloading in mobile edge for IoT applications. Future Gener. Comput. Syst. Vol. 90:149–157

            4. Albawi S, Mohammed TA, Al-Zawi S. 2017. Understanding of a convolutional neural network. 2017 International Conference on Engineering and Technology (ICET); p. 1–6. New York: IEEE.

            5. Amato F, Moscato V, Picariello A, Piccialli F, Sperlí G. 2018. Centrality in heterogeneous social networks for lurkers detection: an approach based on hypergraphs. Concurr. Comput. Pract. Exp. Vol. 30(3):e4188

            6. Arshad J, Ashraf MA, Asim HM, Rasool N, Jaffery MH, Bhatti SI. 2023. Multi-mode electric wheelchair with health monitoring and posture detection using machine learning techniques. Electronics. Vol. 12(5):1132

            7. Bakator M, Radosav D. 2018. Deep learning and medical diagnosis: a review of literature. Multimodal. Technol. Interact. Vol. 2(3):47

            8. Bhardwaj V, Joshi R, Gaur AM. 2022. IoT-based smart health monitoring system for COVID-19. SN Comput. Sci. Vol. 3(2):137

            9. Birku Y, Agrawal H. 2018. Survey on fall detection systems. Int. J. Pure Appl. Math. Vol. 118(18):2537–2543

            10. Carpenter I, Bobby J, Kulinskaya E, Seymour G. 2007. People admitted to hospital with physical disability have increased length of stay: implications for diagnosis related group re-imbursement in England. Age Ageing. Vol. 36(1):73–78

            11. Casadei R, Fortino G, Pianini D, Russo W, Savaglio C, Viroli M. 2019. Modelling and simulation of opportunistic IOT services with aggregate computing. Future Gener. Comput. Syst. Vol. 91:252–262

            12. Chen M, Li W, Fortino G, Hao Y, Hu L, Humar I. 2019. A dynamic service migration mechanism in edge cognitive computing. ACM Trans. Int. Technol. (TOIT). Vol. 19(2):1–15

            13. Deng F, Dong J, Wang X, Fang Y, Liu Y, Yu Z, et al. 2018. Design and implementation of a noncontact sleep monitoring system using infrared cameras and motion sensor. IEEE Trans. Instrum. Meas. Vol. 67(7):1555–1563

            14. Dhillon AS. 2017. Monitoring and alerting system for epilepsy patients. EEE Student Reports (FYP/IA/PA/PI). http://hdl.handle.net/10356/70748

            15. Farid M, Purdy N, Neumann WP. 2020. Using system dynamics modelling to show the effect of nurse workload on nurses’ health and quality of care. Ergonomics. Vol. 63(8):952–964

            16. Gao X, Xu D, Cheng M, Gao S. 2003. A BCI-based environmental controller for the motion-disabled. IEEE Trans. Neural. Syst. Rehabil. Eng. Vol. 11(2):137–140

            17. Hasan Z, Khan RR, Rifat W, Dipu DS, Islam MN, Sarker IH. 2021. Development of a predictive analytic system for chronic kidney disease using ensemble based machine learning. 2021 62nd International Scientific Conference on Information Technology and Management Science of Riga Technical University (ITMS); p. 1–6. New York: IEEE.

            18. He K, Gkioxari G, Dollár P, Girshick R. 2017. Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision; p. 2961–2969. Cambridge, MA, USA.

            19. Hossain T, Sabbir MS-U-A, Mariam A, Inan TT, Islam MN, Mahbub K, et al. 2019. Towards developing an intelligent wheelchair for people with congenital disabilities and mobility impairment. 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT); p. 1–7. New York: IEEE.

            20. Islam MN, Khan SR, Islam NN, Rownok R, Zaman R, Zaman SR. 2021. A mobile application for mental health care during COVID-19 pandemic: development and usability evaluation with system usability scale. International Conference on Computational Intelligence in Information System; p. 33–42. New York: Springer.

            21. Islam MN, Aadeeb MS, Hassan Munna MM, Rahman MR. 2022. A deep learning based multimodal interaction system for bed ridden and immobile hospital admitted patients: design, development and evaluation. BMC Health Serv. Res. Vol. 22(1):803

            22. Jan A, Meng H, Gaus YFBA, Zhang F. 2018. Artificial intelligent system for automatic depression level analysis through visual and vocal expressions. IEEE Trans. Cogn. Dev. Syst. Vol. 10(3):668–680

            23. Kamruzzaman M. 2020. Architecture of smart health care system using artificial intelligence. 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW); p. 1–6. New York: IEEE.

            24. Kanase P, Gaikwad S. 2016. Smart hospitals using internet of things (IOT). Int. Res. J. Eng. Technol. (IRJET). Vol. 3(03):1735–1737

            25. Khan NS, Kundu S, Al Ahsan S, Sarker M, Islam MN. 2018. An assistive system of walking for visually impaired. 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2); p. 1–4. New York: IEEE.

            26. Khan SS, Sunny MSH, Hossain MS, Hossain E, Ahmad M. 2017. Nose tracking cursor control for the people with disabilities: an improved HCI. 2017 3rd International Conference on Electrical Information and Communication Technology (EICT); p. 1–5. New York: IEEE.

            27. Lakkis SI, Elshakankiri M. 2017. IOT based emergency and operational services in medical care systems. 2017 Internet of Things Business Models, Users, and Networks; p. 1–5. New York: IEEE.

            28. LeCun Y, Bengio Y, Hinton G. 2015. Deep learning. Nature. Vol. 521(7553):436–444

            29. Lin T-Y, Dollár P, Girshick R, He K, Hariharan B, Belongie S. 2017. Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; p. 2117–2125. Honolulu, HI, USA.

            30. Liu S, Ostadabbas S. 2017. A vision-based system for in-bed posture tracking. 2017 IEEE International Conference on Computer Vision Workshops (ICCVW); p. 1373–1382. Venice, Italy: IEEE.

            31. Ludwig J. 2013. Image Convolution. Portland State University, Portland.

            32. Mattison M, Marcantonio E, Schmader K, Gandhi T, Lin F. 2013. Hospital Management of Older Adults. UpToDate, Waltham.

            33. McGarry BE, Grabowski DC, Barnett ML. 2020. Severe staffing and personal protective equipment shortages faced by nursing homes during the covid-19 pandemic: study examines staffing and personal protective equipment shortages faced by nursing homes during the COVID-19 pandemic. Health Aff. Vol. 39(10):1812–1821

            34. Merrouche F, Baha N. 2016. Depth camera based fall detection using human shape and movement. 2016 IEEE International Conference on Signal and Image Processing (ICSIP); p. 586–590. Beijing, China: IEEE.

            35. Pace P, Aloi G, Gravina R, Caliciuri G, Fortino G, Liotta A. 2018. An edge-based architecture to support efficient applications for healthcare industry 4.0. IEEE Trans. Ind. Inform. Vol. 15(1):481–489

            36. Piccialli F, Casolla G, Cuomo S, Giampaolo F, Di Cola VS. 2019. Decision making in IOT environment through unsupervised learning. IEEE Intell. Syst. Vol. 35(1):27–35

            37. Piccialli F, Cuomo S, Crisci D, Prezioso E, Mei G. 2020. A deep learning approach for facility patient attendance prediction based on medical booking data. Sci. Rep. Vol. 10(1):1–11

            38. Piccialli F, Di Somma V, Giampaolo F, Cuomo S, Fortino G. 2021. A survey on deep learning in medicine: why, how and when? Inf. Fusion. Vol. 66:111–137

            39. Qureshi SM, Purdy N, Mohani A, Neumann WP. 2019. Predicting the effect of nurse–patient ratio on nurse workload and care quality using discrete event simulation. J. Nurs. Manag. Vol. 27(5):971–980

            40. Qureshi KN, Din S, Jeon G, Piccialli F. 2020. An accurate and dynamic predictive model for a smart m-health system using machine learning. Inf. Sci. Vol. 538:486–502

            41. Raheja JL, Gopinath D, Chaudhary A. 2014. GUI system for elders/patients in intensive care. 2014 IEEE International Technology Management Conference; p. 1–5. New York: IEEE.

            42. Reinhold K, Tint P, Traumann A, Tamme P, Tuulik V, Voolma S-R. 2019. Digital support in logistics of home-care nurses for disabled and elderly people. International Conference on Human Interaction and Emerging Technologies; p. 563–568. New York: Springer.

            43. Ren S, He K, Girshick R, Sun J. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems; p. 91–99. Montreal, Quebec, Canada.

            44. Saha J, Saha AK, Chatterjee A, Agrawal S, Saha A, Kar A, et al. 2018. Advanced IOT based combined remote health monitoring, home automation and alarm system. 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC); p. 602–606. New York: IEEE.

            45. Sathyanarayana S, Satzoda RK, Sathyanarayana S, Thambipillai S. 2018. Vision-based patient monitoring: a comprehensive review of algorithms and technologies. J. Ambient Intell. Human. Comput. Vol. 9(2):225–251

            46. Savaglio C, Ganzha M, Paprzycki M, Bădică C, Ivanović M, Fortino G. 2020. Agent-based internet of things: state-of-the-art and research challenges. Future Gener. Comput. Syst. Vol. 102:1038–1053

            47. Sharma S, Sharma S, Athaiya A. 2017. Activation functions in neural networks. Towards Data Sci. Vol. 6(12):310–316

            48. Shen D, Wu G, Suk H-I. 2017. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. Vol. 19:221–248

            49. Shrestha A, Mahmood A. 2019. Review of deep learning algorithms and architectures. IEEE Access. Vol. 7:53040–53065

            50. Sterpetti AV. 2020. Lessons learned during the covid-19 virus pandemic. J. Am. Coll. Surg. Vol. 230(6):1092–1093

            51. Sujith AV, Sajja GS, Mahalakshmi V, Nuhmani S, Prasanalakshmi B. 2022. Systematic review of smart health monitoring using deep learning and Artificial intelligence. Neurosci. Inform. Vol. 2(3):100028

            52. Sun M, Song Z, Jiang X, Pan J, Pang Y. 2017. Learning pooling for convolutional neural network. Neurocomputing. Vol. 224:96–104

            53. Tam S, Boukadoum M, Campeau-Lecours A, Gosselin B. 2019. A fully embedded adaptive real-time hand gesture classifier leveraging HD-sEMG and deep learning. IEEE Trans. Biomed. Circ. Syst. Vol. 14(2):232–243

            54. Uysal C, Filik T. 2018. MUSIC algorithm for respiratory rate estimation using RF signals. Electrica. Vol. 18(2):300–309

            55. Wang S-C. 2003. Artificial neural network. Interdisciplinary Computing in Java Programming; p. 81–100. New York: Springer.

            56. Wu B-F, Lin C-H. 2018. Adaptive feature mapping for customizing deep learning based facial expression recognition model. IEEE Access. Vol. 6:12451–12461

            Author and article information

            Journal
            jdr
            Journal of Disability Research
            King Salman Centre for Disability Research (Riyadh, Saudi Arabia )
            19 July 2023
            Volume: 2
            Issue: 2
            Pages: 25-36
            Affiliations
            [1 ] Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdul Rahman University, Riyadh 11671, Saudi Arabia ( https://ror.org/05b0cyh02)
            [2 ] Department of Information Systems, College of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia ( https://ror.org/02f81g417)
            [3 ] Information Technology Department, Information Technology College, Ajloun National Private University, Ajloun, Jordan ( https://ror.org/01m28kg79)
            Author notes
            Author information
            https://orcid.org/0000-0001-7964-1051
            Article
            10.57197/JDR-2023-0017
            197eada0-957b-4409-9b38-605c79d4d16e
            Copyright © 2023 The Authors.

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
            : 21 May 2023
            : 22 June 2023
            : 23 June 2023
            Page count
            Figures: 3, Tables: 4, References: 56, Pages: 12
            Funding
            Funded by: King Salman Center for Disability Research
            Award ID: KSRG-2022-042
            Categories

            Medicine
            smart bed, deep learning, convolutional neural network, regression loss, region of interest, overfitting, threshold component evaluation
