
      Overview of Configuring Adaptive Activation Functions for Deep Neural Networks - A Comparative Study

      Journal of Ubiquitous Computing and Communication Technologies
      Inventive Research Organization


          Abstract

          Deep neural networks (DNNs) have recently demonstrated strong performance in the pattern recognition paradigm. Research on DNNs spans network depth, filters, and training and testing datasets, and DNNs now provide solutions to nonlinear partial differential equations (PDEs). This article studies networks in which each neuron may use a different activation function: the activation is selected node by node from a set of candidates so as to minimize classification error. This motivates adaptive activation functions for deep neural networks, in which the activation adapts at every neuron to reduce classification error during training. The article introduces a scaling factor for the activation function that yields better optimization as the training process changes dynamically. The proposed adaptive activation function has better learning capability than a fixed activation function in any neural network. The article compares convergence rate, early-stage training behavior, and accuracy against existing methods, and provides deeper insight into the learning process of various neural networks. This learning process is tested on solutions spanning various frequency bands. In addition, both forward and inverse problems for the parameters of the governing equation are identified. The proposed method has a very simple architecture, and its efficiency, robustness, and accuracy are high when considering nonlinear functions. Overall classification performance improves in the resulting networks, which are trained on common datasets. The proposed work is compared with recent findings in neuroscience research and shows better performance.
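          The trainable scaling idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function name `adaptive_tanh`, the fixed scale `n`, and the toy fitting loop are all assumptions, in the spirit of adaptive activations of the form tanh(n·a·x) where the slope `a` is learned per neuron alongside the network weights.

```python
import numpy as np

def adaptive_tanh(x, a, n=1.0):
    """Hypothetical adaptive activation: tanh with a trainable slope `a`
    and a fixed scaling factor `n` (names assumed for illustration)."""
    return np.tanh(n * a * x)

def dloss_da(x, a, target, n=1.0):
    """Gradient of the mean squared error w.r.t. the slope `a`."""
    out = adaptive_tanh(x, a, n)
    # d/da tanh(n*a*x) = n*x*(1 - tanh(n*a*x)^2), chained with 2*(out - target)
    return np.mean(2.0 * (out - target) * n * x * (1.0 - out ** 2))

# Toy demonstration: learn the slope so the activation matches a steeper tanh,
# the way an adaptive neuron would sharpen its response during training.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=64)
target = np.tanh(5.0 * x)   # behaviour the neuron should adapt towards
a, lr = 0.1, 0.05           # small initial slope, plain gradient descent
for _ in range(500):
    a -= lr * dloss_da(x, a, target)
```

Because the slope is an ordinary differentiable parameter, in a full network it would simply be appended to the weight vector and updated by the same optimizer, which is what lets each neuron select its own effective activation.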

          Most cited references (19)

          • ImageNet classification with deep convolutional neural networks
          • Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations
          • Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (conference proceedings)

                Author and article information

                Journal: Journal of Ubiquitous Computing and Communication Technologies (JUCCT)
                Publisher: Inventive Research Organization
                ISSN: 2582-337X
                Dates: March 2021; May 1 2021
                Volume: 3
                Issue: 1
                Pages: 10-22
                DOI: 10.36548/jucct.2021.1.002
                Article ID: 5e6e845a-0789-404c-93ff-c5b14444bb9d
                © 2021
