
      A novel extended multimodal AI framework towards vulnerability detection in smart contracts

      Information Sciences
      Elsevier BV



Most cited references: 35


          Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

          Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
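As a rough illustration of the fixed-length-output idea the abstract above describes, here is a minimal sketch of an SPP-style layer in PyTorch. This is not the cited paper's code; the class name SpatialPyramidPooling and the (1, 2, 4) pyramid levels are assumptions chosen for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Pools a feature map of arbitrary spatial size into a fixed-length vector."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels  # bins per side at each pyramid level (assumed values)

    def forward(self, x):
        # x: (batch, channels, height, width); height/width may vary per input
        pooled = []
        for bins in self.levels:
            # adaptive max pooling always yields a bins x bins grid, whatever the input size
            p = F.adaptive_max_pool2d(x, output_size=(bins, bins))
            pooled.append(p.flatten(start_dim=1))  # (batch, channels * bins * bins)
        # concatenating the levels gives a fixed-length representation:
        # channels * sum(bins^2) values, independent of the input resolution
        return torch.cat(pooled, dim=1)

if __name__ == "__main__":
    spp = SpatialPyramidPooling()
    for h, w in [(14, 14), (11, 19)]:      # two different feature-map sizes
        feats = torch.randn(2, 256, h, w)  # mock convolutional feature maps
        print((h, w), spp(feats).shape)    # same output length in both cases

Because the output length does not depend on the input resolution, a layer like this can sit between the last convolutional layer and the fully connected layers, which is the fixed-size-input requirement the abstract refers to.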

            BERT: Pre-training of deep bidirectional transformers for language understanding


              Distributed representations of sentences and documents


                Author and article information

                Contributors
Journal
Information Sciences (Elsevier BV)
ISSN: 0020-0255
Publication date: July 2023
Volume: 636
Article number: 118907
DOI: 10.1016/j.ins.2023.03.132
                © 2023

                https://www.elsevier.com/tdm/userlicense/1.0/

                https://doi.org/10.15223/policy-017

                https://doi.org/10.15223/policy-037

                https://doi.org/10.15223/policy-012

                https://doi.org/10.15223/policy-029

                https://doi.org/10.15223/policy-004



scite Smart Citations
Citing publications: 19 | Supporting: 0 | Mentioning: 1 | Contrasting: 0

                See how this article has been cited at scite.ai

                scite shows how a scientific paper has been cited by providing the context of the citation, a classification describing whether it supports, mentions, or contrasts the cited claim, and a label indicating in which section the citation was made.

Similar content: 1,378

Most referenced authors: 270