
      A survey of transformer-based multimodal pre-trained modals

      Neurocomputing
      Elsevier BV


Most cited references (125)


          Attention Is All You Need

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
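The abstract above describes the architecture in prose; the operation at its core is scaled dot-product attention, which the cited paper defines as Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The short NumPy sketch below illustrates that formula only; the function name, array shapes, and example values are illustrative assumptions, not taken from any released implementation.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> output: (n_q, d_v)
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities, scaled
        scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key positions
        return weights @ V                              # attention-weighted sum of values

    # Hypothetical example: 3 queries attending over 4 key/value positions.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(3, 8))
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 16))
    print(scaled_dot_product_attention(Q, K, V).shape)  # -> (3, 16)

In the full model this operation runs in parallel across multiple heads, which is what makes the architecture more parallelizable than the recurrent models the abstract compares against.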

            Microsoft COCO: Common Objects in Context


              Aggregated Residual Transformations for Deep Neural Networks


                Author and article information

Journal: Neurocomputing
Publisher: Elsevier BV
ISSN: 0925-2312
Publication date: January 2023
Volume: 515
Pages: 89-106
DOI: 10.1016/j.neucom.2022.09.136
Copyright: © 2023
License: https://www.elsevier.com/tdm/userlicense/1.0/
Policies: https://doi.org/10.15223/policy-017, https://doi.org/10.15223/policy-037, https://doi.org/10.15223/policy-012, https://doi.org/10.15223/policy-029, https://doi.org/10.15223/policy-004
