      Open Access

      Natural Language Processing for Radiation Oncology: Personalizing Treatment Pathways

      Review article


          Abstract

          Natural language processing (NLP), a technology that translates human language into machine-readable data, is revolutionizing numerous sectors, including cancer care. This review outlines the evolution of NLP and its potential for crafting personalized treatment pathways for cancer patients. Leveraging NLP’s ability to transform unstructured medical data into structured learnable formats, researchers can tap into the potential of big data for clinical and research applications. Significant advancements in NLP have spurred interest in developing tools that automate information extraction from clinical text, potentially transforming medical research and clinical practices in radiation oncology. Applications discussed include symptom and toxicity monitoring, identification of social determinants of health, improving patient-physician communication, patient education, and predictive modeling. However, several challenges impede the full realization of NLP’s benefits, such as privacy and security concerns, biases in NLP models, and the interpretability and generalizability of these models. Overcoming these challenges necessitates a collaborative effort between computer scientists and the radiation oncology community. This paper serves as a comprehensive guide to understanding the intricacies of NLP algorithms, their performance assessment, past research contributions, and the future of NLP in radiation oncology research and clinics.
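          As a concrete illustration of turning unstructured clinical text into structured, learnable records, as the abstract describes, the sketch below pulls toxicity mentions and CTCAE-style grades out of a free-text radiation oncology note with simple rules. It is a minimal, hypothetical example: the lexicon, the grade pattern, and the sample note are invented for illustration and are not the methods of the studies this review covers.

```python
import re
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical rule-based extraction of toxicity mentions from a free-text
# radiation oncology note: a minimal illustration of converting unstructured
# text into structured records, not the approach of any study in this review.

@dataclass
class ToxicityMention:
    term: str
    grade: Optional[int]  # CTCAE-style grade if one is mentioned nearby

# Small illustrative lexicon of common radiation toxicities (assumed terms).
TOXICITY_TERMS = ["dermatitis", "esophagitis", "mucositis", "fatigue", "xerostomia"]
GRADE_PATTERN = re.compile(r"grade\s*([0-5])", re.IGNORECASE)

def extract_toxicities(note: str) -> List[ToxicityMention]:
    """Scan a clinical note and return structured toxicity mentions."""
    mentions = []
    for term in TOXICITY_TERMS:
        for match in re.finditer(term, note, re.IGNORECASE):
            # Look for a grade in a short window around the mention.
            window = note[max(0, match.start() - 15):match.end() + 30]
            grade_match = GRADE_PATTERN.search(window)
            grade = int(grade_match.group(1)) if grade_match else None
            mentions.append(ToxicityMention(term=term.lower(), grade=grade))
    return mentions

note = "Completed fraction 20 of 35; reports grade 2 esophagitis and mild fatigue."
print(extract_toxicities(note))
# [ToxicityMention(term='esophagitis', grade=2), ToxicityMention(term='fatigue', grade=None)]
```

          Reviewed systems typically replace such hand-written rules with statistical or neural models, but the goal is the same: one structured record per mention that downstream analyses and predictive models can learn from.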


          Most cited references (96)


          Long Short-Term Memory

            Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
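            The gating behaviour described above can be made concrete in a few lines. Below is a minimal single-step LSTM cell in NumPy; the weight shapes, variable names, and random example are illustrative assumptions, not the notation of the original paper.

```python
import numpy as np

# Minimal single-step LSTM cell, sketching the gated update and the additive
# "constant error carousel" cell state described in the abstract above.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: input (d,); h_prev, c_prev: previous hidden and cell state (n,);
    W: (4n, d) input weights; U: (4n, n) recurrent weights; b: (4n,) biases.
    """
    z = W @ x + U @ h_prev + b
    n = h_prev.shape[0]
    i = sigmoid(z[:n])          # input gate: what to write into the cell
    f = sigmoid(z[n:2 * n])     # forget gate: what to keep in the cell
    o = sigmoid(z[2 * n:3 * n]) # output gate: what to expose as hidden state
    g = np.tanh(z[3 * n:])      # candidate cell update
    c = f * c_prev + i * g      # additive cell update (constant error carousel)
    h = o * np.tanh(c)          # new hidden state
    return h, c

# Tiny usage example with random weights (shapes are arbitrary).
rng = np.random.default_rng(0)
d, n = 3, 4
h, c = np.zeros(n), np.zeros(n)
W, U, b = rng.normal(size=(4 * n, d)), rng.normal(size=(4 * n, n)), np.zeros(4 * n)
h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```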

            Attention Is All You Need

            The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
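            The attention mechanism at the core of the Transformer reduces to a short computation. The following NumPy sketch implements scaled dot-product attention for a single head, with no masking or learned projections; the shapes are chosen arbitrarily for illustration.

```python
import numpy as np

# Minimal scaled dot-product attention: each query attends over all keys and
# returns a softmax-weighted sum of the corresponding values.

def scaled_dot_product_attention(Q, K, V):
    """Q: (t_q, d_k), K: (t_k, d_k), V: (t_k, d_v) -> output of shape (t_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ V                                     # weighted sum of values

# Usage example with arbitrary shapes.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(5, 8)), rng.normal(size=(7, 8)), rng.normal(size=(7, 16))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 16)
```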

              Double-slit photoelectron interference in strong-field ionization of the neon dimer

              Wave-particle duality is an inherent peculiarity of the quantum world. The double-slit experiment has been frequently used for understanding different aspects of this fundamental concept. The occurrence of interference rests on the lack of which-way information and on the absence of decoherence mechanisms, which could scramble the wave fronts. Here, we report on the observation of two-center interference in the molecular-frame photoelectron momentum distribution upon ionization of the neon dimer by a strong laser field. Postselection of ions, which are measured in coincidence with electrons, allows choosing the symmetry of the residual ion, leading to the observation of both gerade and ungerade types of interference.

                Author and article information

                Journal
                Pharmacogenomics and Personalized Medicine (Pharmgenomics Pers Med)
                Publisher: Dove
                ISSN: 1178-7066
                Published: 13 February 2024 (2024)
                Volume: 17
                Pages: 65-76
                Affiliations
                [1] Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
                [2] UC Berkeley-UCSF Graduate Program in Bioengineering, University of California, Berkeley and San Francisco, San Francisco, CA, USA
                [3] Bakar Computational Health Sciences Institute, University of California, San Francisco, CA, USA
                [4] Joint Program in Computational Precision Health, University of California, Berkeley and San Francisco, Berkeley, CA, USA
                Author notes
                Correspondence: Julian C Hong, Email julian.hong@ucsf.edu
                Author information
                ORCID: http://orcid.org/0000-0001-5172-6889
                Article
                Article ID: 396971
                DOI: 10.2147/PGPM.S396971
                PMCID: 10874185
                PMID: 38370334
                a87bcfcd-d204-4e39-a2bc-a0937d48e883
                © 2024 Lin et al.

                This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution – Non Commercial (unported, v3.0) License ( http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms ( https://www.dovepress.com/terms.php).

                History
                Received: 23 August 2023
                Accepted: 29 January 2024
                Page count
                Figures: 1, References: 96, Pages: 12
                Categories
                Review

                Subject: Pharmacology & Pharmaceutical medicine
                Keywords: artificial intelligence, personalized medicine, radiation therapy, natural language processing
