
      Comparisons of Quality, Correctness, and Similarity Between ChatGPT-Generated and Human-Written Abstracts for Basic Research: Cross-Sectional Study

      research-article
      Shu-Li Cheng, PhD 1; Shih-Jen Tsai, MD 2,3; Ya-Mei Bai, MD, PhD 2,3; Chih-Hung Ko, MD, PhD 4,5,6; Chih-Wei Hsu, MD 7; Fu-Chi Yang, MD, PhD 8; Chia-Kuang Tsai, MD, PhD 8; Yu-Kang Tu, PhD 9,10; Szu-Nian Yang, MD 11,12,13; Ping-Tao Tseng, MD 14,15,16; Tien-Wei Hsu, MD 17,18; Chih-Sung Liang, MD 19,20; Kuan-Pin Su, MD, PhD 21,22,23
      Journal of Medical Internet Research
      JMIR Publications
      ChatGPT, abstract, AI-generated scientific content, plagiarism, artificial intelligence, NLP, natural language processing, LLM, language model, language models, text, textual, generation, generative, extract, extraction, scientific research, academic research, publication, publications, abstracts


          Abstract

          Background

          ChatGPT may act as a research assistant, helping to organize the direction of thinking and summarize research findings. However, few studies have examined the quality, similarity (ie, how closely a generated abstract resembles the original), and accuracy of the abstracts ChatGPT generates when researchers provide full-text basic research papers.

          Objective

          We aimed to assess the applicability of an artificial intelligence (AI) model in generating abstracts for basic preclinical research.

          Methods

          We selected 30 basic research papers from Nature, Genome Biology, and Biological Psychiatry. Excluding the abstracts, we input the full text of each paper into ChatPDF, an application of a language model based on ChatGPT, and prompted it to generate an abstract in the same style as the original paper. A total of 8 experts were invited to evaluate the quality of these abstracts (on a Likert scale of 0-10) and, blinded, to identify which abstracts were generated by ChatPDF. The generated abstracts were also evaluated for their similarity to the originals and for the accuracy of the AI-generated content.

          Results

          The quality of the ChatGPT-generated abstracts was lower than that of the actual abstracts (10-point Likert scale: mean 4.72, SD 2.09 vs mean 8.09, SD 1.03; P<.001). The difference in quality was larger for the unstructured format (mean difference –4.33; 95% CI –4.79 to –3.86; P<.001) than for the 4-subheading structured format (mean difference –2.33; 95% CI –2.79 to –1.86). Among the 30 ChatGPT-generated abstracts, 3 drew the wrong conclusion, and 10 were flagged as AI content. The percentage of similarity between the original and the generated abstracts was low (2.10%-4.40%). The blinded reviewers achieved a 93% (224/240) accuracy rate in identifying which abstracts were written using ChatGPT.
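The reviewer-accuracy figure is simple to reproduce. The sketch below recomputes the 224/240 rate and attaches a Wilson 95% CI; the interval itself is our illustrative addition (the paper reports only the point estimate):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Reviewers correctly classified 224 of 240 abstract judgments
# (8 reviewers x 30 abstracts), per the Results above.
p = 224 / 240
lo, hi = wilson_ci(224, 240)
print(f"accuracy = {p:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The Wilson interval is used here rather than the simpler normal approximation because it behaves better for proportions near 0 or 1, as this one is.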

          Conclusions

          Using ChatGPT to generate a scientific abstract may not raise similarity concerns when the input is a real, human-written full text. However, the quality of the generated abstracts was suboptimal, and their accuracy was not 100%.

          Related collections

          Most cited references (8)


          ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns

          ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Using the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhanced research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlined workflow, cost savings, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education, including improved personalized learning and a focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records, including ethical, copyright, transparency, and legal issues; the risk of bias; plagiarism; lack of originality; inaccurate content with risk of hallucination; limited knowledge; incorrect citations; cybersecurity issues; and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations. As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia.

            Can artificial intelligence help for scientific writing?

            This paper discusses the use of artificial intelligence chatbots in scientific writing. ChatGPT is a chatbot, developed by OpenAI, that uses the Generative Pre-trained Transformer (GPT) language model to understand and respond to natural language inputs. AI chatbots, and ChatGPT in particular, appear to be useful tools in scientific writing, assisting researchers and scientists in organizing material, generating an initial draft, and proofreading. No publication in the field of critical care medicine has yet been prepared using this approach; however, this may become a possibility in the near future. ChatGPT should not be used as a replacement for human judgment, and its output should always be reviewed by experts before being used in any critical decision-making or application. Moreover, several ethical issues arise from using these tools, such as the risk of plagiarism and inaccuracies, as well as a potential imbalance in accessibility between high- and low-income countries if the software becomes a paid service. For these reasons, a consensus on how to regulate the use of chatbots in scientific writing will soon be required.

              Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers

              Large language models such as ChatGPT can produce increasingly realistic text, and little is known about the accuracy and integrity of using these models in scientific writing. We gathered 50 research abstracts from five high-impact-factor medical journals and asked ChatGPT to generate research abstracts based on their titles and journals. Most generated abstracts were detected using an AI output detector, the GPT-2 Output Detector, with a median "fake" score (higher meaning more likely to be generated) of 99.98% (IQR 12.73%-99.98%), compared with a median of 0.02% (IQR 0.02%-0.09%) for the original abstracts. The AUROC of the AI output detector was 0.94. Generated abstracts scored lower than original abstracts when run through a plagiarism detector website and iThenticate (higher scores meaning more matching text found). When given a mixture of original and generated abstracts, blinded human reviewers correctly identified 68% of generated abstracts as being generated by ChatGPT but incorrectly identified 14% of original abstracts as generated. Reviewers indicated that it was surprisingly difficult to differentiate between the two, though abstracts they suspected were generated were vaguer and more formulaic. ChatGPT writes believable scientific abstracts, though with completely generated data. Depending on publisher-specific guidelines, AI output detectors may serve as an editorial tool to help maintain scientific standards. The boundaries of ethical and acceptable use of large language models to help scientific writing are still being discussed, and different journals and conferences are adopting varying policies.
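For readers unfamiliar with the 0.94 AUROC quoted above: AUROC is the probability that a randomly chosen generated abstract receives a higher detector "fake" score than a randomly chosen original one (ties counting as half). A minimal pairwise sketch, with made-up scores for illustration (not the study's data):

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the fraction of (positive, negative) pairs in which
    the positive (AI-generated) outscores the negative (original);
    ties count as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical detector "fake" scores, invented for this example
generated = [99.98, 85.0, 12.73, 99.9]   # AI-generated abstracts
original  = [0.02, 0.09, 0.02, 5.0]      # human-written abstracts
print(auroc(generated, original))  # 1.0: these toy scores separate perfectly
```

A score threshold turns the same detector into a classifier; AUROC summarizes performance across all thresholds, which is why it complements the single-threshold reviewer accuracy figures reported in these studies.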

                Author and article information

                Contributors
                Journal
                J Med Internet Res
                JMIR
                Journal of Medical Internet Research
                JMIR Publications (Toronto, Canada )
                1439-4456
                1438-8871
                2023
                25 December 2023
                Volume 25
                Article e51229
                Affiliations
                [1 ] Department of Nursing Mackay Medical College Taipei Taiwan
                [2 ] Department of Psychiatry Taipei Veterans General Hospital Taipei Taiwan
                [3 ] Division of Psychiatry, School of Medicine, National Yang-Ming University Taipei Taiwan
                [4 ] Department of Psychiatry Kaohsiung Medical University Hospital Kaohsiung Taiwan
                [5 ] Department of Psychiatry College of Medicine Kaohsiung Medical University Kaohsiung Taiwan
                [6 ] Department of Psychiatry Kaohsiung Municipal Siaogang Hospital Kaohsiung Medical University Kaohsiung Taiwan
                [7 ] Department of Psychiatry Kaohsiung Chang Gung Memorial Hospital Kaohsiung Taiwan
                [8 ] Department of Neurology Tri-Service General Hospital National Defense Medical Center Taipei Taiwan
                [9 ] Institute of Epidemiology and Preventive Medicine College of Public Health National Taiwan University Taipei Taiwan
                [10 ] Department of Dentistry National Taiwan University Hospital Taipei Taiwan
                [11 ] Department of Psychiatry Tri-Service General Hospital Beitou Branch Taipei Taiwan
                [12 ] Department of Psychiatry Armed Forces Taoyuan General Hospital Taoyuan Taiwan
                [13 ] Graduate Institute of Health and Welfare Policy National Yang Ming Chiao Tung University Taipei Taiwan
                [14 ] Institute of Biomedical Sciences Institute of Precision Medicine National Sun Yat-sen University Kaohsiung Taiwan
                [15 ] Department of Psychology College of Medical and Health Science Asia University Taichung Taiwan
                [16 ] Prospect Clinic for Otorhinolaryngology and Neurology Kaohsiung Taiwan
                [17 ] Department of Psychiatry E-Da Dachang Hospital I-Shou University Kaohsiung Taiwan
                [18 ] Department of Psychiatry E-Da Hospital I-Shou University Kaohsiung Taiwan
                [19 ] Department of Psychiatry Tri-Service General Hospital Beitou Branch Taipei Taiwan
                [20 ] Department of Psychiatry National Defense Medical Center Taipei Taiwan
                [21 ] College of Medicine China Medical University Taichung Taiwan
                [22 ] Mind-Body Interface Laboratory China Medical University and Hospital Taichung Taiwan
                [23 ] An-Nan Hospital China Medical University Tainan Taiwan
                Author notes
                Corresponding Author: Tien-Wei Hsu s9801101@gmail.com
                Author information
                https://orcid.org/0000-0002-1523-8519
                https://orcid.org/0000-0002-9987-022X
                https://orcid.org/0000-0003-3779-9074
                https://orcid.org/0000-0001-8034-0221
                https://orcid.org/0000-0002-8650-4060
                https://orcid.org/0000-0001-6831-3634
                https://orcid.org/0000-0001-7693-1408
                https://orcid.org/0000-0002-2461-474X
                https://orcid.org/0000-0002-6091-0263
                https://orcid.org/0000-0001-5761-7800
                https://orcid.org/0000-0003-4136-1251
                https://orcid.org/0000-0003-1138-5586
                https://orcid.org/0000-0002-4501-2502
                Article
                v25i1e51229
                DOI: 10.2196/51229
                PMCID: PMC10760418
                PMID: 38145486
                ©Shu-Li Cheng, Shih-Jen Tsai, Ya-Mei Bai, Chih-Hung Ko, Chih-Wei Hsu, Fu-Chi Yang, Chia-Kuang Tsai, Yu-Kang Tu, Szu-Nian Yang, Ping-Tao Tseng, Tien-Wei Hsu, Chih-Sung Liang, Kuan-Pin Su. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.12.2023.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

                History
                25 July 2023
                13 October 2023
                17 October 2023
                20 November 2023
                Categories
                Original Paper

                Medicine
