{"id":16442,"date":"2024-05-21T08:17:10","date_gmt":"2024-05-21T06:17:10","guid":{"rendered":"https:\/\/www.dimensions.ai\/?p=16442"},"modified":"2024-05-21T08:18:52","modified_gmt":"2024-05-21T06:18:52","slug":"article-uncovers-extensive-use-of-chatbots-in-scientific-publications","status":"publish","type":"post","link":"https:\/\/www.dimensions.ai\/blog\/article-uncovers-extensive-use-of-chatbots-in-scientific-publications\/","title":{"rendered":"Article uncovers extensive use of chatbots in scientific publications"},"content":{"rendered":"\n<p><strong>Are researchers turning to ChatGPT and other chatbots to write entire papers to survive in the publish or perish culture of the academic environments? A recent <\/strong><strong><em>Scientific American<\/em><\/strong><strong> article highlights how rampant the practice is by analyzing the publication data obtained from databases such as Dimensions.<\/strong><\/p>\n\n\n\n<p>What could the overuse of the words \u201cintricate,\u201d \u201cmeticulous\u201d and \u201ccommendable\u201d in a scientific paper signal? The possible misuse of ChatGPT and other artificial intelligence chatbots to produce a scientific paper, according to a recent <em>Scientific American<\/em> article,<a href=\"https:\/\/www.scientificamerican.com\/article\/chatbots-have-thoroughly-infiltrated-scientific-publishing\/\"> AI Chatbots Have Thoroughly Infiltrated Scientific Publishing<\/a>. Author Chris Stokel-Walker writes that obvious telltale signs, such as including the exact phrasing from a ChatGPT-produced text in science papers, are easy to spot, but a closer analysis reveals the sudden surge of particular words and turn of phrases in the past year after ChatGPT went mainstream.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">A chatbot attack?<\/h3>\n\n\n\n<p>Andrew Gray, a librarian and researcher at University College London, used Dimensions to hunt for \u201cAI Buzzwords\u201d and <a href=\"https:\/\/arxiv.org\/abs\/2403.16887\">found<\/a> that \u201cat least 60,000 papers\u2014slightly more than 1 percent of all scientific articles published globally last year\u2014may have used an LLM.\u201d These buzzwords are those that seemed to appear more often in AI-generated sentences than in typical human writing. One such phrase, according to the article, is \u201ccomplex and multifaceted,\u201d and a quick search in Dimensions reveals that there is definitely an uptick in the&nbsp; occurrence across different fields of research (see figure below).<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-eu.googleusercontent.com\/cjLqnNLki93RAqF2FMLhged5ZI4yhvujl5XUfmoQN4G1OBfQFrfU1-pxLMufZbiR5AA8_ZRqtuOMWvF29zMnYlqIt-gkstObYTuUpaDavOmtQoTVGbhS1Iew--IeAnedFiClqDZmXttJgYuxDr6Jksk\" alt=\"\"\/><\/figure>\n\n\n\n<p>So what is the problem with using large language models (LLMs) to generate scientific literature? Scientific integrity consultant Elisabeth Bik, who was quoted in the <em>Scientific American <\/em>article explained that the AI chatbots are not sufficiently advanced to provide trustworthy outputs and are prone to what is termed as hallucinations, that is, they \u201cmake up\u201d text, including citations that simply do not exist. 
The article also points out that the problem is not just AI-generated text; AI-generated judgements are also creeping into scientific publications.

### Transparency to ensure trustworthy research

But many within the research community recognize that using LLMs to support some aspects of writing a paper is, perhaps, inevitable. Brent Sinclair, in his article [Letting ChatGPT do your science is fraudulent (and a bad idea), but AI-generated text can enhance inclusiveness in publishing](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10172689/), argues that "AI-generated text has the potential to make science publishing more inclusive by reducing the language barrier." However, he emphasizes that a "scientist still needs to check [the AI-generated text], add references and context, and be fully accountable for the contents."

This sentiment is underlined in the [2023 statement put out by *Nature*](https://www.nature.com/articles/d41586-023-00191-1) and echoed by the publishing guidelines of many scientific publishers: "No LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility." And if an LLM tool has been used to support the writing, authors must be transparent about that use: "researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM."

The newly launched Dimensions Research GPT / Enterprise and the fully integrated summarization feature have been developed with this need to ensure trust in research in mind. These solutions combine the power of AI technologies with the robust scientific data available through Dimensions, the world's largest collection of linked research data, to provide answers that are grounded in evidence.
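As a rough illustration of what such evidence-grounded answering can look like in principle, the toy Python sketch below retrieves supporting passages first and then builds a prompt that asks the model to answer only from those passages and cite them. Everything in it (the corpus, `retrieve()`, `build_prompt()`) is hypothetical and illustrative; it is not the Dimensions Research GPT implementation.

```python
# Toy sketch of retrieval-grounded answering: retrieve passages, then prompt
# the model to answer ONLY from them and cite each claim. Hypothetical code,
# not the Dimensions Research GPT implementation.

CORPUS = {
    "pub.1001": "Large language models can fabricate plausible-looking citations.",
    "pub.1002": "Retrieval-augmented generation grounds answers in retrieved documents.",
}

def retrieve(question: str, corpus: dict, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[tuple[str, str]]) -> str:
    """Ask the model to answer only from the passages and cite their IDs."""
    sources = "\n".join(f"[{pid}] {text}" for pid, text in passages)
    return (
        "Answer the question using ONLY the sources below. "
        "Cite the source ID after every claim; if the sources are "
        "insufficient, say so instead of guessing.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    q = "Why do chatbot-written papers contain citations that do not exist?"
    prompt = build_prompt(q, retrieve(q, CORPUS))
    print(prompt)  # this prompt would then be sent to an LLM of your choice
```

Because every claim must point back to a retrieved, identifiable source, fabricated citations are far easier to catch than in free-form chatbot output.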
In practice, this gives authors an advanced literature discovery workflow that merges the scientific evidence base of Dimensions with the generative AI functionality of ChatGPT, reducing the likelihood of the hallucinations discussed above and providing click-through scholarly references for each statement, so that claims can be validated quickly and explored further.

If you want more information on how Dimensions data can be used to support publishers and authors, [contact the Dimensions team](https://www.dimensions.ai/request-a-demo-or-quote/).