Use of Retrieval-Augmented Large Language Model for COVID-19 Fact-Checking: Development and Usability Study

Hai Li, Jingyi Huang, Mengmeng Ji, Yuyi Yang, Ruopeng An

Research output: Contribution to journal › Article › peer-review

Abstract

Background: The COVID-19 pandemic has been accompanied by an "infodemic," in which the rapid spread of misinformation has exacerbated public health challenges. Traditional fact-checking methods, though effective, are time-consuming and resource-intensive, limiting their ability to combat misinformation at scale. Large language models (LLMs) such as GPT-4 offer a more scalable solution, but their susceptibility to generating hallucinations (plausible yet incorrect information) compromises their reliability.

Objective: This study aims to enhance the accuracy and reliability of COVID-19 fact-checking by integrating a retrieval-augmented generation (RAG) system with LLMs, specifically addressing the limitations of hallucination and context inaccuracy inherent in stand-alone LLMs.

Methods: We constructed a context dataset comprising approximately 130,000 peer-reviewed papers related to COVID-19 from PubMed and Scopus. This dataset was integrated with GPT-4 to develop multiple RAG-enhanced models: the naïve RAG, Lord of the Retrievers (LOTR)-RAG, corrective RAG (CRAG), and self-RAG (SRAG). The RAG systems were designed to retrieve relevant external information, which was then embedded and indexed in a vector store for similarity searches. One real-world dataset and one synthesized dataset, each containing 500 claims, were used to evaluate the performance of these models. Each model's accuracy, F1-score, precision, and sensitivity were compared to assess their effectiveness in reducing hallucination and improving fact-checking accuracy.

Results: The baseline GPT-4 model achieved an accuracy of 0.856 on the real-world dataset. The naïve RAG model improved this to 0.946, while the LOTR-RAG model further increased accuracy to 0.951. The CRAG and SRAG models outperformed all others, achieving accuracies of 0.972 and 0.973, respectively. The baseline GPT-4 model reached an accuracy of 0.960 on the synthesized dataset. The naïve RAG model increased this to 0.972, and the LOTR-RAG, CRAG, and SRAG models achieved an accuracy of 0.978. These findings demonstrate that the RAG-enhanced models consistently maintained high accuracy levels, closely mirroring ground-truth labels and significantly reducing hallucinations. The CRAG and SRAG models also provided more detailed and contextually accurate explanations, further establishing the superiority of agentic RAG frameworks in delivering reliable and precise fact-checking outputs across diverse datasets.

Conclusions: Integrating RAG systems with LLMs substantially improves the accuracy and contextual relevance of automated fact-checking. By reducing hallucinations and enhancing transparency through citation of retrieved sources, this method holds significant promise for rapid, reliable information verification to combat misinformation during public health crises.
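For readers unfamiliar with the retrieve-then-generate pattern described in the Methods, the sketch below illustrates a minimal naïve RAG fact-checking loop: corpus passages are embedded and indexed in a vector store, the claim is used for a similarity search, and the retrieved evidence is passed to GPT-4 as grounding context. This is an illustrative sketch only, assuming LangChain's FAISS vector store and OpenAI wrappers; the package names, prompt wording, and toy corpus are assumptions for demonstration, not the authors' implementation.

```python
"""Minimal naive-RAG fact-checking sketch (illustrative only).

Assumes the langchain-community, langchain-openai, and faiss-cpu packages
and an OpenAI API key; class names may differ across library versions.
"""
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

# Toy stand-in for the ~130,000 peer-reviewed COVID-19 papers used in the study.
corpus = [
    "Randomized trials show COVID-19 vaccines substantially reduce severe disease.",
    "Large clinical trials found no benefit of hydroxychloroquine for COVID-19.",
]

# 1. Embed the corpus and index it in a vector store for similarity search.
embeddings = OpenAIEmbeddings()
vector_store = FAISS.from_texts(corpus, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 2})

# 2. Retrieve the passages most similar to the claim being checked.
claim = "COVID-19 vaccines provide no protection against severe illness."
evidence = retriever.invoke(claim)
context = "\n".join(doc.page_content for doc in evidence)

# 3. Ask the LLM for a verdict grounded only in the retrieved evidence.
prompt = (
    "Using only the evidence below, label the claim TRUE or FALSE and "
    "briefly explain, citing the evidence.\n\n"
    f"Evidence:\n{context}\n\nClaim: {claim}"
)
llm = ChatOpenAI(model="gpt-4", temperature=0)
print(llm.invoke(prompt).content)
```

The corrective and self-reflective variants evaluated in the study add further steps (for example, grading or re-querying the retrieved evidence before generation), but the embed, retrieve, and generate stages shown above are common to all of them.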

Original language: English (US)
Article number: e66098
Journal: Journal of Medical Internet Research
Volume: 27
Issue number: 1
State: Published - 2025

Keywords

  • accuracy
  • artificial intelligence
  • ChatGPT
  • coronavirus
  • COVID-19
  • disinformation
  • fact-checking
  • infectious
  • infodemic
  • large language model
  • machine learning
  • misinformation
  • natural language processing
  • pandemic
  • pulmonary
  • respiratory
  • retrieval-augmented generation
  • SARS-CoV-2

ASJC Scopus subject areas

  • Health Informatics
