---
title: Cer Fact Checking
emoji: 🐠
colorFrom: yellow
colorTo: pink
sdk: streamlit
sdk_version: 1.42.0
app_file: app.py
pinned: false
license: apache-2.0
---

# 🩺 CER Demo: Fact-Checking Biomedical Claims

Welcome to the demo of the CER (Combining Evidence and Reasoning) system for fact-checking biomedical claims. This tool combines PubMed, one of the leading biomedical knowledge bases, with Large Language Models (LLMs) to verify the accuracy of claims, generate justifications, and provide reliable classifications.

## 🎥 Demo

Watch our demo to see how CER supports biomedical fact-checking and enhances the transparency of scientific recommendations!

## 📊 Data Sources

We use the following data sources for training and evaluating the system:

- PubMed: A biomedical database containing over 20 million abstracts.
- HealthFC: 750 biomedical claims curated by Vladika et al. (2024).
- BioASQ-7b: 745 claims from the BioASQ Challenge (Nentidis et al., 2020).
- SciFact: 1.4k expert-annotated scientific claims (Wadden et al., 2020).

## 🛠 Technologies Used

- Python: Core programming language.
- FAISS Indexing: For efficient retrieval of biomedical abstracts.
- Meta-Llama-3.1-405B-Instruct: Language model for generating justifications.
- PubMedBERT: Classifier for claim evaluation.
- Streamlit: For building an interactive user interface.

The system is designed to work on both lightweight setups (Intel i7 CPU, 16GB RAM) and advanced environments with GPUs (e.g., NVIDIA Tesla T4), supporting complex tasks on large datasets.
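
As a rough sketch of how the FAISS index listed above could be built over PubMed abstracts, the snippet below embeds a toy corpus with a sentence-transformers model and searches it with an inner-product index. The embedding model name and the in-memory corpus are illustrative assumptions, not the exact configuration used by CER.

```python
# Minimal sketch: dense indexing of abstracts with FAISS.
# Assumptions: faiss-cpu and sentence-transformers are installed; the model
# name and the tiny corpus below are placeholders, not CER's actual setup.
import faiss
from sentence_transformers import SentenceTransformer

abstracts = [
    "Vitamin D supplementation was associated with improved bone density.",
    "The trial found no significant effect of vitamin C on common cold duration.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
embeddings = encoder.encode(abstracts, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product ~ cosine on normalized vectors
index.add(embeddings)

query = encoder.encode(
    ["Vitamin D reduces the risk of osteoporosis."], normalize_embeddings=True
).astype("float32")
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {abstracts[i]}")
```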

## 🔬 Methodological Workflow

CER follows a structured workflow in three main phases:

  1. Evidence Retrieval: Relevant abstracts are extracted from PubMed using a BM25 retrieval engine.
  2. Justification Generation: The LLM generates explanations based on the retrieved abstracts.
  3. Claim Classification: The classifier evaluates each claim as true, false, or "not enough evidence."

*Figure: overview of the CER methodology.*
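
The three phases above can be compressed into a short sketch. The BM25 step uses the rank_bm25 package over a toy corpus; the justification and classification steps are reduced to placeholder functions, since the exact prompts, LLM endpoint, and label mapping of the actual system are not described in this README.

```python
# Sketch of the three-phase CER workflow (retrieve -> justify -> classify).
# Assumptions: rank_bm25 is installed; generate_justification and classify_claim
# are hypothetical stand-ins for the LLM and PubMedBERT components.
from rank_bm25 import BM25Okapi

corpus = [
    "Vitamin D supplementation improved bone mineral density in older adults.",
    "Calcium intake alone did not reduce fracture risk in the cohort.",
]

# 1. Evidence retrieval: score abstracts against the claim with BM25.
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)
claim = "Vitamin D reduces the risk of osteoporosis."
evidence = bm25.get_top_n(claim.lower().split(), corpus, n=1)

# 2. Justification generation: an LLM explains the claim given the evidence.
def generate_justification(claim: str, evidence: list[str]) -> str:
    # Placeholder for a call to Meta-Llama-3.1-405B-Instruct (or another LLM).
    return f"Justification for '{claim}' based on {len(evidence)} abstract(s)."

# 3. Claim classification: true / false / not enough evidence.
def classify_claim(claim: str, evidence: list[str]) -> str:
    # Placeholder for the PubMedBERT-based classifier.
    return "not enough evidence"

print(evidence)
print(generate_justification(claim, evidence))
print(classify_claim(claim, evidence))
```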

## 🌟 Key Features

- Zero-Shot and Fine-Tuned Classification: Provides reliable fact-checking without the need for extensive task-specific labeled data (see the zero-shot sketch after this list).
- Robustness Across Datasets: Fine-tuning enhances model performance, even when the training and test sets differ.
- Efficient Retrieval: Leverages a sparse retriever (BM25) for quick and accurate evidence extraction from PubMed.
- Transparency: Generates a justification explaining the classification of each claim, making each verdict interpretable.
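
To make the zero-shot idea concrete, here is a generic illustration that scores a claim against a single piece of evidence with an off-the-shelf NLI model via the transformers zero-shot pipeline. This is not the CER classifier; the model choice and label set are assumptions for demonstration only.

```python
# Illustrative zero-shot check of a claim against one piece of evidence.
# NOTE: generic NLI model, not the CER classifier; labels are assumptions.
from transformers import pipeline

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "Vitamin D reduces the risk of osteoporosis."
evidence = "Vitamin D supplementation improved bone mineral density in older adults."

result = nli(
    f"Evidence: {evidence} Claim: {claim}",
    candidate_labels=["supported", "refuted", "not enough evidence"],
)
print(result["labels"][0], result["scores"][0])
```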

## 🚀 Getting Started

Follow these steps to use the CER system demo:

### Prerequisites

- Python 3.9+
- Required libraries: install them with `pip install -r requirements.txt`

### Running the Application

  1. Clone the repository:
    git clone https://github.com/picuslab/CER-Fact-Checking.git
    cd CER-Fact-Checking
    
  2. Create a virtual environment:
    python -m venv venv
    source venv/bin/activate  # On Windows use `venv\Scripts\activate`
    
  3. Run the Streamlit application:
    streamlit run app.py
    
    Open your browser and go to http://localhost:8501 to interact with the application.

### Submitting Claims

Enter a biomedical claim, for example:

"Vitamin D reduces the risk of osteoporosis."

Observe the process of evidence retrieval, justification generation, and classification.
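
For orientation, a minimal Streamlit sketch of this claim-submission flow is shown below. It is not the project's actual app.py; `run_cer_pipeline` is a hypothetical stand-in for the retrieval, justification, and classification steps described earlier.

```python
# Minimal Streamlit sketch of the claim-submission flow (not the real app.py).
import streamlit as st

def run_cer_pipeline(claim: str) -> dict:
    # Placeholder: in the real app this would call the retriever, the LLM,
    # and the PubMedBERT classifier.
    return {
        "label": "not enough evidence",
        "justification": "A generated justification would appear here.",
    }

st.title("CER: Biomedical Fact-Checking")
claim = st.text_input("Enter a biomedical claim", "Vitamin D reduces the risk of osteoporosis.")

if st.button("Check claim") and claim:
    result = run_cer_pipeline(claim)
    st.subheader(f"Verdict: {result['label']}")
    st.write(result["justification"])
```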

## 📈 Conclusions

CER demonstrates how fact-checking using LLMs and evidence retrieval techniques can improve the reliability of medical information. Fine-tuning LLMs proves to be a powerful strategy for enhancing accuracy in fact-checking, even across different datasets. The ability to separate prediction from explanation ensures transparency and reduces bias.

## ⚖ Ethical Considerations

CER is a decision-support tool, not a substitute for professional medical advice. All recommendations must be validated by authorized healthcare providers. This demo uses anonymized data for illustrative purposes.

πŸ™ Acknowledgments

Special thanks to the dataset creators, library developers, and the research team for their contributions to this project.

👨‍💻 This project was developed by Mariano Barone, Antonio Romano, Giuseppe Riccio, Marco Postiglione, and Vincenzo Moscato.