---
license: other
license_name: common-crawl
license_link: LICENSE
task_categories:
- text-generation
language:
- en
pretty_name: Clinical Guidelines
size_categories:
- 10K<n<100K
---
# Clinical Guidelines
The Clinical Guidelines corpus is a new dataset of 46,469 clinical practice guidelines from 16 high-quality online medical sources. This dataset serves as a crucial component of the original training corpus of the Meditron Large Language Model (LLM). We publicly release a subset of 35,733 articles from our Guidelines corpus, extracted from the 8 of 16 sources that allow content redistribution, namely CCO, CDC, CMA, ICRC, NICE, SPOR, WHO, and WikiDoc.
You can scrape and clean all 16 guideline sources using our code at [epfLLM/meditron](https://github.com/epfLLM/meditron).
| Source | Full Name | Tag | Guidelines | Words | Audience | Country | Released |
|---|---|---|---|---|---|---|---|
| AAFP | American Academy of Family Physicians | `aafp` | 50 | 9.4K | Doctor | USA | No |
| CCO | Cancer Care Ontario | `cco` | 87 | 199K | Doctor | Canada | Yes |
| CDC | Centers for Disease Control and Prevention | `cdc` | 621 | 6.7M | Doctor | USA | Yes |
| CMA | Canadian Medical Association | `cma` | 431 | 1.7M | Doctor | Canada | Yes |
| CPS | Canadian Paediatric Society | `cps` | 54 | 133K | Doctor | Canada | No |
| drugs.com | Drugs.com | `drugs` | 6548 | 4.1M | Both | NZ | No |
| GuidelineCentral | GuidelineCentral | `gc` | 1029 | 1M | Doctor | Mix | No |
| ICRC | International Committee of the Red Cross | `icrc` | 49 | 1.2M | Doctor | Switzerland | Yes |
| IDSA | Infectious Diseases Society of America | `idsa` | 47 | 646K | Doctor | USA | No |
| MAGIC | Making GRADE The Irresistible Choice | `magic` | 52 | 415K | Doctor | Mix | No |
| MayoClinic | MayoClinic | `mayo` | 1100 | 2.2M | Patient | USA | No |
| NICE | National Institute for Health and Care Excellence | `nice` | 1656 | 8.1M | Doctor | UK | Yes |
| RCH | Royal Children's Hospital Melbourne | `rch` | 384 | 410K | Doctor | Australia | No |
| SPOR | Strategy for Patient-Oriented Research | `spor` | 217 | 1.1M | Doctor | Canada | Yes |
| WHO | World Health Organization | `who` | 223 | 3.1M | Both | Switzerland | Yes |
| WikiDoc | WikiDoc | `wikidoc` | 33058 | 34M | Both | International | Yes |
## Dataset Details

### Dataset Description
- Curated by: EPFL LLM Team
- Funded by: [More Information Needed]
- Language(s): English only
- License: Common Crawl Foundation Terms of Use
- Knowledge Cutoff: August 2023
### Dataset Sources

- Repository: [epfLLM/meditron](https://github.com/epfLLM/meditron)
- Paper: [MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)
## Uses

### Direct Use
The dataset is intended for use in tasks related to text generation, specifically in the context of clinical practice guidelines. It can be employed for training language models and other natural language processing applications within the healthcare domain.
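As a minimal sketch of this use case, the released subset can be tokenized for language-model training. Note that the Hub identifier `epfl-llm/guidelines`, the `train` split, and the tokenizer choice are assumptions for illustration, not specifications from this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The hub identifier "epfl-llm/guidelines" and the "train" split are
# assumptions; adjust them to wherever the dataset is hosted.
dataset = load_dataset("epfl-llm/guidelines", split="train")

# Any tokenizer works; "gpt2" is used only to keep the sketch self-contained.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    # Train on the cleaned article text ("clean_text", see Dataset Structure).
    return tokenizer(batch["clean_text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
```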
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure

Each row of the dataset represents one clinical practice guideline article and consists of the following fields (all strings):
| Field | Description | Sources with field |
|---|---|---|
| `id` | Unique identifier for each article | All |
| `source` | Source tag (`cco`, `cdc`, `cma`, `icrc`, `nice`, `spor`, `who`, or `wikidoc`) | All |
| `title` | Title of the article | CMA, NICE & WikiDoc only |
| `url` | URL of the article | NICE & WikiDoc only |
| `raw_text` | Unprocessed scraped article text | All |
| `clean_text` | Cleaned and formatted article text | All |
| `overview` | Short summary of the article | NICE only |
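A minimal loading-and-inspection sketch (again assuming the `epfl-llm/guidelines` hub identifier):

```python
from datasets import load_dataset

# Hub identifier and split name are assumptions; adjust as needed.
dataset = load_dataset("epfl-llm/guidelines", split="train")

article = dataset[0]
print(article["source"])            # source tag, e.g. "cco"
print(article["clean_text"][:300])  # start of the cleaned article text
# "title", "url", and "overview" are only populated for some sources.
```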
## Dataset Creation

### Curation Rationale
The dataset was curated to provide a high-quality collection of clinical practice guidelines (CPGs) for the medical training of LLMs. Our Clinical Guidelines corpus comprises 46,469 articles from 16 globally recognized sources, offering clinician- and patient-directed guidance across high- and low-resource settings, multiple medical domains (internal medicine, pediatrics, oncology, infectious disease, etc.), and multiple geographical locations.
Clinical practice guidelines are rigorously researched frameworks designed to guide healthcare practitioners and patients in making evidence-based decisions about diagnosis, treatment, and management. They are compiled through a systematic process of collaborative consensus among experts, distilling the latest evidence on best practices into recommendations that maximize benefit in light of practical concerns such as available resources and context. As a super-synthesis of meta-analyses, they sit atop the evidence pyramid and form the basis of actionable evidence-based practice. CPGs are produced at various geographic and organizational granularities, ranging from global initiatives directed by international professional medical associations to informal consortia, regional and national governmental bodies, and individual NGOs and hospitals.
### Source Data

The dataset is sourced from 16 globally recognized medical entities covering a wide range of healthcare contexts and audiences. The geographic scope ranges from global (WHO) to national (CDC, NICE), regional (Ontario, Melbourne), and institutional (ICRC, Mayo Clinic). The corpus also represents healthcare concerns from high- (Ontario, Melbourne), low- (WHO), and volatile-resource (ICRC) settings. The guidelines contain a range of technical and conversational vocabulary, with target audiences of clinicians, patients, or both, and are sometimes highly specialized within a theme (cancer, pediatrics, infectious disease). Peer-review processes likewise range from UN bodies (WHO) and institutional review boards (ICRC) to professional associations (AAFP) and publicly crowdsourced knowledge bases (WikiDoc). Article length varies widely, from very short statements to guides of 100+ pages.
#### Data Collection and Processing
PDF documents were converted to text using GROBID.
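For reference, here is a minimal sketch of such a conversion step, assuming a locally running GROBID server on its default port. This mirrors GROBID's public REST API but is not necessarily our exact invocation:

```python
import requests

# Illustrative sketch: send one PDF to a locally running GROBID server
# (default port 8070) and receive TEI XML back.
with open("guideline.pdf", "rb") as pdf:
    response = requests.post(
        "http://localhost:8070/api/processFulltextDocument",
        files={"input": pdf},
    )
response.raise_for_status()
tei_xml = response.text  # TEI XML; plain text still needs to be extracted
```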
After extracting the raw text from each source, we cleaned the data with an ad-hoc process to exclude irrelevant or repetitive content that did not contribute to the textual content, such as URLs, references, figures, table delimiters, and ill-formatted characters. This filtering procedure was tuned separately for each source using a sample of 50 articles. Note that this procedure is not perfect: it may have removed useful information or kept superfluous content. We therefore provide the `raw_text` for each article in case you would like to perform your own cleaning step.
Additionally, the text was standardized to a unified format, with hierarchical section headers indicated by `#`, homogeneous spacing (`\n\n` separating paragraphs), and normalized lists formatted with `- ` bullet points.
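For illustration, a `clean_text` entry might look like the following; the medical content below is invented purely to show the formatting conventions:

```
# Management of acute asthma in children

## Initial assessment

Assess severity using respiratory rate, oxygen saturation, and ability to speak.

- Mild: manage with inhaled bronchodilators
- Severe: refer to emergency care immediately
```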
Finally, all samples were deduplicated using title matching, and articles that were too short or not in English were filtered out.
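A minimal sketch of this final pass (the word-count threshold is an illustrative assumption, and the English-language check is omitted):

```python
# Minimal sketch of the final filtering pass (not the exact pipeline):
# deduplicate by normalized title and drop very short articles.
def dedup_and_filter(articles, min_words=100):
    seen_titles = set()
    kept = []
    for article in articles:
        title = article.get("title", "").strip().lower()
        if title and title in seen_titles:
            continue  # duplicate title: keep only the first occurrence
        if len(article["clean_text"].split()) < min_words:
            continue  # too short to be a useful guideline
        seen_titles.add(title)
        kept.append(article)
    return kept
```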
#### Who are the source data producers?
We employed pragmatic selection criteria over medical sources, seeking CPGs that were:
- (1) open-access
- (2) systematically formatted with homogeneous textual structure (i.e., in a format in which automated processes could be deployed without excessive risk of misaligning textual sequences)
- (3) in the language predominantly represented by the pre-training corpus of Llama (i.e., English)
- (4) covering a breadth of medical sub-domains, audiences (clinician, nurse, patient), and resource settings (high, low, and humanitarian response settings)
### Personal and Sensitive Information
As the articles are publicly accessible, no personal or sensitive information is included.
## Bias, Risks, and Limitations
Most guideline sources offer reliable and factual information, authored by trusted health professionals. However, users should exercise caution when relying on content from WikiDoc, as it is a crowdsourced encyclopedia. While it generally maintains high quality, there are no guarantees regarding its content.
### Recommendations
[More Information Needed]
## Citation

To cite the Clinical Guidelines corpus, please use:
```bibtex
@software{meditron2023,
  author = {[ADD AUTHORS]},
  title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models},
  month = {November},
  year = {2023},
  url = {https://github.com/epfLLM/meditron}
}
```
## Authors
- Curation: Mary-Anne Hartley
- Scraping: Antoine Bonnet, Alexandre Sallinen, Igor Krawczuk
- Cleaning: Antoine Bonnet, Alexandre Sallinen