---
license: apache-2.0
tags:
  - medical
datasets:
  - biomed
---

# BioMedGPT-LM-7B

In this repo, we present BioMedGPT-LM, a medical language model that is the first commercially friendly GPT model in the biomedical domain and demonstrates superior performance over existing LLMs of the same parameter size. We are releasing BioMedGPT-LM-7B, a 7B model obtained by fine-tuning LLaMA2-7B-Chat on PMC abstracts and papers from S2ORC.

## Training Details

The model was trained with the following hyperparameters (see the configuration sketch after the list):

- Epochs: 5
- Batch size: 192
- Cutoff length: 2048
- Learning rate: 2e-5
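As an illustration only, here is how these hyperparameters could be mapped onto a Hugging Face `TrainingArguments` object. The training script itself was not released, so everything beyond the four values above, in particular the split of the global batch size across devices and gradient accumulation, is an assumption:

```python
from transformers import TrainingArguments

# Hypothetical sketch: the reported hyperparameters expressed as Hugging Face
# TrainingArguments. Only the four global values above come from the card;
# the 8 x 24 batch split (single node assumed) and output path are guesses.
training_args = TrainingArguments(
    output_dir="biomedgpt-lm-7b-finetune",  # hypothetical output path
    num_train_epochs=5,                     # Epochs: 5
    per_device_train_batch_size=8,          # assumed per-device micro-batch
    gradient_accumulation_steps=24,         # 8 * 24 = 192 effective batch size
    learning_rate=2e-5,                     # Learning rate: 2e-5
)
```

The cutoff length of 2048 applies at tokenization time rather than in `TrainingArguments`, e.g. `tokenizer(text, truncation=True, max_length=2048)`.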

## Overview

BioMedGPT-LM-7B was fine-tuned on over 26 billion tokens highly pertinent to the field of biomedicine. The fine-tuning data were extracted from 5.5 million biomedical papers in S2ORC, using PubMed Central (PMC) IDs and PubMed IDs as selection criteria.
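A minimal sketch of this selection step, assuming S2ORC records are stored as JSON lines with `pmc_id` and `pubmed_id` metadata fields (the exact field names vary between S2ORC releases):

```python
import json

# Keep only S2ORC papers carrying a PMC ID or a PubMed ID, mirroring the
# selection criteria described above. Field names are assumptions and may
# differ in the S2ORC release you use.
def select_biomedical_papers(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("pmc_id") or record.get("pubmed_id"):
                yield record
```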

## Model Developers

PharMolix

## How to Use

BioMedGPT-LM-7B is part of BioMedGPT-10B, an open-source version of BioMedGPT. BioMedGPT is a multimodal generative pre-trained transformer (GPT) for biomedicine that bridges the natural-language modality and diverse biomedical data modalities via a single GPT model: it aligns different biological modalities with the text modality through BioMedGPT-LM. Details of BioMedGPT-10B and BioMedGPT-LM-7B can be found in the technical report.

*Figure: the architecture of BioMedGPT-10B.*
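A minimal inference sketch using the Hugging Face `transformers` library, assuming the model is hosted under the repo id `PharMolix/BioMedGPT-LM-7B` (adjust to this repo's id if different; `accelerate` is needed for `device_map="auto"`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; not confirmed by this card.
model_id = "PharMolix/BioMedGPT-LM-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

prompt = "Aspirin is a medication commonly used to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```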

## Intended Use Cases

## Out-of-scope Uses

## Technical Report

"BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine"

## GitHub

https://github.com/BioFM/OpenBioMed

## Limitations

[Highlight any limitations or potential issues of your model.]