---
license: apache-2.0
tags:
- medical
datasets:
- biomed
---

# BioMedGPT-LM-7B

In this repo, we present BioMedGPT-LM, a medical language model that is the first commercial-friendly GPT model in the biomedical domain and demonstrates superior performance over existing LLMs of the same parameter size. We are releasing **BioMedGPT-LM-7B**, a 7B model obtained by fine-tuning LLaMA2-7B-Chat on PMC abstracts and papers from the S2ORC corpus.

### Training Details

The model was trained with the following hyperparameters (a hypothetical configuration sketch is given at the end of this card):

* Epochs: 5
* Batch size: 192
* Cutoff length: 2048
* Learning rate: 2e-5

### Overview

BioMedGPT-LM-7B was fine-tuned on over 26 billion tokens highly pertinent to the field of biomedicine. The fine-tuning data were extracted from 5.5 million biomedical papers in the S2ORC corpus, selected using PubMed Central (PMC) IDs and PubMed IDs as criteria.

### Model Developers

PharMolix

### How to Use

BioMedGPT-LM-7B is a part of **[BioMedGPT-10B](https://github.com/BioFM/OpenBioMed)**, an open-source version of BioMedGPT. BioMedGPT is a multimodal generative pre-trained transformer (GPT) for biomedicine that bridges the natural-language modality and diverse biomedical data modalities via a single GPT model. BioMedGPT aligns different biological modalities with the text modality via BioMedGPT-LM. The details of BioMedGPT-10B and BioMedGPT-LM-7B can be found in the [technical report](). A minimal loading sketch is given at the end of this card.

![The architecture of BioMedGPT-10B](BioMedGPT-10B.jpeg)

**Intended Use Cases**

**Out-of-scope Uses**

### Technical Report

"BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine"

### GitHub

[https://github.com/BioFM/OpenBioMed](https://github.com/BioFM/OpenBioMed)

### Limitations

[Highlight any limitations or potential issues of your model.]
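### Training Configuration Sketch

As a rough illustration of the hyperparameters listed under Training Details, they map onto a Hugging Face `TrainingArguments` configuration roughly as follows. This is a minimal sketch, not the actual training script used for BioMedGPT-LM-7B; the output path, per-device batch size, accumulation steps, and precision setting are all assumptions.

```python
# Hypothetical sketch of the fine-tuning configuration implied by the
# hyperparameters above; not the actual BioMedGPT-LM-7B training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./biomedgpt-lm-7b-finetune",  # placeholder output path
    num_train_epochs=5,                       # Epochs: 5
    per_device_train_batch_size=4,            # assumed per-GPU micro-batch
    gradient_accumulation_steps=48,           # 4 x 48 = 192 effective batch on one GPU
    learning_rate=2e-5,                       # Learning rate: 2e-5
    bf16=True,                                # assumed mixed-precision setting
)
# The cutoff length of 2048 would be applied at tokenization time, e.g.
# tokenizer(..., truncation=True, max_length=2048).
```

With multiple GPUs, the same effective batch size of 192 could be reached with fewer accumulation steps (e.g., 4 per device x 6 steps x 8 GPUs).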
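### Usage Sketch

Since BioMedGPT-LM-7B is a causal language model derived from LLaMA2-7B-Chat, it should load through the standard Hugging Face `transformers` API. The sketch below is illustrative rather than official usage, and it assumes the model is hosted under the repo id `PharMolix/BioMedGPT-LM-7B`.

```python
# Minimal loading and generation sketch, assuming the model is available
# under the Hugging Face repo id "PharMolix/BioMedGPT-LM-7B".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PharMolix/BioMedGPT-LM-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

prompt = "Aspirin irreversibly inhibits cyclooxygenase, which"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the multimodal BioMedGPT-10B pipeline, see the [OpenBioMed](https://github.com/BioFM/OpenBioMed) repository linked above.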