Quantization made by Richard Erkhov.
phi-2-scientific-papers-base-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/dfurman/
- Original model: https://huggingface.co/dfurman/phi-2-scientific-papers-base-v0.1/
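The quantized weights can be loaded through the standard transformers API. Below is a minimal sketch assuming the bitsandbytes 8-bit path; since the exact repo id of this quantized upload is not listed here, the sketch quantizes the original checkpoint on the fly via `BitsAndBytesConfig`.

```python
# A minimal sketch (not from the original card): load the model in 8-bit
# with bitsandbytes. The exact repo id of this quantized upload is not
# given above, so the original checkpoint is quantized on the fly.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "dfurman/phi-2-scientific-papers-base-v0.1"  # original checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # place layers across available devices
)
```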
Original model description:
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model: microsoft/phi-2
Model Card for dfurman/phi-2-scientific-papers-base-v0.1
A base model for scientific papers, trained on 70 MB of plain-text research literature.
Model Details
Model Description
- Developed by: Daniel Furman
- Model type: Phi-2
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: microsoft/phi-2
Uses
This model is intended for next-word prediction on scientific papers. It is a base model for the scientific research domain.
Direct Use
Use for document completion on scientific papers.
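As a sketch of document completion, the model can be driven through the text-generation pipeline; the prompt below is illustrative, not taken from the training corpus.

```python
# A hedged example of document completion on a paper-style prompt;
# the prompt text is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="dfurman/phi-2-scientific-papers-base-v0.1",
    device_map="auto",
)
prompt = "Abstract: In this work, we investigate"
print(generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.9)[0]["generated_text"])
```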
Downstream Use
Fine-tune for other tasks in the scientific literature domain, such as Q&A on scientific papers. One possible route is sketched below.
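One possible fine-tuning route (an assumption, not prescribed by the original card) is parameter-efficient adaptation with LoRA via the peft library:

```python
# A minimal LoRA sketch with peft; the target module names are assumed
# from the Phi-2 attention layout and are not stated in the original card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("dfurman/phi-2-scientific-papers-base-v0.1")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed Phi-2 names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```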
Out-of-Scope Use
Anything outside of NLP tasks adjacent to scientific research.
Bias, Risks, and Limitations
No guardrails are baked into this model. Use at your own risk.
Compute Info
This model was fine-tuned with the Hugging Face accelerate package on a RunPod cluster of 4x A100-SXM4-80GB GPUs (roughly 99% memory utilization on each GPU during training).
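For reference, the multi-GPU training-loop pattern that accelerate enables looks roughly like the sketch below; the tiny model, optimizer, and random data are placeholders, not the actual training script.

```python
# A placeholder sketch of the accelerate training-loop pattern; the tiny
# model and random data stand in for phi-2 and the paper corpus.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up the launched process config (e.g. 4 GPUs)

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randn(64, 1)), batch_size=8)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # handles gradient scaling/sync across processes
    optimizer.step()
```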