---
license: openrail
datasets:
- oscar-corpus/OSCAR-2201
language:
- de
metrics:
- perplexity
library_name: transformers
---

# OPT-2.7B finetuned on OSCAR with LoRA

This model was finetuned on 80,000 examples from the German subset of the OSCAR corpus. See [the git repo](https://github.com/bjoernpl/llm_finetuning_de) for more information and the exact hyperparameters.

To run this model, instantiate the `facebook/opt-2.7b` base model as usual and apply the LoRA adapter with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base OPT-2.7B model
model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")
# Apply the German LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, "bjoernp/opt2.7B-de-lora")
```
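
As a usage sketch, continuing from the snippet above, text can then be generated with the base model's tokenizer. The prompt and generation settings here are illustrative assumptions, not values from the original repo:

```python
from transformers import AutoTokenizer

# The adapter uses the base model's tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")

# Example German prompt (illustrative)
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```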

Refer to the [OPT documentation](https://huggingface.co/facebook/opt-2.7b) for more information on how to run the model and use the tokenizer.