---
license: openrail
datasets:
  - oscar-corpus/OSCAR-2201
language:
  - de
metrics:
  - perplexity
library_name: transformers
---

# OPT-2.7B finetuned on OSCAR with LoRA

This model was finetuned on 80,000 examples from the German subset of the OSCAR corpus. See the git repo for more info and the exact hyperparameters.
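For reference, a hedged sketch of how such a subset could be drawn with the `datasets` library; the exact sampling and preprocessing used for finetuning are documented in the git repo, and the config name `"de"` is assumed here:

```python
from datasets import load_dataset

# OSCAR-2201 is gated: accept its terms on the Hub and log in (e.g. `huggingface-cli login`) first.
# Stream the German split and take a subset comparable in size to the one used for finetuning.
oscar_de = load_dataset("oscar-corpus/OSCAR-2201", "de", split="train", streaming=True)
subset = oscar_de.take(80000)
```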

To run this model, instantiate the `facebook/opt-2.7b` base model as usual and activate the adapter with PEFT:

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

# Load the base OPT-2.7B model, then attach the German LoRA adapter.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")
model = PeftModel.from_pretrained(model, "bjoernp/opt2.7B-de-lora")
```
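
As a quick sanity check, here is a minimal generation sketch. It assumes the `model` from the snippet above and the standard OPT tokenizer; the German prompt is purely illustrative:

```python
from transformers import AutoTokenizer

# The LoRA adapter does not change the vocabulary, so the base OPT tokenizer is used.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")

# Illustrative German prompt.
inputs = tokenizer("Heute ist ein schöner Tag, denn", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```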

Refer to the OPT documentation for more info on how to run the model and use the tokenizer.