OrpoLlama3-8B


This is an ORPO (Odds Ratio Preference Optimization) fine-tune of meta-llama/Meta-Llama-3-8B, trained for 15k steps on mlabonne/orpo-dpo-mix-40k.
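The card doesn't include the training script. Below is a minimal sketch of how a run like this is typically set up with TRL's ORPOTrainer; the hyperparameters (beta, learning rate, batch size) are illustrative assumptions, not the author's settings.

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Preference pairs: each row holds "chosen" and "rejected" conversations
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

args = ORPOConfig(
    output_dir="OrpoLlama3-8B",
    beta=0.1,                        # the ORPO lambda weighting the odds-ratio loss term (assumed value)
    max_steps=15_000,                # matches the 15k steps mentioned above
    per_device_train_batch_size=1,   # illustrative; actual batch size unknown
    learning_rate=8e-6,              # illustrative; actual learning rate unknown
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,             # named processing_class in newer TRL releases
)
trainer.train()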

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Muhammad2003/OrpoLlama3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision, spread across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
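
With a reasonably recent transformers release, the text-generation pipeline can also apply the chat template itself, so the messages list can be passed in directly (a convenience sketch; the version requirement is an assumption):

# Recent transformers versions accept chat messages directly and return
# the full conversation, with the assistant's reply as the last message.
outputs = pipeline(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"][-1]["content"])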

📈 Training curves

Wandb Report


๐Ÿ† Evaluation

(Evaluation results image)
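
The card doesn't say which benchmark suite produced these numbers. As a hypothetical starting point (the task list and batch size are assumptions), a comparable local evaluation can be run with EleutherAI's lm-evaluation-harness:

import lm_eval

# Score the fine-tune on a few common Open LLM Leaderboard-style tasks
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Muhammad2003/OrpoLlama3-8B,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "winogrande"],
    batch_size=8,
)
print(results["results"])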

Model details: 8.03B parameters, FP16 tensors, stored as Safetensors.
