---
license: apache-2.0
datasets:
- mlabonne/mini-platypus
pipeline_tag: text-generation
---

# 🦙 Miniplatypus-7b

<center><img src="https://i.imgur.com/VkGvQym.png" width="300"></center>

This is a `Llama-2-7b-chat` model fine-tuned with QLoRA (4-bit precision) on the [`mlabonne/mini-platypus`](https://huggingface.co/datasets/mlabonne/mini-platypus) dataset, a subset of [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

## 🔧 Training

It was trained in a Google Colab notebook on a single T4 GPU. The model is intended mainly for educational purposes rather than production inference.

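For reference, below is a minimal sketch of this kind of QLoRA fine-tuning setup, using the `SFTTrainer` API from the 2023-era `trl` releases together with `peft` and `bitsandbytes`. The hyperparameters and the `text` dataset column are illustrative assumptions, not the exact values used to train this model.

```python
# pip install transformers accelerate peft bitsandbytes trl datasets
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-chat-hf"
dataset = load_dataset("mlabonne/mini-platypus", split="train")

# Load the frozen base model in 4-bit NF4 precision (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Small trainable LoRA adapters are added on top of the frozen 4-bit weights
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM"
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumes a "text" column in the dataset
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,  # illustrative hyperparameters
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=True,
        logging_steps=25,
    ),
)
trainer.train()
```
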
## 💻 Usage

```python
# pip install transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "mlabonne/llama-2-7b-miniplatypus"
prompt = "What is a large language model?"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # half-precision weights to save memory
    device_map="auto",          # place layers on the available GPU(s)
)

# Llama-2-chat models expect the [INST] ... [/INST] prompt format
sequences = pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
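
If the fp16 weights do not fit in memory (for example on the free-tier Colab T4 this model was trained on), the model can instead be loaded in 4-bit. This is a minimal sketch assuming the `bitsandbytes` package is installed; the generation settings mirror the pipeline example above.

```python
# pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mlabonne/llama-2-7b-miniplatypus"

# Quantize the weights to 4-bit NF4 at load time to cut memory use roughly 4x
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "What is a large language model?"
inputs = tokenizer(f"<s>[INST] {prompt} [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=200, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```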