---
license: apache-2.0
datasets:
- HuggingFaceH4/no_robots
base_model: mistralai/Mistral-7B-v0.1
language:
- en
pipeline_tag: text-generation
thumbnail: https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions/resolve/main/mistralh4-removebg-preview.png?download=true
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions/resolve/main/mistralh4-removebg-preview.png?download=true" alt="mistral-7b-ft-h4-no_robots logo" />
</div>
<br />
## Mistral 7B fine-tuned on H4/No Robots instructions
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset for the instruction-following downstream task.
## Training procedure
The model was loaded in **8-bit** precision and fine-tuned on the No Robots dataset using the **LoRA** PEFT technique with the `huggingface/peft` library and `trl`'s `SFTTrainer` for one epoch on a single A100 (40 GB) GPU.
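The loading step is not included in the card; the following is a minimal sketch of how the 8-bit loading and PEFT preparation could look, assuming `bitsandbytes`, `peft`, and the base checkpoint above (illustrative, not the author's exact script):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

base_model_id = "mistralai/Mistral-7B-v0.1"

# Load the base model in 8-bit with bitsandbytes.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token by default

# Prepare the quantized model for LoRA fine-tuning.
model = prepare_model_for_kbit_training(model)
```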
`SFTTrainer` parameters:
```py
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=test_ds,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=False,
)
```
LoRA config:
```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "down_proj", "v_proj", "o_proj", "gate_proj", "up_proj"],
)
```
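The card does not show how the `text` column referenced by `dataset_text_field="text"` was produced. One plausible preprocessing sketch (an assumption, not the actual script) maps the dataset's `messages` column onto the same `[INST] ... [/INST]` template used in the usage example below:
```py
from datasets import load_dataset

def to_text(example):
    # Build a single prompt/response string per conversation; system
    # messages (if any) are ignored in this simplified sketch.
    text = ""
    for message in example["messages"]:
        if message["role"] == "user":
            text += f"[INST] {message['content']} [/INST] "
        elif message["role"] == "assistant":
            text += message["content"] + " "
    return {"text": text.strip()}

ds = load_dataset("HuggingFaceH4/no_robots")
train_ds = ds["train"].map(to_text)  # split names may differ (e.g. "train_sft"/"test_sft")
test_ds = ds["test"].map(to_text)
```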
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
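As an illustration only (reconstructed from the list above, not the original training script), these values map onto `transformers.TrainingArguments` roughly as follows:
```py
from transformers import TrainingArguments

# Illustrative reconstruction; output_dir, logging/eval cadence and the
# bf16 flag are assumptions, the remaining values come from the list above.
training_arguments = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=64,   # effective batch size: 2 * 64 = 128
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
    seed=66,
    bf16=True,                        # "Native AMP" mixed precision
    logging_steps=10,
    evaluation_strategy="steps",
    eval_steps=10,
)
```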
### Training results
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10 | 1.796200 | 1.774305 |
| 20 | 1.769700 | 1.679720 |
| 30 | 1.626800 | 1.667754 |
| 40 | 1.663400 | 1.665188 |
| 50 | 1.565700 | 1.659000 |
| 60 | 1.660300 | 1.658270 |
### Usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

repo_id = "mrm8488/mistral-7b-ft-h4-no_robots_instructions"

model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
gen = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)

# do_sample=True is needed for temperature/top_p/top_k to take effect
instruction = "[INST] Write an email to say goodbye to my boss [/INST]"
res = gen(instruction, max_new_tokens=512, do_sample=True, temperature=0.3, top_p=0.75, top_k=40, repetition_penalty=1.2, eos_token_id=2)
print(res[0]["generated_text"])
```
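For GPUs with less memory, the checkpoint can also be loaded with `bitsandbytes` quantization (an optional alternative requiring `bitsandbytes` and `accelerate`; the quantization settings below are illustrative):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

repo_id = "mrm8488/mistral-7b-ft-h4-no_robots_instructions"

# 4-bit NF4 quantization to fit the model on smaller GPUs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(repo_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

res = gen("[INST] Write an email to say goodbye to my boss [/INST]", max_new_tokens=512, do_sample=True, temperature=0.3)
print(res[0]["generated_text"])
```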
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { mistral-7b-ft-h4-no_robots_instructions (Revision 785446d) },
year = 2023,
url = { https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions },
doi = { 10.57967/hf/1426 },
publisher = { Hugging Face }
}
```