---
library_name: transformers
license: apache-2.0
base_model:
- nbeerbower/flammen16-mistral-7B
datasets:
- wenbopan/Chinese-dpo-pairs
tags:
- experimental
---
|
|
|
![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) |
|
|
|
# flammen16-chinese-DPO-7B |
|
|
|
A Mistral 7B LLM built by merging pretrained models and fine-tuning on [Wenbo Pan](https://huggingface.co/wenbopan)'s [Chinese DPO Pairs](https://huggingface.co/datasets/wenbopan/Chinese-dpo-pairs).
|
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence. |
|
Please note this is an experimental model and is not recommended for production use. |
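The preference data is publicly available on the Hub. A minimal sketch of loading it with the `datasets` library (column names here are an assumption; trl's `DPOTrainer` expects `prompt`, `chosen`, and `rejected` columns):

```python
from datasets import load_dataset

# Load the Chinese preference pairs from the Hugging Face Hub
dataset = load_dataset("wenbopan/Chinese-dpo-pairs", split="train")

# Inspect the schema: DPO training needs prompt/chosen/rejected columns,
# so rename columns here if the dataset uses different names (assumption)
print(dataset.column_names)
print(dataset[0])
```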
|
|
|
|
|
|
### Method |
|
|
|
Fine-tuned using an A100 on Google Colab. 🙏
|
|
|
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) |
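For intuition, DPO optimizes the policy directly on preference pairs by contrasting its log-probabilities with those of a frozen reference model. A minimal sketch of the standard loss (a reference formulation for illustration; the actual run uses trl's `DPOTrainer`):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over summed per-sequence log-probs (beta matches the config below)."""
    # Implicit rewards: how much more likely the policy makes each response vs. the reference
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the chosen-vs-rejected reward margin
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```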
|
|
|
### Configuration |
|
|
|
LoRA, model, and training settings: |
|
|
|
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Base model and run name (from this model card)
model_name = "nbeerbower/flammen16-mistral-7B"
new_model = "flammen16-chinese-DPO-7B"

# Tokenizer and preference dataset
# (DPOTrainer expects prompt/chosen/rejected columns)
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("wenbopan/Chinese-dpo-pairs", split="train")

# LoRA configuration: rank-16 adapters on all attention and MLP projections
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)

# Model to fine-tune, loaded in 4-bit to fit on a single A100
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)
model.config.use_cache = False  # incompatible with gradient checkpointing

# Frozen reference model for the DPO KL term
ref_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    max_steps=1000,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
    force_use_ref_model=True
)

# Fine-tune model with DPO
dpo_trainer.train()
```
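
Assuming the trained adapter is merged into the base weights and pushed (the usual workflow in the linked tutorial), the published model can be used like any transformers causal LM. A minimal generation sketch (the repo id is inferred from this card, and a chat template on the tokenizer is assumed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/flammen16-chinese-DPO-7B"  # repo id inferred from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the tokenizer ships a chat template
messages = [{"role": "user", "content": "用中文写一个短篇奇幻故事的开头。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```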