## Introduction

**Qwen2-7B-Instruct-Exp** and **Qwen2-1.5B-Instruct-Exp** are powerful large language models that expand a given instruction into new instructions of the same task type but with different content.

We fine-tuned **Qwen2-7B-Instruct** and **Qwen2-1.5B-Instruct** to obtain **Qwen2-7B-Instruct-Exp** and **Qwen2-1.5B-Instruct-Exp**, respectively.

We sampled training data from the OpenHermes and LCCD datasets, ensuring a balanced task distribution. For training-set annotation, we used Qwen-max with our handwritten examples incorporated as in-context prompts.
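
A minimal sketch of what such a few-shot annotation prompt can look like (the template wording, the example pair, and the `build_annotation_prompt` helper below are illustrative assumptions, not the exact prompt used with Qwen-max):

```python
# Illustrative sketch only: the task description and the few-shot pair are
# assumptions, not the actual annotation prompt.
FEWSHOT_EXAMPLES = [
    (
        "Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence.",
        "Describe a classic road trip itinerary along the California coastline in the United States.",
    ),
]

def build_annotation_prompt(instruction: str) -> str:
    """Assemble a few-shot prompt asking for a same-task, different-content instruction."""
    lines = [
        "Rewrite the given instruction into a new instruction of the same "
        "task type but with different content.",
        "",
    ]
    for source, target in FEWSHOT_EXAMPLES:
        lines += [f"Instruction: {source}", f"New instruction: {target}", ""]
    lines += [f"Instruction: {instruction}", "New instruction:"]
    return "\n".join(lines)

print(build_annotation_prompt("Summarize the plot of a famous science fiction novel."))
```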

#### Example Input

> Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence.

#### Example Output 1

> Describe a classic road trip itinerary along the California coastline in the United States.

#### Example Output 2

> Create a holiday plan that combines cultural experiences in Bangkok, Thailand, with beach relaxation in Phuket.

## Quick Start

The following code snippet shows how to load the tokenizer and model and how to generate content using `apply_chat_template`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "alibaba-pai/Qwen2-1.5B-Instruct-Exp",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/Qwen2-1.5B-Instruct-Exp")

# The instruction to be expanded.
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,
)
# Drop the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
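
Because the model is meant to produce varied same-task instructions, you can also sample several candidate expansions at once. The continuation below reuses `model`, `tokenizer`, and `model_inputs` from the snippet above; the sampling parameters are illustrative choices, not official recommendations:

```python
generated = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,
    do_sample=True,           # sample so the candidates differ from each other
    temperature=0.8,
    top_p=0.95,
    num_return_sequences=3,   # draw three candidate expansions
)
prompt_len = model_inputs.input_ids.shape[1]
expansions = tokenizer.batch_decode(generated[:, prompt_len:], skip_special_tokens=True)
for i, expansion in enumerate(expansions, 1):
    print(f"Expansion {i}: {expansion}")
```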

## Evaluation

We evaluated the data augmentation effect of our models on the Elementary Math and Implicature datasets. Rows prefixed with "+" report the base model fine-tuned on data augmented by the corresponding Exp model.

| Model                     | Elementary Math | Implicature |
|---------------------------|-----------------|-------------|
| Qwen2-1.5B-Instruct       | 57.90%          | 28.96%      |
| + Qwen2-1.5B-Instruct-Exp | 59.15%          | 31.22%      |
| + Qwen2-7B-Instruct-Exp   | 58.32%          | 39.37%      |
| Qwen2-7B-Instruct         | 71.40%          | 28.85%      |
| + Qwen2-1.5B-Instruct-Exp | 73.90%          | 35.41%      |
| + Qwen2-7B-Instruct-Exp   | 72.53%          | 32.92%      |
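
As a rough sketch of how such augmented fine-tuning data can be produced with the Quick Start snippet's `model` and `tokenizer` (the `expand` helper, file names, and JSONL layout are illustrative assumptions, not our released pipeline):

```python
import json

def expand(instruction: str) -> str:
    """Generate one same-task, different-content variant of an instruction."""
    messages = [{"role": "user", "content": instruction}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(inputs.input_ids, max_new_tokens=2048, eos_token_id=151645)
    return tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

# "seed_instructions.jsonl" and "augmented_instructions.jsonl" are placeholder names.
with open("seed_instructions.jsonl") as f:
    seeds = [json.loads(line)["instruction"] for line in f]

with open("augmented_instructions.jsonl", "w") as f:
    for seed in seeds:
        for instruction in (seed, expand(seed)):  # keep the original, add one expansion
            f.write(json.dumps({"instruction": instruction}, ensure_ascii=False) + "\n")
```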

## Citation

If you find our work helpful, please cite it!

```bibtex
@misc{data-augmentation-family,
  title={Building a Family of Data Augmentation Models for Low-cost LLM Fine-tuning on the Cloud},
  author={Yuanhao Yue and Chengyu Wang and Jun Huang and Peng Wang},
  year={2024},
  eprint={2412.04871},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.04871}
}
```