---
license: apache-2.0
datasets:
- datajuicer/alpaca-cot-en-refined-by-data-juicer
---
|
## News
|
Our first data-centric LLM competition begins! Please visit the competition's official websites, **FT-Data Ranker** ([1B Track](https://tianchi.aliyun.com/competition/entrance/532157), [7B Track](https://tianchi.aliyun.com/competition/entrance/532158)), for more information.
|
## Introduction
|
This is a reference LLM from [Data-Juicer](https://github.com/alibaba/data-juicer).
|
The model follows the LLaMA-7B architecture, and we built it upon the pre-trained [checkpoint](https://huggingface.co/huggyllama/llama-7b).
|
The model is fine-tuned on 40k English chat samples from Data-Juicer's refined [Alpaca-CoT data](https://github.com/alibaba/data-juicer/blob/main/configs/data_juicer_recipes/alpaca_cot/README.md#refined-alpaca-cot-dataset-meta-info).
|
In GPT-4-based evaluation, it outperforms LLaMA-7B fine-tuned on the original 52k Alpaca samples.
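
Below is a minimal usage sketch. It assumes the fine-tuned weights are hosted on the Hugging Face Hub; the `model_id` below is a hypothetical placeholder (use this model page's repository id), while the dataset id is the one listed in this card's metadata.

```python
# Minimal sketch. Assumptions: the fine-tuned weights are on the Hugging Face
# Hub (the placeholder repo id below is NOT a confirmed name); the dataset id
# is taken from this card's metadata.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inspect the refined Alpaca-CoT fine-tuning data referenced above.
refined = load_dataset("datajuicer/alpaca-cot-en-refined-by-data-juicer")
print(refined)

# Load the fine-tuned checkpoint like any other LLaMA-style causal LM.
model_id = "<this-repo-id>"  # hypothetical placeholder; replace with this model's Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Give three tips for improving the quality of LLM training data."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```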
|
For more details, please refer to our [paper](https://arxiv.org/abs/2309.02033).
|
![exp_llama](https://img.alicdn.com/imgextra/i2/O1CN019WtUPP1uhebnDlPR8_!!6000000006069-2-tps-2530-1005.png)