---
license: mit
language:
- en
---
# **Introduction**
MoMo-70B is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model.
This version is trained with Direct Preference Optimization ([DPO](https://arxiv.org/abs/2305.18290)) from v1.8.4 as a base model, with several hyperparameter optimizations.
Note that we did not use any form of weight merging.
For leaderboard submission, the trained weights are realigned for compatibility with Llama.
MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD MI250 GPUs.
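For readers unfamiliar with DPO, the sketch below illustrates the DPO objective in plain `torch`. It is an explanatory example only, not the training code used for MoMo-70B, and the `beta` value is illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps, beta=0.1):
    """DPO loss from per-sequence log-probabilities.

    Each argument is a 1-D tensor of summed token log-probs for the chosen or
    rejected response under the trainable policy or the frozen reference model.
    beta is an illustrative value, not the one used for MoMo-70B.
    """
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps)
    # Push the policy to prefer the chosen response over the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy call with random log-probabilities for a batch of 4 preference pairs.
logps = torch.randn(4)
loss = dpo_loss(logps, logps - 1.0, logps - 0.2, logps - 0.8)
```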
## Details
### Used Libraries
- torch
- peft (see the LoRA sketch below)
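
Since the SFT stage uses LoRA via `peft`, the following is a minimal sketch of attaching a LoRA adapter to the base model. The rank, alpha, dropout, target modules, and the Hub id for the base model are assumptions for illustration, not the settings used to train MoMo-70B.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Loading a 72B base model requires substantial memory; this is a structural sketch.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)

# LoRA hyperparameters below are illustrative assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in QWEN-style models (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```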
### Used Datasets
- [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- No other dataset was used
- No benchmark test set or training set was used
- [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|------------|-------|
| **V1.8.6 (result < 0.1, %)** | TBU | TBU | 0.73 | TBU |
### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about the MoAI platform
- Or contact us directly at [[email protected]](mailto:[email protected])
## How to use
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-70B-LoRA-V1.8.6")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-70B-LoRA-V1.8.6"
)
```
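
Once loaded, the model can be used for text generation in the usual way. The prompt and generation settings below are illustrative, not part of the model card.

```python
# Illustrative generation example.
inputs = tokenizer("What is Direct Preference Optimization?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```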