---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# AISquare-Instruct-llama2-koen-13b-v0.9.24
## Model Details
**Developed by**
[Inswave Systems](https://www.inswave.com) UI Platform Team
**Method**
Trained using supervised fine-tuning (SFT) and direct preference optimization (DPO)
**Hardware**
Trained on a single node with 4× NVIDIA A100 GPUs
**Base Model**
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
## Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the model in half precision and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
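Once the model and tokenizer are loaded, text can be generated with the standard `transformers` `generate` API. A minimal sketch follows; the prompt and decoding parameters are illustrative assumptions, not a format prescribed by the model authors:

```python
# Illustrative generation sketch; the prompt and sampling settings
# below are assumptions, not specified in this model card.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; tune max_new_tokens / temperature as needed.
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```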
---