# AISquare-Instruct-llama2-koen-13b-v0.9.24
## Model Details

**Developed by:** Inswave Systems UI Platform Team
## Method

The model was trained with supervised fine-tuning (SFT) followed by direct preference optimization (DPO).
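For reference, DPO optimizes a preference loss over (chosen, rejected) response pairs, comparing the policy's log-probabilities against a frozen reference model. The snippet below is a minimal sketch of the per-pair loss only, not the actual training code used for this model; the function name, argument names, and `beta` value are illustrative.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a response under the
    policy or the frozen reference model; beta controls how strongly the
    policy is pushed away from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(x)) written in the numerically stable form log(1 + exp(-x))
    return math.log1p(math.exp(-logits))

# The loss shrinks as the policy prefers the chosen response over the
# rejected one by a wider margin than the reference model does.
improved = dpo_loss(-10.0, -14.0, -11.0, -12.0)  # policy widened the margin
worsened = dpo_loss(-12.0, -11.0, -11.0, -12.0)  # policy narrowed the margin
```

In practice this loss is computed batch-wise by a framework such as TRL; the sketch only shows the quantity being minimized.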
## Hardware

The model was trained on a single node with four NVIDIA A100 GPUs.
## Base Model

[beomi/llama2-koen-13b](https://huggingface.co/beomi/llama2-koen-13b)
## Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the weights in half precision and shard them across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```