
AISquare-Instruct-llama2-koen-13b-v0.9.24

Model Details

Developed by Inswave Systems UI Platform Team

Method
Trained using supervised fine-tuning (SFT) and direct preference optimization (DPO).
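The card does not include training code. As a rough illustration only, the snippet below sketches a DPO stage with Hugging Face TRL's DPOTrainer; the preference-data file, hyperparameters, and the ordering of SFT before DPO are assumptions for this example, not the team's actual recipe, and TRL argument names differ between releases.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "beomi/llama2-koen-13b"  # base model listed on this card; an SFT stage would normally precede this step
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference dataset with "prompt", "chosen", and "rejected" columns.
pairs = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

config = DPOConfig(
    output_dir="dpo-output",
    beta=0.1,                        # illustrative preference-strength setting
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=pairs,
    processing_class=tokenizer,      # called `tokenizer` in older TRL releases
)
trainer.train()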

Hardware
The model was trained on a single machine equipped with four NVIDIA A100 GPUs (A100 x4 * 1).

Base Model beomi/llama2-koen-13b

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the weights in half precision and spread them across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
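As a quick check that the weights load correctly, a short generation example follows; the prompt and decoding settings are arbitrary choices for illustration, not recommended defaults.

# Illustrative prompt ("What is the capital of South Korea?") and sampling settings
prompt = "대한민국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))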

Safetensors
Model size: 13.2B params
Tensor type: BF16