---
language:
  - en
datasets:
  - Intel/orca_dpo_pairs
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# AISquare-Instruct-yi-ko-6b-v0.9.30

## Model Details

Developed by the Inswave Systems UI Platform Team.

### Method
Trained using supervised fine-tuning (SFT) and Direct Preference Optimization (DPO); a minimal sketch of the DPO stage follows.
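
The card lists `Intel/orca_dpo_pairs` among the training data, so the sketch below shows what a DPO stage on that dataset could look like with the TRL library. The column mapping, hyperparameters, and exact `DPOTrainer` signature (which differs across TRL versions) are assumptions for illustration, not the team's actual training script.

```python
# Hypothetical sketch of the DPO stage with TRL; not the team's actual script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "beomi/Yi-Ko-6B"  # base model listed on this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs provides question/chosen/rejected columns;
# map them onto the prompt/chosen/rejected schema DPOTrainer expects.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda ex: {"prompt": ex["question"]},
    remove_columns=["system", "question"],
)

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),  # beta is an assumed value
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```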

### Hardware
The model was trained on a single node equipped with 4× NVIDIA A100 GPUs.

### Base Model
[beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)

### Open Ko-LLM Leaderboard Rank

## Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "zomd/AISquare-Instruct-yi-ko-6b-v0.9.30"

# Load the model in fp16 and let accelerate place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
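
Once loaded, the model can be used for generation in the standard `transformers` way. The prompt string and sampling parameters below are placeholders for illustration, not an official prompt template or recommended settings for this model.

```python
# Hypothetical usage example with the model and tokenizer loaded above.
prompt = "한국의 수도는 어디인가요?"  # placeholder Korean prompt: "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```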