---
license: cc-by-nc-4.0
datasets:
  - kyujinpy/KOR-gugugu-platypus-set
language:
  - en
  - ko
base_model:
  - yanolja/KoSOLAR-10.7B-v0.2
pipeline_tag: text-generation
---

# KoSOLAR-v0.2-gugutypus-10.7B


## Model Details

### Model Developers

- [oneonlee](https://huggingface.co/oneonlee)

### Model Architecture

- KoSOLAR-v0.2-gugutypus-10.7B is an instruction-fine-tuned, auto-regressive language model based on the SOLAR transformer architecture.

### Base Model

- [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)

### Training Dataset

- [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set)
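
A minimal sketch of inspecting the training data with the Hugging Face `datasets` library. The `"train"` split name is an assumption; adjust it if the dataset is organized differently.

```python
from datasets import load_dataset

# Assumption: the dataset exposes a default "train" split.
dataset = load_dataset("kyujinpy/KOR-gugugu-platypus-set", split="train")

print(dataset)       # dataset size and column names
print(dataset[0])    # one instruction/response example
```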


## Model comparisons

- Ko-LLM leaderboard (YYYY/MM/DD) [link]

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KoSOLAR-gugutypus | NaN | NaN | NaN | NaN | NaN | NaN |

- AI-Harness evaluation [link]

| Model | Copa (0-shot) | Copa (5-shot) | HellaSwag (0-shot) | HellaSwag (5-shot) | BoolQ (0-shot) | BoolQ (5-shot) | Sentineg (0-shot) | Sentineg (5-shot) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KoSOLAR-gugutypus | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
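
The AI-Harness results above can in principle be reproduced with EleutherAI's lm-evaluation-harness. The sketch below is an assumption-laden example, not the card's official procedure: it assumes lm-eval v0.4+ and the KoBEST task names (`kobest_copa`, `kobest_hellaswag`, `kobest_boolq`, `kobest_sentineg`), which may differ in other harness versions.

```python
import lm_eval

# Assumption: lm-eval v0.4+ API and KoBEST task names; adjust for your installed version.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=oneonlee/KoSOLAR-v0.2-gugutypus-10.7B,dtype=float16",
    tasks=["kobest_copa", "kobest_hellaswag", "kobest_boolq", "kobest_sentineg"],
    num_fewshot=0,  # rerun with num_fewshot=5 for the 5-shot columns
)
print(results["results"])
```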

## Implementation Code

```python
### KoSOLAR-gugutypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "oneonlee/KoSOLAR-v0.2-gugutypus-10.7B"

# Load the model in half precision and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
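
A minimal generation sketch using the model and tokenizer loaded above. The prompt string and decoding settings are illustrative assumptions, not the model's official prompt template.

```python
# Illustrative only: prompt format and sampling settings are assumptions.
prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```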