---
license: llama2
base_model: beomi/llama-2-ko-7b
inference: false
datasets:
  - Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
  - torch
  - llama2
  - domain-specific-lm
---

# WelSSiSKo: Welfare Domain-Specific Model


## GitHub

If you want to know how to use this model, please check my GitHub repository :)
👉 Github Repo

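The full usage walkthrough lives in the GitHub repository above; as a minimal sketch, loading the adapter on top of the base model with 🤗 Transformers + PEFT could look like the following (the `adapter_id` default is an assumed Hub id, not confirmed by this card, so replace it with the actual repository name):

```python
def load_welssisko(adapter_id: str = "Ash-Hun/WelSSiSKo"):
    """Sketch: load the base model and attach this PEFT adapter.

    ``adapter_id`` is an assumed Hub repository id -- replace it with
    the actual one from the GitHub repo's instructions.
    """
    # Imports live inside the function so the sketch can be read and
    # checked without torch, transformers, and peft installed.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_model = "beomi/llama-2-ko-7b"  # base model named in this card

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
    # Attach the LoRA/PEFT adapter weights on top of the base model.
    model = PeftModel.from_pretrained(model, adapter_id)
    return tokenizer, model
```

For memory-constrained inference you would additionally pass a `quantization_config` to `from_pretrained`, matching the 4-bit settings listed under "Training procedure".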

## Base model

👉 beomi/llama-2-ko-7b

## Training procedure

The following bitsandbytes quantization config was used during training:

- load_in_4bit: True
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

## Framework versions

- PEFT 0.8.2

## Evaluation score

- Since no suitable benchmark set exists for this domain, we performed a qualitative evaluation; the resulting average score is 0.74.
