|
--- |
|
base_model: beomi/llama-2-ko-70b
|
inference: false |
|
language: |
|
- en |
|
- ko |
|
model_name: Llama-2-Ko 70B
|
model_type: llama |
|
pipeline_tag: text-generation |
|
quantized_by: kuotient |
|
tags: |
|
- facebook |
|
- meta |
|
- pytorch |
|
- llama |
|
- llama-2 |
|
- kollama |
|
- llama-2-ko |
|
- gptq |
|
license: cc-by-nc-sa-4.0 |
|
--- |
|
|
|
# Llama-2-Ko-70b-GPTQ |
|
- Model creator: [beomi](https://huggingface.co/beomi)
|
- Original model: [Llama-2-ko-70b](https://huggingface.co/beomi/llama-2-ko-70b)
|
|
|
<!-- description start --> |
|
## Description |
|
|
|
This repo contains GPTQ model files for [Llama-2-ko-70b](https://huggingface.co/beomi/llama-2-ko-70b).
|
|
|
<!-- description end --> |
|
|
|
<!-- README_GPTQ.md-provided-files start --> |
|
## Provided files and GPTQ parameters |
|
Multiple quantisation parameter options are (or will soon be) provided, so you can choose the one best suited to your hardware and requirements.

Each separate quant is in a different branch.

All GPTQ quantisations were made with AutoGPTQ.
|
|
|
<details> |
|
<summary>Explanation of GPTQ parameters</summary>
|
|
|
- Bits: The bit size of the quantised model.

- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.

- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy.

- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.

- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model; please refer to the original model repo for details of the training dataset.

- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this matches the model sequence length. For some very long sequence models (16K+), a lower sequence length may have to be used. A lower sequence length does not limit the sequence length of the quantised model; it only impacts quantisation accuracy on longer inference sequences.

- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
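These settings map directly onto AutoGPTQ's quantisation config. As a rough sketch (not the exact script used to produce this repo, and assuming `auto-gptq` is installed), the `main` branch parameters would be expressed like this:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# The `main` branch settings from the table below.
quantize_config = BaseQuantizeConfig(
    bits=4,            # Bits
    group_size=-1,     # GS: -1 corresponds to "None" (lowest VRAM use)
    desc_act=True,     # Act Order
    damp_percent=0.1,  # Damp %
)

model = AutoGPTQForCausalLM.from_pretrained("beomi/llama-2-ko-70b", quantize_config)
# model.quantize(...) would then run on tokenised calibration samples
# drawn from the GPTQ dataset (wikitext for this repo).
```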
|
|
|
</details> |
|
|
|
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | |
|
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | |
|
| [main](https://huggingface.co/kuotient/llama-2-ko-70b-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.8 GB | Yes | 4-bit, with Act Order. Group size None (-1), to lower VRAM usage. |
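As a minimal loading sketch (assuming a recent `transformers` with GPTQ support via `optimum` and `auto-gptq`, which picks up the repo's quantisation config automatically):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kuotient/llama-2-ko-70b-GPTQ",
    device_map="auto",  # spread the ~36GB of weights across available GPUs
    revision="main",    # select the branch from the table above
)
tokenizer = AutoTokenizer.from_pretrained("kuotient/llama-2-ko-70b-GPTQ")
```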
|
|
|
<!-- README_GPTQ.md-provided-files end --> |
|
|
|
<!-- original model card start --> |
|
|
|
# Original model card: Llama 2 ko 70b |
|
> 🚧 Note: this repo is under construction 🚧
|
|
|
# **Llama-2-Ko** 🦙🇰🇷
|
|
|
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the **70B** pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below. |
|
|
|
## Model Details |
|
|
|
**Model Developers** Junbum Lee (Beomi) |
|
|
|
**Variations** Llama-2-Ko will come in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
|
|
|
**Input** Models input text only. |
|
|
|
**Output** Models generate text only. |
|
|
|
## Usage |
|
|
|
**Use with 8bit inference** |
|
|
|
- Requires > 74GB VRAM (e.g. 4x RTX 3090/4090, 1x A100/H100 80GB, or 2x RTX 6000 Ada/A6000 48GB)
|
|
|
```python |
|
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline |
|
model_8bit = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    load_in_8bit=True,   # requires the bitsandbytes package
    device_map="auto",
)
tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
pipe = pipeline('text-generation', model=model_8bit, tokenizer=tk)

def gen(x):
    # NOTE: this model is NOT fine-tuned on an instruction dataset,
    # so this prompt format is not necessarily optimal.
    generated = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]['generated_text']
    print(len(generated))
    print(generated)
|
``` |
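A hypothetical call, just to show the prompt shape (the title string here is made up):

```python
gen("서울의 가을 날씨")  # prints the length and text of the generated "Contents"
```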
|
|
|
**Use with bf16 inference** |
|
|
|
- Requires > 150GB VRAM (e.g. 8x RTX 3090/4090, 2x A100/H100 80GB, or 4x RTX 6000 Ada/A6000 48GB)
|
|
|
```python |
|
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    torch_dtype=torch.bfloat16,  # load the weights in bf16
    device_map="auto",
)
tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
pipe = pipeline('text-generation', model=model, tokenizer=tk)

def gen(x):
    # NOTE: this model is NOT fine-tuned on an instruction dataset,
    # so this prompt format is not necessarily optimal.
    generated = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]['generated_text']
    print(len(generated))
    print(generated)
|
``` |
|
|
|
**Model Architecture** |
|
|
|
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2. |
|
|
|
||Training Data|Params|Content Length|GQA|Tokens|LR|

|---|---|---|---|---|---|---|

|Llama-2-Ko 70B|*A new mix of Korean online data*|70B|4k|✅|>20B|1e<sup>-5</sup>|
|
*Planned to train up to 300B tokens
|
**Vocab Expansion** |
|
| Model Name | Vocabulary Size | Description | |
|
| --- | --- | --- | |
|
| Original Llama-2 | 32000 | Sentencepiece BPE | |
|
| **Expanded Llama-2-Ko** | 46592 | Sentencepiece BPE. Added Korean vocab and merges | |
|
*Note: Llama-2-Ko 70B uses a vocabulary size of `46592`, not the `46336` used by the 7B model; an updated 7B model will follow soon.
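The effect of the expanded vocabulary is easy to reproduce (a sketch; `meta-llama/Llama-2-70b-hf` is gated, so substitute any Llama-2 tokenizer you have access to):

```python
from transformers import AutoTokenizer

llama2 = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
llama2_ko = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b")

text = "안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"
print(llama2.tokenize(text))     # mostly byte-fallback tokens for Korean
print(llama2_ko.tokenize(text))  # far fewer tokens, as the tables below show
```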
|
|
|
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"**
|
|
|
| Model | Tokens | |
|
| --- | --- | |
|
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요', '.', '▁', '<0xE3>', '<0x85>', '<0x8E>', '<0xE3>', '<0x85>', '<0x8E>']` |
|
| Llama-2-Ko *70B | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.', '▁', 'ㅎ', 'ㅎ']` |
|
**Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"** |
|
| Model | Tokens | |
|
| --- | --- | |
|
| Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |

| Llama-2-Ko 70B | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
|
# **Model Benchmark** |
|
## LM Eval Harness - Korean (polyglot branch) |
|
- Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot |
|
### TBD |
|
## Note for oobabooga/text-generation-webui |
|
Replace the `except ValueError:` in the `load_tokenizer` function (around line 109) of `modules/models.py` with a bare `except:`, as in the following diff:
|
```diff
|
diff --git a/modules/models.py b/modules/models.py |
|
index 232d5fa..de5b7a0 100644 |
|
--- a/modules/models.py |
|
+++ b/modules/models.py |
|
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model): |
|
trust_remote_code=shared.args.trust_remote_code, |
|
use_fast=False |
|
) |
|
- except ValueError: |
|
+ except: |
|
tokenizer = AutoTokenizer.from_pretrained( |
|
path_to_model, |
|
trust_remote_code=shared.args.trust_remote_code, |
|
``` |
|
Since Llama-2-Ko uses the fast tokenizer provided by the HF `tokenizers` library, not the `sentencepiece` package, the `use_fast=True` option is required when initializing the tokenizer.
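For example, a minimal initialization:

```python
from transformers import AutoTokenizer

# use_fast=True loads the HF `tokenizers`-based fast tokenizer that
# Llama-2-Ko ships, rather than the slow sentencepiece path.
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b", use_fast=True)
```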
|
Apple Silicon does not support BF16 computation; use the CPU instead. (BF16 is supported when using an NVIDIA GPU.)
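On NVIDIA hardware, BF16 support can be verified with a standard PyTorch check:

```python
import torch

print(torch.cuda.is_bf16_supported())  # True on Ampere (RTX 30xx / A100) and newer
```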
|
## LICENSE |
|
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT |
|
- Full License available at: [https://huggingface.co/beomi/llama-2-ko-70b/blob/main/LICENSE](https://huggingface.co/beomi/llama-2-ko-70b/blob/main/LICENSE) |
|
- For commercial usage, contact the author.
|
## Citation |
|
``` |
|
@misc {l._junbum_2023, |
|
author = { {L. Junbum} }, |
|
title = { llama-2-ko-70b }, |
|
year = 2023, |
|
url = { https://huggingface.co/beomi/llama-2-ko-70b }, |
|
doi = { 10.57967/hf/1130 }, |
|
publisher = { Hugging Face } |
|
} |
|
``` |
|
<!-- original model card end --> |