---
base_model: beomi/llama-2-ko-70b
inference: false
language:
- en
- ko
model_name: Llama-2-Ko 70B
model_type: llama
pipeline_tag: text-generation
quantized_by: kuotient
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
- gptq
license: cc-by-nc-sa-4.0
---
# Llama-2-Ko-70b-GPTQ
- Model creator: [beomi](https://huggingface.co/beomi)
- Original model: [Llama-2-ko-70b](https://huggingface.co/beomi/llama-2-ko-70b)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Llama-2-ko-70b](https://huggingface.co/beomi/llama-2-ko-70b).
<!-- description end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameter sets are (or will soon be) provided, so you can choose the one best suited to your hardware and requirements.
Each separate quant is in a different branch.
All GPTQ quants were made with AutoGPTQ.
<details>
<summary>GPTQ parameter details</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset better matched to the model's training data can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model; please refer to the original model repo for details of the training dataset.
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this matches the model's sequence length. For some very long-sequence models (16K+), a lower sequence length may have to be used. A lower sequence length does not limit the sequence length of the quantised model; it only impacts quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/kuotient/llama-2-ko-70b-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.8 GB | Yes | 4-bit, with Act Order. No group size (-1), to lower VRAM usage. |
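A minimal sketch of loading a quant from a given branch with recent `transformers` (assuming `optimum` and `auto-gptq` are installed; exact package requirements may differ in your environment):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# revision selects the branch from the table above; "main" is the 4-bit quant.
model = AutoModelForCausalLM.from_pretrained(
    "kuotient/llama-2-ko-70b-GPTQ",
    revision="main",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("kuotient/llama-2-ko-70b-GPTQ", use_fast=True)
```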
<!-- README_GPTQ.md-provided-files end -->
<!-- original model card start -->
# Original model card: Llama 2 ko 70b
> 🚧 Note: this repo is under construction 🚧
# **Llama-2-Ko** 🦙🇰🇷
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the **70B** pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.
## Model Details
**Model Developers** Junbum Lee (Beomi)
**Variations** Llama-2-Ko will come in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
## Usage
**Use with 8bit inference**
- Requires more than 74 GB of VRAM (compatible with 4x RTX 3090/4090, 1x A100/H100 80 GB, or 2x RTX 6000 Ada/A6000 48 GB)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# 8-bit loading requires the bitsandbytes package.
model_8bit = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    load_in_8bit=True,
    device_map="auto",
)
tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
pipe = pipeline('text-generation', model=model_8bit, tokenizer=tk)

def gen(x):
    # Note: this model is NOT fine-tuned on an instruction dataset,
    # so this "Title/Contents" prompt format is not optimal.
    generated = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]['generated_text']
    print(len(generated))
    print(generated)
```
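For example (the title string here is an arbitrary illustration, not a prescribed prompt):

```python
gen("서울 여행에서 가 볼 만한 곳")  # prints the length and text of one sampled continuation
```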
**Use with bf16 inference**
- Requires more than 150 GB of VRAM (compatible with 8x RTX 3090/4090, 2x A100/H100 80 GB, or 4x RTX 6000 Ada/A6000 48 GB)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    torch_dtype=torch.bfloat16,  # load weights in bf16 (defaults to fp32 otherwise)
    device_map="auto",
)
tk = AutoTokenizer.from_pretrained('beomi/llama-2-ko-70b')
pipe = pipeline('text-generation', model=model, tokenizer=tk)

def gen(x):
    # Note: this model is NOT fine-tuned on an instruction dataset,
    # so this "Title/Contents" prompt format is not optimal.
    generated = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]['generated_text']
    print(len(generated))
    print(generated)
```
**Model Architecture**
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama-2-Ko 70B|*A new mix of Korean online data*|70B|4k|✓|>20B|1e<sup>-5</sup>|

*Plan to train up to 300B tokens
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | Sentencepiece BPE |
| **Expanded Llama-2-Ko** | 46592 | Sentencepiece BPE. Added Korean vocab and merges |
*Note: Llama-2-Ko 70B uses a vocab size of `46592`, not the `46336` used by 7B; an updated 7B model is coming soon.
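A quick sanity check of the expanded vocabulary (a minimal sketch; assumes the `transformers` library and access to the Hub):

```python
from transformers import AutoTokenizer

tk = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b", use_fast=True)
print(len(tk))  # expected: 46592 for the 70B tokenizer
```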
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"**

| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요', '.', '▁', '<0xE3>', '<0x85>', '<0x8E>', '<0xE3>', '<0x85>', '<0x8E>']` |
| Llama-2-Ko 70B | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.', '▁', 'ㅎ', 'ㅎ']` |
**Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['โL', 'l', 'ama', 'โ', '2', ':', 'โOpen', 'โFoundation', 'โand', 'โFine', '-', 'T', 'un', 'ed', 'โCh', 'at', 'โMod', 'els']` |
| Llama-2-Ko 70B | `['โL', 'l', 'ama', 'โ', '2', ':', 'โOpen', 'โFoundation', 'โand', 'โFine', '-', 'T', 'un', 'ed', 'โCh', 'at', 'โMod', 'els']` |
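The Llama-2-Ko rows in the two tables above can be reproduced as follows (a sketch; the original Llama-2 tokenizer is gated on the Hub, so only the Llama-2-Ko side is shown):

```python
from transformers import AutoTokenizer

tk = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b", use_fast=True)
print(tk.tokenize("안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"))
print(tk.tokenize("Llama 2: Open Foundation and Fine-Tuned Chat Models"))
```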
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) (polyglot branch)
### TBD
## Note for oobabooga/text-generation-webui
Remove the `ValueError` catch in the `load_tokenizer` function (around line 109) of `modules/models.py`, as in the diff below:
```diff
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses the FastTokenizer provided by the HF `tokenizers` library, not the `sentencepiece` package,
the `use_fast=True` option is required when initialising the tokenizer, as shown below.
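For example:

```python
from transformers import AutoTokenizer

# Llama-2-Ko ships a HF `tokenizers` fast tokenizer rather than a sentencepiece
# model file, so the fast path must not be disabled.
tk = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b", use_fast=True)
```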
Apple Silicon does not support BF16 computing; use the CPU instead. (BF16 is supported when using an NVIDIA GPU.)
## LICENSE
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: [https://huggingface.co/beomi/llama-2-ko-70b/blob/main/LICENSE](https://huggingface.co/beomi/llama-2-ko-70b/blob/main/LICENSE)
- For commercial usage, contact the author.
## Citation
```bibtex
@misc {l._junbum_2023,
author = { {L. Junbum} },
title = { llama-2-ko-70b },
year = 2023,
url = { https://huggingface.co/beomi/llama-2-ko-70b },
doi = { 10.57967/hf/1130 },
publisher = { Hugging Face }
}
```
<!-- original model card end -->