Create README.md (#1)
- Create README.md (47f4d4caa6be5639c7fc6d0324c4d18bf4892c9d)
- Update README.md (eeb438a1be7efb4ca7ab021b01354d5fd7f4369a)
- Update README.md (c70ebe5eb92122619930d408b82c632c6aa7fc0e)
- Update README.md (f9652f368cacab2584bc20a310ee92cdcf45165f)
- add tokenizer info (71d0e8797b2cfb69b00179937f5f8cefdf9ddd7d)
- Update README.md (e51b61c66590b160f82d9bcaa3de4f72f01ab092)
- Update README.md (82509947747fa5418b12043c0552b3d52c8605e9)
- Update README.md (8703a3b0be16c63059cd0600c823993efcd80210)
- Update README.md (4ef7c89f12c2f2e7bf07a0c1c9ead59eb6596a4d)
- Update README.md (9b28dd72e0c30b72e2e10ebf4908aab1119da5ce)
- Update README.md (75fb8d751739d78a9062f3f283842e15351e77f5)
- Update README.md (35675cdc71e73daa87b434e41604b4c263b401c5)
- Update README.md (820cd7d882acdca2510ffd812bc224b928853e91)
- update vocabulary size (09210b87f985968987c5d36c1a936b7c43068b1f)
- Update README.md (82b6aafa6fa5525d6fed5ec1c5a4ce52641bc60a)
- Update README.md (ed3c0b74ae58ac8634f4179e2e47c375b9d4f86e)
- Update README.md (17a7ee50ff4bbfd6d791c1b10ea064195bc6694e)
- Update README.md (e9cc55c450c36a85d19c433102e8cb077efaa209)
- Update README.md (ee3f84b319f0279ec3bfeb9a36fe00fcb23029d8)
Co-authored-by: Takashi Kodama <[email protected]>
Co-authored-by: tatsuya hiraoka <[email protected]>
@@ -0,0 +1,158 @@
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
---
# llm-jp-13b-v2.0

This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan.

| Model Variant |
| :--- |
|**Instruction models**|
| [llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| [llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| [llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |

| Model Variant |
| :--- |
|**Pre-trained models**|
| [llm-jp-13b-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-v2.0) |

Checkpoints format: Hugging Face Transformers


## Required Libraries and Their Versions

- torch>=2.2.2
- transformers>=4.39.3
- tokenizers>=0.15.2
- accelerate>=0.27.2
- flash-attn>=2.5.6
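
A minimal sketch for checking the installed versions against these minimums, assuming the packages are importable under their usual module names and expose `__version__`:

```python
# A minimal environment check: each of these packages exposes __version__,
# and flash_attn is the import name of the flash-attn package listed above.
import accelerate
import flash_attn
import tokenizers
import torch
import transformers

for module in (torch, transformers, tokenizers, accelerate, flash_attn):
    print(module.__name__, module.__version__)
```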

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v2.0")
# Load the checkpoint in float16 and shard it across the available GPUs.
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-v2.0", device_map="auto", torch_dtype=torch.float16)
text = "自然言語処理とは何か"  # "What is natural language processing?"
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Sample up to 100 new tokens with nucleus sampling.
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```

## Model Details

- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 256B

|Model|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|:---:|
|13b model|13b|40|5120|40|4096|
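
As a rough consistency check of these figures, the sketch below estimates the parameter count from the table; the 4x feed-forward expansion, tied input/output embeddings, and the padded vocabulary of 97,024 (from the Tokenizer section) are assumptions rather than values read from the released config.

```python
# Back-of-envelope parameter count for the 13b configuration above.
d_model, n_layers, vocab = 5120, 40, 97_024

attention = 4 * d_model ** 2                 # Q, K, V, and output projections
feed_forward = 2 * (4 * d_model) * d_model   # up- and down-projections (assumed 4x expansion)
per_layer = attention + feed_forward         # ~315M parameters per layer

total = n_layers * per_layer + vocab * d_model
print(f"~{total / 1e9:.1f}B parameters")     # ~13.1B, in line with the "13b" label
```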

## Training

- **Pre-training:**
  - **Hardware:** 128 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
  - **Software:** Megatron-LM

- **Instruction tuning:**
  - **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
  - **Software:** [TRL](https://github.com/huggingface/trl), [PEFT](https://github.com/huggingface/peft), and [DeepSpeed](https://github.com/microsoft/DeepSpeed)

## Tokenizer

The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.2 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (note that pure SentencePiece training does not reproduce our vocabulary).

- **Model:** Hugging Face Fast Tokenizer using a Unigram byte-fallback model, which requires `tokenizers>=0.14.0`
- **Training algorithm:** Merging Code/English/Japanese vocabularies constructed with SentencePiece Unigram byte-fallback and re-estimating scores with the EM algorithm
- **Training data:** A subset of the datasets for model pre-training
- **Vocabulary size:** 96,867 (mixed vocabulary of Japanese, English, and source code)
  - The actual vocabulary size of the pre-trained model is 97,024 because it is rounded up to a multiple of 256 (see the check sketched below).
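
A minimal sketch for confirming the two vocabulary figures above, assuming only the public `llm-jp/llm-jp-13b-v2.0` checkpoint and the standard `transformers` loading APIs:

```python
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v2.0")
config = AutoConfig.from_pretrained("llm-jp/llm-jp-13b-v2.0")

# The tokenizer carries the trained vocabulary; the model config carries the
# embedding size padded up to a multiple of 256.
print(len(tokenizer))      # expected around 96,867 (the tokenizer vocabulary)
print(config.vocab_size)   # expected 97,024 (the padded size used by the model)
```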

## Datasets

### Pre-training

The models have been pre-trained using a blend of the following datasets.

| Language | Dataset | Tokens |
|:---|:---|:---|
| Japanese | [Wikipedia](https://huggingface.co/datasets/wikipedia) | 1.4B |
| | [Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus) | 130.7B |
| English | [Wikipedia](https://huggingface.co/datasets/wikipedia) | 4.7B |
| | [The Pile](https://huggingface.co/datasets/EleutherAI/pile) | 110.3B |
| Code | [The Stack](https://huggingface.co/datasets/bigcode/the-stack) | 8.7B |
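
As a quick arithmetic check, the per-dataset token counts above add up to roughly the 256B total seen tokens reported in Model Details:

```python
# Per-dataset token counts (in billions) from the table above.
tokens_in_billions = [1.4, 130.7, 4.7, 110.3, 8.7]
print(round(sum(tokens_in_billions), 1))  # 255.8, close to the reported 256B total
```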

### Instruction tuning

The models have been fine-tuned on the following datasets.

| Language | Dataset | Description |
|:---|:---|:---|
| Japanese | [ichikara-instruction-004-001](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/) | A manually constructed Japanese instruction dataset |
| | [answer-carefully-001]() | A manually constructed Japanese instruction dataset focusing on LLMs' safety |
| | [databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja) | [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL |
| | [oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja) | A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL |
| | [oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja) | A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL |
| English | [oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en) | A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| | [oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en) | A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) |

## Evaluation

You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) (v1.3.0) for the evaluation.

## Risks and Limitations

The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.


## Send Questions to

llm-jp(at)nii.ac.jp


## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)


## Model Card Authors

*The names are listed in alphabetical order.*

Namgi Han, Tatsuya Hiraoka, Hirokazu Kiyomaru, Takashi Kodama, and Hiroshi Matsuda.