RichardErkhov committed 219f742 (1 parent: 4987c56): uploaded README.md
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


gemma-2-baku-2b - GGUF
- Model creator: https://huggingface.co/rinna/
- Original model: https://huggingface.co/rinna/gemma-2-baku-2b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-baku-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q2_K.gguf) | Q2_K | 1.15GB |
| [gemma-2-baku-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [gemma-2-baku-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [gemma-2-baku-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [gemma-2-baku-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [gemma-2-baku-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q3_K.gguf) | Q3_K | 1.36GB |
| [gemma-2-baku-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [gemma-2-baku-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [gemma-2-baku-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [gemma-2-baku-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q4_0.gguf) | Q4_0 | 1.52GB |
| [gemma-2-baku-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [gemma-2-baku-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [gemma-2-baku-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q4_K.gguf) | Q4_K | 1.59GB |
| [gemma-2-baku-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [gemma-2-baku-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q4_1.gguf) | Q4_1 | 1.64GB |
| [gemma-2-baku-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q5_0.gguf) | Q5_0 | 1.75GB |
| [gemma-2-baku-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [gemma-2-baku-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q5_K.gguf) | Q5_K | 1.79GB |
| [gemma-2-baku-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [gemma-2-baku-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q5_1.gguf) | Q5_1 | 1.87GB |
| [gemma-2-baku-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q6_K.gguf) | Q6_K | 2.0GB |
| [gemma-2-baku-2b.Q8_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/blob/main/gemma-2-baku-2b.Q8_0.gguf) | Q8_0 | 2.59GB |
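
Any file in the table can also be fetched without the web UI. A minimal sketch (the helper name is illustrative, not part of this repo) that builds the direct `resolve/main` download URL from the repo id and a quant name taken from the table:

```python
# Build a direct download URL for a GGUF file listed in the table above.
# The repo id and filename pattern come from the table; "resolve/main"
# is the standard Hugging Face endpoint for raw file downloads.
REPO_ID = "RichardErkhov/rinna_-_gemma-2-baku-2b-gguf"

def gguf_url(quant: str) -> str:
    """Return the direct download URL for the given quant variant."""
    filename = f"gemma-2-baku-2b.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("Q4_K_M"))
# → https://huggingface.co/RichardErkhov/rinna_-_gemma-2-baku-2b-gguf/resolve/main/gemma-2-baku-2b.Q4_K_M.gguf
```

The printed URL can be passed to `wget` or `curl`; alternatively, `huggingface_hub.hf_hub_download(REPO_ID, filename)` fetches the same file with caching.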


Original model description:
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: gemma
datasets:
- mc4
- wikipedia
- EleutherAI/pile
- oscar-corpus/colossal-oscar-1.0
- cc100
language:
- ja
- en
tags:
- gemma2
inference: false
base_model: google/gemma-2-2b
pipeline_tag: text-generation
library_name: transformers
---

# `Gemma 2 Baku 2B (rinna/gemma-2-baku-2b)`

![rinna-icon](./rinna.png)

# Overview

We conduct continual pre-training of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on **80B** tokens from a mixture of Japanese and English datasets. The continual pre-training improves the model's performance on Japanese tasks.

The name `baku` comes from the Japanese word [`獏/ばく/Baku`](https://ja.wikipedia.org/wiki/獏), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)).

| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 2B | Gemma 2 Baku 2B [[HF]](https://huggingface.co/rinna/gemma-2-baku-2b) | Gemma 2 Baku 2B Instruct [[HF]](https://huggingface.co/rinna/gemma-2-baku-2b-it) |

* **Library**

    The model was trained using code based on [Lightning-AI/litgpt](https://github.com/Lightning-AI/litgpt).

* **Model architecture**

    A 26-layer, 2304-hidden-size transformer-based language model. Please refer to the [Gemma 2 Model Card](https://www.kaggle.com/models/google/gemma-2/) for detailed information on the model's architecture.

* **Training**

    The model was initialized with the [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) model and continually trained on around **80B** tokens from a mixture of the following corpora:
    - [Japanese CC-100](https://huggingface.co/datasets/cc100)
    - [Japanese C4](https://huggingface.co/datasets/mc4)
    - [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
    - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
    - [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
    - rinna curated Japanese dataset

* **Contributors**
    - [Toshiaki Wakatsuki](https://huggingface.co/t-w)
    - [Xinqi Chen](https://huggingface.co/Keely0419)
    - [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~python
import transformers
import torch

model_id = "rinna/gemma-2-baku-2b"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "attn_implementation": "eager"},
    device_map="auto"
)
output = pipeline(
    "西田幾多郎は、",
    max_new_tokens=256,
    do_sample=True
)
print(output[0]["generated_text"])
~~~

It is recommended to use eager attention when conducting batch inference under bfloat16 precision.
Currently, Gemma 2 yields NaN values for input sequences with padding when the default attention mechanism (`torch.scaled_dot_product_attention`) is employed in conjunction with bfloat16.

---

# Tokenization
The model uses the original [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) tokenizer.

---

# How to cite
```bibtex
@misc{rinna-gemma-2-baku-2b,
    title = {rinna/gemma-2-baku-2b},
    author = {Wakatsuki, Toshiaki and Chen, Xinqi and Sawada, Kei},
    url = {https://huggingface.co/rinna/gemma-2-baku-2b}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---

# References
```bibtex
@article{gemma-2-2024,
    title = {Gemma 2},
    url = {https://www.kaggle.com/models/google/gemma-2},
    publisher = {Kaggle},
    author = {Gemma Team},
    year = {2024}
}

@misc{litgpt-2023,
    author = {Lightning AI},
    title = {LitGPT},
    howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
    year = {2023}
}
```
---

# License
[Gemma Terms of Use](https://ai.google.dev/gemma/terms)