RichardErkhov committed c15f503 (verified; parent: c7365fe): uploaded readme
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


gemma-2-baku-2b - AWQ
- Model creator: https://huggingface.co/rinna/
- Original model: https://huggingface.co/rinna/gemma-2-baku-2b/
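AWQ stores weights as low-bit (typically 4-bit) integers in small groups, each with its own scale and zero point. As a rough illustration of the group-wise quantize/dequantize round trip (a hypothetical sketch only; real AWQ additionally searches for activation-aware per-channel scales before rounding):

```python
# Illustrative group-wise 4-bit quantization round trip (a sketch, not the
# actual AWQ pipeline: AWQ also rescales salient channels using activations).

def quantize_group(weights, bits=4):
    """Quantize one group of float weights to ints with a scale and zero point."""
    qmax = 2 ** bits - 1                     # 15 for 4-bit
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / qmax or 1.0    # avoid div-by-zero on constant groups
    zero_point = round(-w_min / scale)
    q = [max(0, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_group(q, scale, zero_point):
    """Reconstruct approximate float weights from the quantized ints."""
    return [(qi - zero_point) * scale for qi in q]

weights = [0.12, -0.53, 0.87, 0.04, -0.21, 0.66, -0.09, 0.33]
q, scale, zp = quantize_group(weights)
recon = dequantize_group(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recon))
assert max_err <= scale / 2 + 1e-9  # rounding error is bounded by half a step
```

The reconstruction error per weight is at most half a quantization step, which is why smaller groups (finer-grained scales) trade a little extra storage for noticeably better accuracy.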

Original model description:
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: gemma
datasets:
- mc4
- wikipedia
- EleutherAI/pile
- oscar-corpus/colossal-oscar-1.0
- cc100
language:
- ja
- en
tags:
- gemma2
inference: false
base_model: google/gemma-2-2b
pipeline_tag: text-generation
library_name: transformers
---

# `Gemma 2 Baku 2B (rinna/gemma-2-baku-2b)`

![rinna-icon](./rinna.png)

# Overview

We conduct continual pre-training of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on **80B** tokens from a mixture of Japanese and English datasets. The continual pre-training improves the model's performance on Japanese tasks.

The name `baku` comes from the Japanese word [`獏/ばく/Baku`](https://ja.wikipedia.org/wiki/獏), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)).

| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 2B | Gemma 2 Baku 2B [[HF]](https://huggingface.co/rinna/gemma-2-baku-2b) | Gemma 2 Baku 2B Instruct [[HF]](https://huggingface.co/rinna/gemma-2-baku-2b-it) |

* **Library**

    The model was trained using code based on [Lightning-AI/litgpt](https://github.com/Lightning-AI/litgpt).

* **Model architecture**

    A 26-layer, 2304-hidden-size transformer-based language model. Please refer to the [Gemma 2 Model Card](https://www.kaggle.com/models/google/gemma-2/) for detailed information on the model's architecture.
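As a sanity check on the "2B" size class, a back-of-the-envelope parameter count can be sketched. The head counts, MLP width, and vocabulary size below are assumptions taken from commonly reported Gemma 2 2B configuration values, not from this model card:

```python
# Back-of-the-envelope parameter count. Assumed config values (hypothetical
# here; only the 26 layers and 2304 hidden size come from this model card):
hidden, layers, vocab = 2304, 26, 256000
q_heads, kv_heads, head_dim = 8, 4, 256
mlp = 9216

embed = vocab * hidden                        # tied input/output embeddings
attn = (hidden * q_heads * head_dim * 2       # query and output projections
        + hidden * kv_heads * head_dim * 2)   # key and value projections (GQA)
ffn = hidden * mlp * 2 + mlp * hidden         # gate, up, and down projections
total = embed + layers * (attn + ffn)         # norm weights omitted (negligible)
print(f"~{total / 1e9:.2f}B parameters")      # roughly 2.6B
```

Under these assumptions the total lands near 2.6B, consistent with the 2B naming convention (which typically counts non-embedding scale loosely).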

* **Training**

    The model was initialized with the [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) model and continually trained on around **80B** tokens from a mixture of the following corpora:
    - [Japanese CC-100](https://huggingface.co/datasets/cc100)
    - [Japanese C4](https://huggingface.co/datasets/mc4)
    - [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
    - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
    - [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
    - rinna curated Japanese dataset

* **Contributors**
    - [Toshiaki Wakatsuki](https://huggingface.co/t-w)
    - [Xinqi Chen](https://huggingface.co/Keely0419)
    - [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~python
import transformers
import torch

model_id = "rinna/gemma-2-baku-2b"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    # Eager attention avoids the bfloat16 NaN issue described below.
    model_kwargs={"torch_dtype": torch.bfloat16, "attn_implementation": "eager"},
    device_map="auto",
)
output = pipeline(
    "西田幾多郎は、",  # "Kitaro Nishida is/was ..." (Japanese prompt)
    max_new_tokens=256,
    do_sample=True,
)
print(output[0]["generated_text"])
~~~

It is recommended to use eager attention when conducting batch inference under bfloat16 precision. Currently, Gemma 2 yields NaN values for input sequences with padding when the default attention implementation (`torch.nn.functional.scaled_dot_product_attention`) is used in conjunction with bfloat16.
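The failure mode behind that note can be illustrated in isolation: when every attention score in a row is masked to negative infinity (as happens for pure-padding positions), the softmax numerator and denominator are both zero, and 0/0 is NaN in IEEE floating point. A minimal framework-free sketch of the mechanism (not the library's actual kernel):

```python
import math

def softmax(scores):
    """Naive softmax; a fully masked row (all -inf) yields 0/0, i.e. NaN."""
    exps = [math.exp(s) for s in scores]   # exp(-inf) == 0.0
    total = sum(exps)
    # Python raises ZeroDivisionError on 0.0/0.0, so return NaN explicitly,
    # mirroring the IEEE behavior tensor libraries produce silently.
    return [e / total if total else float("nan") for e in exps]

masked_row = [float("-inf")] * 4   # scores for a padding query position
probs = softmax(masked_row)
assert all(math.isnan(p) for p in probs)
```

Once produced, these NaNs propagate through subsequent layers, which is why switching the attention implementation (rather than post-hoc filtering) is the recommended fix.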

---

# Tokenization

The model uses the original [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) tokenizer.

---

# How to cite

```bibtex
@misc{rinna-gemma-2-baku-2b,
    title = {rinna/gemma-2-baku-2b},
    author = {Wakatsuki, Toshiaki and Chen, Xinqi and Sawada, Kei},
    url = {https://huggingface.co/rinna/gemma-2-baku-2b}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```

---

# References

```bibtex
@article{gemma-2-2024,
    title = {Gemma 2},
    url = {https://www.kaggle.com/models/google/gemma-2},
    publisher = {Kaggle},
    author = {Gemma Team},
    year = {2024}
}

@misc{litgpt-2023,
    author = {Lightning AI},
    title = {LitGPT},
    howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
    year = {2023}
}
```

---

# License

[Gemma Terms of Use](https://ai.google.dev/gemma/terms)