keisawada committed on
Commit a3c8cde · verified · 1 Parent(s): fad35e4

Update README.md

Files changed (1)
  1. README.md +11 -2
README.md CHANGED
@@ -10,6 +10,9 @@ datasets:
language:
- ja
- en
+ tags:
+ - llama
+ - llama-3
inference: false
---

@@ -23,6 +26,10 @@ We conduct continual pre-training of [meta-llama/Meta-Llama-3-8B](https://huggin

The name `youko` comes from the Japanese word [`妖狐/ようこ/Youko`](https://ja.wikipedia.org/wiki/%E5%A6%96%E7%8B%90), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)).

+ | Size | Continual Pre-Training | Instruction-Tuning |
+ | :- | :- | :- |
+ | 8B | Llama 3 Youko 8B [[HF]](https://huggingface.co/rinna/llama-3-youko-8b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-gptq) | Llama 3 Youko 8B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-8b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq) |
+ | 70B | Llama 3 Youko 70B [[HF]](https://huggingface.co/rinna/llama-3-youko-70b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-gptq) | Llama 3 Youko 70B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-70b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-instruct-gptq) |

* **Library**

@@ -45,6 +52,8 @@ The name `youko` comes from the Japanese word [`妖狐/ようこ/Youko`](https:/
* **Contributors**

- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
+ - [Xinqi Chen](https://huggingface.co/Keely0419)
+ - [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Kei Sawada](https://huggingface.co/keisawada)

---

@@ -79,7 +88,7 @@ print(output)
---

# Tokenization
- The model uses the original meta-llama/Meta-Llama-3-8B tokenizer.
+ The model uses the original [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) tokenizer.

---

@@ -87,7 +96,7 @@ The model uses the original meta-llama/Meta-Llama-3-8B tokenizer.
```bibtex
@misc{rinna-llama-3-youko-8b,
title = {rinna/llama-3-youko-8b},
- author = {Mitsuda, Koh and Sawada, Kei},
+ author = {Mitsuda, Koh and Chen, Xinqi and Wakatsuki, Toshiaki and Sawada, Kei},
url = {https://huggingface.co/rinna/llama-3-youko-8b}
}
```
 
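The usage example in the model card is not part of this diff (only its final `print(output)` line appears as hunk context above), and the Tokenization section now links the tokenizer explicitly. As a point of reference only, here is a minimal sketch, not taken from the card, of how the base checkpoint and its Meta-Llama-3-8B tokenizer might be loaded with the standard `transformers` API; the dtype, prompt, and sampling settings are illustrative assumptions.

```python
# Minimal sketch, NOT taken from the model card: load rinna/llama-3-youko-8b
# with the standard Hugging Face transformers API. Assumes `transformers`,
# `torch`, and `accelerate` (needed for device_map="auto") are installed;
# dtype, prompt, and sampling settings are illustrative choices.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rinna/llama-3-youko-8b"

# The card states the model keeps the original Meta-Llama-3-8B tokenizer,
# so the tokenizer shipped in the rinna repo is that same tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# This is the continual-pretrained base model, so plain text completion
# (no chat template) is the natural way to prompt it.
text = "西田幾多郎は、"  # example Japanese prompt
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
)
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output)
```

The instruct variants linked in the table are typically prompted through the tokenizer's chat template rather than raw completion; that is outside the scope of this sketch.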
 
 
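The table also points to GPTQ-quantized repos. This commit does not show how those are loaded; as a hedged sketch, a Hub checkpoint that ships its own GPTQ quantization config can usually be loaded through the same `AutoModelForCausalLM` entry point once a GPTQ backend is installed (for example `optimum` together with `auto-gptq`; the exact package requirements are an assumption here, not something stated in the card).

```python
# Hedged sketch: loading the GPTQ-quantized 8B variant named in the table.
# Assumes `transformers`, `accelerate`, and a GPTQ backend such as
# `optimum` + `auto-gptq` are installed; the quantization config is read
# from the repo itself, so no extra arguments are passed here.
from transformers import AutoTokenizer, AutoModelForCausalLM

gptq_id = "rinna/llama-3-youko-8b-gptq"
tokenizer = AutoTokenizer.from_pretrained(gptq_id)
model = AutoModelForCausalLM.from_pretrained(gptq_id, device_map="auto")
```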