Quantization made by Richard Erkhov.
sarashina2-13b - GGUF
- Model creator: https://huggingface.co/sbintuitions/
- Original model: https://huggingface.co/sbintuitions/sarashina2-13b/
| Name | Quant method | Size |
|---|---|---|
| sarashina2-13b.Q2_K.gguf | Q2_K | 4.91GB |
| sarashina2-13b.IQ3_XS.gguf | IQ3_XS | 5.41GB |
| sarashina2-13b.IQ3_S.gguf | IQ3_S | 5.69GB |
| sarashina2-13b.Q3_K_S.gguf | Q3_K_S | 5.69GB |
| sarashina2-13b.IQ3_M.gguf | IQ3_M | 5.99GB |
| sarashina2-13b.Q3_K.gguf | Q3_K | 6.32GB |
| sarashina2-13b.Q3_K_M.gguf | Q3_K_M | 6.32GB |
| sarashina2-13b.Q3_K_L.gguf | Q3_K_L | 6.87GB |
| sarashina2-13b.IQ4_XS.gguf | IQ4_XS | 6.99GB |
| sarashina2-13b.Q4_0.gguf | Q4_0 | 7.33GB |
| sarashina2-13b.IQ4_NL.gguf | IQ4_NL | 7.37GB |
| sarashina2-13b.Q4_K_S.gguf | Q4_K_S | 7.38GB |
| sarashina2-13b.Q4_K.gguf | Q4_K | 7.79GB |
| sarashina2-13b.Q4_K_M.gguf | Q4_K_M | 7.79GB |
| sarashina2-13b.Q4_1.gguf | Q4_1 | 8.09GB |
| sarashina2-13b.Q5_0.gguf | Q5_0 | 8.86GB |
| sarashina2-13b.Q5_K_S.gguf | Q5_K_S | 8.86GB |
| sarashina2-13b.Q5_K.gguf | Q5_K | 9.1GB |
| sarashina2-13b.Q5_K_M.gguf | Q5_K_M | 9.1GB |
| sarashina2-13b.Q5_1.gguf | Q5_1 | 9.63GB |
| sarashina2-13b.Q6_K.gguf | Q6_K | 10.5GB |
| sarashina2-13b.Q8_0.gguf | Q8_0 | 13.6GB |
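Any of the GGUF files above can be loaded with llama-cpp-python, one common way to run GGUF quantizations (this repository does not prescribe a specific runtime). The sketch below assumes the Q4_K_M file from the table has already been downloaded to the current directory; adjust the path to whichever quant you chose.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes sarashina2-13b.Q4_K_M.gguf has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./sarashina2-13b.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU only
)

# Sarashina2 is a base (non-instruct) model, so prompt it as plain text continuation.
out = llm("おはようございます、今日の天気は", max_tokens=32, temperature=0.8)
print(out["choices"][0]["text"])
```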
Original model description:

license: mit
language:
- ja
- en
Sarashina2-13B
This repository provides large language models trained by SB Intuitions.
How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed

# Load the model in bfloat16 and place it across available devices
model = AutoModelForCausalLM.from_pretrained(
    "sbintuitions/sarashina2-13b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-13b")
# If you want to use the slow tokenizer:
# tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-13b", use_fast=False)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)

# Prompt: "Good morning, today's weather is ..." — sample three continuations
text = generator(
    "おはようございます、今日の天気は",
    max_length=30,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    num_return_sequences=3,
)

for t in text:
    print(t)
```
Configuration
| Parameters | Vocab size | Training tokens | Architecture | Position type | Layers | Hidden dim | Attention heads |
|---|---|---|---|---|---|---|---|
| 7B | 102400 | 2.1T | Llama2 | RoPE | 32 | 4096 | 32 |
| 13B | 102400 | 2.1T | Llama2 | RoPE | 40 | 5120 | 40 |
| 70B | 102400 | 2.1T | Llama2 | RoPE | 80 | 8192 | 64 |
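The 13B row can be cross-checked against the model's Hugging Face config without downloading the weights. A small sketch, assuming the standard Llama-style config field names used by transformers:

```python
from transformers import AutoConfig

# Fetch only the config (a few KB), not the 13B weights.
cfg = AutoConfig.from_pretrained("sbintuitions/sarashina2-13b")

print(cfg.vocab_size)            # expected per the table: 102400
print(cfg.num_hidden_layers)     # expected per the table: 40
print(cfg.hidden_size)           # expected per the table: 5120
print(cfg.num_attention_heads)   # expected per the table: 40
```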
Training Corpus
For our Japanese training data, we used the Japanese portion of the Common Crawl corpus, the largest publicly available web corpus. We cleaned this corpus with CCNet and HojiChar; after cleaning, our Japanese training data contains about 1T tokens.
For our English training data, we extracted English documents from SlimPajama, excluding the Books3 corpus due to copyright concerns.
Tokenization
We use a SentencePiece tokenizer with a unigram language model and byte fallback. We do not apply pre-tokenization with a Japanese tokenizer, so raw sentences can be fed directly into the tokenizer; see the sketch below.
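To illustrate, raw Japanese text can be passed straight to the tokenizer with no external morphological pre-tokenizer. A minimal sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-13b")

# Feed a raw sentence directly; SentencePiece handles segmentation itself.
text = "おはようございます、今日の天気は"
ids = tokenizer(text)["input_ids"]
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))  # inspect the resulting subword pieces
```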
Ethical Considerations and Limitations
Sarashina2 has not been tuned to follow instructions yet. Therefore, it may generate meaningless sequences, inaccurate content, or biased/objectionable outputs. Before using Sarashina2, we ask developers to tune the model based on human preferences and safety considerations.
License
MIT License