sho-takase committed on
Commit de3a413
1 Parent(s): 0268beb

Update README.md

Files changed (1): README.md (+4 −2)
README.md CHANGED
@@ -12,7 +12,9 @@ This repository provides large language models trained by [SB Intuitions](https:
 
 ## How to use
 
-```
+Please set **use_fast=False** to use our tokenizer properly.
+
+```python
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
 
@@ -36,7 +38,7 @@ for t in text:
 
 ## Configuration
 
-| Parameters | Vocab size | Trainning tokens | Architecture | Position type | Layers | Hidden dim | Attention heads |
+| Parameters | Vocab size | Training tokens | Architecture | Position type | Layers | Hidden dim | Attention heads |
 | :-----: | :-----------: | :-------------: | :------------ | :-----------: | :----: | :--------: | :-------------: |
 | [7B](https://huggingface.co/sbintuitions/sarashina2-7b) | 102400 | 2.1T | Llama2 | RoPE | 32 | 4096 | 32 |
 | [13B](https://huggingface.co/sbintuitions/sarashina2-13b) | 102400 | 2.1T | Llama2 | RoPE | 40 | 5120 | 40 |
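For context, the diff only shows the snippet's imports, its closing `for t in text:` loop, and the new `use_fast=False` note. A rough sketch of what the full "How to use" snippet presumably looks like, assuming the standard `transformers` text-generation pattern (the prompt and sampling parameters below are illustrative, not taken from the commit):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed

# Load the model and tokenizer. Per this README change, use_fast=False
# is needed for the Sarashina2 tokenizer to work properly.
model = AutoModelForCausalLM.from_pretrained(
    "sbintuitions/sarashina2-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "sbintuitions/sarashina2-7b",
    use_fast=False,
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)

# Sample a few continuations from a prompt (prompt and sampling
# parameters are assumptions for illustration).
text = generator(
    "おはようございます、今日の天気は",
    max_length=30,
    do_sample=True,
    num_return_sequences=3,
)
for t in text:
    print(t)
```

Note that running this requires downloading the 7B checkpoint, a GPU with enough memory for bfloat16 weights, and the `transformers`, `torch`, `accelerate`, and `sentencepiece` packages installed.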