Fill-Mask · Transformers · Safetensors · Japanese · modernbert
Commit 88fec92 (verified), committed by speed · 1 parent: c659bca

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -10,6 +10,7 @@ language:
 This model is based on the [modernBERT-base](https://arxiv.org/abs/2412.13663) architecture with [llm-jp-tokenizer](https://github.com/llm-jp/llm-jp-tokenizer).
 It was trained using the Japanese subset (3.4TB) of the llm-jp-corpus v4 and supports a max sequence length of 8192.
 
+For detailed information on the training methods, evaluation, and analysis results, please see [TODO]().
 
 ## Usage
 
@@ -85,4 +86,3 @@ Evaluation code can be found at https://github.com/speed1313/bert-eval
 | sbintuitions/modernbert-ja-310m | **0.932** | **0.933** | **0.883** | **0.916** |
 | **speed/llm-jp-modernbert-base-v4-ja-stage2-200k** | 0.918 | 0.913 | 0.844 | 0.892 |
 
-
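The README's own Usage section is not included in this diff, so as a minimal sketch (an assumption, not the author's documented example) the fill-mask model could be loaded with the standard transformers pipeline; the Japanese prompt sentence is purely illustrative:

```python
# Hedged sketch: load the model named in the eval table above for fill-mask
# inference. The exact call pattern is an assumption, not the README's code.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="speed/llm-jp-modernbert-base-v4-ja-stage2-200k",
)

# Query the tokenizer for its mask token instead of hardcoding "<mask>",
# since llm-jp-tokenizer may define a different special token.
mask = fill_mask.tokenizer.mask_token

# Illustrative prompt: "The capital of Japan is [MASK]."
for candidate in fill_mask(f"日本の首都は{mask}です。")[:3]:
    print(candidate["token_str"], candidate["score"])
```

Downloading the checkpoint on first use requires network access; sequences up to the model's 8192-token limit should be accepted without truncation.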