speed committed · Commit 87c768f · verified · 1 parent: b68b06e

Update README.md

Files changed (1): README.md (+3, -3)
README.md CHANGED
@@ -78,11 +78,11 @@ For reference, Warner et al.'s ModernBERT uses 1.72T tokens for stage 1, 250B to
 JSTS, JNLI, and JCoLA from [JGLUE](https://aclanthology.org/2022.lrec-1.317/) were used.
 Evaluation code can be found at https://github.com/speed1313/bert-eval
 
-| Model | JSTS (pearson) | JNLI (accuracy) | JCoLA(accuracy) | Avg |
+| Model | JSTS (pearson) | JNLI (accuracy) | JCoLA (accuracy) | Avg |
 |-------------------------------------------------------|--------|--------|---------|--------------|
 | tohoku-nlp/bert-base-japanese-v3 | 0.920 | 0.912 | 0.880 | 0.904 |
 | sbintuitions/modernbert-ja-130m | 0.916 | 0.927 | 0.868 | 0.904 |
-| sbintuitions/modernbert-ja-310m | 0.932 | 0.933 | 0.883 | 0.916 |
-| speed/llm-jp-modernbert-base-v4-ja-stage2-200k | 0.918 | 0.913 | 0.844 | 0.892 |
+| sbintuitions/modernbert-ja-310m | **0.932** | **0.933** | **0.883** | **0.916** |
+| **speed/llm-jp-modernbert-base-v4-ja-stage2-200k** | 0.918 | 0.913 | 0.844 | 0.892 |
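The Avg column in the table above can be sanity-checked with a short script, assuming (my assumption, not stated in the README) that Avg is the unweighted mean of the JSTS, JNLI, and JCoLA scores rounded to three decimals:

```python
# Recompute the Avg column from the three per-task scores in the diff above.
# Assumption: Avg = unweighted mean of (JSTS, JNLI, JCoLA), rounded to 3 places.
scores = {
    "tohoku-nlp/bert-base-japanese-v3": (0.920, 0.912, 0.880),
    "sbintuitions/modernbert-ja-130m": (0.916, 0.927, 0.868),
    "sbintuitions/modernbert-ja-310m": (0.932, 0.933, 0.883),
    "speed/llm-jp-modernbert-base-v4-ja-stage2-200k": (0.918, 0.913, 0.844),
}

# Model name -> recomputed average, e.g. 0.904 for both of the first two rows.
avg = {model: round(sum(vals) / len(vals), 3) for model, vals in scores.items()}
```

Under that assumption the recomputed values match the table exactly (0.904, 0.904, 0.916, 0.892), so the Avg column is internally consistent.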