Nobuhiro Ueda committed on
Commit f17da20
1 Parent(s): f0559e9

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 - cc100
 mask_token: "[MASK]"
 widget:
-- text: "京都大学で自然言語処理を [MASK] する。"
+- text: "京都大学で自然言語処理を[MASK]する。"
 ---
 
 # ku-nlp/roberta-large-japanese-char-wwm
@@ -25,7 +25,7 @@ from transformers import AutoTokenizer, AutoModelForMaskedLM
 tokenizer = AutoTokenizer.from_pretrained("ku-nlp/roberta-large-japanese-char-wwm")
 model = AutoModelForMaskedLM.from_pretrained("ku-nlp/roberta-large-japanese-char-wwm")
 
-sentence = '京都大学で自然言語処理を [MASK] する。'
+sentence = '京都大学で自然言語処理を[MASK]する。'
 encoding = tokenizer(sentence, return_tensors='pt')
 ...
 ```
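The change itself only removes the spaces around `[MASK]` in the widget example and in the usage snippet. That snippet ends with `...`; a minimal sketch of how it could be continued to read out the masked prediction is below. The `torch.no_grad()` forward pass, the mask lookup via `tokenizer.mask_token_id`, and the argmax decoding are illustrative assumptions, not part of the README.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ku-nlp/roberta-large-japanese-char-wwm")
model = AutoModelForMaskedLM.from_pretrained("ku-nlp/roberta-large-japanese-char-wwm")

# No spaces around [MASK], matching the updated example.
sentence = '京都大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# Locate the [MASK] position(s) and take the highest-scoring prediction for each.
mask_positions = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```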