Update README.md
The current README example loads the checkpoint with `GPT2Model`, which produces the following error:
```
>>> model = GPT2Model.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese')
Some weights of the model checkpoint at IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese were not used when initializing GPT2Model: ['lm_head.weight']
- This IS expected if you are initializing GPT2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
correct (use `GPT2LMHeadModel` instead):
```
>>> from transformers import GPT2Tokenizer,GPT2LMHeadModel
>>> model = GPT2LMHeadModel.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese')
>>>
```
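For context, `GPT2Model` is the bare transformer without the language-modeling head, so loading an LM checkpoint with it discards `lm_head.weight`, which is exactly the warning above. A minimal sketch of the difference, using a tiny random-weight config rather than the real Wenzhong checkpoint (the config values here are arbitrary, for illustration only):

```python
from transformers import GPT2Config, GPT2Model, GPT2LMHeadModel

# Tiny random-weight config (illustration only, not the Wenzhong-2.0 3.5B config)
config = GPT2Config(vocab_size=100, n_embd=8, n_layer=2, n_head=2)

base = GPT2Model(config)       # bare transformer: hidden states only, no lm_head
lm = GPT2LMHeadModel(config)   # same transformer plus the language-modeling head

print(hasattr(base, 'lm_head'))  # False
print(hasattr(lm, 'lm_head'))    # True
```

Only `GPT2LMHeadModel` keeps the head needed for text generation, which is why the README should load the checkpoint with it.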
````diff
@@ -42,9 +42,9 @@ To obtain a powerful unidirectional language model, we adopt the GPT model struc
 ### 加载模型 Loading Models
 
 ```python
-from transformers import GPT2Tokenizer, GPT2Model
+from transformers import GPT2Tokenizer, GPT2LMHeadModel
 tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese')
-model = GPT2Model.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese')
+model = GPT2LMHeadModel.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese')
 text = "Replace me by any text you'd like."
 encoded_input = tokenizer(text, return_tensors='pt')
 output = model(**encoded_input)
````