sjrhuschlee
committed on
Commit · 9f7b3ec
1 Parent(s): 3368621
Update README.md
README.md
CHANGED
@@ -141,6 +141,8 @@ model-index:
 
 This is the [flan-t5-base](https://huggingface.co/google/flan-t5-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.
 
+**UPDATE:** With transformers version 4.31.0 the `use_remote_code=True` is no longer necessary and will cause `AutoModelForQuestionAnswering.from_pretrained()` to fail.
+
 **NOTE:** The `<cls>` token must be manually added to the beginning of the question for this model to work properly.
 It uses the `<cls>` token to be able to make "no answer" predictions.
 The t5 tokenizer does not automatically add this special token which is why it is added manually.
@@ -168,7 +170,7 @@ nlp = pipeline(
     'question-answering',
     model=model_name,
     tokenizer=model_name,
-    trust_remote_code=True,
+    # trust_remote_code=True, # Do not use if version transformers>=4.31.0
 )
 qa_input = {
     'question': f'{nlp.tokenizer.cls_token}Where do I live?', # '<cls>Where do I live?'
@@ -179,7 +181,8 @@ res = nlp(qa_input)
 
 # b) Load model & tokenizer
 model = AutoModelForQuestionAnswering.from_pretrained(
-    model_name,
+    model_name,
+    # trust_remote_code=True # Do not use if version transformers>=4.31.0
 )
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
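The manual `<cls>` handling the README note describes can be sketched as a tiny standalone helper. This is illustrative only: `prepend_cls` is a hypothetical name, and in real use you would read the token from `nlp.tokenizer.cls_token` rather than hard-coding the string.

```python
# Minimal sketch of the manual <cls> prepending the README calls for.
# The T5 tokenizer does not add this special token on its own, so the
# question string must carry it explicitly for "no answer" prediction.
def prepend_cls(question: str, cls_token: str = "<cls>") -> str:
    """Prepend cls_token unless the question already starts with it."""
    if question.startswith(cls_token):
        return question
    return cls_token + question

print(prepend_cls("Where do I live?"))  # <cls>Where do I live?
```

The guard against double-prepending keeps the helper safe to call on questions that were already formatted.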