# Kiran2004/my_qa_model
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. It was trained on question-answer pairs, including unanswerable questions, for the task of question answering.

It achieves the following results on the evaluation set:
- Train Loss: 3.4107
- Validation Loss: 9.9990
- Epoch: 1

## Usage
### In Transformers

```python
from transformers import pipeline

model_name = "Kiran2004/Roberta_qca_sample"
question_answerer = pipeline("question-answering", model=model_name)

question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."

question_answerer(question=question, context=context)
```
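
The `pipeline` call above can also be written without the helper, loading the tokenizer and model classes directly. The following is a minimal sketch (model name and example inputs taken from the snippet above); it selects the most likely answer span from the start/end logits, which is a simplification of the pipeline's full post-processing:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "Kiran2004/Roberta_qca_sample"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1], skip_special_tokens=True)
print(answer.strip())
```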
### Training hyperparameters