Upload 10 files
- README.md +99 -0
- config.json +24 -0
- gitattributes (1).txt +34 -0
- model.safetensors +3 -0
- pytorch_model.bin +3 -0
- pytorch_model.zip +3 -0
- special_tokens_map.json +7 -0
- tokenizer.json +0 -0
- tokenizer_config.json +14 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,99 @@
---
datasets:
- squad_v2
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: question-answering
tags:
- question-answering
---

# QA-BERT

QA-BERT is a question-answering model: a lighter, DistilBERT-based alternative to full-size BERT question-answering models.
## Dataset

The Stanford Question Answering Dataset (SQuAD) is a widely used benchmark dataset for machine reading comprehension. It consists of over 100,000 question-answer pairs based on a set of Wikipedia articles, and the goal is to train models that can answer questions based on their understanding of the given text passages. SQuAD has played a significant role in advancing the state of the art in this field and remains a popular choice for researchers and practitioners alike.

Due to GPU limitations, this version is trained on `30k samples` from the Stanford Question Answering Dataset.

<details>
<summary><i>Structure of the Data Dictionary</i></summary>
<!--All you need is a blank line-->

```json
{
  "data": [
    {
      "title": "Article Title",
      "paragraphs": [
        {
          "context": "The context text of the paragraph",
          "qas": [
            {
              "question": "The question asked about the context",
              "id": "A unique identifier for the question",
              "answers": [
                {
                  "text": "The answer to the question",
                  "answer_start": "The starting index of the answer in the context"
                }
              ]
            }
          ]
        }
      ]
    }
  ],
  "version": "The version of the SQuAD dataset"
}
```
</details>
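For orientation, here is a minimal sketch of pulling such a subset with the `datasets` library. The exact 30k training split used by the author is not published, so the `select(range(30_000))` slice below is purely illustrative:

```python
from datasets import load_dataset

# Load SQuAD v2 and take a 30k-example slice (illustrative subset only;
# the author's actual training split is not specified beyond "30k samples").
squad = load_dataset("squad_v2", split="train")
subset = squad.select(range(30_000))

sample = subset[0]
print(sample["question"])
print(sample["context"][:200])
print(sample["answers"])  # {"text": [...], "answer_start": [...]}
```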

## Model

BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained transformer-based model for natural language processing tasks such as question answering. It is fine-tuned for question answering by adding a linear layer on top of the pre-trained BERT representations to predict the start and end of the answer in the input context. BERT has achieved state-of-the-art results on multiple benchmark datasets, including SQuAD, and the fine-tuning process allows it to effectively capture the relationships between questions and contexts and extract accurate answer spans.

<img src="https://imgs.search.brave.com/F8m-nwp6EIG5vq--OmJLrCDpIkuX6tEQ_kyFKQjlUTs/rs:fit:1200:1200:1/g:ce/aHR0cHM6Ly9ibG9n/LmdyaWRkeW5hbWlj/cy5jb20vY29udGVu/dC9pbWFnZXMvMjAy/MC8xMC9TbGljZS0x/OC5wbmc">

For more detail, read [Understanding QABERT](https://github.com/SRDdev/AnswerMind).
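To make the start/end prediction concrete, here is a minimal sketch of greedy span decoding with the checkpoint loaded in the Inference section below. Note that `pipeline` performs a more careful search over valid spans, so treat this as an approximation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
model = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")

question = "What predicts the answer span?"
context = "A linear layer on top of BERT predicts the start and end of the answer."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # one start logit and one end logit per token

# Greedy decoding: pick the most likely start and end token positions.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```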
## Inference

_Load model_
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

QAtokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
QAmodel = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")
```

_context_
```text
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question-answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
```

_Build Pipeline_
```python
from transformers import pipeline

ask = pipeline("question-answering", model=QAmodel, tokenizer=QAtokenizer)

# `context` is the passage shown above, assigned to a Python string.
result = ask(question="What is a good example of a question answering dataset?", context=context)

print(f"Answer: '{result['answer']}'")
```

## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

## Citations

```bibtex
@misc{QA-BERT-small,
  author = {Shreyas Dixit},
  year   = {2023},
  url    = {https://huggingface.co/SRDdev/QA-BERT-small}
}
```
config.json
ADDED
@@ -0,0 +1,24 @@
{
  "_name_or_path": "distilbert-base-uncased",
  "activation": "gelu",
  "architectures": [
    "DistilBertForQuestionAnswering"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
  "transformers_version": "4.26.1",
  "vocab_size": 30522
}
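The config confirms the DistilBERT backbone (6 layers, 12 heads, hidden size 768, 30,522-token vocabulary). A quick way to inspect it, assuming the repo id used in the README:

```python
from transformers import AutoConfig

# Fetch and print the key architecture settings from config.json.
config = AutoConfig.from_pretrained("SRDdev/QABERT-small")
print(config.model_type, config.n_layers, config.n_heads, config.dim)
# distilbert 6 12 768
```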
gitattributes (1).txt
ADDED
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6cf53ada81236b3988e49e4c5e41e0cf7add596f5a4c0156f2bc8a16be22d34
size 265470032
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9158014e99ae0a268d4174d495b047e2459d929a5111cc5ac3e03c42c2346050
size 265492517
pytorch_model.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9488dac0960ab9d8f7473e258353baa619925b684c2ab726ce65774d9f1cb12b
size 244804935
special_tokens_map.json
ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,14 @@
{
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "name_or_path": "distilbert-base-uncased",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "special_tokens_map_file": null,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "DistilBertTokenizer",
  "unk_token": "[UNK]"
}
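These settings mirror the standard `distilbert-base-uncased` tokenizer: lowercasing on, a 512-token maximum length, and WordPiece special tokens from special_tokens_map.json. A small sketch of the effect, assuming the repo id from the README:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
ids = tok("Hello World!")["input_ids"]  # do_lower_case=true -> "hello world!"
print(tok.convert_ids_to_tokens(ids))
# ['[CLS]', 'hello', 'world', '!', '[SEP]']
```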
vocab.txt
ADDED
The diff for this file is too large to render.