techysanoj committed on
Commit a7ea906 · 1 Parent(s): c6cab21

Update README.md

Files changed (1)
  1. README.md +17 -81
README.md CHANGED
@@ -9,50 +9,14 @@ license: cc-by-4.0
  ---

  ---
- language: en
- license: cc-by-4.0
- datasets:
- - squad_v2
- model-index:
- - name: Aviskaaram-ekta
-   results:
-   - task:
-       type: Question-Answering
-       name: Question Answering
-     dataset:
-       name: squad_v2
-       type: squad_v2
-       config: squad_v2
-       split: validation
-     metrics:
-     - type: exact_match
-       value: -
-       name: Exact Match
-       verified: true
-       verifyToken: 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
-
-     - type: f1
-       value: -
-       name: F1
-       verified: true
-       verifyToken: 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
-
-     - type: total
-       value: 11869
-       name: total
-       verified: true
-       verifyToken: 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
-
- ---
-
- # Aviskaaram-ekta for QA

- This is the [Aviskaaram-ekta](https://huggingface.co/) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.


  ## Overview
- **Language model:**Aviskaaram-ekta
- **Language:** English
  **Downstream-task:** Extractive QA
  **Training data:** SQuAD 2.0
  **Eval data:** SQuAD 2.0
@@ -73,7 +37,6 @@ doc_stride=128
  max_query_length=64
  ```
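doc_stride and max_query_length are standard extractive-QA preprocessing knobs: the first sets the overlap between context windows when a passage exceeds the model's maximum sequence length, the second caps the tokenized question. A rough, hypothetical illustration of how doc_stride=128 maps onto a transformers tokenizer call (the max_length of 384 below is an assumption, not a value from this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AVISHKAARAM/avishkaarak-ekta-hindi")

question = "What task was the model fine-tuned for?"
context = "The model was fine-tuned on SQuAD 2.0 for extractive question answering. " * 20

# Long contexts are split into overlapping windows; stride=128 mirrors doc_stride=128.
# max_length=384 is an assumed value -- it is not shown in this diff hunk.
encoded = tokenizer(
    question,
    context,
    truncation="only_second",        # never truncate the question
    max_length=384,
    stride=128,
    return_overflowing_tokens=True,  # one sample per window
    return_offsets_mapping=True,     # needed to map answer spans back to characters
)
print(len(encoded["input_ids"]))     # number of windows produced
```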
 
-
  ## Usage

  ### In Haystack
@@ -83,7 +46,7 @@ reader = FARMReader(model_name_or_path="AVISHKAARAM/avishkaarak-ekta-hindi")
  # or
  reader = TransformersReader(model_name_or_path="AVISHKAARAM/avishkaarak-ekta-hindi",tokenizer="deepset/roberta-base-squad2")
  ```
- For a complete example of ``avishkaarak-ekta-hindi`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
 
  ### In Transformers
  ```python
@@ -108,46 +71,19 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
  Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

  ```
- "exact": ,
- "f1": ,
-
- "total": ,
- "HasAns_exact": ,
- "HasAns_f1": ,
- "HasAns_total": ,
- "NoAns_exact": ,
- "NoAns_f1": ,
- "NoAns_total":
  ```

  ## Authors
- **Branden Chan:** branden.chan@deepset.ai
- **Timo Möller:** [email protected]
- **Malte Pietsch:** [email protected]
- **Tanay Soni:** [email protected]
-
- ## About us
-
- <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
- <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
- <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
- </div>
- <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
- <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
- </div>
- </div>
-
- [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
-
-
-
- ## Get in touch and join the Haystack community
-
- <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
-
- We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
-
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
-
- By the way: [we're hiring!](http://www.deepset.ai/jobs)
 
 
 
  ---

  ---
+ # AVISHKAARAM/avishkaarak-ekta-hindi for QA

+ This is the [avishkaarak-ekta-hindi](https://huggingface.co/AVISHKAARAM/avishkaarak-ekta-hindi) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.


  ## Overview
+ **Language model:** avishkaarak-ekta-hindi
+ **Language:** English, Hindi (upcoming)
  **Downstream-task:** Extractive QA
  **Training data:** SQuAD 2.0
  **Eval data:** SQuAD 2.0
 
  max_query_length=64
  ```

  ## Usage

  ### In Haystack
 
  # or
  reader = TransformersReader(model_name_or_path="AVISHKAARAM/avishkaarak-ekta-hindi",tokenizer="deepset/roberta-base-squad2")
  ```
+ For a complete example of ``AVISHKAARAM/avishkaarak-ekta-hindi`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
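Beyond loading the reader, a minimal end-to-end sketch, assuming Haystack 1.x (farm-haystack); the documents and query below are made up for illustration and are not part of the model card:

```python
from haystack import Document
from haystack.nodes import FARMReader

reader = FARMReader(model_name_or_path="AVISHKAARAM/avishkaarak-ekta-hindi")

docs = [
    Document(content="SQuAD 2.0 combines answerable questions with over 50,000 unanswerable ones."),
    Document(content="Extractive QA models return a span of the context as the answer."),
]

# Query the reader directly; for retrieval over many documents, wrap it in an
# ExtractiveQAPipeline with a retriever as shown in the Haystack tutorial linked above.
prediction = reader.predict(query="What does SQuAD 2.0 add?", documents=docs, top_k=2)
for answer in prediction["answers"]:
    print(answer.answer, answer.score)
```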
 
  ### In Transformers
  ```python
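# The body of this code block falls outside the diff hunk and is not shown here.
# As a stand-in, a hedged sketch of the usual transformers loading pattern for
# extractive QA -- not the README's own snippet:
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "AVISHKAARAM/avishkaarak-ekta-hindi"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="Why is model conversion important?",
    context="The option to convert models between frameworks gives freedom to the user.",
)
print(result["answer"], result["score"])
```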
 
  Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

  ```
+ "exact": 79.87029394424324,
+ "f1": 82.91251169582613,
+
+ "total": 11873,
+ "HasAns_exact": 77.93522267206478,
+ "HasAns_f1": 84.02838248389763,
+ "HasAns_total": 5928,
+ "NoAns_exact": 81.79983179142137,
+ "NoAns_f1": 81.79983179142137,
+ "NoAns_total": 5945
  ```
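The figures above come from the official evaluation script linked earlier. For a quick local sanity check, roughly equivalent numbers can be computed with the Hugging Face `evaluate` package, which reports the same keys (exact, f1, HasAns_*, NoAns_*); this is a hedged sketch with toy data, not the script the card used:

```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Toy prediction/reference pair in the SQuAD 2.0 format; a real run would loop
# over the full validation split (11,873 examples, matching "total" above).
predictions = [
    {"id": "example-1", "prediction_text": "France", "no_answer_probability": 0.0}
]
references = [
    {"id": "example-1", "answers": {"text": ["France"], "answer_start": [159]}}
]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])
```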

  ## Authors
+ **Shashwat Bindal:** optimus.coders.@ai

+ **Sanoj:** optimus.coders.@ai