sadrasabouri committed
Commit c31dc2c · 1 Parent(s): 9549646
Update README.md
README.md CHANGED
@@ -33,8 +33,6 @@ data = datasets.load_dataset('SLPL/syntran-fa', split="train")
 - [Dataset Structure](#dataset-structure)
 - [Dataset Creation](#dataset-creation)
 - [Source Data](#source-data)
-- [Annotations](#annotations)
-- [Personal and Sensitive Information](#personal-and-sensitive-information)
 - [Considerations for Using the Data](#considerations-for-using-the-data)
 - [Social Impact of Dataset](#social-impact-of-dataset)
 - [Additional Information](#additional-information)
@@ -90,18 +88,15 @@ Currently, the dataset just provided the `train` split. There would be a `test`
 
 ## Dataset Creation
 
-We extract all short answer (sentences without verbs - up to ~4 words) entries of all open source QA datasets in Farsi and used some rules featuring the question parse tree to make long (fluent) answers.
-
 ### Source Data
 The source datasets that we used are as follows:
 
 + [PersianQA](https://github.com/sajjjadayobi/PersianQA)
 + [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
-+ [PQuAD](https://arxiv.org/abs/2202.06219)
 
 #### Initial Data Collection and Normalization
 
-
+We extracted all short-answer entries (sentences without verbs, up to ~4 words) from the open-source QA datasets in Farsi and used rules based on the question parse tree to turn them into long (fluent) answers.
 
 ### Annotations
 
@@ -117,14 +112,6 @@ The source datasets that we used are as follows:
 
 The dataset is entirely a subset of well-known open-source datasets, so all of the information in it is already available on the internet as open-source data. However, we do not take responsibility for any of it.
 
-## Considerations for Using the Data
-
-[More Information Needed]
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
 ## Additional Information
 
 ### Dataset Curators
@@ -133,7 +120,7 @@ The dataset is gathered together completely in the Asr Gooyesh Pardaz company's
 
 ### Licensing Information
 
-
+MIT
 
 ### Citation Information
 
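For readers of the updated README, the short-answer criterion described in the new "Initial Data Collection and Normalization" paragraph (sentences without verbs, up to ~4 words) can be sketched as follows. The loading line mirrors the quick-start snippet shown in the first hunk header; `is_short_answer` and its `contains_verb` argument are hypothetical stand-ins for the authors' parse-tree rules, not code from this repository.

```python
import datasets  # Hugging Face `datasets` library

# Load the released split, matching the README's quick-start line.
data = datasets.load_dataset('SLPL/syntran-fa', split="train")
print(data)

# Hypothetical sketch of the "short answer" criterion described in the README:
# a sentence with no verb and up to ~4 words. `contains_verb` stands in for
# whatever Farsi POS-based rule the authors actually used to detect verbs.
def is_short_answer(answer: str, contains_verb) -> bool:
    return len(answer.split()) <= 4 and not contains_verb(answer)
```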