sadrasabouri committed on
Commit e31a180
1 Parent(s): 898c16e

Update README.md

Files changed (1)
  1. README.md +10 -28
README.md CHANGED
@@ -31,18 +31,12 @@ data = datasets.load_dataset('SLPL/syntran-fa', split="train")
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
@@ -60,10 +54,7 @@ data = datasets.load_dataset('SLPL/syntran-fa', split="train")
 
 Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been some efforts to enlarge Farsi QA datasets. Syntran-fa is a question-answering dataset that accumulates the short answers of earlier Farsi QA datasets and proposes a complete, fluent answer for each (question, short_answer) pair.
 
- This dataset contains nearly 50,000 indices of questions and answers. The dataset that has been used as our sources were as follows:
- + [PersianQA](https://github.com/sajjjadayobi/PersianQA)
- + [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
- + [PQuAD](https://arxiv.org/abs/2202.06219)
+ This dataset contains nearly 50,000 question-answer pairs. The source datasets are listed in the [Source Data section](#source-data).
 
 The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), where a "parser + syntactic rules" module turns a question and a short answer into different fluent answers. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser: each question is parsed, and a response is generated from its parse tree using the short (1-2 word) answer. One could extend this project by generating different permutations of the sentence's parts (thus providing more than one fluent answer per question) or by training a seq2seq model that reproduces our rule-based system (defining a new text-to-text task).
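To make the parse-and-substitute idea concrete, below is a minimal sketch of one such rule using stanza's Persian pipeline. The rule shown here, replacing the interrogative word (marked `PronType=Int` in Universal Dependencies) with the short answer, is an illustrative assumption, not the exact rule set used to build syntran-fa.

```python
# Minimal sketch: turn (question, short_answer) into a fluent answer by
# swapping the interrogative word for the short answer. Illustrative only;
# the real syntran-fa rules are richer than this single substitution.
import stanza

# One-time model download: stanza.download("fa")
nlp = stanza.Pipeline(lang="fa")  # loads the default Persian processors, including depparse

def fluent_answer(question: str, short_answer: str) -> str:
    doc = nlp(question)
    tokens = []
    for word in doc.sentences[0].words:
        # UD marks interrogative pronouns/adverbs with the feature PronType=Int.
        if word.feats and "PronType=Int" in word.feats:
            tokens.append(short_answer)
        else:
            tokens.append(word.text)
    # Drop the (Persian or Latin) question mark and close the sentence.
    return " ".join(tokens).rstrip(" ؟?") + "."
```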
 
@@ -100,19 +91,16 @@ Currently, the dataset just provided the `train` split. There would be a `test`
 
 ## Dataset Creation
 
- [More Information Needed]
-
- ### Curation Rationale
-
- [More Information Needed]
+ We extracted every short-answer entry (answers of 1-2 words) from the open-source Farsi QA datasets and applied rules over each question's parse tree to produce long (fluent) answers (see the sketch below).
 
 ### Source Data
+ The source datasets we used are as follows:
 
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
+ + [PersianQA](https://github.com/sajjjadayobi/PersianQA)
+ + [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
+ + [PQuAD](https://arxiv.org/abs/2202.06219)
 
+ #### Initial Data Collection and Normalization
+
 [More Information Needed]
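As a concrete illustration of the extraction step above, here is a minimal sketch of the short-answer filter. The SQuAD-style field names (`question`, `answers`, `text`) are assumptions about the source schemas rather than their exact layouts.

```python
# Minimal sketch of the short-answer extraction described above.
def extract_short_answers(source):
    """Yield (question, short_answer) pairs with 1-2 word answers.

    `source` is any iterable of SQuAD-style examples, e.g. a datasets.Dataset.
    """
    for example in source:
        for answer in example["answers"]["text"]:
            if 1 <= len(answer.split()) <= 2:
                yield example["question"], answer
```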
 
@@ -128,19 +116,13 @@ Currently, the dataset just provided the `train` split. There would be a `test`
 
 ### Personal and Sensitive Information
 
- [More Information Needed]
+ The dataset is entirely a subset of well-known open-source datasets, so all of its content is already publicly available on the internet. Nevertheless, we do not take responsibility for any of that content.
 
 ## Considerations for Using the Data
 
- ### Social Impact of Dataset
-
 [More Information Needed]
 
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
+ ### Social Impact of Dataset
 
 [More Information Needed]
@@ -148,7 +130,7 @@ Currently, the dataset just provided the `train` split. There would be a `test`
 
 ### Dataset Curators
 
- [More Information Needed]
+ The dataset was gathered entirely during a summer internship at the Asr Gooyesh Pardaz company, under the supervision of Soroush Gooran and Prof. Hossein Sameti and with the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.
 
 ### Licensing Information
 