---
language:
- fa
license: mit
multilinguality:
- monolingual
size_categories:
- 30k<n<50k
task_categories:
- question-answering
- text2text-generation
- text-generation
task_ids: []
pretty_name: SynTranFa
tags:
- conditional-text-generation
- conversational-question-answering
---
# SynTran-fa
A syntactically transformed version of Farsi QA datasets that generates fluent responses from questions and short answers. You can load this dataset with the code below:
```python
import datasets
data = datasets.load_dataset('SLPL/syntran-fa', split="train")
```
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL)
- **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa)
- **Point of Contact:** [Sadra Sabouri](mailto:[email protected])
- **Paper:** [SynTran-fa: Generating Comprehensive Answers for Farsi QA Pairs via Syntactic Transformation](https://www.preprints.org/manuscript/202410.1684/v1)
### Dataset Summary
Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years, there have been several efforts to enlarge QA datasets in Farsi. SynTran-fa is a question-answering dataset that accumulates the short answers from earlier Farsi QA datasets and proposes a complete, fluent answer for each (question, short_answer) pair.
This dataset contains nearly 50,000 question-answer pairs. The datasets used as our sources are listed in the [Source Data section](#source-data).
The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), which used a "parser + syntactic rules" module to generate different fluent answers from a question and a short answer. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser to parse each question and generate a response from it using the short answers (sentences without verbs, up to ~4 words). One could continue this project by generating different permutations of the sentence's parts (thus providing more than one fluent answer per pair) or by training a seq2seq model that replicates our rule-based system (defining a new text-to-text task).
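To make the rule-based idea concrete, here is a toy sketch (not the project's actual stanza-based pipeline, and the interrogative list is only an illustrative assumption): it substitutes the short answer for the question word and converts the question mark into a period.

```python
# Toy illustration of a "syntactic rule" for fluent-answer generation.
# NOTE: this is NOT the actual stanza-based implementation; the
# interrogative list below is a small illustrative subset.

INTERROGATIVES = {"چه", "چی", "کجا", "کی", "چند"}

def make_fluent(question: str, short_answer: str) -> str:
    """Replace the interrogative word with the short answer."""
    sentence = question.rstrip("؟?").strip()
    words = sentence.split()
    if any(w in INTERROGATIVES for w in words):
        words = [short_answer if w in INTERROGATIVES else w for w in words]
        return " ".join(words) + "."
    # fallback: prepend the short answer to the declarative remainder
    return short_answer + " " + sentence + "."

print(make_fluent("باشگاه هاکی ساوتهمپتون چه نام دارد؟", "باشگاه هاکی ساوتهمپتون"))
# → باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.
```

The real system instead inspects the dependency parse of the question, which lets it handle word order and agreement far more robustly than this string substitution.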
### Supported Tasks and Leaderboards
This dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf).
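A minimal sketch of preparing a row for seq2seq training is shown below. The field names come from this dataset card; the input format (question concatenated with the short answer) is an illustrative choice, not a recipe prescribed by the paper.

```python
# Hedged sketch: turn a SynTran-fa row into a text-to-text training pair.
# The concatenation format is an assumption for illustration only.

def to_seq2seq_example(row: dict) -> dict:
    """Build (input, target) texts for a seq2seq fluent-response model."""
    return {
        "input_text": row["question"] + " " + row["short_answer"],
        "target_text": row["fluent_answer"],
    }

row = {
    "question": "باشگاه هاکی ساوتهمپتون چه نام دارد؟",
    "short_answer": "باشگاه هاکی ساوتهمپتون",
    "fluent_answer": "باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.",
}
example = to_seq2seq_example(row)
```

With the real dataset loaded via `datasets.load_dataset`, this function can be applied to every row with `data.map(to_seq2seq_example)`.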
### Languages
+ Persian (fa)
## Dataset Structure
Each row of the dataset looks like the following:
```json
{
  "id": 0,
  "question": "باشگاه هاکی ساوتهمپتون چه نام دارد؟",
  "short_answer": "باشگاه هاکی ساوتهمپتون",
  "fluent_answer": "باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.",
  "bert_loss": 1.110097069682014
}
```
+ `id` : the entry id in dataset
+ `question` : the question
+ `short_answer` : the short answer corresponding to the `question` (the primary answer)
+ `fluent_answer` : fluent (long) answer generated from both `question` and the `short_answer` (the secondary answer)
+ `bert_loss` : the loss that [pars-bert](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) reports when given the `fluent_answer` as input; the higher the loss, the less fluent the sentence is likely to be.
Note: the dataset is sorted in ascending order of `bert_loss`, so earlier sentences are more likely to be fluent.
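Because rows are sorted by `bert_loss`, a more fluent subset can be obtained by slicing or by filtering on a loss threshold. The sketch below demonstrates this on an in-memory sample (the threshold value is illustrative, not from the paper); with the real dataset you could use `data.select(range(n))` or `data.filter(...)` instead.

```python
# Keep only rows below an (illustrative) fluency-loss threshold.
sample = [
    {"id": 0, "fluent_answer": "...", "bert_loss": 1.11},
    {"id": 1, "fluent_answer": "...", "bert_loss": 2.50},
    {"id": 2, "fluent_answer": "...", "bert_loss": 5.73},
]

THRESHOLD = 3.0  # assumed cutoff for illustration only
fluent_subset = [row for row in sample if row["bert_loss"] < THRESHOLD]
print(len(fluent_subset))  # → 2
```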
### Data Splits
Currently, the dataset provides only the `train` split; a `test` split will be added soon.
## Dataset Creation
### Source Data
The source datasets that we used are as follows:
+ [PersianQA](https://github.com/sajjjadayobi/PersianQA)
+ [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
#### Initial Data Collection and Normalization
We extracted all short-answer entries (sentences without verbs, up to ~4 words) from all open-source Farsi QA datasets and used rules based on each question's parse tree to generate long (fluent) answers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset is entirely a subset of well-known open-source datasets, so all of its information is already publicly available on the internet. Nevertheless, we take no responsibility for its contents.
## Additional Information
### Dataset Curators
The dataset was gathered entirely during the Asr Gooyesh Pardaz company's summer internship, under the supervision of Soroush Gooran and Prof. Hossein Sameti and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.
### Licensing Information
MIT
### Citation Information
```bibtex
@article{farsi2024syntran,
title={SynTran-fa: Generating Comprehensive Answers for Farsi QA Pairs via Syntactic Transformation},
author={Farsi, Farhan and Sabouri, Sadra and Kashfipour, Kian and Gooran, Soroush and Sameti, Hossein and Asgari, Ehsaneddin},
year={2024},
doi={10.20944/preprints202410.1684.v1},
publisher={Preprints}
}
```
### Contributions
Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset. |