Datasets: google_wellformed_query
Sub-tasks: text-scoring
Languages: English
ArXiv: 1808.09419
License: CC BY-SA 4.0

Commit 17e5b01 by system (HF staff)
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,157 @@
+ ---
+ task_categories:
+ - text-scoring
+ multilinguality:
+ - monolingual
+ task_ids:
+ - other
+ languages:
+ - en
+ annotations_creators:
+ - crowdsourced
+ source_datasets:
+ - extended
+ size_categories:
+ - 10K<n<100K
+ licenses:
+ - CC-BY-SA-4-0
+ ---
+
+ # Dataset Card for Google Wellformed Query
+
+ ## Table of Contents
+ - [Dataset Card for Google Wellformed Query](#dataset-card-for-google-wellformed-query)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/google-research-datasets/query-wellformedness)
+ - **Repository:** [GitHub](https://github.com/google-research-datasets/query-wellformedness)
+ - **Paper:** [arXiv](https://arxiv.org/abs/1808.09419)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Google's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters, each giving a 1/0 rating of whether or not the query is well-formed.
+
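+ For orientation, here is a minimal loading sketch using the `datasets` library; it assumes the dataset is available under the name `google_wellformed_query`, the builder name used by the loading script in this repository.
+
+ ```python
+ from datasets import load_dataset
+
+ # Download and prepare the train/validation/test splits.
+ dataset = load_dataset("google_wellformed_query")
+
+ # Each example carries a "content" string and a "rating" float in [0, 1].
+ print({split: ds.num_rows for split, ds in dataset.items()})
+ print(dataset["train"][0])
+ ```
+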
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ - `rating`: a `float` between 0 and 1, the fraction of the five annotators who judged the query to be well-formed
+ - `content`: the query whose well-formedness is being rated
+
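+ The `rating` field can be thresholded to turn the task into binary classification. A minimal sketch, using 0.8 as an illustrative cutoff (the cutoff is a choice made for this example, not something fixed by the dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ train = load_dataset("google_wellformed_query", split="train")
+
+ # Treat a query as well-formed when at least 4 of the 5 annotators rated it 1,
+ # i.e. rating >= 0.8; any other cutoff could be used instead.
+ well_formed = train.filter(lambda example: example["rating"] >= 0.8)
+ print(len(well_formed), "of", len(train), "training queries pass the cutoff")
+ ```
+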
+ ### Data Splits
+
+ |                 | Train | Valid | Test |
+ | --------------- | ----- | ----- | ---- |
+ | Input Sentences | 17500 | 3750  | 3850 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Understanding search queries is a hard problem, as it involves dealing with the “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline can interpret it more accurately, reducing downstream compounding errors. Hence, identifying whether or not a query is well-formed can enhance query understanding. This dataset introduces the new task of identifying well-formed natural language questions.
+
+ ### Source Data
+
+ The dataset draws on the Paralex corpus (Fader et al., 2013), which contains pairs of noisy paraphrase questions. These questions were issued by users on WikiAnswers (a question-answer forum) and consist of both web-search-query-like constructs (“5 parts of chloroplast?”) and well-formed questions (“What is the punishment for grand theft?”).
+
+ #### Initial Data Collection and Normalization
+
+ 25,100 queries were selected from the unique list of queries extracted from the corpus, such that no two queries in the selected set are paraphrases of each other.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ Queries are annotated as well-formed or non-well-formed; a query is considered well-formed only if it satisfies all of the following criteria:
+
+ 1. The query is grammatical.
+ 2. The query is an explicit question.
+ 3. The query does not contain spelling errors.
+
+ #### Who are the annotators?
+
+ Every query was labeled by five different crowdworkers with a binary label indicating whether the query is well-formed or not. The average of the five annotators' ratings is reported as the probability of the query being well-formed.
+
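+ To make the relationship between the five binary judgments and the released `rating` concrete, here is a small sketch; the individual judgments are illustrative values, since only the averaged rating is distributed.
+
+ ```python
+ # Hypothetical labels from five crowdworkers for a single query (1 = well-formed).
+ judgments = [1, 1, 0, 1, 0]
+
+ # The released rating is the mean of the five binary labels, read as the
+ # probability that the query is well-formed.
+ rating = sum(judgments) / len(judgments)
+ print(rating)  # 0.6
+ ```
+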
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The query-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party content or data is provided “As Is” without any warranty, express or implied.
+
+ ### Citation Information
+
+ ```
+ @InProceedings{FaruquiDas2018,
+   title = {{Identifying Well-formed Natural Language Questions}},
+   author = {Faruqui, Manaal and Das, Dipanjan},
+   booktitle = {Proc. of EMNLP},
+   year = {2018}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Google's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters each with 1/0 rating of whether or not the query is well-formed.\n", "citation": "@misc{faruqui2018identifying,\n title={Identifying Well-formed Natural Language Questions},\n author={Manaal Faruqui and Dipanjan Das},\n year={2018},\n eprint={1808.09419},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/google-research-datasets/query-wellformedness", "license": "", "features": {"rating": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "google_wellformed_query", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 857391, "num_examples": 17500, "dataset_name": "google_wellformed_query"}, "test": {"name": "test", "num_bytes": 189503, "num_examples": 3850, "dataset_name": "google_wellformed_query"}, "validation": {"name": "validation", "num_bytes": 184110, "num_examples": 3750, "dataset_name": "google_wellformed_query"}}, "download_checksums": {"https://raw.githubusercontent.com/google-research-datasets/query-wellformedness/master/train.tsv": {"num_bytes": 805818, "checksum": "d857d11fed665bd6daeaf68bc5bbcf81c0cccfa21d485f4f8be9a169db526b6b"}, "https://raw.githubusercontent.com/google-research-datasets/query-wellformedness/master/test.tsv": {"num_bytes": 178070, "checksum": "978574b96a37587845fd25e1cabc992a2d6f1bcab57750d1d04b60e2757a0ba2"}, "https://raw.githubusercontent.com/google-research-datasets/query-wellformedness/master/dev.tsv": {"num_bytes": 173131, "checksum": "a9e869b9c66fa43887f9a6da41b5928ab0296439990930b0ff0becfd0842193d"}}, "download_size": 1157019, "post_processing_size": null, "dataset_size": 1231004, "size_in_bytes": 2388023}}
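The `download_checksums` recorded above let the `datasets` library verify the downloaded TSV files. As a minimal sketch, one of those checks can be reproduced by hand (URL and expected digest copied from the JSON above):

```python
import hashlib
import urllib.request

URL = "https://raw.githubusercontent.com/google-research-datasets/query-wellformedness/master/train.tsv"
EXPECTED = "d857d11fed665bd6daeaf68bc5bbcf81c0cccfa21d485f4f8be9a169db526b6b"

# Download train.tsv and compare its SHA-256 digest with the recorded checksum.
data = urllib.request.urlopen(URL).read()
print(hashlib.sha256(data).hexdigest() == EXPECTED)
```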
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb4799c784ada192e575c585a63c2af963bcc734af89f0b1c77ad69576283449
+ size 950
google_wellformed_query.py ADDED
@@ -0,0 +1,87 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Google Wellformed Query Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{faruqui2018identifying,
+       title={Identifying Well-formed Natural Language Questions},
+       author={Manaal Faruqui and Dipanjan Das},
+       year={2018},
+       eprint={1808.09419},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Google's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters each with 1/0 rating of whether or not the query is well-formed.
+ """
+
+ _URL = "https://raw.githubusercontent.com/google-research-datasets/query-wellformedness/master/{}.tsv"
+
+
+ class GoogleWellformedQuery(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features({"rating": datasets.Value("float"), "content": datasets.Value("string")}),
+             supervised_keys=None,
+             homepage="https://github.com/google-research-datasets/query-wellformedness",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         tr_file = dl_manager.download_and_extract(_URL.format("train"))
+         tst_file = dl_manager.download_and_extract(_URL.format("test"))
+         dev_file = dl_manager.download_and_extract(_URL.format("dev"))
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": tr_file,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": tst_file,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": dev_file,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """ Yields examples. """
+         with open(filepath, "r", encoding="utf-8") as file:
+             reader = file.read().split("\n")
+             for idx, row in enumerate(reader):
+                 # Each TSV row is "<query>\t<rating>"; empty or trailing lines
+                 # split into a single field and are skipped.
+                 row = row.split("\t")
+                 if len(row) == 1:
+                     continue
+                 yield idx, {"rating": row[1], "content": row[0]}