pt-sk committed on
Commit da28191
1 parent: db29652

Upload README.md

Files changed (1)
  1. README.md +177 -3

README.md CHANGED
@@ -1,3 +1,177 @@
- ---
- license: mit
- ---
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- text-generation
task_ids:
- sentiment-classification
paperswithcode_id: imdb-movie-reviews
pretty_name: IMDB
dataset_info:
  config_name: plain_text
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': neg
          '1': pos
  splits:
  - name: train
    num_bytes: 33432823
    num_examples: 25000
  - name: test
    num_bytes: 32650685
    num_examples: 25000
  - name: unsupervised
    num_bytes: 67106794
    num_examples: 50000
  download_size: 83446840
  dataset_size: 133190302
configs:
- config_name: plain_text
  data_files:
  - split: train
    path: plain_text/train-*
  - split: test
    path: plain_text/test-*
  - split: unsupervised
    path: plain_text/unsupervised-*
  default: true
train-eval-index:
- config: plain_text
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
---

# Dataset Card for "imdb"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 84.13 MB
- **Size of the generated dataset:** 133.23 MB
- **Total amount of disk used:** 217.35 MB

### Dataset Summary

Large Movie Review Dataset.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 84.13 MB
- **Size of the generated dataset:** 133.23 MB
- **Total amount of disk used:** 217.35 MB

An example of 'train' looks as follows.
```
{
    "label": 0,
    "text": "Goodbye world2\n"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).

### Data Splits

| name       | train | unsupervised | test  |
|------------|------:|-------------:|------:|
| plain_text | 25000 |        50000 | 25000 |
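The sizes in the table above can be sanity-checked without downloading anything; a plain-Python recap of the per-split `num_examples` reported in the metadata:

```python
# Split sizes as reported by the card's metadata (num_examples per split).
SPLIT_SIZES = {"train": 25_000, "unsupervised": 50_000, "test": 25_000}

labeled = SPLIT_SIZES["train"] + SPLIT_SIZES["test"]  # train/test are labeled
total = sum(SPLIT_SIZES.values())                     # all three splits
print(labeled, total)  # -> 50000 100000
```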