Datasets:
Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: English
Tags: pii-detection
License:

Update README.md

README.md CHANGED
@@ -56,16 +56,14 @@ train-eval-index:
 
 ## Dataset Description
 
-- **Homepage:** [https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy](https://
+- **Homepage:** [https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy)
 - **Size of the generated dataset:** 146.4 MB
 
 ### Dataset Summary
-A synthetic dataset generated using Privy, a tool which parses OpenAPI specifications and generates synthetic request payloads, searching for keywords in API schema definitions to select appropriate data providers. Generated API payloads are converted to various protocol trace formats like JSON and SQL to approximate the data developers might encounter while debugging applications.
+A synthetic dataset generated using [Privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy), a tool which parses OpenAPI specifications and generates synthetic request payloads, searching for keywords in API schema definitions to select appropriate data providers. Generated API payloads are converted to various protocol trace formats like JSON and SQL to approximate the data developers might encounter while debugging applications.
 
 This labelled PII dataset consists of protocol traces (JSON, SQL (PostgreSQL, MySQL), HTML, and XML) generated from OpenAPI specifications and includes 60+ PII types.
 
-The uploading dataset using ___(IOB?)__ tagging scheme
-
 An IOB tagged version of this dataset is available [](here)
 
 ### Supported Tasks and Leaderboards
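For readers unfamiliar with the IOB scheme mentioned in the summary above, the snippet below shows what one tagged record might look like. It is a minimal, hypothetical sketch: the tokenization and the label names (`PERSON`, `EMAIL`) are illustrative assumptions, not necessarily the exact tag set used in the IOB version of this dataset.

```python
# Hypothetical IOB-tagged fragment of a JSON-style protocol trace. The label
# names (PERSON, EMAIL) are illustrative assumptions, not the dataset's exact tag set.
tokens = ["{", '"full_name"', ":", '"', "John", "Smith", '"', ",",
          '"email"', ":", '"jsmith@example.com"', "}"]
tags = ["O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O",
        "O", "O", "B-EMAIL", "O"]

# B- marks the first token of a PII span, I- marks a continuation, O marks non-PII.
for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```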
@@ -98,12 +96,6 @@ A sample looks like this:
 }
 ```
 
-### Data Splits
-
-| name |train|validation|test|
-|---------|----:|---------:|---:|
-|conll2003|14041| 3250|3453|
-
 ## Dataset Creation
 
 ### Curation Rationale
@@ -150,44 +142,20 @@ A sample looks like this:
 
 ## Additional Information
 
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 ### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Licensing Information
 
-
-
-> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
-
-The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
-
-> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
->
-> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
->
-> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
->
-> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
->
-> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Citation Information
 
-
-@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
-    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
-    author = "Tjong Kim Sang, Erik F. and
-      De Meulder, Fien",
-    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
-    year = "2003",
-    url = "https://www.aclweb.org/anthology/W03-0419",
-    pages = "142--147",
-}
-
-```
-
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Contributions
 
-
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
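The generation approach described in the Dataset Summary (scanning OpenAPI schema property names for keywords and picking matching data providers) can be pictured with the rough sketch below. It uses the `Faker` library and a hypothetical `synthesize_payload` helper; the keywords and provider choices are assumptions for illustration, not Privy's actual implementation.

```python
# Rough sketch of keyword-driven provider selection, in the spirit of the approach
# described in the Dataset Summary. NOT Privy's actual code; the keyword list,
# provider choices, and synthesize_payload helper are illustrative assumptions.
from faker import Faker

fake = Faker()

# Map keywords that might appear in OpenAPI schema property names to data providers.
KEYWORD_PROVIDERS = {
    "name": fake.name,
    "email": fake.email,
    "phone": fake.phone_number,
    "address": fake.address,
    "ip": fake.ipv4,
}

def synthesize_payload(schema_properties):
    """Build a synthetic request payload for a list of OpenAPI property names."""
    payload = {}
    for prop in schema_properties:
        provider = next(
            (gen for keyword, gen in KEYWORD_PROVIDERS.items() if keyword in prop.lower()),
            lambda: "placeholder",  # fall back to a non-PII filler value
        )
        payload[prop] = provider()
    return payload

# Example: property names taken from a hypothetical OpenAPI schema definition.
print(synthesize_payload(["customer_name", "contact_email", "shipping_address"]))
```

A payload like this would then be serialized into one of the protocol trace formats listed above (JSON, SQL, HTML, XML) and labelled with the PII spans it contains.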
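Finally, a minimal sketch of loading and inspecting the published dataset with the Hugging Face `datasets` library. The repository id is a placeholder (the card text above does not state it), and the presence of a `train` split is an assumption.

```python
# Minimal sketch of loading this dataset with the Hugging Face `datasets` library.
# "<namespace>/<dataset-name>" is a placeholder for the actual Hub repository id,
# which is not stated in the card text above.
from datasets import load_dataset

dataset = load_dataset("<namespace>/<dataset-name>")

# Print the split layout and one raw record; assumes a "train" split exists.
print(dataset)
print(dataset["train"][0])
```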