Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: English
Tags: pii-detection

Update README.md

README.md CHANGED
@@ -28,7 +28,7 @@ train-eval-index:
       name: seqeval
 ---
 
-# Dataset Card for "
+# Dataset Card for "privy-small-en"
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
@@ -56,13 +56,8 @@ train-eval-index:
 
 ## Dataset Description
 
-- **Homepage:** [https://
-- **
-- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 4.63 MB
-- **Size of the generated dataset:** 9.78 MB
-- **Total amount of disk used:** 14.41 MB
+- **Homepage:** [https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy)
+- **Size of the generated dataset:** 146.4 MB
 
 ### Dataset Summary
 A synthetic dataset generated using Privy, a tool which parses OpenAPI specifications and generates synthetic request payloads, searching for keywords in API schema definitions to select appropriate data providers. Generated API payloads are converted to various protocol trace formats like JSON and SQL to approximate the data developers might encounter while debugging applications.
@@ -71,70 +66,38 @@ This labelled PII dataset consists of protocol traces (JSON, SQL (PostgreSQL, My
 
 The uploaded dataset uses the ___(IOB?)___ tagging scheme
 
+An IOB-tagged version of this dataset is available [here]()
+
 ### Supported Tasks and Leaderboards
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Languages
 
-
+English
 
 ## Dataset Structure
 
 ### Data Instances
 
-
-
-- **Size of downloaded dataset files:** 4.63 MB
-- **Size of the generated dataset:** 9.78 MB
-- **Total amount of disk used:** 14.41 MB
-
-An example of 'train' looks as follows.
-
+A sample looks like this:
 ```
 {
-    "
-
-
-
-
+    "full_text": "{\"full_name_female\": \"Bethany Williams\", \"NewServerCertificateName\": \"\", \"NewPath\": \"\", \"ServerCertificateName\": \"dCwMNqR\", \"Action\": \"\", \"Version\": \"u zNS zNS\"}",
+    "masked": "{\"full_name_female\": \"{{name_female}}\", \"NewServerCertificateName\": \"{{string}}\", \"NewPath\": \"{{string}}\", \"ServerCertificateName\": \"{{string}}\", \"Action\": \"{{string}}\", \"Version\": \"{{string}}\"}",
+    "spans": [
+        {
+            "entity_type": "PERSON",
+            "entity_value": "Bethany Williams",
+            "start_position": 22,
+            "end_position": 38
+        }
+    ],
+    "template_id": 51889,
+    "metadata": null
 }
 ```
 
-The original data files have `-DOCSTART-` lines used to separate documents, but these lines are removed here.
-Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.
-
-### Data Fields
-
-The data fields are the same among all splits.
-
-#### conll2003
-- `id`: a `string` feature.
-- `tokens`: a `list` of `string` features.
-- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
-
-```python
-{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
-'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
-'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
-'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
-'WP': 44, 'WP$': 45, 'WRB': 46}
-```
-
-- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
-
-```python
-{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
-'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
-'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
-```
-
-- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
-
-```python
-{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
-```
-
 ### Data Splits
 
 | name |train|validation|test|
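Since the card's usage sections are still stubs, a minimal loading sketch may help; the repo id below is a placeholder (this page does not show the full `user/dataset` id), so substitute the real one:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub id for this dataset.
ds = load_dataset("privy-small-en", split="train")

sample = ds[0]
print(sample["full_text"])    # raw protocol trace, e.g. a JSON payload
for span in sample["spans"]:  # character-offset PII annotations
    print(span["entity_type"], span["entity_value"],
          span["start_position"], span["end_position"])
```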
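The diff adds a note that an IOB-tagged version exists but leaves the link empty. To make the relationship between the two formats concrete, here is a rough sketch, not the converter behind that variant: it whitespace-tokenizes `full_text` and projects the character-level spans onto tokens as IOB tags:

```python
def spans_to_iob(full_text: str, spans: list[dict]) -> tuple[list[str], list[str]]:
    """Project character-level PII spans onto whitespace tokens as IOB tags.

    Rough sketch: real protocol traces (JSON, SQL, ...) would need a
    protocol-aware tokenizer rather than str.split().
    """
    tokens, tags, offset = [], [], 0
    for token in full_text.split():
        start = full_text.index(token, offset)  # char offset of this token
        end = start + len(token)
        offset = end
        tag = "O"
        for span in spans:
            s, e = span["start_position"], span["end_position"]
            if start < e and end > s:  # token overlaps this span
                tag = ("B-" if start <= s else "I-") + span["entity_type"]
                break
        tokens.append(token)
        tags.append(tag)
    return tokens, tags
```

On the sample above this yields `B-PERSON` for `"Bethany` and `I-PERSON` for `Williams",`; punctuation clinging to tokens is exactly why a separately prepared IOB-tagged variant is worth linking.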
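Finally, the `masked` field makes the generation mechanism described in the summary visible: each `{{...}}` placeholder names the data provider that filled the slot. The sketch below re-creates that idea only; the provider mapping, the fixed `PERSON` label, and the use of the `faker` package are assumptions, and Privy's actual pipeline lives in the repository linked under Homepage:

```python
import re

from faker import Faker  # assumed stand-in for Privy's data providers

fake = Faker()

# Hypothetical placeholder -> (provider, PII label) map for the sample template.
PROVIDERS = {
    "name_female": (fake.name_female, "PERSON"),
    "string": (lambda: fake.pystr(max_chars=8), None),  # filler, not PII
}

def fill_template(masked: str) -> tuple[str, list[dict]]:
    """Fill {{placeholder}} slots, recording character spans of PII values."""
    parts, spans, cursor = [], [], 0
    for m in re.finditer(r"\{\{(\w+)\}\}", masked):
        parts.append(masked[cursor:m.start()])
        provider, label = PROVIDERS[m.group(1)]
        value = provider()
        start = sum(len(p) for p in parts)  # offset where the value lands
        parts.append(value)
        if label is not None:
            spans.append({"entity_type": label, "entity_value": value,
                          "start_position": start,
                          "end_position": start + len(value)})
        cursor = m.end()
    parts.append(masked[cursor:])
    return "".join(parts), spans
```

Applied to the sample's `masked` string, this returns a `full_text`-shaped payload together with `spans` whose offsets point at the inserted PII, matching the record structure shown in the diff.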