license: mit
task_categories:
- token-classification
language:
- he
tags:
- privacy
- token-classification
- security
- text-masking
- hebrew
- pii-detection
- ner
pretty_name: GolemGuard
size_categories:
- 100K<n<1M
GolemGuard: Hebrew Privacy Information Detection Corpus
GolemGuard is a comprehensive Hebrew-language dataset designed for training and evaluating models for Personally Identifiable Information (PII) detection and masking. The dataset contains ~600MB of synthetic text representing document types and communication formats commonly found in Israeli professional and administrative contexts.
Source Data
Initial Data Collection and Normalization
The dataset combines synthetic data generated from multiple authoritative sources:
Names:
- Israeli government open data portal (data.gov.il) for first names
- Hebrew Wikipedia category for family names
- Faker library
- Additional curated sources
Addresses:
- Israeli government datasets for cities
- Israeli government street names datasets
Synthetic Identifiers:
- ID numbers: Generated following the Israeli ID number format and checksum rules (see the sketch after this list)
- Bank accounts: Generated following Israeli bank account number formats
- Postal codes: Based on valid Israeli postal code ranges
- Credit cards: Generated following valid card number algorithms
- Email addresses: Constructed using collected names and top Israeli email providers
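For illustration, the sketch below shows the standard Luhn-style checksum used by Israeli ID numbers (nine digits, alternating 1/2 weights, digit-sum reduction, total divisible by 10); credit card numbers follow the analogous Luhn algorithm. This is a minimal sketch of the format rules, not the dataset's actual generation code, and the function names are illustrative.

import random

def israeli_id_check_digit(first_eight: str) -> int:
    # Weight digits alternately by 1 and 2; two-digit products contribute the sum of their digits.
    total = 0
    for i, ch in enumerate(first_eight):
        prod = int(ch) * (1 if i % 2 == 0 else 2)
        total += prod if prod < 10 else prod - 9
    return (10 - total % 10) % 10

def random_israeli_id() -> str:
    # Eight random digits plus the computed check digit form a format-valid 9-digit ID.
    body = f"{random.randint(0, 99999999):08d}"
    return body + str(israeli_id_check_digit(body))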
Entity Types:
PII Entity Type | Description | Count |
---|---|---|
FIRST_NAME | First names | 131,192 |
LAST_NAME | Last names | 115,020 |
ID_NUM | Israeli ID numbers | 77,902 |
PHONE_NUM | Phone numbers (both mobile and landlines) | 142,795 |
DATE | Dates | 77,042 |
STREET | Street addresses | 105,381 |
CITY | City names | 105,874 |
EMAIL | Email addresses | 90,080 |
POSTAL_CODE | Israeli postal codes (Mikud) | 93,748 |
BANK_ACCOUNT_NUM | Bank account numbers | 31,046 |
CC_NUM | Credit card numbers | 1,751 |
CC_PROVIDER | Credit card providers | 1,679 |
Template Diversity
The dataset includes 2,607 unique document templates. Here are the top document types by instance count:
Template Type | Instance Count |
---|---|
meeting_summary | 21,983 |
appointment_confirmation | 4,420 |
job_application_confirmation | 3,611 |
job_application | 3,306 |
medical_appointment_confirmation | 2,359 |
loan_application_received | 1,988 |
job_application_received | 1,832 |
bank_account_update | 1,609 |
appointment_reminder | 1,567 |
medical_appointment_reminder | 1,335 |
Dataset Size
- Total Size: ~600MB
- Number of Examples: 115,453
- Format: JSONL
Data Splits
Split | Number of Instances |
---|---|
Training | 97,453 |
Test | 18,000 |
Total | 115,453 |
Languages
- Hebrew (he)
- Locale: Israel (IL)
Supported Tasks
- Token Classification
- Named Entity Recognition (NER)
- PII Detection
- Text Masking
- Privacy-Preserving Text Processing
Dataset Structure
Data Instances
Each instance in the dataset contains:
{
"id": "String", // Unique identifier
"source_text": "String", // Original text with PII
"masked_text": "String", // Text with PII entities masked
"locale": "String", // Always "IL"
"language": "String", // Always "he"
"split": "String", // "train" or "test"
"privacy_mask": [ // List of PII entities with positions
{
"label": "String", // Entity type
"start": "Integer", // Start position
"end": "Integer", // End position
"value": "String", // Original value
"label_index": "Integer" // Index for multiple entities of same type
}
],
"span_labels": "List", // Entity spans in [start, end, label] format
"tokens": "List", // Tokenized text
"token_classes": "List", // Token-level BIO tags
"input_ids": "List", // Model input token IDs
"attention_mask": "List", // Attention mask for padding
"offset_mapping": "List", // Character offsets for tokens
"template_type": "String" // Document type/template
}
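As a minimal sketch (field names taken from the schema above, assuming non-overlapping spans), the masked_text field can be reproduced from source_text and privacy_mask as follows:

def mask_record(record):
    # Rebuild the masked text by replacing each PII span with its [LABEL_index] placeholder.
    text = record["source_text"]
    # Replace spans right-to-left so earlier character offsets stay valid.
    for ent in sorted(record["privacy_mask"], key=lambda e: e["start"], reverse=True):
        placeholder = f"[{ent['label']}_{ent['label_index']}]"
        text = text[:ent["start"]] + placeholder + text[ent["end"]:]
    return text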
Sample
{"id": "entry_000256", "source_text": "ืืฉืชืชืฃ: ืืืื ืฉื ืืื\nืชืืจืื ืืืื: 28.08.97\nืืชืืืช: ืกืืจืื ืืื 158, ืงืจืืืช ืื, 2676389\nืืืืืื: [email protected]\nืืืคืื: +972-58-2892208\n\nืกืืืื ืืคืืืฉื: ืืคืืืฉื ืื ื ืืื ื ืืฉืืืืช ืฉืืคืืจ ืืชืงืฉืืจืช ืืคื ืืืืช ืืื ืืฆืืืชืื. ืืืืื ืืืฆืข ืกืื ืืืช ืืืืฉืจื ื ืืกืคืช ืืฉืืืข ืืื.", "masked_text": "ืืฉืชืชืฃ: [FIRST_NAME_1] [LAST_NAME_1]\nืชืืจืื ืืืื: [DATE_1]\nืืชืืืช: [STREET_1], [CITY_1], [POSTAL_CODE_1]\nืืืืืื: [EMAIL_1]\nืืืคืื: [PHONE_NUM_1]\n\nืกืืืื ืืคืืืฉื: ืืคืืืฉื ืื ื ืืื ื ืืฉืืืืช ืฉืืคืืจ ืืชืงืฉืืจืช ืืคื ืืืืช ืืื ืืฆืืืชืื. ืืืืื ืืืฆืข ืกืื ืืืช ืืืืฉืจื ื ืืกืคืช ืืฉืืืข ืืื.", "locale": "IL", "language": "he", "split": "train", "privacy_mask": [{"label": "FIRST_NAME", "start": 7, "end": 11, "value": "ืืืื", "label_index": 1}, {"label": "LAST_NAME", "start": 12, "end": 18, "value": "ืฉื ืืื", "label_index": 1}, {"label": "DATE", "start": 31, "end": 39, "value": "28.08.97", "label_index": 1}, {"label": "STREET", "start": 47, "end": 60, "value": "ืกืืจืื ืืื 158", "label_index": 1}, {"label": "CITY", "start": 62, "end": 70, "value": "ืงืจืืืช ืื", "label_index": 1}, {"label": "POSTAL_CODE", "start": 72, "end": 79, "value": "2676389", "label_index": 1}, {"label": "EMAIL", "start": 88, "end": 111, "value": "[email protected]", "label_index": 1}, {"label": "PHONE_NUM", "start": 119, "end": 134, "value": "+972-58-2892208", "label_index": 1}], "span_labels": [[7, 11, "FIRST_NAME"], [12, 18, "LAST_NAME"], [31, 39, "DATE"], [47, 60, "STREET"], [62, 70, "CITY"], [72, 79, "POSTAL_CODE"], [88, 111, "EMAIL"], [119, 134, "PHONE_NUM"]], "tokens": ["<s>", "โื", "ืฉืชืชืฃ", ":", "โืื", "ืื", "โืฉื", "โืืื", "โ", "ืชืืจืื", "โ", "ืืืื", ":", "โ28.", "08.", "97", "โืืชืืืช", ":", "โ", "ืกืืจ", "ืื", "โืืื", "โ158", ",", "โืงืจื", "ืืช", "โ", "ืื", ",", "โ26", "76", "389", "โืืืืืื", ":", "โsa", "git", "ben", "dor", "760", "@", "live", ".", "com", "โืืืคืื", ":", "โ+", "97", "2-", "58", "-28", "92", "208", "โ", "ืกืืืื", "โืืค", "ืืืฉื", ":", "โืืค", "ืืืฉื", "โืื", "โ", "ื ื", "ืื ื", "โ", "ืืฉืืืืช", "โ", "ืฉืืคืืจ", "โืืชืงืฉืืจืช", "โืืคื ืืื", "ืช", "โืืื", "โืืฆืืืช", "ืื", ".", "โื", "ืื", "ืื", "โืืืฆืข", "โ", "ืกืื ืืืช", "โื", "ืืืฉืจื", "โื ืืกืคืช", "โืืฉืืืข", "โืืื", ".", "</s>"], "token_classes": ["O", "O", "O", "O", "B-FIRST_NAME", "I-FIRST_NAME", "B-LAST_NAME", "I-LAST_NAME", "O", "O", "O", "O", "O", "B-DATE", "I-DATE", "I-DATE", "O", "O", "B-STREET", "I-STREET", "I-STREET", "I-STREET", "I-STREET", "O", "B-CITY", "I-CITY", "I-CITY", "I-CITY", "O", "B-POSTAL_CODE", "I-POSTAL_CODE", "I-POSTAL_CODE", "O", "O", "B-EMAIL", "I-EMAIL", "I-EMAIL", "I-EMAIL", "I-EMAIL", "I-EMAIL", "I-EMAIL", "I-EMAIL", "I-EMAIL", "O", "O", "B-PHONE_NUM", "I-PHONE_NUM", "I-PHONE_NUM", "I-PHONE_NUM", "I-PHONE_NUM", "I-PHONE_NUM", "I-PHONE_NUM", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"], "input_ids": [0, 874, 177834, 12, 18133, 22849, 16804, 19767, 6, 174081, 6, 119583, 12, 12511, 17331, 14773, 86185, 12, 6, 214797, 18340, 67512, 78373, 4, 46050, 2754, 6, 448, 4, 1381, 11835, 119861, 194921, 12, 57, 15769, 776, 1846, 110216, 981, 24056, 5, 277, 167153, 12, 997, 14773, 18504, 10057, 48590, 12231, 154782, 6, 186708, 18338, 58366, 12, 65412, 58366, 8248, 6, 16747, 15703, 6, 148055, 6, 154997, 185983, 214547, 609, 7501, 192819, 448, 5, 364, 15662, 21295, 107543, 6, 199930, 657, 226909, 122485, 132299, 54641, 5, 2], 
"attention_mask": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], "offset_mapping": [[0, 0], [0, 1], [1, 5], [5, 6], [7, 9], [9, 11], [12, 14], [15, 18], [19, 20], [19, 24], [25, 26], [25, 29], [29, 30], [31, 34], [34, 37], [37, 39], [40, 45], [45, 46], [47, 48], [47, 50], [50, 52], [53, 56], [57, 60], [60, 61], [62, 65], [65, 67], [68, 69], [68, 70], [70, 71], [72, 74], [74, 76], [76, 79], [80, 86], [86, 87], [88, 90], [90, 93], [93, 96], [96, 99], [99, 102], [102, 103], [103, 107], [107, 108], [108, 111], [112, 117], [117, 118], [119, 120], [120, 122], [122, 124], [124, 126], [126, 129], [129, 131], [131, 134], [136, 137], [136, 141], [142, 144], [144, 148], [148, 149], [150, 152], [152, 156], [157, 159], [160, 161], [160, 162], [162, 165], [166, 167], [166, 172], [173, 174], [173, 178], [179, 186], [187, 193], [193, 194], [195, 198], [199, 204], [204, 206], [206, 207], [208, 209], [209, 211], [211, 213], [214, 218], [219, 220], [219, 225], [226, 227], [227, 232], [233, 238], [239, 244], [245, 248], [248, 249], [0, 0]], "template_type": "meeting_summary"}
Considerations for Using the Data
Social Impact of Dataset
The dataset aims to improve privacy protection in Hebrew text processing by enabling better PII detection and masking. This has important applications in:
Application Area | Description |
---|---|
Regulatory Compliance | Support for GDPR and Israeli Protection of Privacy Law requirements |
Document Processing | Privacy-preserving text analysis and storage |
Information Security | Automated PII detection and protection |
Data Loss Prevention | Real-time PII identification and masking |
Discussion of Biases
Geographic Bias
- Dataset focuses on Israeli context and formats
Name Distribution
- While effort was made to include diverse names, the distribution may not perfectly match population demographics
Additional Information
Dataset Curators
The dataset was generated by Liran Baba.
Model Training
The dataset was created to train the GolemPII-xlm-roberta-v1 model by Liran Baba (CordwainerSmith), demonstrating its effectiveness for PII detection and masking in Hebrew text.
Recommended Uses
Use Case | Description |
---|---|
Privacy Protection | Identifying and masking PII in Hebrew documents |
Compliance Checking | Automated PII detection for regulatory compliance |
Data Sanitization | Cleaning sensitive information from text data |
Information Security | Supporting data loss prevention systems |
Quick Start
from datasets import load_dataset
# Load dataset
dataset = load_dataset("GolemGuard")
# Example usage
sample = dataset['train'][0]
print(f"Original text: {sample['source_text']}")
print(f"Masked text: {sample['masked_text']}")
Versioning
This is the initial release of the dataset. Future versions may include:
- Additional template types
- Expanded entity coverage
- Enhanced demographic representation
- Additional language variants
License
The GolemGuard dataset is released under the MIT License, with the following additional terms:
MIT License
Copyright (c) 2024 Liran Baba
Permission is hereby granted, free of charge, to any person obtaining a copy
of this dataset and associated documentation files (the "Dataset"), to deal
in the Dataset without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Dataset, and to permit persons to whom the Dataset is
furnished to do so, subject to the following conditions:
1. The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Dataset.
2. Any academic or professional work that uses this Dataset must include an
appropriate citation as specified below.
THE DATASET IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE
DATASET.
How to Cite
If you use this dataset in your research, project, or application, please include the following citation:
For informal usage (e.g., blog posts, documentation):
GolemGuard Dataset by Liran Baba (https://huggingface.co/datasets/CordwainerSmith/GolemGuard)
For academic or professional publications:
Baba, L. (2024). GolemGuard: A Professional Hebrew PII Detection Dataset.
Retrieved from https://huggingface.co/datasets/CordwainerSmith/GolemGuard
Related model: GolemPII-xlm-roberta-v1 (https://huggingface.co/CordwainerSmith/GolemPII-xlm-roberta-v1)
Usage Examples
When referencing in your code:
"""
This code uses the GolemGuard dataset by Liran Baba
(https://huggingface.co/datasets/CordwainerSmith/GolemGuard)
"""
from datasets import load_dataset
dataset = load_dataset("GolemGuard")
When referencing in your model card:
dataset_info:
  - name: GolemGuard
    author: Liran Baba
    url: https://huggingface.co/datasets/CordwainerSmith/GolemGuard
    year: 2024