---
license: cc0-1.0
---

# Wikidata 2018-12-17 JSON Dump
This repository hosts a 2018 snapshot of the Wikidata JSON dump. The file was obtained from [Zenodo (Record #4436356)](https://zenodo.org/record/4436356).
## Dataset Description
- **Source**: [Wikidata](https://www.wikidata.org/), a free and open knowledge base that can be read and edited by both humans and machines.
- **Date of Dump**: 2018-12-17
- **Size**: ~ (size in GB)
- **File Format**: `.json.gz` (gzipped JSON)
  - The file contains a single top-level JSON array, with each element representing one Wikidata entity (a quick format check is sketched just after this list).
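
If you want to verify this layout before running a full parse, a minimal check (assuming a local copy named `20181217.json.gz`) is to read the first few decompressed characters, which should begin with `[`, the opening bracket of the top-level array.

```python
import gzip

# Illustrative path; point this at your local copy of the dump.
dump_path = "20181217.json.gz"

# Read a small decompressed prefix without unpacking the whole file;
# it should start with '[' followed by the first entity object.
with gzip.open(dump_path, "rt", encoding="utf-8") as f:
    print(f.read(200))
```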
### License
Wikidata’s data is published under the [Creative Commons CC0 1.0 Universal Public Domain Dedication (CC0)](https://creativecommons.org/publicdomain/zero/1.0/). You can use this dataset freely for any purpose without copyright restriction. However, attribution to [Wikidata](https://www.wikidata.org/) is strongly encouraged as a best practice.
**Important**: Some associated media, such as images referenced within Wikidata items, may be under different licenses. The JSON data itself is CC0.
## How to Cite
If you use this dataset in your work, please cite:
- **Wikidata**:
```
Wikidata contributors. (2018). Wikidata (CC0 1.0 Universal).
Retrieved from https://www.wikidata.org/
```
- **Original Zenodo Record** (optional):
```
Wikidata JSON dumps. Zenodo.
https://zenodo.org/record/4436356
```
## How to Use
This dump is ready to use. It’s stored as a gzipped JSON array where each array element is a single Wikidata entity.
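
As a rough guide to what each array element looks like, the sketch below follows the documented Wikidata entity JSON layout; the values are shortened placeholders, not records copied from this dump.

```python
# Rough shape of one array element (values shortened for illustration;
# real records carry full label, description, alias, claim, and sitelink maps).
example_entity = {
    "type": "item",
    "id": "Q42",
    "labels": {"en": {"language": "en", "value": "Douglas Adams"}},
    "descriptions": {"en": {"language": "en", "value": "English writer and humorist"}},
    "aliases": {"en": [{"language": "en", "value": "Douglas Noel Adams"}]},
    "claims": {"P31": ["... statement objects for 'instance of' ..."]},
    "sitelinks": {"enwiki": {"site": "enwiki", "title": "Douglas Adams"}},
}
```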
### Example: Python Code to Stream the JSON
Below is a sample script showing how to read the dump without fully decompressing it to disk. It uses the [ijson](https://pypi.org/project/ijson/) library (installable with `pip install ijson`) for iterative, streaming JSON parsing.
```python
import gzip
import ijson

def stream_wikidata_array(gz_file_path):
    """
    Streams each element from a top-level array in the gzipped JSON.
    Yields Python dicts (or lists), one for each array element.
    """
    with gzip.open(gz_file_path, 'rb') as f:
        # 'item' means "each element of the top-level array"
        for element in ijson.items(f, 'item'):
            yield element

if __name__ == "__main__":
    # Replace with the path to your Wikidata dump
    wikidata_path = r"E:\wikidata\20181217.json.gz"

    # Just print the first few records
    max_to_print = 5
    for i, record in enumerate(stream_wikidata_array(wikidata_path), start=1):
        print(f"Record #{i}:")
        print(record)
        if i >= max_to_print:
            print("...stopping here.")
            break
```
You can adapt this approach to load the data into your own workflow, whether that’s local analysis, a database import, or a big data pipeline.
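
For instance, a database import or analysis pipeline often needs only a small slice of each record. The sketch below is one way to do that under the layout described above: it streams the dump once and writes entity IDs with their English labels to a JSON Lines file (the output file name is just an illustrative choice).

```python
import gzip
import json

import ijson

def extract_english_labels(gz_file_path, out_path="labels_en.jsonl"):
    """Write one JSON line per entity, keeping only the ID and English label."""
    with gzip.open(gz_file_path, "rb") as f, open(out_path, "w", encoding="utf-8") as out:
        for entity in ijson.items(f, "item"):
            label = entity.get("labels", {}).get("en", {}).get("value")
            if label is not None:
                out.write(json.dumps({"id": entity.get("id"), "label": label}) + "\n")

# Example call (path is illustrative, as above):
# extract_english_labels(r"E:\wikidata\20181217.json.gz")
```

The same pattern works for claims, sitelinks, or any other subset of the record: change the fields you keep and the sink you write to.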
## Disclaimer
- This snapshot reflects Wikidata as of 2018-12-17 and **is not** up to date with the current Wikidata database.
- This repository and uploader are not affiliated with the Wikimedia Foundation or the official Wikidata project beyond using their data.
- Please ensure you comply with any relevant data protection or privacy regulations when using this dataset in production.
---
*Thank you for your interest in Wikidata and open knowledge!*