---
license: unknown
license_link: https://openlibrary.org/developers/licensing
pretty_name: OpenLibrary Dump (2024-04-30)
size_categories:
- 10M<n<100M
configs:
- config_name: dumps
default: true
data_files:
- split: authors
path: "data/parquet/ol_dump_authors_2024-04-30.parquet"
- split: works
path: "data/parquet/ol_dump_works_2024-04-30.parquet"
- split: editions
path: "data/parquet/ol_dump_editions_2024-04-30.parquet"
- config_name: summaries
data_files:
- split: authors
path: "summaries/parquet/ol_dump_authors_2024-04-30_summary.parquet"
- split: works
path: "summaries/parquet/ol_dump_works_2024-04-30_summary.parquet"
- split: editions
path: "summaries/parquet/ol_dump_editions_2024-04-30_summary.parquet"
---
# OpenLibrary Dump (2024-04-30)
This dataset contains the [OpenLibrary dump](https://openlibrary.org/developers/dumps) from 2024-04-30, converted to Parquet and DuckDB for easier querying.
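For example, the Parquet splits can be loaded directly with the `datasets` library. A minimal sketch using the config and split names defined in this card:

```python
from datasets import load_dataset

# Stream the "authors" split of the "dumps" config to avoid
# downloading all Parquet files up front
authors = load_dataset(
    "storytracer/openlibrary_dump_2024-04-30",
    "dumps",
    split="authors",
    streaming=True,
)
print(next(iter(authors)))
```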
## Formats
### Original GZIP dumps
The original GZIP dumps are available at [data/dumps](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/tree/main/data/dumps). The dumps are gzipped TSV files; the fifth column of each row contains the original OpenLibrary JSON record.
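To work with the original dumps directly, each line can be split on tabs and the fifth column parsed as JSON. A minimal sketch in Python; the filename is an assumption based on OpenLibrary's dump naming, and the column layout (type, key, revision, last_modified, JSON) follows the OpenLibrary documentation:

```python
import gzip
import json

# Dump rows are: type, key, revision, last_modified, JSON record
with gzip.open("ol_dump_authors_2024-04-30.txt.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line.split("\t")[4])
        print(record.get("key"), record.get("name"))
        break  # stop after the first record
```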
### DuckDB
The authors, works and editions dumps were imported as tables into [data/duckdb/ol_dump_2024-04-30.duckdb](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/blob/main/data/duckdb/ol_dump_2024-04-30.duckdb) using the script [ol_duckdb.sh](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/blob/main/ol_duckdb.sh). The script extracts the JSON record from the fifth column of each dump and pipes it to DuckDB for import, using the flags `union_by_name=true` and `ignore_errors=true` to account for the inconsistent JSON structure of the dumps.
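After downloading the database file, it can be queried directly, for example from Python. A minimal sketch; the table names are an assumption based on the dump names:

```python
import duckdb

# Open the downloaded database read-only
con = duckdb.connect("ol_dump_2024-04-30.duckdb", read_only=True)
# Table names are assumed to match the dumps: authors, works, editions
print(con.sql("SELECT count(*) FROM editions").fetchone())
con.close()
```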
### Parquet
The authors, works and editions tables were exported as Parquet from DuckDB to [data/parquet](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/tree/main/data/parquet). These Parquet files make up the default `dumps` config for this dataset.
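The Parquet files can also be queried in place, without importing them anywhere, using DuckDB's `read_parquet`. A minimal sketch; the column names are assumptions based on the OpenLibrary schema:

```python
import duckdb

# Query a Parquet split in place; only the referenced columns are read
rows = duckdb.sql("""
    SELECT key, title
    FROM read_parquet('data/parquet/ol_dump_works_2024-04-30.parquet')
    LIMIT 5
""").fetchall()
print(rows)
```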
### Table Summaries
To give users an easy overview of the fields contained in the dumps, the DuckDB tables have been summarized with the `SUMMARIZE` command and exported as Markdown and Parquet files to [summaries](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/tree/main/data/summaries). The summary columns `min`, `max` and `avg` have been excluded for easier viewing. You can also explore the table summaries in the dataset viewer by selecting the `summaries` config.
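A summary like the published ones can be reproduced with `SUMMARIZE` against the DuckDB file. A minimal sketch of that step, dropping the same columns excluded from the published summaries:

```python
import duckdb

con = duckdb.connect("ol_dump_2024-04-30.duckdb", read_only=True)
# SUMMARIZE yields one row per column with statistics such as
# approx_unique, count and null_percentage
summary = con.sql("SUMMARIZE authors").df()
# Drop the same columns excluded from the published summaries
print(summary.drop(columns=["min", "max", "avg"]).head())
con.close()
```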
The dump fields are supposed to be consistent with the schema referenced [in the documentation](https://openlibrary.org/developers/dumps#:~:text=Format%20of%20JSON%20records). However, the summaries show that they are not: the dumps contain some undocumented fields, and some fields contain almost exclusively `null` values.
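The mostly-null fields can be spotted directly in the summary Parquet files. A sketch, assuming the summaries retain DuckDB's `column_name` and `null_percentage` columns and that `null_percentage` is numeric (older DuckDB versions emit it as a string such as `'99.9%'`):

```python
import duckdb

# Fields of the works dump that are almost always null, per the summary
rows = duckdb.sql("""
    SELECT column_name, null_percentage
    FROM read_parquet('summaries/parquet/ol_dump_works_2024-04-30_summary.parquet')
    WHERE null_percentage > 99
    ORDER BY null_percentage DESC
""").fetchall()
for name, pct in rows:
    print(name, pct)
```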