---
license: unknown
license_link: https://openlibrary.org/developers/licensing
pretty_name: OpenLibrary Dump (2024-04-30)
size_categories:
- 10M<n<100M
configs:
- config_name: dumps
default: true
data_files:
- split: authors
path: data/parquet/ol_dump_authors_2024-04-30.parquet
- split: works
path: data/parquet/ol_dump_works_2024-04-30.parquet
- split: editions
path: data/parquet/ol_dump_editions_2024-04-30.parquet
- config_name: summaries
data_files:
- split: authors
path: summaries/parquet/ol_dump_authors_2024-04-30_summary.parquet
- split: works
path: summaries/parquet/ol_dump_works_2024-04-30_summary.parquet
- split: editions
path: summaries/parquet/ol_dump_editions_2024-04-30_summary.parquet
---

# OpenLibrary Dump (2024-04-30)

This dataset contains the OpenLibrary dump of April 2024, converted to Parquet and DuckDB for easier querying.
## Formats

### Original GZIP dumps
The original GZIP dumps are available at `data/dumps`. The dumps are gzipped TSV files with the original OpenLibrary JSON record contained in the fifth column of the TSV.
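A minimal sketch of reading such a dump with the Python standard library, assuming the usual OpenLibrary column order (type, key, revision, last-modified timestamp, JSON record); the file name in the usage comment is illustrative:

```python
import gzip
import json


def iter_records(path):
    """Yield the parsed JSON record from the fifth column of a gzipped TSV dump.

    Splitting at most four times keeps any tab characters inside
    the JSON column intact.
    """
    with gzip.open(path, mode="rt", encoding="utf-8") as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t", 4)
            if len(fields) == 5:
                yield json.loads(fields[4])


# Example usage (file name is illustrative):
# for record in iter_records("data/dumps/ol_dump_authors_2024-04-30.txt.gz"):
#     print(record["key"])
```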
### DuckDB
The authors, works and editions dumps were imported as tables into `data/duckdb/ol_dump_2024-04-30.duckdb` using the script `ol_duckdb.sh`. The script extracts the JSON record from the fifth column of each dump and pipes it to DuckDB, importing with the flags `union_by_name=true` and `ignore_errors=true` to account for the inconsistent JSON structure of the dumps.
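A rough sketch of the kind of pipeline the script runs; the file and table names are assumptions, and only the two flags come from the description above:

```shell
# Illustrative, not the actual contents of ol_duckdb.sh:
# stream the JSON column (fifth TSV field) into DuckDB.
zcat data/dumps/ol_dump_authors_2024-04-30.txt.gz \
  | cut -f5 \
  | duckdb data/duckdb/ol_dump_2024-04-30.duckdb -c \
      "CREATE TABLE authors AS
       SELECT * FROM read_json_auto('/dev/stdin',
                                    union_by_name = true,
                                    ignore_errors = true);"
```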
### Parquet
The authors, works and editions tables were exported from DuckDB as Parquet to `data/parquet`. These Parquet files make up the default `dumps` config for this dataset.
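An export like this can be done with DuckDB's `COPY ... (FORMAT PARQUET)`; the table and output names below are assumptions mirroring the paths above:

```sql
-- Illustrative export of one table to Parquet.
COPY authors TO 'data/parquet/ol_dump_authors_2024-04-30.parquet' (FORMAT PARQUET);
```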
## Table Summaries
To give users an easy overview of the fields contained in the dump, the DuckDB tables have been summarized using the `SUMMARIZE` function and stored as Markdown and Parquet files at `summaries`. The summary columns `min`, `max` and `avg` were excluded for easier viewing. You can also explore the table summaries in the dataset viewer by selecting the `summaries` config.
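A sketch of how such a trimmed summary can be produced in DuckDB, assuming the `authors` table name; `SUMMARIZE` yields one row per column, and selecting specific columns leaves out `min`, `max` and `avg`:

```sql
-- One row per column of the table, without min/max/avg.
SELECT column_name, column_type, approx_unique, null_percentage
FROM (SUMMARIZE authors);
```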
The dump fields are supposed to be consistent with the schema referenced in the documentation. However, the summaries show that they are not: the dumps contain some undocumented fields, and some documented fields almost exclusively contain `null` values.
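The near-`null` fields can be spotted directly from `SUMMARIZE` output; this query is a sketch assuming the `authors` table name, with an illustrative threshold:

```sql
-- Columns that are almost entirely NULL (99% is an arbitrary cutoff).
SELECT column_name, null_percentage
FROM (SUMMARIZE authors)
WHERE null_percentage > 99
ORDER BY null_percentage DESC;
```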