storytracer committed
Update README.md
README.md CHANGED
configs:
- config_name: summaries
  data_files:
  - split: authors
    path: "summaries/parquet/ol_dump_authors_2024-04-30_summary.parquet"
  - split: works
    path: "summaries/parquet/ol_dump_works_2024-04-30_summary.parquet"
  - split: editions
    path: "summaries/parquet/ol_dump_editions_2024-04-30_summary.parquet"
---

# OpenLibrary Dump (2024-04-30)

This dataset contains the [OpenLibrary dump](https://openlibrary.org/developers/dumps) of April 2024, converted to Parquet and DuckDB for easier querying.

## Formats

### Original GZIP dumps

The original GZIP dumps are available at [data/dumps](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/tree/main/data/dumps). They are gzipped TSV files, with the original OpenLibrary JSON record contained in the fifth column.
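
Reading such a dump can be sketched in Python with the standard library only. The filename and the contents of the first four columns below are illustrative; the text above only specifies that the JSON record sits in the fifth TSV column.

```python
import gzip
import json

def iter_records(path):
    """Yield the parsed JSON record from the fifth TSV column of each row."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            # maxsplit=4 keeps the JSON column intact even if it is long
            yield json.loads(line.rstrip("\n").split("\t", 4)[4])

# Synthetic one-row dump; only the position of the fifth column matters here.
row = "\t".join([
    "col1", "col2", "col3", "col4",
    json.dumps({"key": "/authors/OL1A", "name": "Example Author"}),
])
with gzip.open("sample_dump.txt.gz", "wt", encoding="utf-8") as f:
    f.write(row + "\n")

records = list(iter_records("sample_dump.txt.gz"))
print(records[0]["name"])  # Example Author
```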

### DuckDB

The authors, works and editions dumps were imported as tables into [data/duckdb/ol_dump_2024-04-30.duckdb](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/blob/main/data/duckdb/ol_dump_2024-04-30.duckdb) using the script [ol_duckdb.sh](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/blob/main/ol_duckdb.sh). The script extracts the JSON record from the fifth column and pipes it to DuckDB, which imports it with the flags `union_by_name=true` and `ignore_errors=true` to account for the inconsistent JSON structure of the dumps.

### Parquet

The authors, works and editions tables were exported as Parquet from DuckDB to [data/parquet](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/tree/main/data/parquet). These Parquet files are contained in the default `data` config for this dataset.

### Table Summaries

To give users an easy overview of the fields contained in the dump, the DuckDB tables have been summarized with the `SUMMARIZE` command and stored as Markdown and Parquet files at [summaries](https://huggingface.co/datasets/storytracer/openlibrary_dump_2024-04-30/tree/main/data/summaries), with the summary columns `min`, `max` and `avg` excluded for easier viewing. You can also explore the table summaries in the dataset viewer by selecting the `summaries` config.

The dump fields are supposed to be consistent with the schema referenced [in the documentation](https://openlibrary.org/developers/dumps#:~:text=Format%20of%20JSON%20records). However, the summaries show that they are not: the dumps contain some undocumented fields, and some fields contain almost exclusively `null` values.