Commit 6069fa3 (1 parent: 36b30b3): Update README.md

README.md (changed):
Re-packaged bulk data from courtlistener.com …
Prepared by the [Harvard Library Innovation Lab](https://lil.law.harvard.edu) in collaboration with the [Free Law Project](https://free.law/).

See also:
- [Data Nutrition Label](https://datanutrition.org/labels/v3/?id=c29976b2-858c-4f4e-b7d0-c8ef12ce7dbe) (DRAFT). ([Archive](https://perma.cc/YV5P-B8JL)).
- [Pipeline Source Code](https://github.com/harvard-lil/cold-cases-export)

---

## Summary

- [Formats](#formats)
- [File structure](#file-structure)

---

## Formats

We've released this data in two different formats:

### JSON-L or JSON Lines

This format consists of a JSON document for every row in the dataset, one per line. That makes it easy to take a selection of the data, or to split it into multiple files for parallel processing, using ordinary command-line tools such as `head`, `split`, and `jq`.

Most JSON parsers can't stream through an enormous JSON array without loading the whole document into RAM or resorting to more difficult APIs, so writing the data as one enormous JSON array would have made it unwieldy.

Also, just about any language you can think of has a ready way to parse JSON data, which makes this the more broadly compatible version of the dataset.

See: https://jsonlines.org/
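For instance, the Python standard library alone can stream a gzipped JSON Lines file one record at a time (a minimal sketch with made-up sample rows, not the dataset's real schema):

```python
import gzip
import io
import json

# Hypothetical sample rows standing in for dataset records; the real
# slices are .json.gz files with one JSON document per line.
rows = [
    {"id": 1, "court": "scotus"},
    {"id": 2, "court": "ca9"},
    {"id": 3, "court": "scotus"},
]
raw = gzip.compress("\n".join(json.dumps(r) for r in rows).encode("utf-8"))

def read_jsonl_gz(fileobj):
    """Yield one record at a time: constant memory, no giant-array parse."""
    with gzip.open(fileobj, mode="rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Take a selection, much as you might with `zcat ... | jq` on the command line:
scotus = [r for r in read_jsonl_gz(io.BytesIO(raw)) if r["court"] == "scotus"]
print(scotus)  # [{'id': 1, 'court': 'scotus'}, {'id': 3, 'court': 'scotus'}]
```

Because each line is independent, the same file can also be chopped up with `split -l` and the pieces processed in parallel.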

### Apache Parquet

Parquet is a binary format that makes filtering and retrieving the data quicker because it lays the data out in columns: columns that aren't needed to satisfy a given query or workflow never have to be read.

Parquet has more limited support outside the Python and JVM ecosystems, however.

See: https://parquet.apache.org/

---

## File structure

Both of these datasets were exported by the same system, based on [Apache Spark](https://spark.apache.org/), so within each subdirectory you'll find a similar list of files:

- **_SUCCESS**: Indicates that the job that built the dataset ran successfully, and that this is therefore a complete dataset.
- **.json.gz or .gz.parquet**: Each of these is a slice of the full dataset, encoded in JSON-L or Parquet and compressed with [GZip](https://www.gnu.org/software/gzip/).
- **Hidden `.crc` files**: These can be used to verify that the data transferred correctly, and can otherwise be ignored.
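A consumer can gate processing on the `_SUCCESS` marker before trusting a directory; a minimal sketch (`is_complete` is a hypothetical helper, not part of the export pipeline):

```python
import pathlib
import tempfile

def is_complete(dataset_dir):
    """A Spark output directory is trustworthy only if _SUCCESS was written."""
    return (pathlib.Path(dataset_dir) / "_SUCCESS").exists()

# Demonstrate against a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    before = is_complete(d)                     # no marker yet: incomplete
    (pathlib.Path(d) / "_SUCCESS").touch()
    after = is_complete(d)                      # marker present: safe to read

print(before, after)  # False True
```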