---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: metadata
    struct:
    - name: Era
      dtype: string
    - name: Lang
      dtype: string
    - name: LawType
      dtype: string
    - name: Num
      dtype: int64
    - name: Year
      dtype: int64
    - name: PromulgateMonth
      dtype: int64
    - name: PromulgateDay
      dtype: int64
    - name: LawNum
      dtype: string
    - name: category_id
      dtype: int64
    - name: id_split
      dtype: int64
  splits:
  - name: train
    num_bytes: 1286294957
    num_examples: 109532
  - name: validation
    num_bytes: 135176571
    num_examples: 11538
  - name: test
    num_bytes: 154350330
    num_examples: 13183
  download_size: 426424485
  dataset_size: 1575821858
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: cc-by-4.0
language:
- ja
tags:
- legal
---

# Abstract

This is a Japanese law dataset obtained from [e-Gov](https://www.e-gov.go.jp) (date of download: Oct. 20th, 2024).
Each piece of text data is chunked into fewer than 4,096 tokens. An unchunked version is available [HERE](https://huggingface.co/datasets/nlp-waseda/e_gov).

# Data Format

Each example consists of 2 fields, "text" and "metadata".

* The "text" field contains the legal text, which is expected to be the part mainly used.
* The "metadata" field contains additional information in the 10 subfields below:
  * "Era": The Japanese era in which the law was promulgated, such as "Showa".
  * "Lang": The language the text is written in. All texts are Japanese.
  * "LawType": The type of the law, one of the following:
    * "Constitution"
    * "Act"
    * "CabinetOrder"
    * "ImperialOrder"
    * "MinisterialOrdinance"
    * "Rule"
    * "Misc"
  * "Num": The number of the law, as an integer.
  * "Year": The year in which the law was promulgated.
  * "PromulgateMonth" / "PromulgateDay": The month/day on which the law was promulgated.
  * "LawNum": The numeric name of the law, as a string.
  * "category_id": An integer representing the category to which the law belongs. The categories are listed in [category.json](category.json).
  * "id_split": A non-negative integer indicating the number (index) of this chunk.

A minimal loading example is given at the end of this card.

# Data Split

This dataset has 3 splits: train, validation, and test. The data is split randomly while preserving the original distribution of the categories. The ratio is 8:1:1.

# Data Chunking

* The tokeniser used for tokenisation is llm-jp/llm-jp-3-1.8b.
* Algorithm (see the sketch below):
  * If a file is shorter than 4,096 tokens, it is treated as a single chunk.
  * If it is 4,096 tokens or longer, it is split at the last occurrence of a newline token within the first 4,096 tokens.
    * The first part of the split is treated as a single chunk.
    * The above is repeated for the remaining part of the split.
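
Below is a minimal sketch of this chunking procedure, assuming the Hugging Face `transformers` tokenizer for `llm-jp/llm-jp-3-1.8b`. It is an illustrative reconstruction of the algorithm described above, not the exact script used to build the dataset; in particular, the way the newline token id is obtained is an assumption.

```python
from transformers import AutoTokenizer

MAX_TOKENS = 4096

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b")
# Assumption: a lone "\n" maps to a dedicated token; we take the last id it produces.
NEWLINE_ID = tokenizer.encode("\n", add_special_tokens=False)[-1]


def chunk_text(text: str) -> list[str]:
    """Split `text` into chunks of at most MAX_TOKENS tokens, cutting at the
    last newline token found within the first MAX_TOKENS tokens."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunks = []
    while len(ids) >= MAX_TOKENS:
        window = ids[:MAX_TOKENS]
        newline_positions = [i for i, t in enumerate(window) if t == NEWLINE_ID]
        # Fall back to a hard cut if the window contains no newline token
        # (an edge case the card does not specify).
        cut = newline_positions[-1] + 1 if newline_positions else MAX_TOKENS
        chunks.append(tokenizer.decode(ids[:cut]))
        ids = ids[cut:]  # repeat the procedure on the remainder
    chunks.append(tokenizer.decode(ids))
    return chunks
```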
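
For reference, here is a minimal example of loading the dataset and reading the fields described in the Data Format section. It assumes the Hugging Face `datasets` library; replace the placeholder repository id with the id of this dataset on the Hub.

```python
from datasets import load_dataset

# Placeholder: substitute the repository id of this dataset on the Hugging Face Hub.
dataset = load_dataset("<this-dataset-repo-id>")

example = dataset["train"][0]
print(example["text"][:200])  # the legal text of this chunk

meta = example["metadata"]
print(meta["Era"], meta["Year"], meta["LawType"])    # e.g. "Showa", the year, "Act"
print(meta["LawNum"], meta["category_id"], meta["id_split"])

# Split sizes reported in the card: train 109,532 / validation 11,538 / test 13,183
print({split: ds.num_rows for split, ds in dataset.items()})
```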