---
dataset_info:
  features:
  - name: identifier
    dtype: string
  - name: dataset
    dtype: string
  - name: mime_type
    dtype: string
  - name: tokens
    sequence: int32
  - name: score
    dtype: float32
  splits:
  - name: train
    num_bytes: 67204589673
    num_examples: 6914364
  download_size: 29493291833
  dataset_size: 67204589673
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
# KL3M Data Project |
|
|
|
> **Note**: This page provides general information about the KL3M Data Project. Additional details specific to this dataset will be added in future updates. For complete information, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854). |
|
|
|
## Description |
|
|
|
This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models. |
|
|
|
## Dataset Details |
|
|
|
- **Format**: Parquet files containing document text and metadata |
|
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) |
|
- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents |
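
To illustrate the schema above, here is a minimal sketch of loading the dataset and decoding the pre-tokenized `tokens` field back to text using the standard `datasets` and `transformers` APIs. The repository ID below is a placeholder; substitute this dataset's actual Hugging Face ID.

```python
# Minimal sketch: stream one record and detokenize it.
# NOTE: "alea-institute/<this-dataset>" is a placeholder for this
# dataset's actual Hugging Face repository ID.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("alea-institute/<this-dataset>", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-cased")

record = next(iter(ds))
print(record["identifier"], record["dataset"], record["mime_type"], record["score"])
print(tokenizer.decode(record["tokens"])[:500])  # first 500 characters of text
```

Streaming avoids downloading the full set of Parquet shards (roughly 29 GB per the header metadata) just to inspect a few records.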
|
|
|
## Abstract |
|
|
|
Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract, creating potential legal risk for users and developers. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright infringement or breach of contract.
|
|
|
The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources that have been verified to meet the strict copyright and licensing protocol detailed in the project. We are releasing the entire pipeline, including: |
|
|
|
1. The source code to acquire and process these documents |
|
2. The original document formats with associated provenance and metadata |
|
3. Extracted content in a standardized format |
|
4. Pre-tokenized representations of the documents |
|
5. Various mid- and post-train resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data |
|
|
|
All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models. |
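
For lower-level access than `load_dataset`, the individual Parquet shards (the `data/train-*` files declared in the header above) can be fetched and inspected directly. The sketch below assumes the `huggingface_hub` and `pyarrow` libraries; the repository ID is again a placeholder.

```python
# Sketch: fetch a single Parquet shard and print its schema.
# "alea-institute/<this-dataset>" is a placeholder; list the repo's
# files to discover the real shard names under data/train-*.
from huggingface_hub import hf_hub_download, list_repo_files
import pyarrow.parquet as pq

repo_id = "alea-institute/<this-dataset>"  # placeholder
shards = [f for f in list_repo_files(repo_id, repo_type="dataset")
          if f.startswith("data/train-")]
path = hf_hub_download(repo_id, filename=shards[0], repo_type="dataset")
print(pq.read_schema(path))  # identifier, dataset, mime_type, tokens, score
```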
|
|
|
## Legal Basis |
|
|
|
This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundations:
|
|
|
- Public domain materials |
|
- US government works |
|
- Open access content under permissive licenses |
|
- Content explicitly licensed for AI training |
|
|
|
## Papers |
|
|
|
For more information about the KL3M Data Project, please refer to: |
|
|
|
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854) |
|
- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247) |
|
|
|
## Citation |
|
|
|
If you use this dataset in your research, please cite: |
|
|
|
```bibtex
@misc{bommarito2025kl3mdata,
  title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
  author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
  year={2025},
  eprint={2504.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
|
|
|
```bibtex
@misc{bommarito2025kl3m,
  title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
  author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
  year={2025},
  eprint={2503.17247},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
|
|
|
## About ALEA |
|
|
|
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/). |