---
license: apache-2.0
---
# MAP-CC
🌐 Homepage | 🤗 MAP-CC | 🤗 CHC-Bench | 🤗 CT-LLM | 📖 arXiv | GitHub
An open-source Chinese pretraining dataset of 800 billion tokens, providing the NLP community with high-quality Chinese pretraining data.
## Usage Instructions
After downloading the dataset parts, concatenate them into a single file for each split with the following command in a UNIX-like terminal:

```bash
cat [split].gz.part* > [split].gz
```
Replace `[split]` with the name of the split you wish to merge (`zh-cc`, `zh-baike`, `zh-papers`, `zh-books`, or `zh-others`). After merging, decompress the `.gz` file to access the dataset's contents.
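For example, a minimal end-to-end sketch for the `zh-cc` split, assuming its parts follow the naming pattern above and sit in the current directory:

```bash
# Merge the downloaded parts of the zh-cc split into one archive.
cat zh-cc.gz.part* > zh-cc.gz

# Decompress the merged archive (replaces zh-cc.gz with the plain file zh-cc).
gunzip zh-cc.gz

# The same steps for the remaining splits, in a loop:
for split in zh-baike zh-papers zh-books zh-others; do
  cat "$split".gz.part* > "$split".gz
  gunzip "$split".gz
done
```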