WebOrganizer/Corpus-200B
This dataset is a pre-processed version of the 1b-1x CommonCrawl pool from DataComp-LM, cleaned with (1) RefinedWeb filters and (2) BFF deduplication.
We provide the resulting 200B token corpus annotated with two quality scores, WebOrganizer domains, and k-means cluster assignments.
Download the dataset by cloning the repository with Git LFS, rather than with HuggingFace's `load_dataset()`.
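A minimal sketch of the download, assuming Git LFS is installed and the repository lives at the standard Hugging Face datasets path for this card:

```shell
# One-time setup: enable Git LFS for your user account
git lfs install

# Clone the full repository (large: ~200B tokens of compressed shards)
git clone https://huggingface.co/datasets/WebOrganizer/Corpus-200B

# Alternatively, fetch only the metadata first and pull selected shards later,
# e.g. just the documents/ folder:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/WebOrganizer/Corpus-200B
cd Corpus-200B
git lfs pull --include="documents/*"
```

The `GIT_LFS_SKIP_SMUDGE=1` variant is useful if you only need a subset of the annotation folders.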
The dataset has the following folder structure:
```
Corpus-200B/
  documents/                 # Pre-processed web documents
    - CC_shard_00000000_processed.jsonl.zst
    - CC_shard_00000001_processed.jsonl.zst
    - ...
  tokens/                    # Number of tokens per document (GPT-NeoX tokenizer)
    - CC_shard_00000000_processed.npy
    - CC_shard_00000001_processed.npy
    - ...
  scores_dclm-fasttext/      # DCLM-fasttext score
    - CC_shard_00000000_processed.npy
    - ...
  scores_fineweb-edu/        # FineWeb-Edu score
    - CC_shard_00000000_processed.npy
    - CC_shard_00000000_processed__rounded.npy  # Rounded FineWeb-Edu score
    - ...
  domains_topics/            # TopicClassifier annotations
    - CC_shard_00000000_processed__logits.npy   # Logits for each topic
    - CC_shard_00000000_processed__choice.npy   # Index of top choice
    - ...
  domains_formats/           # FormatClassifier annotations
    - CC_shard_00000000_processed__logits.npy   # Logits for each format
    - CC_shard_00000000_processed__choice.npy   # Index of top choice
    - ...
  domains_clusters-k24/      # K-means clusters
    - CC_shard_00000000_processed.npy           # Cluster assignment for each document
    - ...
```
We also include statistics about the presence and co-occurrence of domains in the `domain_statistics/` folder, computed with the `domain_statistics.py` script.
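Since each annotation folder stores one `.npy` array per shard, aligned row-for-row with the documents in the corresponding `.jsonl.zst` file, a typical workflow is to load a shard's scores and select documents by quality. The sketch below assumes the shard layout shown above; `read_shard` and `select_top_fraction` are illustrative helper names, not part of the dataset, and reading `.jsonl.zst` requires the third-party `zstandard` package:

```python
import io
import json

import numpy as np


def read_shard(path):
    """Stream JSON documents from a .jsonl.zst shard, one dict per line."""
    import zstandard as zstd  # third-party: pip install zstandard

    with open(path, "rb") as f:
        reader = zstd.ZstdDecompressor().stream_reader(f)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            yield json.loads(line)


def select_top_fraction(scores, fraction):
    """Return sorted indices of the highest-scoring `fraction` of documents."""
    k = max(1, int(len(scores) * fraction))
    order = np.argsort(scores)[::-1]  # descending by score
    return np.sort(order[:k])


if __name__ == "__main__":
    # Example: keep the top 10% of a shard by DCLM-fasttext score.
    scores = np.load("scores_dclm-fasttext/CC_shard_00000000_processed.npy")
    keep = set(select_top_fraction(scores, 0.10).tolist())
    docs = read_shard("documents/CC_shard_00000000_processed.jsonl.zst")
    selected = [doc for i, doc in enumerate(docs) if i in keep]
```

The same indexing pattern applies to the `tokens/`, `domains_*` and cluster arrays, since they all share the per-shard document order.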
Citation
If you make use of this pre-processed corpus in your work, please cite:
```bibtex
@article{wettig2025organize,
  title={Organize the Web: Constructing Domains Enhances Pre-Training Data Curation},
  author={Alexander Wettig and Kyle Lo and Sewon Min and Hannaneh Hajishirzi and Danqi Chen and Luca Soldaini},
  year={2025}
}
```