---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: UFWED
size_categories:
- 10K<n<100K
---
# Ultra FineWeb EDU
<div align="center">
**High-Quality Educational Content from Ultra-FineWeb**
*Filtered for Maximum Educational Value*
[Apache 2.0 License](https://opensource.org/licenses/Apache-2.0) · [Hugging Face Datasets](https://huggingface.co/datasets/)
</div>
## Overview
Ultra FineWeb EDU is a premium educational dataset created by applying educational content filtering to the exceptional [Ultra-FineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb) dataset. This work builds directly on two foundations: the rigorous data curation methodology of Ultra-FineWeb and the educational classification capabilities of the [FineWeb-Edu classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier). We retain only the highest-quality educational content, applying a strict threshold of **3.5+** on educational score.
## Key Features
- **Premium Quality**: Only content scoring 3.5+ on educational value (top ~10% of Ultra-FineWeb)
- **Pure Content**: Metadata stripped; contains only the essential text content
- **Rigorous Filtering**: Multi-stage filtering pipeline ensures exceptional quality
- **Optimized Processing**: High-performance, GPU-accelerated filtering pipeline
- **Community Driven**: Open-source processing code for reproducibility and extension
## Dataset Statistics
### Filtering Pipeline Overview
```
Raw Web Content (trillions of pages)
  ↓  (heavy filtering)
FineWeb (24.99B examples)
  ↓  (94.83% filtered out)
Ultra-FineWeb (1.29B examples)
  ↓  (~90% filtered out at the 3.5+ educational threshold)
Ultra FineWeb EDU (64,000+ examples released so far) ← This Dataset
```
### Quality Metrics
- **Educational Threshold**: 3.5+ (Excellent educational value)
- **Pass Rate**: ~10% (highly selective)
- **Content Type**: Pure text content, metadata removed
- **Average Educational Score**: ≈4.0 (estimated from the score distribution of passed content)
- **Language**: English (with potential for multilingual expansion)
- **Current Release**: 64,000+ premium educational samples
## Creation Methodology
**Building on Proven Excellence**: This dataset leverages the battle-tested methodologies of Ultra-FineWeb's efficient verification-based filtering and FineWeb-Edu's educational classification.
### Educational Classification
We used the proven [HuggingFace FineWeb-Edu classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier), trained on 450k Llama 3-generated annotations, to score each sample:
- **Score 0-1**: Not educational / low educational value → **Filtered out**
- **Score 2-3.4**: Some to good educational value → **Filtered out**
- **Score 3.5+**: High to excellent educational value → **Included**
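For a single document, scoring looks roughly like the sketch below. It follows the classifier's standard `transformers` regression usage; the example text and the 512-token truncation are illustrative assumptions, not taken from this card.

```python
# Hedged sketch: score one document with the FineWeb-Edu classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained(
    "HuggingFaceFW/fineweb-edu-classifier"
).eval()

text = "Photosynthesis is the process by which plants convert light energy..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    # The classifier has a regression head: one score per text, roughly 0-5.
    score = model(**inputs).logits.squeeze(-1).item()

print(f"score={score:.2f}, kept={score >= 3.5}")
```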
### Processing Pipeline
1. **Stream Ultra-FineWeb** in batches for memory efficiency
2. **Extract content** field only (remove metadata)
3. **Educational scoring** using BERT-based classifier
4. **Threshold filtering** at 3.5+ educational score
5. **Quality validation** and dataset compilation
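Putting steps 1-4 together, a minimal streaming sketch might look like the following. The `en` split name, the `content` field name, the 512-sample batch, and the in-memory `kept` list are assumptions for illustration; the released script additionally performs checkpointing and validation (step 5).

```python
# Hedged sketch of the pipeline: stream Ultra-FineWeb, keep only `content`,
# score in batches, and filter at the 3.5 threshold.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

THRESHOLD = 3.5
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained(
    "HuggingFaceFW/fineweb-edu-classifier"
).to(DEVICE).eval()

stream = load_dataset("openbmb/Ultra-FineWeb", split="en", streaming=True)

@torch.inference_mode()
def edu_scores(texts):
    # One regression score (~0-5) per text.
    inputs = tokenizer(texts, return_tensors="pt", padding=True,
                       truncation=True, max_length=512).to(DEVICE)
    return model(**inputs).logits.squeeze(-1).float().cpu().tolist()

kept, batch = [], []
for sample in stream:                 # step 1: stream in batches
    batch.append(sample["content"])   # step 2: content field only
    if len(batch) == 512:
        scores = edu_scores(batch)    # step 3: educational scoring
        kept += [{"content": t} for t, s in zip(batch, scores)
                 if s >= THRESHOLD]   # step 4: threshold filtering
        batch = []
```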
## Performance Optimizations
Our processing pipeline achieves **350+ samples/second** using:
- FP16 precision for a ~2x speed boost
- Large batch processing (512+ samples)
- GPU memory optimization
- Automatic checkpointing every 30 minutes
- Smart memory management and cleanup
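As a rough illustration of the FP16 and batching points above (not the exact released code), the scoring path might look like this; the exact batch size and checkpoint cadence in the released script may differ.

```python
# Hedged sketch of the FP16 scoring path with large batches.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

DEVICE = "cuda"
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = (AutoModelForSequenceClassification
         .from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
         .half().to(DEVICE).eval())   # FP16 weights: the ~2x speed boost

@torch.inference_mode()
def edu_scores_fp16(texts, batch_size=512):   # large batches (512+ samples)
    scores = []
    for i in range(0, len(texts), batch_size):
        chunk = texts[i:i + batch_size]
        inputs = tokenizer(chunk, return_tensors="pt", padding=True,
                           truncation=True, max_length=512).to(DEVICE)
        scores.extend(model(**inputs).logits.squeeze(-1).float().cpu().tolist())
    torch.cuda.empty_cache()   # cleanup between shards ("smart memory management")
    return scores
```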
## Dataset Structure
```json
{
"content": "High-quality educational text content..."
}
```
Each sample contains only the `content` field with educational text, optimized for training language models focused on educational applications.
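Once published, loading should follow the standard `datasets` pattern; the repository id below is hypothetical, since the final hub path is not given in this card.

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the dataset's actual hub path.
ds = load_dataset("ProCreations/Ultra-FineWeb-EDU", split="train")
print(ds[0]["content"][:200])
```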
## Processing Code
The complete processing pipeline is open-sourced to enable community scaling and reproduction. The code includes optimizations for high-speed GPU processing, automatic checkpointing, and educational quality filtering.
### Requirements
```bash
pip install torch transformers datasets tqdm numpy pandas
```
*Complete processing script and documentation will be available in the repository.*
## Quality Analysis
### Educational Score Distribution (Based on 64,000+ Samples)
- **Score 3.5-4.0**: Solid educational content (60% of passed samples)
- **Score 4.0-4.5**: High-quality educational material (30% of passed samples)
- **Score 4.5-5.0**: Exceptional educational resources (10% of passed samples)
## Use Cases
- **Educational AI Training**: Train models specifically for educational applications
- **Content Quality Research**: Study high-quality web content characteristics
- **Educational Content Generation**: Fine-tune models for creating educational materials
- **Knowledge Distillation**: Transfer educational knowledge to smaller models
- **Curriculum Development**: Analyze educational content patterns and structures
## Community & Contributions
This initial release of 64,000+ premium educational samples demonstrates the effectiveness of our filtering pipeline. The dataset represents a proof-of-concept for community-driven scaling.
**How you can contribute:**
- **Scale the processing**: Use our code to process additional Ultra-FineWeb data
- **Quality improvements**: Suggest enhanced filtering techniques
- **Multilingual expansion**: Apply similar filtering to other languages
- **Research applications**: Share findings and use cases with the community
**Next Steps:**
The processing pipeline is designed for easy scaling. With access to larger compute resources, the complete Ultra-FineWeb dataset can be processed to yield an estimated 130M+ premium educational samples.
## More Examples Coming Soon
This initial release represents just the beginning! We're actively working to expand Ultra FineWeb EDU with additional high-quality educational content.
**Upcoming Releases:**
- **Extended English Dataset**: Processing continues on the full Ultra-FineWeb English corpus
- **Multilingual Support**: Chinese educational content from Ultra-FineWeb-zh
- **Quality Improvements**: Enhanced filtering techniques and threshold optimization
- **Community Contributions**: Datasets processed by community members with larger compute resources
**Release Schedule:**
- **Phase 1** (Current): 64,000+ samples - Proof of concept ✓
- **Phase 2** (Coming Soon): 500,000+ samples - Extended initial release
- **Phase 3** (Future): 10M+ samples - Major expansion
- **Phase 4** (Goal): 130M+ samples - Complete Ultra-FineWeb processing
**Stay Updated:**
Follow this repository for announcements about new releases, expanded datasets, and community contributions. Each release will maintain the same rigorous 3.5+ educational quality threshold.
*Processing speed: ~350 samples/second on consumer hardware. Community members with enterprise GPUs can significantly accelerate this timeline.*
## Citation
If you use Ultra FineWeb EDU in your research or applications, please cite:
```bibtex
@dataset{procreations2025ultrafineweb_edu,
  title={Ultra FineWeb EDU: High-Quality Educational Content from Ultra-FineWeb},
  author={ProCreations},
  year={2025},
  url={https://huggingface.co/datasets/[dataset-url]},
  note={Filtered from Ultra-FineWeb using educational quality threshold 3.5+}
}
```
## Acknowledgments
This dataset stands on the shoulders of giants and would not be possible without the groundbreaking work of several teams:
### Core Foundations
- **Ultra-FineWeb Team ([openbmb](https://huggingface.co/openbmb))**: For creating the exceptional Ultra-FineWeb dataset through their innovative, efficient verification-based filtering pipeline. Their work represents a major advance in data quality, reducing 25B samples to 1.3B through rigorous curation. This dataset directly builds upon their research and methodology. ([Ultra-FineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb), [Technical Report](https://arxiv.org/abs/2505.05427))
- **FineWeb-Edu Team ([HuggingFaceFW](https://huggingface.co/HuggingFaceFW))**: For developing the sophisticated educational content classifier that makes this work possible. Their BERT-based model, trained on 450k Llama 3-generated annotations, provides the critical educational quality assessment that enables precise filtering. ([FineWeb-Edu Classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier))
### Additional Thanks
- **FineWeb Team**: For the original high-quality web corpus that serves as the foundation for all subsequent work
- **Llama3 Team**: For providing the annotations that trained the educational classifier
- **Snowflake Arctic Team**: For the embedding model that powers the classifier
- **Open Source Community**: For the tools, libraries, and collaborative spirit that enables this research
### Special Recognition
The methodologies, quality standards, and technical innovations developed by the Ultra-FineWeb and FineWeb-Edu teams form the core foundation of this dataset. This work is essentially an application and extension of their remarkable contributions to the field of high-quality dataset curation.
## License
This dataset is released under the **Apache 2.0 License**, consistent with the source Ultra-FineWeb dataset. Please ensure compliance with the original dataset licenses when using this data.
## Related Resources
- [Ultra-FineWeb Dataset](https://huggingface.co/datasets/openbmb/Ultra-FineWeb)
- [FineWeb-Edu Classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier)
- [Original FineWeb Dataset](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
- [Processing Code Repository](https://github.com/[your-repo])
---
<div align="center">
**Created by ProCreations** | **Powered by Community Collaboration**
*Building better educational AI, one dataset at a time*
</div>