---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
configs:
  - config_name: data-v1.1
    data_files:
      - split: train
        path: data_v1_1/*
  - config_name: CC-MAIN-2024-18
    data_files:
      - split: train
        path: CC-MAIN-2024-18/*
  - config_name: CC-MAIN-2024-10
    data_files:
      - split: train
        path: CC-MAIN-2024-10/*
  - config_name: CC-MAIN-2023-50
    data_files:
      - split: train
        path: CC-MAIN-2023-50/*
  - config_name: CC-MAIN-2023-40
    data_files:
      - split: train
        path: CC-MAIN-2023-40/*
  - config_name: CC-MAIN-2023-23
    data_files:
      - split: train
        path: CC-MAIN-2023-23/*
  - config_name: CC-MAIN-2023-14
    data_files:
      - split: train
        path: CC-MAIN-2023-14/*
  - config_name: CC-MAIN-2023-06
    data_files:
      - split: train
        path: CC-MAIN-2023-06/*
  - config_name: CC-MAIN-2022-49
    data_files:
      - split: train
        path: CC-MAIN-2022-49/*
  - config_name: CC-MAIN-2022-40
    data_files:
      - split: train
        path: CC-MAIN-2022-40/*
  - config_name: CC-MAIN-2022-33
    data_files:
      - split: train
        path: CC-MAIN-2022-33/*
  - config_name: CC-MAIN-2022-27
    data_files:
      - split: train
        path: CC-MAIN-2022-27/*
  - config_name: CC-MAIN-2022-21
    data_files:
      - split: train
        path: CC-MAIN-2022-21/*
  - config_name: CC-MAIN-2022-05
    data_files:
      - split: train
        path: CC-MAIN-2022-05/*
  - config_name: CC-MAIN-2021-49
    data_files:
      - split: train
        path: CC-MAIN-2021-49/*
  - config_name: CC-MAIN-2021-43
    data_files:
      - split: train
        path: CC-MAIN-2021-43/*
  - config_name: CC-MAIN-2021-39
    data_files:
      - split: train
        path: CC-MAIN-2021-39/*
  - config_name: CC-MAIN-2021-31
    data_files:
      - split: train
        path: CC-MAIN-2021-31/*
  - config_name: CC-MAIN-2021-25
    data_files:
      - split: train
        path: CC-MAIN-2021-25/*
  - config_name: CC-MAIN-2021-21
    data_files:
      - split: train
        path: CC-MAIN-2021-21/*
  - config_name: CC-MAIN-2021-17
    data_files:
      - split: train
        path: CC-MAIN-2021-17/*
  - config_name: CC-MAIN-2021-10
    data_files:
      - split: train
        path: CC-MAIN-2021-10/*
  - config_name: CC-MAIN-2021-04
    data_files:
      - split: train
        path: CC-MAIN-2021-04/*
  - config_name: CC-MAIN-2020-50
    data_files:
      - split: train
        path: CC-MAIN-2020-50/*
  - config_name: CC-MAIN-2020-45
    data_files:
      - split: train
        path: CC-MAIN-2020-45/*
  - config_name: CC-MAIN-2020-40
    data_files:
      - split: train
        path: CC-MAIN-2020-40/*
  - config_name: CC-MAIN-2020-34
    data_files:
      - split: train
        path: CC-MAIN-2020-34/*
  - config_name: CC-MAIN-2020-29
    data_files:
      - split: train
        path: CC-MAIN-2020-29/*
  - config_name: CC-MAIN-2020-24
    data_files:
      - split: train
        path: CC-MAIN-2020-24/*
  - config_name: CC-MAIN-2020-16
    data_files:
      - split: train
        path: CC-MAIN-2020-16/*
  - config_name: CC-MAIN-2020-10
    data_files:
      - split: train
        path: CC-MAIN-2020-10/*
  - config_name: CC-MAIN-2020-05
    data_files:
      - split: train
        path: CC-MAIN-2020-05/*
  - config_name: CC-MAIN-2019-51
    data_files:
      - split: train
        path: CC-MAIN-2019-51/*
  - config_name: CC-MAIN-2019-47
    data_files:
      - split: train
        path: CC-MAIN-2019-47/*
  - config_name: CC-MAIN-2019-43
    data_files:
      - split: train
        path: CC-MAIN-2019-43/*
  - config_name: CC-MAIN-2019-39
    data_files:
      - split: train
        path: CC-MAIN-2019-39/*
  - config_name: CC-MAIN-2019-35
    data_files:
      - split: train
        path: CC-MAIN-2019-35/*
  - config_name: CC-MAIN-2019-30
    data_files:
      - split: train
        path: CC-MAIN-2019-30/*
  - config_name: CC-MAIN-2019-26
    data_files:
      - split: train
        path: CC-MAIN-2019-26/*
  - config_name: CC-MAIN-2019-22
    data_files:
      - split: train
        path: CC-MAIN-2019-22/*
  - config_name: CC-MAIN-2019-18
    data_files:
      - split: train
        path: CC-MAIN-2019-18/*
  - config_name: CC-MAIN-2019-13
    data_files:
      - split: train
        path: CC-MAIN-2019-13/*
  - config_name: CC-MAIN-2018-51
    data_files:
      - split: train
        path: CC-MAIN-2018-51/*
  - config_name: CC-MAIN-2018-47
    data_files:
      - split: train
        path: CC-MAIN-2018-47/*
  - config_name: CC-MAIN-2018-43
    data_files:
      - split: train
        path: CC-MAIN-2018-43/*
  - config_name: CC-MAIN-2018-39
    data_files:
      - split: train
        path: CC-MAIN-2018-39/*
  - config_name: CC-MAIN-2018-34
    data_files:
      - split: train
        path: CC-MAIN-2018-34/*
  - config_name: CC-MAIN-2018-30
    data_files:
      - split: train
        path: CC-MAIN-2018-30/*
  - config_name: CC-MAIN-2018-26
    data_files:
      - split: train
        path: CC-MAIN-2018-26/*
  - config_name: CC-MAIN-2018-22
    data_files:
      - split: train
        path: CC-MAIN-2018-22/*
  - config_name: CC-MAIN-2018-17
    data_files:
      - split: train
        path: CC-MAIN-2018-17/*
  - config_name: CC-MAIN-2018-13
    data_files:
      - split: train
        path: CC-MAIN-2018-13/*
  - config_name: CC-MAIN-2018-09
    data_files:
      - split: train
        path: CC-MAIN-2018-09/*
  - config_name: CC-MAIN-2018-05
    data_files:
      - split: train
        path: CC-MAIN-2018-05/*
  - config_name: CC-MAIN-2017-51
    data_files:
      - split: train
        path: CC-MAIN-2017-51/*
  - config_name: CC-MAIN-2017-47
    data_files:
      - split: train
        path: CC-MAIN-2017-47/*
  - config_name: CC-MAIN-2017-43
    data_files:
      - split: train
        path: CC-MAIN-2017-43/*
  - config_name: CC-MAIN-2017-39
    data_files:
      - split: train
        path: CC-MAIN-2017-39/*
  - config_name: CC-MAIN-2017-34
    data_files:
      - split: train
        path: CC-MAIN-2017-34/*
  - config_name: CC-MAIN-2017-30
    data_files:
      - split: train
        path: CC-MAIN-2017-30/*
  - config_name: CC-MAIN-2017-26
    data_files:
      - split: train
        path: CC-MAIN-2017-26/*
  - config_name: CC-MAIN-2017-22
    data_files:
      - split: train
        path: CC-MAIN-2017-22/*
---

<h1 align="center">
  πŸƒ MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>

πŸƒ MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. πŸƒ MINT-1T is designed to facilitate research in multimodal pretraining. πŸƒ MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.

You are currently viewing the HTML subset of 🍃 MINT-1T. For PDF and ArXiv subsets, please refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
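
For example, a single CommonCrawl snapshot config can be streamed with the HuggingFace `datasets` library. A minimal sketch, assuming the repository id `mlfoundations/MINT-1T-HTML` (the id is an assumption; use the one shown on this page) and the config names listed in the YAML header above:

```python
from datasets import load_dataset

# Stream one CommonCrawl snapshot config so the multi-terabyte dataset
# does not have to be downloaded in full. The repository id below is an
# assumption; the config names come from the YAML header above.
ds = load_dataset(
    "mlfoundations/MINT-1T-HTML",
    "CC-MAIN-2024-18",
    split="train",
    streaming=True,
)

doc = next(iter(ds))  # peek at a single interleaved document
print(doc.keys())
```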

![Examples](interleaved-example-twitter.png)

## Updates
### 8/8/24
We have updated MINT-1T (HTML) with fixed document URL filtering and additional image safety filtering. As we prioritize safety, we have decided to only release the HTML data from MINT-1T that passes a rigorous image filtering pipeline; we run an additional image safety classifier, the one created by [Datacomp](https://www.datacomp.ai/dcclip/index.html#home), on data already filtered by our [original NSFW image classifier](https://github.com/GantMan/nsfw_model). The newly released MINT-1T (HTML) contains 792B text tokens and 905M documents.
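
As a rough sketch of this two-stage gate (the `nsfw_score` and `safety_score` callables below are hypothetical stand-ins for the two classifiers; neither name comes from the MINT-1T codebase):

```python
def passes_safety_gate(image, nsfw_score, safety_score,
                       nsfw_threshold=0.5, safety_threshold=0.5):
    """Hypothetical two-stage check: an image must clear both classifiers.

    `nsfw_score` stands in for the original NSFW classifier and
    `safety_score` for the additional Datacomp-style safety classifier;
    both are assumed to return the probability that the image is unsafe.
    """
    if nsfw_score(image) >= nsfw_threshold:
        return False  # rejected at the first stage
    return safety_score(image) < safety_threshold

# A document is released only if every one of its images passes the gate.
```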

## Dataset Details

### Dataset Sources

- **Repository:** https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/

## Uses

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

πŸƒ MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reson about interleaved text and images sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

πŸƒ MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content) as well as military applications are all inappropriate use cases of πŸƒ MINT-1T.

## Dataset Creation

### Curation Rationale

πŸƒ MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.

### Source Data

The dataset is a comprehensive collection of multimodal documents from various sources:

- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository

In total, πŸƒ MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 26.8 million PDF documents
- 0.6 million ArXiv documents

#### Data Collection and Processing

The data collection and processing involved several steps:

1. Document Extraction:
   - HTML documents were parsed from CommonCrawl WARC files
   - PDF documents were extracted from CommonCrawl WAT files
   - ArXiv papers were directly sourced from ArXiv S3 buckets

2. Filtering Process:
   - Applied text quality filters to ensure content relevance and readability
   - Removed duplicate content at both paragraph and document levels
   - Filtered out undesirable content based on predefined criteria
   - Verified image availability and quality for HTML documents
   - Limited PDF size to 50MB and 50 pages to manage dataset size and quality

3. Image Processing (see the sketch after this list):
   - Used NSFW image detection to remove pornographic or otherwise undesirable images
   - Removed images smaller than 150 pixels or larger than 20,000 pixels
   - Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures

4. Text Processing:
   - Used fasttext for language identification, focusing on English content
   - Masked personally identifiable information such as email addresses and IP addresses
   - Applied paragraph and document-level deduplication using Bloom filters

5. PDF Specific Processing:
   - Used PyMuPDF for parsing PDFs and extracting reading order
   - Clustered text blocks based on columns and ordered from top left to bottom right

6. ArXiv Specific Processing:
   - Used TexSoup to parse LaTeX source code and interleave images with text
   - Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
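
Two of these checks are easy to make concrete. Below is a minimal sketch, assuming the public `lid.176.bin` fasttext language-ID model and the thresholds listed above; the 0.65 confidence cutoff and the exact comparison semantics are illustrative assumptions, not the released pipeline:

```python
import fasttext  # pip install fasttext

# Assumed local path to the pretrained fasttext language-ID model
# (lid.176.bin, downloadable from fasttext.cc).
lid_model = fasttext.load_model("lid.176.bin")

def is_english(text: str, threshold: float = 0.65) -> bool:
    """Keep a document only if fasttext labels it English with enough confidence."""
    # fasttext rejects newlines in input, so collapse them first.
    labels, probs = lid_model.predict(text.replace("\n", " "))
    return labels[0] == "__label__en" and probs[0] >= threshold

def keep_image(width: int, height: int, source: str) -> bool:
    """Apply the dimension and aspect-ratio thresholds described above."""
    # Drop images that are too small or implausibly large in either dimension.
    if min(width, height) < 150 or max(width, height) > 20_000:
        return False
    # Looser aspect-ratio limit for PDFs (3:1) than HTML (2:1), so that
    # wide scientific figures survive the filter.
    max_ratio = 3.0 if source == "pdf" else 2.0
    return max(width, height) / min(width, height) <= max_ratio
```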

Various open-source tools were used in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), [DCLM](https://www.datacomp.ai/dclm/), and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
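
As a rough illustration of how Bloom-filter deduplication works (a from-scratch toy, not bff itself):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter for paragraph-level deduplication (illustration only)."""

    def __init__(self, size_bits: int = 1 << 24, num_hashes: int = 5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests of the paragraph.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter()

def dedup_paragraphs(paragraphs):
    """Yield paragraphs not (probably) seen before; rare false positives
    drop a small fraction of unique paragraphs, acceptable at this scale."""
    for p in paragraphs:
        if p not in seen:
            seen.add(p)
            yield p
```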

#### Personal and Sensitive Information

Despite being sourced from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:

- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
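
Email and IP masking of this kind is typically regex-based; a minimal sketch (the patterns and placeholder tokens below are illustrative assumptions, not the project's exact rules):

```python
import re

# Illustrative patterns only; production pipelines use more careful ones.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mask_pii(text: str) -> str:
    """Replace email addresses and IPv4 addresses with placeholder tokens."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    return IPV4_RE.sub("<IP>", text)

print(mask_pii("Contact admin@example.com from 192.168.0.1"))
# -> Contact <EMAIL> from <IP>
```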

However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.

## Bias, Risks, and Limitations

Several potential biases, risks, and limitations have been identified:

1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.

2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.

3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.

4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.

5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.

### Recommendations

Given these considerations, the following recommendations are provided:

1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.

2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.

3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.

4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.

## License
We release πŸƒ MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.

## Citation

```
@article{awadalla2024mint1t,
      title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens}, 
      author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
      journal={arXiv preprint arXiv:2406.11271},
      year={2024}
}
```