---
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE
dataset_info: NULL
task_categories:
- text-to-image
- image-to-text
language:
- en
---

# Dataset Card for TiC-DataComp

<!-- Provide a quick summary of the dataset. -->

This dataset contains metadata for the TiC-DataComp benchmark for time-continual learning of image-text models.
It contains timestamp information for DataComp-1B in the form of UID groupings by year/month, sourced from the original CommonCrawl.
We also release UIDs for our TiC-DataCompNet and TiC-DataComp-Retrieval evaluations for continual learning of CLIP models.
For details on how to use the metadata, please visit our [GitHub repository](https://github.com/apple/ml-tic-clip).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Keeping large foundation models up to date on the latest data is inherently expensive.
To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models.
This problem is exacerbated by the lack of any large-scale continual learning benchmarks or baselines.
We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models:
TiC-DataComp, TiC-YFCC, and TiC-RedCaps. TiC-DataComp, our largest dataset,
contains over 12.7B timestamped image-text pairs spanning 9 years (2014-2022).
We first use our benchmarks to curate various dynamic evaluations to measure the temporal robustness of existing models.
We show that OpenAI's CLIP (trained on data up to 2020) loses ≈8% zero-shot accuracy on our curated retrieval task from 2021-2022 compared with more recently trained models in the OpenCLIP repository.
We then study how to efficiently train models on time-continuous data.
We demonstrate that a simple rehearsal-based approach that continues training from the last checkpoint and replays old data reduces compute by 2.5× compared to the standard practice of retraining from scratch.
Code is available at [this https URL](https://github.com/apple/ml-tic-clip).
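
The rehearsal-based approach amounts to resuming from the most recent checkpoint and training on a mixture of the newest data and replayed older data. The sketch below only illustrates that data-mixing step; it is not the paper's exact recipe, and `replay_frac` and the function name are placeholders (see the paper and repository for the ratios actually used).

```python
import random

def build_rehearsal_batch(new_samples, old_samples, batch_size, replay_frac=0.5):
    """Mix the newest time step's samples with replayed older samples.

    replay_frac is a placeholder; the old/new ratio is a design choice and
    should be taken from the TiC-CLIP paper/repository, not from this sketch.
    """
    n_old = int(batch_size * replay_frac)
    n_new = batch_size - n_old
    batch = random.sample(new_samples, n_new) + random.sample(old_samples, n_old)
    random.shuffle(batch)  # interleave old and new samples within the batch
    return batch
```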

- **Developed by:** Apple
- **License:** See [LICENSE](https://github.com/apple/ml-tic-clip/blob/main/LICENSE)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

Researchers can use the TiC-DataComp dataset to design and evaluate continual learning methods at large scale for image-text models.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

```
- tic-datacomp_training_monthly/<YYYYMM>.npy
  - List of UIDs for each month.
- tic-datacomp_training_yearly_noeval/<YYYY>.npy
  - List of UIDs for each year after removing yearly evaluation sets.
- tic-datacomp_retrieval_evals_year2uids: TiC-DataComp-Retrieval evaluation UIDs per year.
- tic-datacompnet_year2uids: TiC-DataCompNet evaluation UIDs per year.
```
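
Below is a minimal sketch of how one of these UID files might be loaded and used to filter a DataComp metadata shard. It assumes each `.npy` file holds a flat array of DataComp sample UIDs and uses a hypothetical month (`201403`); the exact array dtype (e.g., hex strings vs. packed integers) and the full filtering workflow should be checked against the GitHub repository.

```python
import numpy as np

# Hypothetical example file; real files follow tic-datacomp_training_monthly/<YYYYMM>.npy.
uids = np.load("tic-datacomp_training_monthly/201403.npy", allow_pickle=True)
print(f"Loaded {len(uids)} UIDs for 2014-03")

# Build a membership set so DataComp samples can be filtered down to this month.
# Assumes UIDs are hashable values (e.g., hex strings); adapt if they are packed integers.
uid_set = set(uids.tolist())

def keep_sample(sample_uid) -> bool:
    """Return True if a DataComp sample belongs to this month's training split."""
    return sample_uid in uid_set
```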

## Citation

**[TiC-CLIP: Continual Training of CLIP Models](https://arxiv.org/abs/2310.16226) (ICLR 2024)**
*Garg, S., Farajtabar, M., Pouransari, H., Vemulapalli, R., Mehta, S., Tuzel, O., Shankar, V., and Faghri, F.*

```bibtex
@inproceedings{garg2024tic,
  title={TiC-CLIP: Continual Training of CLIP Models},
  author={Garg, Saurabh and Farajtabar, Mehrdad and Pouransari, Hadi and Vemulapalli, Raviteja and Mehta, Sachin and Tuzel, Oncel and Shankar, Vaishaal and Faghri, Fartash},
  booktitle={The Twelfth International Conference on Learning Representations (ICLR)},
  year={2024},
  url={https://openreview.net/forum?id=TLADT8Wrhn}
}
```