---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---

# TinyTimeMixer (TTM) R2 Model Card

<p align="center" width="100%">
<img src="ttm_image.webp" width="600">
</p>

TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research.
**With less than 1 Million parameters, TTM introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**

TTM has been accepted at NeurIPS 2024.

**TTM-R2 comprises TTM variants pre-trained on larger pretraining datasets.**

TTM outperforms several popular benchmarks that demand billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight forecasters, pre-trained on publicly available time-series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be fine-tuned for multivariate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.

**The current open-source version supports point forecasting use-cases ranging from minutely to hourly resolutions (e.g., 10 min, 15 min, 1 hour).**

**Note that zero-shot, fine-tuning, and inference tasks using TTM can easily be executed on a single-GPU machine or even on a laptop.**

## Model Releases (along with the branch name where the models are stored):

- **512-96-r2**: Given the last 512 time points (i.e., the context length), this model can forecast up to the next 96 time points (i.e., the forecast length) into the future. It is pre-trained on a larger pretraining dataset for improved accuracy. Recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). (branch name: 512-96-r2)

- **1024-96-r2**: Given the last 1024 time points (i.e., the context length), this model can forecast up to the next 96 time points (i.e., the forecast length) into the future. It is pre-trained on a larger pretraining dataset for improved accuracy. Recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). (branch name: 1024-96-r2)

- **1536-96-r2**: Given the last 1536 time points (i.e., the context length), this model can forecast up to the next 96 time points (i.e., the forecast length) into the future. It is pre-trained on a larger pretraining dataset for improved accuracy. Recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). (branch name: 1536-96-r2)

## Model Capabilities with example scripts

The example scripts below can be used with any of the above TTM models. Update the HF model URL and branch name in the `from_pretrained` call to pick the model of your choice (see the sketch after the list below).

- Getting Started [[colab]](https://colab.research.google.com/github/IBM/tsfm/blob/main/notebooks/tutorial/ttm_tutorial.ipynb)
- Zeroshot Multivariate Forecasting [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_getting_started.ipynb)
- Finetuned Multivariate Forecasting:
  - Channel-Independent Finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_getting_started.ipynb) [[M4-Hourly finetuning]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
  - Channel-Mix Finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_channel_mix_finetuning.ipynb)
- **New Releases (extended features released in October 2024)**
  - Finetuning and Forecasting with Exogenous/Control Variables [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)
  - Finetuning and Forecasting with static categorical features [Example: To be added soon]
  - Rolling Forecasts: extend forecast lengths beyond 96 via the rolling capability [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_rolling_prediction_getting_started.ipynb)
  - Helper scripts for optimal learning-rate suggestions for finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)

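A minimal sketch of switching between the released variants by changing the branch name passed as `revision` in `from_pretrained`. It assumes the `tsfm_public` package from the granite-tsfm repository is installed, and it reuses the `ibm/TTM` repository id from the Uses section below; substitute this model card's repository id if it differs.

```python
# Minimal sketch: select a TTM variant by its branch name (revision).
# Assumes the tsfm_public package (granite-tsfm repository) is installed;
# adjust the import path if your installed version differs.
from tsfm_public import TinyTimeMixerForPrediction

# Branch names follow the "<context_length>-<forecast_length>-r2" convention
# listed above, e.g. "512-96-r2", "1024-96-r2", or "1536-96-r2".
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm/TTM",              # HF repository id used in this card's Uses section
    revision="1024-96-r2",  # branch name of the desired variant
)

# The loaded configuration reflects the variant's forecasting setting.
print(model.config.context_length, model.config.prediction_length)
```
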
## Benchmarks

TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, and LLMTime in zero-shot/few-shot forecasting while significantly reducing computational requirements. Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).

- TTM-B referred to in the paper maps to the `512-96-r2` model.
- TTM-E referred to in the paper maps to the `1024-96-r2` model.
- TTM-A referred to in the paper maps to the `1536-96-r2` model.

Please note that the Granite TTM models are pre-trained exclusively on datasets with clear commercial-use licenses that are approved by our legal team. As a result, the pre-training dataset used in this release differs slightly from the one used in the research paper, which may lead to minor variations in model performance as compared to the published results. Please refer to our paper for more details.

## Recommended Use
1. Users have to externally standard-scale their data independently for every channel before feeding it to the model (refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data-processing utility for data scaling); a sketch follows this list.
2. The current open-source version supports only minutely and hourly resolutions (e.g., 10 min, 15 min, 1 hour). Lower resolutions (say, weekly or monthly) are currently not supported in this version, as the model needs a minimum context length of 512 or 1024.
3. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will impact model performance.

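A hedged sketch of per-channel standard scaling with the TSP utility mentioned in item 1. The file name, column names, and split logic are illustrative, and the exact `TimeSeriesPreprocessor` arguments may differ across `tsfm_public` versions.

```python
# Hedged sketch: per-channel standard scaling with the TSP utility.
# File/column names are illustrative; verify the TimeSeriesPreprocessor
# signature against the tsfm_public version you install.
import pandas as pd
from tsfm_public.toolkit.time_series_preprocessor import TimeSeriesPreprocessor

df = pd.read_csv("my_data.csv", parse_dates=["date"])  # hypothetical dataset
split = int(0.7 * len(df))
df_train, df_test = df.iloc[:split], df.iloc[split:]

tsp = TimeSeriesPreprocessor(
    timestamp_column="date",
    target_columns=["channel_a", "channel_b"],  # channels to forecast (illustrative)
    scaling=True,          # scale each channel independently (standard scaling)
    context_length=512,
    prediction_length=96,
)

tsp.train(df_train)                        # fit per-channel scalers on the training split only
df_train_scaled = tsp.preprocess(df_train)
df_test_scaled = tsp.preprocess(df_test)   # apply the same scaling to evaluation data
```
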

## Model Description

TTM falls under the category of “focused pre-trained models”, wherein each pre-trained TTM is tailored to a particular forecasting setting (governed by the context length and forecast length). Instead of building one massive model supporting all forecasting settings, we opt for the approach of constructing smaller pre-trained models, each focusing on a specific forecasting setting, thereby yielding more accurate results. Furthermore, this approach ensures that our models remain extremely small and exceptionally fast, facilitating easy deployment without demanding heavy resources.

Hence, in this model card, we plan to release several pre-trained TTMs that can cater to many common forecasting settings in practice. Additionally, we have released our source code along with our pretraining scripts, which users can utilize to pretrain models on their own. Pretraining TTMs is very easy and fast, taking only 3-6 hours using 6 A100 GPUs, as opposed to several days or weeks with traditional approaches.

Each pre-trained model will be released under a different branch name in this model card. Kindly access the required model using our getting-started [notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb), mentioning the branch name.


## Model Details

For more details on the TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).

TTM currently supports two modes:

- **Zeroshot forecasting**: Directly apply the pre-trained model to your target data to get an initial forecast (with no training).

- **Finetuned forecasting**: Finetune the pre-trained model with a subset of your target data to further improve the forecast.

**Since TTM models are extremely small and fast, it is practically very easy to finetune the model with your available target data in a few minutes to get more accurate forecasts.**

The current release supports multivariate forecasting via both channel-independence and channel-mixing approaches. Decoder channel-mixing can be enabled during fine-tuning to capture strong channel-correlation patterns across time-series variates, a critical capability lacking in existing counterparts.

In addition, TTM also supports exogenous infusion and categorical data infusion.

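As a hedged illustration, decoder channel-mixing can be requested when loading the model for fine-tuning; `decoder_mode` is a TinyTimeMixer configuration option in the granite-tsfm code base, and the exact name and values should be verified against the `tsfm_public` version you install.

```python
# Hedged sketch: load TTM with decoder channel-mixing enabled for fine-tuning.
# Verify the decoder_mode option against your installed tsfm_public version.
from tsfm_public import TinyTimeMixerForPrediction

finetune_model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm/TTM",                  # repository id used in this card's Uses section
    revision="512-96-r2",       # pick the variant matching your context length
    decoder_mode="mix_channel", # channel-mixing decoder; default is channel-independent
)
```
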

### Model Sources

- **Repository:** https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer
- **Paper:** https://arxiv.org/pdf/2401.03955.pdf

### Blogs and articles on TTM:
- Refer to our [wiki](https://github.com/ibm-granite/granite-tsfm/wiki)

## Uses

```python
# Import paths per the granite-tsfm (tsfm_public) package; adjust if your version differs.
from transformers import Trainer
from tsfm_public import TinyTimeMixerForPrediction

# Load the model from the HF Model Hub, mentioning the branch name in the revision field
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm/TTM", revision="main"
)

# Do zeroshot evaluation (zeroshot_forecast_args is a TrainingArguments instance you define)
zeroshot_trainer = Trainer(
    model=model,
    args=zeroshot_forecast_args,
)

zeroshot_output = zeroshot_trainer.evaluate(dset_test)


# Freeze the backbone and enable few-shot or finetuning:

# freeze backbone
for param in model.backbone.parameters():
    param.requires_grad = False

finetune_forecast_trainer = Trainer(
    model=model,
    args=finetune_forecast_args,
    train_dataset=dset_train,
    eval_dataset=dset_val,
    callbacks=[early_stopping_callback, tracking_callback],
    optimizers=(optimizer, scheduler),
)
finetune_forecast_trainer.train()
fewshot_output = finetune_forecast_trainer.evaluate(dset_test)
```


## Training Data

The original r1 TTM models were trained on a collection of datasets as follows:
- Australian Electricity Demand: https://zenodo.org/records/4659727
- Australian Weather: https://zenodo.org/records/4654822
- Bitcoin dataset: https://zenodo.org/records/5122101
- KDD Cup 2018 dataset: https://zenodo.org/records/4656756
- London Smart Meters: https://zenodo.org/records/4656091
- Saugeen River Flow: https://zenodo.org/records/4656058
- Solar Power: https://zenodo.org/records/4656027
- Sunspots: https://zenodo.org/records/4654722
- Solar: https://zenodo.org/records/4656144
- US Births: https://zenodo.org/records/4656049
- Wind Farms Production data: https://zenodo.org/records/4654858
- Wind Power: https://zenodo.org/records/4656032
- PEMSD3, PEMSD4, PEMSD7, PEMSD8, PEMS_BAY: https://drive.google.com/drive/folders/1g5v2Gq1tkOq8XO0HDCZ9nOTtRpB6-gPe
- LOS_LOOP: https://drive.google.com/drive/folders/1g5v2Gq1tkOq8XO0HDCZ9nOTtRpB6-gPe

## Citation

Kindly cite the following paper if you intend to use our model or its associated architectures/approaches in your work.

**BibTeX:**

```
@inproceedings{ekambaram2024tinytimemixersttms,
  title={Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series},
  author={Vijay Ekambaram and Arindam Jati and Pankaj Dayama and Sumanta Mukherjee and Nam H. Nguyen and Wesley M. Gifford and Chandra Reddy and Jayant Kalagnanam},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS 2024)},
  year={2024},
}
```


## Model Card Authors

Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Wesley M. Gifford, Sumanta Mukherjee, Chandra Reddy and Jayant Kalagnanam


## IBM Public Repository Disclosure:

All content in this repository including code has been provided by IBM under the associated open source software license and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality nor security, and will not be maintaining this code going forward.