**With model sizes starting from 1M params, TTM (accepted in NeurIPS 24) introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**

TTM outperforms several popular benchmarks that demand billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight forecasters, pre-trained on publicly available time-series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be fine-tuned for multivariate forecasting, remaining competitive with just 5% of the training data. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.
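
As a quick illustration, here is a minimal zero-shot forecasting sketch. It assumes the `tsfm_public` package from the [granite-tsfm](https://github.com/ibm-granite/granite-tsfm) repository, and the random input tensor is a placeholder for your own history window:

```python
# Minimal zero-shot sketch (assumes the `tsfm_public` package from the
# granite-tsfm repository: https://github.com/ibm-granite/granite-tsfm).
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Load the default TTM-R2 variant (context length 512, forecast length 96).
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r2"
)
model.eval()

# Illustrative multivariate history: (batch, context_length, num_channels).
past_values = torch.randn(1, 512, 3)

with torch.no_grad():
    output = model(past_values=past_values)

# Point forecasts for the next 96 steps, one series per input channel.
print(output.prediction_outputs.shape)  # expected: torch.Size([1, 96, 3])
```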

**Note that zero-shot evaluation, fine-tuning, and inference with TTM can easily be run on a single-GPU machine, or even on a laptop!**
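
For instance, here is a hypothetical few-shot fine-tuning sketch on a small set of windowed series. It assumes the model's forward pass accepts `future_values` and returns a `.loss`, following the usual Hugging Face prediction-model convention; the random tensors are placeholders for your own ~5% training split:

```python
# Hypothetical few-shot fine-tuning sketch: adapt TTM on a small fraction of
# your training windows. The `future_values`/`.loss` interface is assumed
# from the Hugging Face prediction-model convention.
import torch
from torch.utils.data import DataLoader, TensorDataset
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r2"
)

# Placeholder few-shot split: 100 windows of 512-step history, 96-step targets.
past = torch.randn(100, 512, 3)
future = torch.randn(100, 96, 3)
loader = DataLoader(TensorDataset(past, future), batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for epoch in range(3):
    for past_values, future_values in loader:
        optimizer.zero_grad()
        out = model(past_values=past_values, future_values=future_values)
        out.loss.backward()
        optimizer.step()
```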

**TTM-R2 comprises TTM variants pre-trained on a larger pretraining dataset (~700M samples).** We have released another set of TTM models, `TTM-R1`, trained on ~250M samples, which can be accessed [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1). In general, `TTM-R2` models perform better than `TTM-R1` models because they are trained on a larger pretraining dataset. However, the choice between R1 and R2 depends on your target data distribution, so we encourage users to try both variants and pick the one that works best for their data.
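
One simple way to run that comparison is sketched below, with placeholder validation tensors standing in for a held-out slice of your own data:

```python
# Illustrative selection sketch: evaluate both public releases on a held-out
# slice of your own series and keep whichever forecasts it better. The
# random tensors below are placeholders for real data.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Placeholder held-out windows: (batch, 512 history steps, 3 channels),
# with matching 96-step ground-truth futures.
val_past = torch.randn(8, 512, 3)
val_future = torch.randn(8, 96, 3)

scores = {}
for repo_id in (
    "ibm-granite/granite-timeseries-ttm-r1",
    "ibm-granite/granite-timeseries-ttm-r2",
):
    model = TinyTimeMixerForPrediction.from_pretrained(repo_id).eval()
    with torch.no_grad():
        preds = model(past_values=val_past).prediction_outputs
    scores[repo_id] = torch.mean((preds - val_future) ** 2).item()  # MSE

best = min(scores, key=scores.get)
print(f"Better variant for this data: {best}")
```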
## Model Releases (along with the branch name where the models are stored):