wmgifford committed
Commit d6a7957 · Parent: 61fc549

r2.1 release updates

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ TTM outperforms other models demanding billions of parameters in several popular
 forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be
 fine-tuned for multi-variate forecasts with just 5% of the training data while remaining competitive. **Note that zero-shot, fine-tuning, and inference tasks using TTM can easily be executed on one GPU or on a laptop.**
 
- TTM r2 comprises TTM variants pre-trained on a larger pretraining dataset (~700M samples). The TTM r2.1 release increases the pretraining dataset size to ~1B samples. The prior model releases, TTM r1, were trained on ~250M samples and can be accessed [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1). In general, TTM r2 models perform better than TTM r1 models, as they are
+ TTM r2 comprises TTM variants pre-trained on a larger pretraining dataset (\~700M samples). The TTM r2.1 release increases the pretraining dataset size to \~1B samples. The prior model releases, TTM r1, were trained on \~250M samples and can be accessed [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1). In general, TTM r2 models perform better than TTM r1 models, as they are
 trained on a larger pretraining dataset. In standard benchmarks, TTM r2 outperforms TTM r1 by over 15%. However, the choice of r1 vs. r2 depends on your target data distribution, so users should try both variants and pick the best model for their data.
 The TTM r2 releases specifically support point-forecasting use cases at minutely to hourly resolutions
 (e.g., 10 min, 15 min, 1 hour). With the TTM r2.1 release, we add support for daily and weekly resolutions.
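Since the model card being edited highlights zero-shot forecasting on a single GPU or laptop, a minimal inference sketch is shown below. It assumes the `granite-tsfm` package (importable as `tsfm_public`), its `TinyTimeMixerForPrediction` class, the `ibm-granite/granite-timeseries-ttm-r2` repo id, and the default 512-step context / 96-step horizon; these come from the broader IBM Granite time-series ecosystem, not from this commit itself.

```python
# Minimal zero-shot inference sketch for a TTM r2 checkpoint.
# Assumes `pip install granite-tsfm` and the repo id below (both are
# assumptions about the surrounding ecosystem, not part of this diff).
import torch
from tsfm_public import TinyTimeMixerForPrediction

# Load the pre-trained forecaster (default revision: 512-step context,
# 96-step forecast horizon).
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r2"
)
model.eval()

# Dummy input: batch of 1 series, 512 past time points, 1 channel.
past_values = torch.randn(1, 512, 1)
with torch.no_grad():
    forecast = model(past_values=past_values).prediction_outputs

print(forecast.shape)  # expected: torch.Size([1, 96, 1])
```

The same checkpoint can instead be fine-tuned on roughly 5% of a target dataset, as the card notes, but the zero-shot path above is the smallest end-to-end check that a downloaded model works.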