vijaye12 committed on
Commit
8b70129
1 Parent(s): d36e37a

Update README.md

Files changed (1): README.md (+4 -6)
README.md CHANGED
@@ -20,12 +20,6 @@ TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Serie
  **With less than 1 Million parameters, TTM (accepted in NeurIPS 24) introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**
 
 
- TTM-R1 comprises TTM variants pre-trained on 250M public training samples. Another set of TTM models, released as TTM-R2 and trained on a much larger pretraining
- dataset (~700M samples), is available [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2). In general, TTM-R2 models perform better than
- TTM-R1 models because they are trained on the larger pretraining dataset. However, the choice of R1 vs. R2 depends on your target data distribution, so we
- recommend trying both variants and picking the one that works best for your data.
-
-
  TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight
  forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be
  fine-tuned for multi-variate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.
@@ -37,6 +31,10 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
  **Note that zero-shot, fine-tuning, and inference tasks using TTM can easily be executed on a single GPU machine or even on laptops!**
 
 
+ TTM-R1 comprises TTM variants pre-trained on 250M public training samples. Another set of TTM models, released as TTM-R2 and trained on a much larger pretraining
+ dataset (~700M samples), is available [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2). In general, TTM-R2 models perform better than
+ TTM-R1 models because they are trained on the larger pretraining dataset. However, the choice of R1 vs. R2 depends on your target data distribution, so we
+ recommend trying both variants and picking the one that works best for your data.
 
  ## Model Releases (along with the branch name where the models are stored):
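To act on the recommendation in the added paragraph, here is a minimal sketch of loading both revisions for a zero-shot comparison. It assumes the `tsfm_public` package from https://github.com/ibm-granite/granite-tsfm is installed; the dummy input shape, the 512-context default variant, and the `prediction_outputs` field are assumptions about that package's API, not part of this commit.

```python
# Minimal sketch (assumes tsfm_public from ibm-granite/granite-tsfm is
# installed): zero-shot forecasts from both TTM revisions, so you can pick
# whichever suits your data, as the README advises.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Toy input: batch of 1 univariate series with 512 past time points
# (assumed to match the 512-context variant on each repo's default branch).
past_values = torch.randn(1, 512, 1)

for repo in (
    "ibm-granite/granite-timeseries-ttm-r1",
    "ibm-granite/granite-timeseries-ttm-r2",
):
    model = TinyTimeMixerForPrediction.from_pretrained(repo)
    model.eval()
    with torch.no_grad():
        out = model(past_values=past_values)
    # prediction_outputs holds the point forecast, e.g. shape (1, 96, 1)
    # for a 512-96 variant.
    print(repo, out.prediction_outputs.shape)
```

In practice you would run both models on a held-out validation split of your own data and keep the revision with the lower forecast error.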