---
license: mit
datasets:
- AutonLab/Timeseries-PILE
metrics:
- accuracy
- mse
- mae
- f1
tags:
- time series
- forecasting
- classification
- anomaly detection
- imputation
- transformers
- pretrained models
- foundation models
- time-series
---

# MOMENT-Large

MOMENT is a family of foundation models for general-purpose time-series analysis. The models in this family (1) serve as a building block for diverse **time-series analysis tasks** (e.g., forecasting, classification, anomaly detection, and imputation), (2) are effective **out-of-the-box**, i.e., with no (or few) task-specific exemplars (enabling, e.g., zero-shot forecasting and few-shot classification), and (3) are **tunable** using in-distribution and task-specific data to improve performance.

For details on MOMENT models, training data, and experimental results, please refer to the paper [MOMENT: A Family of Open Time-series Foundation Models](https://arxiv.org/pdf/2402.03885.pdf).

# Usage

Install the package using:
```bash
pip install git+https://github.com/moment-timeseries-foundation-model/moment-test.git
```

To load the pre-trained model for one of the tasks, use one of the following code snippets:

**Forecasting**
```python
from moment import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={
        'task_name': 'forecasting',
        'forecast_horizon': 96
    },
)
model.init()
```

**Classification**
```python
from moment import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={
        'task_name': 'classification',
        'n_channels': 1,
        'num_class': 2
    },
)
model.init()
```

**Anomaly Detection/Imputation/Pre-training**
```python
from moment import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={"task_name": "reconstruction"},
)
model.init()
```

**Embedding**
```python
from moment import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={'task_name': 'embedding'},
)
```

## Model Details

### Model Description

- **Developed by:** [Auton Lab](https://autonlab.org/), [Carnegie Mellon University](https://www.cmu.edu/) and [University of Pennsylvania](https://www.upenn.edu/)
- **Funded by:** [More Information Needed]
- **Model type:** Time-series Foundation Model
- **License:** MIT License

### Model Sources

- **Repository:** https://github.com/moment-timeseries-foundation-model/
- **Paper:** https://arxiv.org/abs/2402.03885
- **Demo:** https://github.com/moment-timeseries-foundation-model/

## Uses

### Direct Use

[More Information Needed]

### Downstream Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
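The snippet below is a minimal end-to-end sketch assembled from the usage examples above: it loads MOMENT in embedding mode and embeds a batch of synthetic series. The input shape `(batch_size, n_channels, sequence_length)` and the fixed sequence length of 512 follow the paper; the `x_enc` keyword and the `embeddings` attribute of the returned object are assumptions about the pipeline's interface and should be checked against the repository's examples.

```python
import torch
from moment import MOMENTPipeline

# Load MOMENT in embedding mode (see the Embedding snippet above).
model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={'task_name': 'embedding'},
)

# MOMENT consumes fixed-length windows of shape
# (batch_size, n_channels, sequence_length); the paper uses length 512.
x = torch.randn(16, 1, 512)

with torch.no_grad():
    output = model(x_enc=x)  # assumed keyword argument

embeddings = output.embeddings  # assumed output attribute, shape (16, d_model)
print(embeddings.shape)
```

The same pattern should carry over to the other tasks by swapping `task_name` (and the corresponding `model_kwargs`) as in the snippets above.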
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination

[More Information Needed]

## Environmental Impact

We train multiple models over many days, resulting in significant energy usage and a sizeable carbon footprint. However, we hope that releasing our models will make future time-series modeling efforts quicker and more efficient, resulting in lower carbon emissions.

We use the Total Graphics Power (TGP) to estimate the total power consumed for training MOMENT models, although the power actually drawn by the GPU will vary somewhat with its utilization during training. Our calculations do not account for power demands from other components of our compute infrastructure. We use 336.566 kg CO2/MWh as the standard value of CO2 emission per megawatt-hour of energy consumed for [Pittsburgh](https://emissionsindex.org/). For example, at the A6000's 300 W TGP, 404 GPU hours correspond to roughly 0.12 MWh of energy, or about 0.04 tCO2eq at this emission rate.

- **Hardware Type:** NVIDIA RTX A6000 GPU
- **GPU Hours:** 404
- **Compute Region:** Pittsburgh, USA
- **Carbon Emission (tCO2eq):**

## Technical Specifications

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

#### Hardware

All models were trained and evaluated on a computing cluster consisting of 128 AMD EPYC 7502 CPUs, 503 GB of RAM, and 8 NVIDIA RTX A6000 GPUs, each with 49 GiB of RAM. All MOMENT variants were trained on a single A6000 GPU, without any data or model parallelism.

#### Software

## Citation

If you use MOMENT, please cite our paper:

**BibTeX:**
```bibtex
@article{goswami2024moment,
  title={{MOMENT: A Family of Open Time-series Foundation Models}},
  author={Goswami, Mononito and Szafer, Konrad and Choudhry, Arjun and Cai, Yifu and Li, Shuo and Dubrawski, Artur},
  journal={arXiv preprint arXiv:2402.03885},
  year={2024}
}
```

**APA:**
Goswami, M., Szafer, K., Choudhry, A., Cai, Y., Li, S., & Dubrawski, A. (2024). MOMENT: A Family of Open Time-series Foundation Models. arXiv preprint arXiv:2402.03885.

## Glossary

[More Information Needed]

## More Information

[More Information Needed]

## Model Card Authors

[More Information Needed]

## Model Card Contact