---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
<h2 align="center"> <a href="https://arxiv.org/abs/2405.14297">Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models</a></h2>
<h5 align="center"> If our project helps you, please give us a star ⭐ on <a href="https://github.com/LINs-lab/DynMoE">GitHub</a> and cite our paper!</h5>
## πŸ“° News
- **[2025.01.23]** 🎉 Our paper is accepted to ICLR 2025!
- **[2024.05.25]** πŸ”₯ Our **checkpoints** are available now!
- **[2024.05.23]** πŸ”₯ Our [paper](https://arxiv.org/abs/2405.14297) is released!
## 😎 What's Interesting?
**Dynamic Mixture of Experts (DynMoE)** incorporates (1) a novel gating method that enables each token to automatically determine the number of experts to activate, and (2) an adaptive process that automatically adjusts the number of experts during training.
### Top-Any Gating
<video controls src="https://i.imgur.com/bLgNaoH.mp4" title="Top-Any Gating"></video>
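In essence, top-any gating compares each token against every expert and routes the token to every expert whose gate score clears a trainable per-expert threshold, so different tokens may activate different numbers of experts. The PyTorch sketch below only illustrates this idea; the module and parameter names (`TopAnyGate`, `expert_emb`, `gate_threshold`), the cosine-similarity score, and the fallback to the single best-scoring expert are illustrative assumptions, not the released implementation; see the [GitHub repository](https://github.com/LINs-lab/DynMoE) for the real code.

```python
# Simplified sketch of threshold-based ("top-any") gating; names and details are assumptions.
import torch
import torch.nn as nn


class TopAnyGate(nn.Module):
    def __init__(self, hidden_dim: int, num_experts: int):
        super().__init__()
        self.expert_emb = nn.Parameter(torch.randn(num_experts, hidden_dim))
        # One trainable threshold per expert; sigmoid keeps it in (0, 1).
        self.gate_threshold = nn.Parameter(torch.zeros(num_experts))

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, hidden_dim)
        scores = torch.cosine_similarity(
            x.unsqueeze(1), self.expert_emb.unsqueeze(0), dim=-1
        )  # (num_tokens, num_experts)
        # A token activates every expert whose score exceeds that expert's threshold.
        mask = scores > torch.sigmoid(self.gate_threshold)
        # Fallback in this sketch: a token that activates no expert uses its best-scoring one.
        no_expert = ~mask.any(dim=-1)
        mask[no_expert] = nn.functional.one_hot(
            scores[no_expert].argmax(dim=-1), scores.size(-1)
        ).bool()
        # Unnormalized combination weights, for illustration only.
        weights = torch.where(mask, scores, torch.zeros_like(scores))
        return mask, weights  # each token may use a different number of experts
```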
### Adaptive Training Process
![](https://cdn.jsdelivr.net/gh/QAQdev/Pics@master/uPic/adaptive.png)
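The adaptive process pictured above works in both directions: when some tokens end up activating no expert, a new expert is added, and experts that stay unused are removed. The snippet below is a schematic of that bookkeeping against the hypothetical `TopAnyGate` module from the previous sketch; the actual trigger conditions, expert initialization, and multi-GPU synchronization are handled in the [GitHub repository](https://github.com/LINs-lab/DynMoE).

```python
# Schematic grow/shrink step for a TopAnyGate-like module (illustrative only).
import torch
import torch.nn as nn


def adapt_expert_count(gate, activation_counts, unrouted_tokens):
    """activation_counts: per-expert token counts since the last adaptation step.
    unrouted_tokens:   hidden states of tokens whose scores fell below every threshold."""
    with torch.no_grad():
        # Grow: seed a new expert embedding from the tokens no expert claimed.
        if unrouted_tokens.numel() > 0:
            new_emb = unrouted_tokens.mean(dim=0, keepdim=True)
            gate.expert_emb = nn.Parameter(torch.cat([gate.expert_emb, new_emb], dim=0))
            gate.gate_threshold = nn.Parameter(
                torch.cat([gate.gate_threshold, gate.gate_threshold.new_zeros(1)])
            )
            # Mark the new expert as used so it is not pruned immediately.
            activation_counts = torch.cat(
                [activation_counts, activation_counts.new_ones(1)]
            )
        # Shrink: drop experts that received no tokens in the monitoring window.
        keep = activation_counts > 0
        if not keep.all():
            gate.expert_emb = nn.Parameter(gate.expert_emb[keep])
            gate.gate_threshold = nn.Parameter(gate.gate_threshold[keep])
```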
## πŸ’‘ Model Details
- 🤔 DynMoE-Phi-2 is an MoE model with **dynamic top-k gating**, fine-tuned from [LanguageBind/MoE-LLaVA-Phi2-Stage2](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-Stage2); a minimal loading sketch follows this list.
- 🚀 Our DynMoE-Phi-2-2.7B has 5.3B parameters in total, but **only 3.4B are activated!** (average top-k = 1.68)
- ⌛ With the DynMoE tuning stage, we can complete training on 8 A100 GPUs **within 2 days**.
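Below is a minimal text-generation sketch, assuming the checkpoint can be loaded through 🤗 Transformers with `trust_remote_code=True` (the DynMoE layers are custom) and that the Hub repo id is `LINs-lab/DynMoE-Phi-2-2.7B`; both are assumptions rather than tested instructions, and the generation settings are illustrative defaults. If the checkpoint does not load this way, use the inference scripts in the [GitHub repository](https://github.com/LINs-lab/DynMoE) instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LINs-lab/DynMoE-Phi-2-2.7B"  # assumed Hub repo id for this model card

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # only ~3.4B of the 5.3B parameters are active per token
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain mixture-of-experts language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```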
## πŸ‘ Acknowledgement
We are grateful for the following awesome projects:
- [tutel](https://github.com/microsoft/tutel)
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [GMoE](https://github.com/Luodian/Generalizable-Mixture-of-Experts)
- [EMoE](https://github.com/qiuzh20/EMoE)
- [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA)
- [GLUE-X](https://github.com/YangLinyi/GLUE-X)
## πŸ”’ License
This project is released under the Apache-2.0 license as found in the [LICENSE](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) file.
## ✏️ Citation
```tex
@misc{guo2024dynamic,
  title={Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models},
  author={Yongxin Guo and Zhenglin Cheng and Xiaoying Tang and Tao Lin},
  year={2024},
  eprint={2405.14297},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=LINs-lab/DynMoE&type=Date)](https://star-history.com/#LINs-lab/DynMoE&Date)