---
license: mit
---
# Echo: A Large Language Model with Temporal Episodic Memory
---
<p align="center">
<img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg" width="120" alt="HuggingFace Logo"/>
</p>
---
<p align="center">
<a href="https://github.com/1920993165/echo">πŸ› οΈ <b>Code</b></a> &nbsp; | &nbsp;
<a href="https://huggingface.co/datasets/ALmonster/Echo-v1">πŸ“š <b>Dataset</b></a> &nbsp; | &nbsp;
<a href="https://huggingface.co/ALmonster/Echo1-7B">🧠 <b>7B Model</b></a> &nbsp; | &nbsp;
<a href="https://huggingface.co/ALmonster/Echo1-72B">🧠 <b>72B Model</b></a>
</p>
---
## πŸš€ Welcome to Echo!
**Echo** is a large language model (LLM) built with temporal episodic memory: it retains temporally anchored dialogue context and draws on it when reasoning and responding. We invite the community to explore, use, and cite our work!
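
A minimal usage sketch with the Hugging Face `transformers` library is shown below. It assumes the released checkpoints follow the standard causal-LM convention and that the tokenizer ships a chat template; adjust dtype, device placement, and generation settings for your hardware.

```python
# Minimal sketch: load Echo and run one chat turn.
# Assumptions: standard causal-LM checkpoint layout, bf16 weights,
# and a chat template bundled with the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ALmonster/Echo1-72B"  # or "ALmonster/Echo1-7B" for the smaller variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: check the checkpoint config
    device_map="auto",           # shard across available GPUs
)

messages = [
    {"role": "user", "content": "What did we discuss last Tuesday about the project deadline?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```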
---
## πŸ“– Citation
If you use Echo in your research or applications, please cite our paper:
```bibtex
@misc{liu2025echolargelanguagemodel,
      title={Echo: A Large Language Model with Temporal Episodic Memory},
      author={WenTao Liu and Ruohua Zhang and Aimin Zhou and Feng Gao and JiaLi Liu},
      year={2025},
      eprint={2502.16090},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.16090},
}
```
---
## 🌟 Get Involved
We welcome contributions, feedback, and collaboration from the community. Feel free to open issues or pull requests on our [GitHub](https://github.com/1920993165/echo)!
---
<p align="center">
<b>Empowering AI with memory. Echo: Remember, Reason, Respond.</b>
</p>