---
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
---
# Sharded version of Mistral-7B-v0.1
This is a sharded version of Mistral-7B-v0.1: the checkpoint is split into smaller files so the model can be loaded on machines with limited CPU memory (see the loading sketch below).
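A minimal loading sketch with 🤗 Transformers, assuming the repository id below is a placeholder for this repo's actual id on the Hub and that `torch` and `accelerate` are installed; `low_cpu_mem_usage=True` loads the checkpoint shard by shard instead of materialising the full state dict in RAM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with this repo's actual id on the Hub.
model_id = "your-namespace/Mistral-7B-v0.1-sharded"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# low_cpu_mem_usage=True streams the shards one at a time, so peak CPU RAM
# stays close to the size of a single shard rather than the full model.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",  # requires `accelerate`; places layers on available devices
)

inputs = tokenizer("My favourite condiment is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```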
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model, please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model with the following architecture choices (see the configuration sketch after the list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
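The attention choices are visible in the model's configuration. A small inspection sketch, again assuming a placeholder repository id; the values in the comments are indicative only:

```python
from transformers import AutoConfig

# Placeholder repository id; replace with this repo's actual id on the Hub.
config = AutoConfig.from_pretrained("your-namespace/Mistral-7B-v0.1-sharded")

# Grouped-Query Attention: fewer key/value heads than query heads.
print("attention heads:", config.num_attention_heads)  # e.g. 32
print("key/value heads:", config.num_key_value_heads)  # e.g. 8

# Sliding-Window Attention: each token attends to at most this many previous tokens.
print("sliding window:", config.sliding_window)  # e.g. 4096
```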
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.