---
license: apache-2.0
---
## Overview
The Mixtral-7x8B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model. Mixtral-7x8B outperforms Llama 2 70B on most tested benchmarks.
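In a sparse Mixture of Experts, a gating network activates only a few of the expert feed-forward blocks per token, so inference cost stays far below what the total parameter count suggests. Below is a minimal, illustrative NumPy sketch of top-2 gating; the toy dimensions and linear "experts" are stand-ins for intuition, not the real Mixtral architecture:
```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sparse_moe_layer(x, gate_w, experts, top_k=2):
    """Route a token embedding through the top_k highest-scoring experts."""
    logits = gate_w @ x                # one gate score per expert
    top = np.argsort(logits)[-top_k:]  # indices of the top_k experts
    weights = softmax(logits[top])     # renormalize over the selected experts
    # Only the selected experts run; the others are skipped entirely,
    # which is what keeps compute low despite the large parameter count.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy setup: 8 experts, each a simple linear map standing in for an FFN block.
rng = np.random.default_rng(0)
dim, n_experts = 16, 8
experts = [lambda x, W=rng.standard_normal((dim, dim)) * 0.1: W @ x
           for _ in range(n_experts)]
gate_w = rng.standard_normal((n_experts, dim)) * 0.1

y = sparse_moe_layer(rng.standard_normal(dim), gate_w, experts)
print(y.shape)  # (16,)
```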
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [7x8b-gguf](https://huggingface.co/cortexhub/mixtral/tree/7x8b-gguf) | `cortex run mixtral:7x8b-gguf` |
## Use it with Jan (UI)
1. Install **Jan** by following the [Quickstart](https://jan.ai/docs/quickstart).
2. In the Jan Model Hub, search for:
```
cortexhub/mixtral
```
## Use it with Cortex (CLI)
1. Install **Cortex** by following the [Quickstart](https://cortex.jan.ai/docs/quickstart).
2. Run the model with the command:
```
cortex run mixtral
```
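Once the model is running, Cortex serves it over an OpenAI-compatible HTTP API, so any OpenAI-style client works. The sketch below is an assumption-laden example: the host, port, and model id are placeholders, so adjust them to whatever your local Cortex install reports when it starts.
```python
import requests

# Assumed local Cortex endpoint; replace host/port with your install's values.
url = "http://localhost:39281/v1/chat/completions"

payload = {
    "model": "mixtral:7x8b-gguf",  # variant id from the Variants table above
    "messages": [
        {"role": "user",
         "content": "Explain sparse Mixture of Experts in one sentence."}
    ],
    "max_tokens": 128,
}

resp = requests.post(url, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```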
## Credits
- **Author:** Mistral AI
- **Converter:** [Homebrew](https://www.homebrew.ltd/)