---
license: apache-2.0
---
## Overview

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
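The "Sparse Mixture of Experts" design means each token is routed to a small subset of expert feed-forward networks (two of eight in Mixtral) instead of one large network, so only a fraction of the parameters are active per token. The sketch below is a minimal, illustrative top-2 routing layer in PyTorch, not Mixtral's actual implementation; the class name, dimensions, and expert architecture are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Toy top-2 mixture-of-experts layer (illustrative only)."""

    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                               # x: (tokens, dim)
        scores = self.router(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep the 2 best experts per token
        weights = F.softmax(weights, dim=-1)            # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e in slot k
                if mask.any():                          # only chosen experts are evaluated
                    out[mask] += weights[mask, k : k + 1] * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(SparseMoE()(tokens).shape)  # torch.Size([10, 64])
```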
## Variants

| No | Variant   | Cortex CLI command             |
| -- | --------- | ------------------------------ |
| 1  | 8x7b-gguf | `cortex run mixtral:8x7b-gguf` |
## Use it with Jan (UI)

- Install Jan using Quickstart
- Use it in the Jan model Hub:

  ```
  cortexhub/mixtral
  ```
## Use it with Cortex (CLI)

- Install Cortex using Quickstart
- Run the model with the command:

  ```bash
  cortex run mixtral
  ```
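Once the model is running, Cortex also exposes a local OpenAI-compatible HTTP server, so you can call the model from code. The sketch below is a minimal example assuming the default local address (`127.0.0.1:39281`; verify the host and port for your installation) and uses plain `requests`:

```python
import requests

# Assumed default Cortex server address; check your installation's host/port.
URL = "http://127.0.0.1:39281/v1/chat/completions"

payload = {
    "model": "mixtral:8x7b-gguf",  # model tag from the variants table above
    "messages": [
        {"role": "user", "content": "Explain mixture-of-experts in one sentence."}
    ],
}

response = requests.post(URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```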
## Credits

- Author: Mistral AI
- Converter: Homebrew