Warning: This model ranks first among 7B models on the Open LLM Leaderboard (as of January 28th, 2024). However, note that it was produced by merging many models. I didn't fine-tune any of the merged models myself, and I couldn't confirm that none of them were trained on the evaluation benchmarks.
Model Card
This is a merge of pre-trained language models created with mergekit (using the TIES merge method, not a mixture of experts) and based on mistralai/Mistral-7B-v0.1.
Model Details
Model Description
- Developed by: The Kaitchup
- Model type: Causal language model
- Language(s) (NLP): English
- License: Apache 2.0
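As a causal language model, it can be loaded with the standard Hugging Face transformers API. Below is a minimal sketch; the repository id is a placeholder I introduce for illustration, so substitute this model's actual Hub id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "kaitchup/merged-7b" is a hypothetical id; replace with this model's actual Hub id.
model_id = "kaitchup/merged-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's dtype (float16 here)
    device_map="auto",   # requires accelerate; places weights on available devices
)

prompt = "The best way to learn a new language is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```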
Model Sources
Created with mergekit using the following configuration:
```yaml
models:
  - model: mncai/mistral-7b-dpo-v5
    # no parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: 0.5
      weight: 0.3
  - model: BarryFutureman/NeuralTurdusVariant1-7B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mncai/mistral-7b-dpo-v5
parameters:
  normalize: true
dtype: float16
```
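To reproduce the merge, the configuration above can be passed to mergekit. A minimal sketch, assuming the config is saved as config.yml and following the Python API (MergeConfiguration, run_merge) shown in the mergekit repository:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the TIES merge configuration shown above (assumed saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Download the three source models and write the merged weights to ./merged-model.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=True,   # lower peak memory while reading checkpoints
    ),
)
```

The mergekit-yaml command-line entry point accepts the same configuration file, so the merge can also be run without writing any Python code.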