Test merge of 7B models for learning purposes. v0.2 is mostly the same as v0.1, with minor prompting changes and shards consolidated from 1 GB to 4 GB to reduce the number of files.

Description: This model is a merge of BAAI/Infinity-Instruct-7M-Gen-mistral-7B, SanjiWatsuki/Kunoichi-7B, and uukuguy/speechless-instruct-mistral-7b-v0.2. This is the first model I've ever uploaded; I wanted to learn more about the process. Merged using mergekit-moe.
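mergekit-moe is driven by a YAML config that names a base model, the expert models, and the prompts used to calibrate routing. The sketch below is hypothetical: the choice of base model and the `positive_prompts` are my assumptions, not the actual recipe used for this merge.

```yaml
# Hypothetical mergekit-moe config -- illustrative only, not the actual recipe.
base_model: BAAI/Infinity-Instruct-7M-Gen-mistral-7B
gate_mode: hidden            # route tokens using hidden-state representations
dtype: float16
experts:
  - source_model: SanjiWatsuki/Kunoichi-7B
    positive_prompts:
      - "Write a story"      # assumed routing prompt
  - source_model: uukuguy/speechless-instruct-mistral-7b-v0.2
    positive_prompts:
      - "Write a program"    # assumed routing prompt
```

With two experts over a Mistral-7B base, this kind of config yields the 2x7B layout (about 12.9B parameters) seen in this repo.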

Works up to 8k context, or 16k with 2.5x RoPE scaling.
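Linear RoPE scaling extends the usable context by dividing position indices by the scaling factor, so positions beyond the trained window map back into it. A minimal sketch of the idea (the `rope_angles` helper is mine, illustrative only):

```python
def rope_angles(position, dim=128, base=10000.0, scale=1.0):
    """Per-dimension rotation angles for one token position.

    Linear RoPE scaling divides the position index by `scale`,
    so with scale=2.5 position 16000 is treated like position 6400,
    which falls inside the model's native window.
    """
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    pos = position / scale
    return [pos * f for f in inv_freq]

# With 2.5x scaling, position 16000 behaves like unscaled position 6400.
assert rope_angles(16000, scale=2.5) == rope_angles(6400)
```

In practice the scaling factor is applied by the inference backend (set it in your loader's RoPE-scaling options); the snippet only illustrates the arithmetic.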

Prompt template: custom format, or Alpaca.

Alpaca:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
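For scripting, the Alpaca template above can be filled in programmatically; a minimal sketch (the `build_prompt` helper name is mine, not part of the model):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt format."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Summarize RoPE scaling in one sentence."))
```

The model's completion is then generated after the trailing `### Response:` line.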

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 19.92 |
| IFEval (0-shot) | 36.36 |
| BBH (3-shot) | 32.26 |
| MATH Lvl 5 (4-shot) | 5.66 |
| GPQA (0-shot) | 6.71 |
| MuSR (0-shot) | 13.26 |
| MMLU-PRO (5-shot) | 25.25 |
Model size: 12.9B params (Safetensors, FP16)

Model tree for Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
