
Arcanum-12b Banner

Arcanum-12b πŸ§™β€β™‚οΈ

Arcanum-12b is a merged large language model created by combining TheDrummer/Rocinante-12B-v1.1 and MarinaraSpaghetti/NemoMix-Unleashed-12B using the TIES merging method.

Model Details 📊

Model Architecture πŸ—οΈ

  • Base model: MarinaraSpaghetti/NemoMix-Unleashed-12B
  • Parameter count: ~12.2 billion
  • Architecture specifics: decoder-only transformer language model
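
These details can be checked locally without downloading the full weights. A minimal sketch, assuming the repository id Xclbr7/Arcanum-12b shown on this page and a recent transformers installation:

```python
from transformers import AutoConfig, AutoTokenizer

REPO_ID = "Xclbr7/Arcanum-12b"  # repository id as listed on this page

# Load only the configuration and tokenizer (no model weights) to inspect the architecture.
config = AutoConfig.from_pretrained(REPO_ID)
tokenizer = AutoTokenizer.from_pretrained(REPO_ID)

print(config.model_type)         # architecture family reported by the config
print(config.num_hidden_layers)  # transformer depth
print(config.hidden_size)        # hidden-state width
print(len(tokenizer))            # vocabulary size
```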

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric               Value
Avg.                 20.48
IFEval (0-shot)      29.07
BBH (3-shot)         31.88
MATH Lvl 5 (4-shot)  10.27
GPQA (0-shot)         9.40
MuSR (0-shot)        13.53
MMLU-PRO (5-shot)    28.74

Training & Merging 🔄

Arcanum-12b was created by merging two existing 12B models:

  1. TheDrummer/Rocinante-12B-v1.1

    • Density parameters: [1, 0.8, 0.6]
    • Weight: 0.7
  2. MarinaraSpaghetti/NemoMix-Unleashed-12B

    • Density parameters: [0.5, 0.7, 0.9]
    • Weight: 0.8

Merging method: TIES

Additional parameters:

  • Normalization: True
  • Int8 mask: True
  • Data type: float16
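
For reference, the merge described above maps naturally onto a mergekit TIES configuration. The sketch below is a reconstruction from the parameters listed in this section, not the exact recipe used to produce Arcanum-12b; it assumes mergekit's Python API (MergeConfiguration, run_merge) and a hypothetical output directory ./arcanum-12b-merge:

```python
# Reconstruction of the TIES merge described above (a sketch, not the original recipe).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YAML = """
models:
  - model: TheDrummer/Rocinante-12B-v1.1
    parameters:
      density: [1.0, 0.8, 0.6]
      weight: 0.7
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
    parameters:
      density: [0.5, 0.7, 0.9]
      weight: 0.8
merge_method: ties
base_model: MarinaraSpaghetti/NemoMix-Unleashed-12B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))

run_merge(
    merge_config,
    out_path="./arcanum-12b-merge",  # hypothetical output directory
    options=MergeOptions(
        copy_tokenizer=True,         # carry the base model's tokenizer into the merged repo
        lazy_unpickle=True,          # reduce peak memory while loading shards
        low_cpu_memory=True,
    ),
)
```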

Intended Use 🎯

Conversational use with different personas (persona- and character-driven chat).
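
A minimal conversational sketch with transformers, assuming the repository id Xclbr7/Arcanum-12b, a GPU with enough memory for the FP16 weights, and that the repo's chat template accepts a system-role persona prompt; the persona text and sampling settings below are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "Xclbr7/Arcanum-12b"  # repository id as listed on this page

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.float16,  # weights are published in FP16
    device_map="auto",
)

# Persona-driven conversation: the system prompt defines the character.
messages = [
    {"role": "system", "content": "You are Arcanum, a wry old court wizard who always answers in character."},
    {"role": "user", "content": "What do you make of the storm gathering over the city?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # illustrative sampling settings, not a tuned preset
    top_p=0.9,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```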

Ethical Considerations 🤔

As a merged model based on existing language models, Arcanum-12b may inherit biases and limitations from its parent models. Users should be aware of potential biases in generated content and use the model responsibly.

Acknowledgments 🙏

We acknowledge the contributions of the original model creators:

  • TheDrummer for Rocinante-12B-v1.1
  • MarinaraSpaghetti for NemoMix-Unleashed-12B

Their work formed the foundation for Arcanum-12b.
