# MixtureofMerges-MoE-4x7b-v5

MixtureofMerges-MoE-4x7b-v5 is a Mixture of Experts (MoE) model made with the following models using LazyMergekit:

* paulml/OmniBeagleSquaredMBX-v3-7B-v2
* mlabonne/AlphaMonarch-7B
* Kukedlc/Neural4gsm8k
* eren23/dpo-binarized-NeutrixOmnibe-7B

## 🧩 Configuration

```yaml
base_model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
    positive_prompts:
      - "Answer this question from the ARC (Argument Reasoning Comprehension)."
      - "Use common sense and logical reasoning skills."
      - "What assumptions does this argument rely on?"
      - "Are these assumptions valid? Explain."
      - "Could this be explained in a different way? Provide an alternative explanation."
      - "Identify any weaknesses in this argument."
      - "Does this argument contain any logical fallacies? If so, which ones?"
    negative_prompts:
      - "misses key evidence"
      - "overly general"
      - "focuses on irrelevant details"
      - "assumes information not provided"
      - "relies on stereotypes"
  - source_model: mlabonne/AlphaMonarch-7B
    positive_prompts:
      - "Answer this question, demonstrating commonsense understanding and using any relevant general knowledge you may have."
      - "Provide a concise summary of this passage, then explain why the highlighted section is essential to the main idea."
      - "Read these two brief articles presenting different viewpoints on the same topic. List their key arguments and highlight where they disagree."
      - "Paraphrase this statement, changing the emotional tone but keeping the core meaning intact. Example: Rephrase a worried statement in a humorous way"
      - "Create a short analogy that helps illustrate the main concept of this article."
    negative_prompts:
      - "sounds too basic"
      - "understated"
      - "dismisses important details"
      - "avoids the question's nuance"
      - "takes this statement too literally"
  - source_model: Kukedlc/Neural4gsm8k
    positive_prompts:
      - "Calculate the answer to this math problem"
      - "My mathematical capabilities are strong, allowing me to handle complex mathematical queries"
      - "solve for"
      - "A store sells apples at $0.50 each. If Emily buys 12 apples, how much does she need to pay?"
      - "Isolate x in the following equation: 2x + 5 = 17"
      - "Solve this equation and show your working."
      - "Explain why you used this formula to solve the problem."
      - "Attempt to divide this number by zero. Explain why this cannot be done."
    negative_prompts:
      - "incorrect"
      - "inaccurate"
      - "creativity"
      - "assumed without proof"
      - "rushed calculation"
      - "confuses mathematical concepts"
      - "draws illogical conclusions"
      - "circular reasoning"
  - source_model: eren23/dpo-binarized-NeutrixOmnibe-7B
    positive_prompts:
      - "Generate a few possible continuations to this scenario."
      - "Demonstrate understanding of everyday commonsense in your response."
      - "Use contextual clues to determine the most likely outcome."
      - "Continue this scenario, but make the writing style sound archaic and overly formal."
      - "This narrative is predictable. Can you introduce an unexpected yet plausible twist?"
      - "The character is angry. Continue this scenario showcasing a furious outburst."
    negative_prompts:
      - "repetitive phrases"
      - "overuse of the same words"
      - "contradicts earlier statements - breaks the internal logic of the scenario"
      - "out of character dialogue"
      - "awkward phrasing - sounds unnatural"
      - "doesn't match the given genre"

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jsfs11/MixtureofMerges-MoE-4x7b-v5"

tokenizer = AutoTokenizer.from_pretrained(model)
# Load in 4-bit via bitsandbytes so the ~24B-parameter MoE fits on a single GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
# Format the chat messages with the model's chat template before generation.
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
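Recent transformers releases prefer an explicit `BitsAndBytesConfig` over passing `load_in_4bit` through `model_kwargs`; a sketch of the equivalent 4-bit load (same model, same quantization):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Equivalent 4-bit load using an explicit quantization config object.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "jsfs11/MixtureofMerges-MoE-4x7b-v5",
    quantization_config=bnb_config,
    device_map="auto",
)
```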

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 76.02 |
| AI2 Reasoning Challenge (25-Shot) | 73.89 |
| HellaSwag (10-Shot) | 89.00 |
| MMLU (5-Shot) | 64.69 |
| TruthfulQA (0-shot) | 73.73 |
| Winogrande (5-shot) | 85.08 |
| GSM8k (5-shot) | 69.75 |
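The Avg. row is the unweighted mean of the six benchmark scores; a quick sanity check:

```python
scores = [73.89, 89.00, 64.69, 73.73, 85.08, 69.75]
print(round(sum(scores) / len(scores), 2))  # 76.02
```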