Title: Introducing Omningotex-7b: The World's Most Accurate 7B LLM

Today, I'm excited to share a new language model, "liminerity/Omningotex-7b-slerp." It reached an average score of 76.33 on the Open LLM Leaderboard, making it the highest-scoring 7B LLM on the leaderboard at the time of writing.

The journey to create Omningotex-7b-slerp began with an experimental process called "merging." I started with a model named "ingot-7b-slerp," which was produced by merging two other LLMs, "blurred-beagle-7b-slerp" (by myself, liminerity) and "Macaroni-7b-Tied" (by andrijdavid), a total of eight times over. After creating ingot-7b-slerp, I merged it with "dpo-binarized-NeuralTrix-7B" by eren23 using a gradient slerp. The resulting model, "binarized-ingotrix-slerp-7b," scored an average of 76.04. To push performance further, I then merged "binarized-ingotrix-slerp-7b" with "dpo-binarized-NeutrixOmnibe-7B," also by eren23. The result, "Omningotex-7b," is now the top-scoring 7B LLM on the leaderboard.

This improvement came from careful experimentation and a growing understanding of the underlying algorithms and techniques. I believe Omningotex-7b-slerp's success demonstrates the potential for further advances in natural language processing and artificial intelligence, and I look forward to sharing more updates and insights as I continue to explore what is possible with LLMs. Stay tuned for more exciting developments!
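For readers unfamiliar with the technique: slerp (spherical linear interpolation) blends two models by interpolating each pair of corresponding weight tensors along the arc between them rather than along a straight line. The snippet below is only a minimal, self-contained sketch of that idea on plain tensors; it is not mergekit's actual implementation, and the function name and the fallback to linear interpolation are my own choices for illustration.

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors, treated as flat vectors.

    t=0 returns a, t=1 returns b. Falls back to plain linear interpolation
    when the tensors are nearly colinear, where slerp is ill-conditioned.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)

    # Angle between the two weight vectors.
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.arccos(dot)

    if omega.abs() < 1e-4:
        # Nearly colinear: ordinary lerp is numerically safer.
        merged = (1 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
               + (torch.sin(t * omega) / sin_omega) * b_flat

    return merged.reshape(a.shape).to(a.dtype)

# Toy example: blend two random "weight matrices" halfway between the parents.
w_a, w_b = torch.randn(4, 4), torch.randn(4, 4)
w_merged = slerp(0.5, w_a, w_b)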

A huge thank you to Maxime Labonne for his LazyMergekit Colab project. Using it helped me get a firmer grasp of the concepts at play and led to the creation of this model. I'm sure it won't be number 1 for long, which excites me even more!

Next, I set out to learn how to fine-tune with the resources I have available. My next overall goal is to find a way to produce a smaller model with high accuracy, either by merging down with fewer layers after each merge (possibly with fine-tuning between merges), or by merging larger, more accurate models into a smaller base while maintaining accuracy and performance. Every version of "TinyMistral" I come across seems to be bricked, in the sense that it spits out nonsense. Thank you for your time if you read this all the way through.

Omningotex-7B-slerp

Omningotex-7B-slerp is a merge of the following models using LazyMergekit:

liminerity/binarized-ingotrix-slerp-7b
eren23/dpo-binarized-NeutrixOmnibe-7B

🧩 Configuration

slices:
  - sources:
      - model: liminerity/binarized-ingotrix-slerp-7b
        layer_range: [0, 32]
      - model: eren23/dpo-binarized-NeutrixOmnibe-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/binarized-ingotrix-slerp-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
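A note on the t parameter: as I understand it, t controls how much of each parent ends up in the merged weights (t=0 keeps the base model, t=1 takes the other model), and a list of values is a gradient that mergekit spreads across the 32 layers, with separate gradients for the self_attn and mlp filters and the final value: 0.5 as the default for everything else. The sketch below only illustrates how such a gradient might expand to per-layer factors, under my assumption of linear interpolation across layer depth; it is not mergekit's code.

import numpy as np

def expand_gradient(anchors, num_layers=32):
    """Linearly interpolate a short list of t anchors across all layers (assumed behavior)."""
    anchor_positions = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), anchor_positions, anchors)

# Per-layer interpolation factors for the self_attn gradient above.
self_attn_t = expand_gradient([0, 0.5, 0.3, 0.7, 1])
print(np.round(self_attn_t, 2))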

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "liminerity/Omningotex-7b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a prompt in the model's chat format from the message list.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model for text generation, letting accelerate place it on available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response (up to 256 new tokens) and print it.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
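Run as written, this downloads the merged weights from the Hugging Face Hub on first use; device_map="auto" lets accelerate spread the layers across whatever GPU and CPU memory is available, and the sampling settings (temperature 0.7, top-p 0.95, top-k 50) give varied but on-topic answers.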

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                             | Value |
|------------------------------------|-------|
| Avg.                               | 76.33 |
| AI2 Reasoning Challenge (25-Shot)  | 73.29 |
| HellaSwag (10-Shot)                | 88.96 |
| MMLU (5-Shot)                      | 64.69 |
| TruthfulQA (0-shot)                | 76.32 |
| Winogrande (5-shot)                | 84.21 |
| GSM8k (5-shot)                     | 70.51 |
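For reference, the Avg. row is simply the mean of the six benchmark scores: (73.29 + 88.96 + 64.69 + 76.32 + 84.21 + 70.51) / 6 ≈ 76.33.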