
mlabonne/NeuralHermes-2.5-Mistral-7B-laser AWQ

Model Summary

This is an experimental LASER version of NeuralHermes, created with laserRMT and based on the paper cited below.

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| NeuralHermes-2.5-Mistral-7B-laser | 43.54 | 73.44 | 55.26 | 42.24 | 53.62 |
| NeuralHermes-2.5-Mistral-7B | 43.67 | 73.24 | 55.37 | 41.76 | 53.51 |

Fernando Fernandes Neto and Eric Hartford. "Optimizing Large Language Models Using Layer-Selective Rank Reduction and Random Matrix Theory." 2024.
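For intuition, LASER replaces selected weight matrices with low-rank approximations of themselves obtained via truncated SVD. The following is a minimal PyTorch sketch of that core operation only; the layer selection and retained rank used by laserRMT are not reproduced here, and keep_ratio is an arbitrary illustrative value.

import torch

def rank_reduce(weight: torch.Tensor, keep_ratio: float = 0.9) -> torch.Tensor:
    # Truncated SVD: keep only the largest singular values/vectors.
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(keep_ratio * S.numel()))  # number of singular values kept
    return (U[:, :k] * S[:k]) @ Vh[:k, :]    # low-rank reconstruction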

NeuralHermes is a teknium/OpenHermes-2.5-Mistral-7B model that has been further fine-tuned with Direct Preference Optimization (DPO) using the mlabonne/chatml_dpo_pairs dataset. It surpasses the original model on several benchmarks (see the results table above).

It is directly inspired by the RLHF process that the authors of Intel/neural-chat-7b-v3-1 described to improve performance. I used the same dataset and reformatted it to apply the ChatML template, as sketched below.
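A minimal sketch of that reformatting step, assuming the source rows carry system, question, chosen, and rejected fields (the field names are assumptions, not a confirmed schema):

def to_chatml(example):
    # Wrap the system prompt and user question in ChatML turns.
    prompt = (f"<|im_start|>system\n{example['system']}<|im_end|>\n"
              f"<|im_start|>user\n{example['question']}<|im_end|>\n"
              f"<|im_start|>assistant\n")
    return {
        "prompt": prompt,
        "chosen": example["chosen"] + "<|im_end|>\n",
        "rejected": example["rejected"] + "<|im_end|>\n",
    }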

The code to train this model is available on Google Colab and GitHub. It required an A100 GPU for about an hour.
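For orientation, the DPO step looks roughly like the following trl sketch. The hyperparameters are illustrative rather than the exact training recipe, the trl API has changed across versions (newer releases configure this through DPOConfig), and the dataset rows are assumed to have been mapped to the prompt/chosen/rejected fields DPOTrainer expects (for example, with a function like to_chatml above).

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

trainer = DPOTrainer(
    model,
    None,  # reference model; None lets trl derive a frozen copy of `model`
    args=TrainingArguments(output_dir="dpo-out",
                           per_device_train_batch_size=4,
                           learning_rate=5e-5,
                           max_steps=200),
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()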

How to use

Install the necessary packages

pip install --upgrade autoawq autoawq-kernels

Example Python code

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/NeuralHermes-2.5-Mistral-7B-laser-AWQ"
system_message = "You are Hermes, incarnated as a powerful AI."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
        "You walk one mile south, one mile west and one mile north. "\
        "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                  streamer=streamer,
                                  max_new_tokens=512)
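The streamer prints tokens as they are generated. Because generation_output also contains the prompt tokens, the completion alone can be recovered by slicing them off before decoding:

completion = tokenizer.decode(generation_output[0][tokens.shape[1]:],
                              skip_special_tokens=True)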

About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
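For reference, quantizing a model with AutoAWQ looks roughly like the sketch below; the quant_config values shown are AutoAWQ's common defaults, not necessarily the exact settings used for this repo.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

src = "mlabonne/NeuralHermes-2.5-Mistral-7B-laser"
out = "NeuralHermes-2.5-Mistral-7B-laser-AWQ"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(src)
tokenizer = AutoTokenizer.from_pretrained(src)
model.quantize(tokenizer, quant_config=quant_config)  # runs calibration and packs 4-bit weights
model.save_quantized(out)
tokenizer.save_pretrained(out)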

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by tools such as text-generation-webui, vLLM, Hugging Face Text Generation Inference (TGI), and Transformers (via AutoAWQ).

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
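If the tokenizer shipped with this repo carries a ChatML chat template (an assumption, not verified here), transformers can render the prompt instead of formatting the string by hand:

messages = [
    {"role": "system", "content": "You are Hermes, incarnated as a powerful AI."},
    {"role": "user", "content": "Hello!"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False,
                                     add_generation_prompt=True)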