WestLake_Noromaid_OpenHermes_neural-chatv0.1

This is a merge of pre-trained language models created using mergekit. DPO training data was used to slightly uncensor the LLM. The model is focused on conversational roleplay. In limited testing, I've been very happy with the result: it has been able to pick up stories where other models have failed or started to loop their responses, and it seems to pace the story well.
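The card does not document a prompt template or sampling settings, so the snippet below is only a minimal sketch of loading and prompting the model with the standard transformers API (the repository name is taken from this card; everything else is illustrative):

```python
# Minimal loading/generation sketch using the standard transformers API.
# The prompt format is not specified on this card, so the plain-text prompt
# and the sampling settings below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs `accelerate`
)

prompt = "You are the narrator of an ongoing fantasy roleplay.\nUser: Continue the scene.\nNarrator:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```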

Merge Details

Merge Method

This model was merged using the DARE TIES merge method using mistralai/Mistral-7B-v0.1 as a base.
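In brief, DARE TIES treats each fine-tuned model as a delta from the base: DARE randomly drops a fraction of each delta's entries and rescales the survivors (controlled by density), and TIES then resolves sign conflicts between models before adding the weighted, sparsified deltas back onto the base. The snippet below is a deliberately simplified single-tensor illustration of that idea, not mergekit's actual implementation; the density and weight values mirror the configuration further down.

```python
# Simplified, single-tensor illustration of the DARE TIES idea.
# NOT mergekit's implementation; it only mirrors what `density` and
# `weight` in the YAML configuration below control.
import numpy as np

def dare_sparsify(delta, density, rng):
    """Randomly keep a `density` fraction of the delta and rescale survivors."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    rng = np.random.default_rng(seed)
    deltas = [dare_sparsify(ft - base, d, rng) for ft, d in zip(finetuned, densities)]
    weighted = [w * d for w, d in zip(weights, deltas)]
    # TIES-style sign election: per coordinate, keep only contributions that
    # agree with the sign of the weighted sum, then add the result to the base.
    elected_sign = np.sign(sum(weighted))
    merged_delta = sum(np.where(np.sign(d) == elected_sign, d, 0.0) for d in weighted)
    return base + merged_delta

rng = np.random.default_rng(1)
base = rng.normal(size=8)
finetuned = [base + 0.1 * rng.normal(size=8) for _ in range(4)]
print(dare_ties_merge(base, finetuned,
                      densities=[0.55] * 4,
                      weights=[0.15, 0.35, 0.30, 0.20]))
```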

Models Merged

The following models were included in the merge:

  • cognitivecomputations/WestLake-7B-v2-laser
  • NeverSleep/Noromaid-7B-0.4-DPO
  • teknium/OpenHermes-2.5-Mistral-7B
  • Intel/neural-chat-7b-v3-3

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      density: 0.55
      weight: 0.15
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      density: 0.55
      weight: 0.35
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      density: 0.55
      weight: 0.30
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      density: 0.55
      weight: 0.20
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
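To reproduce a merge like this, the YAML above is typically saved to a file and passed to mergekit's mergekit-yaml entry point. The call below is only a sketch with placeholder paths, wrapped in Python for convenience; it assumes mergekit is installed and that a GPU is available for the --cuda flag:

```python
# Sketch: run mergekit on the configuration above (paths are placeholders).
# Assumes `pip install mergekit`, which provides the `mergekit-yaml` CLI.
import subprocess

subprocess.run(
    ["mergekit-yaml", "dare_ties_config.yml", "./merged-model", "--cuda"],
    check=True,
)
```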

Benchmark Testing

| Model | MT-Bench | EQ-Bench v2.1 |
| --- | --- | --- |
| giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1 | 7.171875 | 65.56 |

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Model | Avg. | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8K (5-shot) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| This model | 68.86 | 66.72 | 85.37 | 64.67 | 51.50 | 79.72 | 65.20 |
| cognitivecomputations/WestLake-7B-v2-laser | 74.78 | 73.29 | 88.66 | 64.72 | 67.04 | 86.74 | 68.23 |
| NeverSleep/Noromaid-7B-0.4-DPO | 59.08 | 62.29 | 84.32 | 63.20 | 42.28 | 76.95 | 25.47 |
| teknium/OpenHermes-2.5-Mistral-7B | 61.52 | 64.93 | 84.18 | 63.64 | 52.24 | 78.06 | 26.08 |
| Intel/neural-chat-7b-v3-3 | 69.83 | 66.89 | 85.26 | 63.07 | 63.01 | 79.64 | 61.11 |

DPO training data used:

  • unalignment/toxic-dpo-v0.2 (Curated version)
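For context, DPO optimizes the policy to prefer the "chosen" response over the "rejected" one in each preference pair, measured against a frozen reference model. The function below is a toy sketch of that loss for a single pair; it is not the training code used for this model, and the values are illustrative:

```python
# Toy sketch of the DPO loss on one preference pair; the actual training
# setup (library, hyperparameters, data processing) is not documented here.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """-log sigmoid(beta * [(pi_c - ref_c) - (pi_r - ref_r)]) over sequence log-probs."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits)

# Made-up sequence log-probabilities for illustration:
print(dpo_loss(torch.tensor(-42.0), torch.tensor(-55.0),
               torch.tensor(-44.0), torch.tensor(-50.0)))
```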