---
license: apache-2.0
model-index:
  - name: OpenHermes-7B-Symbolic
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 63.14
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 82.73
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 62.62
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 48.82
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 75.85
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 53.45
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic
          name: Open LLM Leaderboard
---

# OpenHermes-7B-Symbolic

## Model description

OpenHermes-7B-Symbolic is OpenHermes-2.5-Mistral-7B fine-tuned on 93K comprehensive, meticulously curated samples. Each sample was structured to help the model understand and generate codes from the complex, hierarchical ICD medical coding system.
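
Since the base model is OpenHermes-2.5-Mistral-7B, the fine-tune should load through the standard `transformers` causal-LM interface. Below is a minimal inference sketch, assuming the repo id `hedronstone/OpenHermes-7B-Symbolic` (taken from the leaderboard URLs above) and that the tokenizer ships a ChatML chat template as the base model does; the example prompt is purely illustrative:

```python
# Minimal inference sketch; assumes the standard transformers interface and
# that the tokenizer provides a chat template (as OpenHermes-2.5-Mistral-7B
# does). Not verified against this exact checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hedronstone/OpenHermes-7B-Symbolic"  # repo id from the leaderboard URLs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical ICD-coding query, matching the fine-tuning domain
messages = [
    {"role": "user", "content": "Which ICD-10 code covers acute viral pharyngitis?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```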

| Benchmark  | OpenHermes-7B-Symbolic | OpenHermes-2.5-Mistral-7B |
|------------|-----------------------:|--------------------------:|
| Average    | 64.44                  | 65.26                     |
| ARC        | 63.14                  | 64.93                     |
| HellaSwag  | 82.73                  | 84.18                     |
| MMLU       | 62.62                  | 63.64                     |
| TruthfulQA | 48.82                  | 52.24                     |
| Winogrande | 75.85                  | 78.06                     |
| GSM8K      | 53.45                  | 48.52                     |

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hedronstone/OpenHermes-7B-Symbolic).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 64.44 |
| AI2 Reasoning Challenge (25-Shot) | 63.14 |
| HellaSwag (10-Shot)               | 82.73 |
| MMLU (5-Shot)                     | 62.62 |
| TruthfulQA (0-shot)               | 48.82 |
| Winogrande (5-shot)               | 75.85 |
| GSM8k (5-shot)                    | 53.45 |
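
The few-shot settings in this table mirror the harness configuration recorded in the metadata above. As a sketch of how one of these numbers could be checked locally with EleutherAI's `lm-evaluation-harness` (the task name `arc_challenge` and result keys are assumptions; the leaderboard pins its own harness version and settings, so local scores may differ slightly):

```python
# Sketch of reproducing the ARC-Challenge (25-shot) acc_norm score with
# lm-evaluation-harness (pip install lm-eval). The Open LLM Leaderboard runs
# a pinned harness configuration, so numbers may not match exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=hedronstone/OpenHermes-7B-Symbolic,dtype=float16",
    tasks=["arc_challenge"],  # assumed harness task name for ARC-Challenge
    num_fewshot=25,           # matches the leaderboard's 25-shot setting
    batch_size=8,
)
# Inspect the per-task metrics (acc_norm should land near 63.14)
print(results["results"]["arc_challenge"])
```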