---
language:
  - en
license: apache-2.0
library_name: transformers
base_model: cpayne1303/cp2024
datasets:
  - teknium/OpenHermes-2.5
model-index:
  - name: cp2024-instruct
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 17.06
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 2.48
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 0
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 1.34
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.18
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 1.85
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
          name: Open LLM Leaderboard
---

## Model Description

This model uses the Llama 2 architecture and has only 30 million parameters. It is based on [cpayne1303/cp2024](https://huggingface.co/cpayne1303/cp2024) and was fine-tuned on approximately 85 million tokens of instruction data from the first 20,000 rows of the [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset, using a low learning rate of 2e-6 and a context length of 512 tokens.
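
Below is a minimal usage sketch. It assumes the checkpoint loads with the standard `transformers` causal-LM classes and that the Hub ID is `cpayne1303/cp2024-instruct`; the prompt and generation settings are illustrative only, since the card does not document a specific prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cpayne1303/cp2024-instruct"  # Hub ID taken from this card's metadata

# Load tokenizer and model (assumes standard Llama-style causal-LM weights)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain-text instruction prompt; the exact chat/instruct template used during
# fine-tuning is not documented here, so this is only a fallback example.
prompt = "Explain what a transformer model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Keep prompt plus generated tokens within the 512-token context length noted above.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```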

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct).

| Metric              | Value |
|---------------------|------:|
| Avg.                |  4.32 |
| IFEval (0-Shot)     | 17.06 |
| BBH (3-Shot)        |  2.48 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       |  1.34 |
| MuSR (0-shot)       |  3.18 |
| MMLU-PRO (5-shot)   |  1.85 |