---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - winglian/no_robots_rlhf
  - HuggingFaceH4/no_robots
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
  - name: qlora-out
    results: []
---

Built with Axolotl

# openhermes-2_5-dpo-no-robots

This model is an RL fine-tuned version of teknium/OpenHermes-2.5-Mistral-7B, trained with DPO on a preference dataset derived from HuggingFace's no_robots dataset (HuggingFaceH4/no_robots).
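Since the result is a standard causal LM checkpoint, it can be loaded directly with transformers. Below is a minimal inference sketch; the `model_id` is a placeholder (the card does not state the full Hub path), and the ChatML prompt format is assumed to be provided by the tokenizer config, as it is for the OpenHermes base model.

```python
# Minimal inference sketch (transformers >= 4.35). The repo id below is a
# placeholder -- substitute this model's full Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openhermes-2_5-dpo-no-robots"  # assumption: replace with full Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
# apply_chat_template formats the conversation with the tokenizer's chat
# template (ChatML for OpenHermes) and appends the assistant prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```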

## Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto a DPO training run follows the list):

- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 408
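For reference, the sketch below shows how these hyperparameters map onto a DPO run with TRL's `DPOTrainer`. This is an illustration under stated assumptions (the DPO beta and the dataset column schema are not listed on this card), not the actual Axolotl configuration that produced the model; note that 2 per-device x 4 accumulation x 8 GPUs gives the total train batch size of 64.

```python
# A minimal sketch of the DPO setup using TRL's DPOTrainer, not the actual
# Axolotl config behind this model. beta and the dataset column mapping are
# assumptions; the other values mirror the hyperparameter list above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# DPOTrainer expects "prompt", "chosen", "rejected" columns; the RLHF split
# of no_robots may need a map() onto that schema (assumption).
train_dataset = load_dataset("winglian/no_robots_rlhf", split="train")

args = TrainingArguments(
    output_dir="qlora-out",
    learning_rate=5e-7,
    per_device_train_batch_size=2,  # x 4 grad-accum x 8 GPUs = 64 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    max_steps=408,
    warmup_steps=20,
    lr_scheduler_type="linear",
    seed=42,
    remove_unused_columns=False,    # DPOTrainer does its own tokenization
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL clones the policy as the frozen reference model
    args=args,
    beta=0.1,        # assumption: the DPO beta is not listed on this card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```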

## Training results

## Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 66.40 |
| AI2 Reasoning Challenge (25-Shot) | 64.93 |
| HellaSwag (10-Shot)               | 84.30 |
| MMLU (5-Shot)                     | 63.86 |
| TruthfulQA (0-shot)               | 52.12 |
| Winogrande (5-shot)               | 77.90 |
| GSM8k (5-shot)                    | 55.27 |