CodegebraGPT-10b

This model is a fine-tuned version of upstage/SOLAR-10.7B-v1.0 on the text-only 100k-sample subset of the sr5434/CodegebraGPT_Data dataset. Training stopped at 37k steps (for an unknown reason) instead of running the full 100k steps.

Model description

It can chat with you about science, engineering, math, or coding.
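Below is a minimal inference sketch. It assumes this repository hosts a PEFT adapter on top of upstage/SOLAR-10.7B-v1.0 (consistent with the PEFT framework version listed below); the plain-text prompt and generation settings are illustrative, not part of the published training setup.

```python
# Minimal inference sketch (assumes this repo is a PEFT adapter for SOLAR-10.7B-v1.0).
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# AutoPeftModelForCausalLM reads the adapter config, loads the base model it
# names, and applies the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "sr5434/CodegebraGPT-10b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")

prompt = "Explain the difference between a mutex and a semaphore."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```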

Intended uses & limitations

This model is not fine-tuned with RLHF and is not intended for production use.

Training and evaluation data

CodegebraGPT 100k text dataset (the text-only 100k-sample subset of sr5434/CodegebraGPT_Data).

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
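For reference, these settings map onto transformers.TrainingArguments roughly as shown below. This is a sketch only: the actual training script, data preprocessing, and PEFT configuration are not published here, and output_dir is an assumption based on the card's title.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",           # assumed from the card's title
    learning_rate=2e-4,             # 0.0002
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # epsilon=1e-08
)
```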

Framework versions

  • PEFT 0.7.2.dev0
  • Transformers 4.36.2
  • PyTorch 2.0.1
  • Datasets 2.16.0
  • Tokenizers 0.15.0

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                             Value
Avg.                               62.68
AI2 Reasoning Challenge (25-shot)  59.81
HellaSwag (10-shot)                83.42
MMLU (5-shot)                      60.20
TruthfulQA (0-shot)                46.57
Winogrande (5-shot)                80.98
GSM8k (5-shot)                     45.11
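These scores follow the Open LLM Leaderboard's task and few-shot setup. As a hedged sketch, a single task can be reproduced locally with EleutherAI's lm-evaluation-harness; the harness version (>= 0.4) and the `peft=` adapter argument are assumptions, and the leaderboard pins its own harness configuration, so numbers may differ.

```python
# Sketch: ARC-Challenge (25-shot) with lm-evaluation-harness (>= 0.4 assumed).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # Load the base model and apply this repo as a PEFT adapter (assumed supported).
    model_args="pretrained=upstage/SOLAR-10.7B-v1.0,peft=sr5434/CodegebraGPT-10b,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```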
