---
library_name: peft
license: llama2
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent setup in code is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
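
For reference, here is a minimal sketch of expressing the same settings as a `BitsAndBytesConfig` and loading a quantized base model with it. The base model ID is a placeholder (assumed to be a Llama-2 variant given the `llama2` license), not taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Placeholder base model ID; substitute the actual base model for this adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```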
### Framework versions
- PEFT 0.5.0
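
To attach the adapter to the quantized base model, PEFT's `PeftModel.from_pretrained` can be used; the adapter repo ID below is a placeholder for this repository's Hub ID:

```python
from peft import PeftModel

# Placeholder adapter ID; replace with this repository's ID on the Hub.
model = PeftModel.from_pretrained(base_model, "your-username/your-adapter")
model.eval()
```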
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 44.44 |
| ARC (25-shot)        | 52.82 |
| HellaSwag (10-shot)  | 76.1  |
| MMLU (5-shot)        | 50.58 |
| TruthfulQA (0-shot)  | 43.4  |
| Winogrande (5-shot)  | 73.72 |
| GSM8K (5-shot)       | 7.66  |
| DROP (3-shot)        | 6.78  |