---
license: cc-by-sa-4.0
datasets:
- csitfun/LogiCoT
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- logical
---

This model is tuned on the **LogiCoT** data and the GPT-4 Alpaca data, starting from the **LLaMa-7b** base model.

We used 2 A100 GPUs for training.

We first instruction-tuned LLaMa-7b on the GPT-4 Alpaca data for 3 days, then tuned it on the LogiCoT data for 4 days.
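
Below is a minimal usage sketch with the 🤗 Transformers library. The repo id `csitfun/llama-7b-logicot` is inferred from the leaderboard link below, and the Alpaca-style prompt template is an assumption based on the GPT-4 Alpaca tuning data; adjust both as needed.

```python
# Minimal usage sketch -- not an official snippet from the authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "csitfun/llama-7b-logicot"  # assumed from the leaderboard details link

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 so the 7B model fits on a single GPU
    device_map="auto",
)

# Alpaca-style instruction prompt (assumed format, given the GPT-4 Alpaca tuning data)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIf all birds can fly and a penguin is a bird, "
    "can a penguin fly? Explain your reasoning.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
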
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_csitfun__llama-7b-logicot).

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 39.37   |
| ARC (25-shot)         | 47.01          |
| HellaSwag (10-shot)   | 72.56    |
| MMLU (5-shot)         | 38.93         |
| TruthfulQA (0-shot)   | 43.63   |
| Winogrande (5-shot)   | 67.56   |
| GSM8K (5-shot)        | 0.0        |
| DROP (3-shot)         | 5.92         |