---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
metrics:
- accuracy
model-index:
- name: 0ai-7B-v5
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.46
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.38
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.19
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 79.86
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.48
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.49
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5
name: Open LLM Leaderboard
---
# Model Evaluation
The model currently ranks #3 on the HellaSwag benchmark and #1 on the TruthfulQA benchmark on the Open LLM Leaderboard!
Expect More High Quality Models Soon!
# Experimental Model Warning
This model is an experimental prototype and should not be considered production-ready. Note that this is **not** the instruct/chat version.

## Reasons for Experimental Status
- **Potential for Bias:** Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results.
## Precautions to Take
- **Use with Caution:** Be aware that the model's output may contain factual inaccuracies or biases.
- **Verify Output:** Always verify the model's output against other sources to ensure its accuracy.
- **Report Issues:** If you encounter any issues or biases in the model's output, please report them so they can be addressed in future updates.
- **Avoid Sensitive Applications:** Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making.

By understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively.
**License:** This model is strictly for non-commercial use only (cc-by-nc-4.0). The model (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the included cc-by-nc-4.0 license and the non-commercial-use clause are retained in any parent repository, regardless of other models' licenses. The license may change when a new model is released. If you want to use this model for commercial purposes, contact me.

**Disclaimer:** By downloading and/or using the model, you fully agree to the license (cc-by-nc-4.0) and its commercial-use restrictions.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=0ai/0ai-7B-v5).
| Metric | Value |
|---|---|
| Avg. | 76.48 |
| AI2 Reasoning Challenge (25-Shot) | 73.46 |
| HellaSwag (10-Shot) | 89.38 |
| MMLU (5-Shot) | 64.19 |
| TruthfulQA (0-shot) | 79.86 |
| Winogrande (5-shot) | 85.48 |
| GSM8k (5-shot) | 66.49 |
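The reported average is simply the unweighted mean of the six benchmark scores above; a quick sanity check in Python (the score values are taken from the table, nothing else is assumed):

```python
# Benchmark scores from the leaderboard table above
scores = {
    "ARC (25-shot)": 73.46,
    "HellaSwag (10-shot)": 89.38,
    "MMLU (5-shot)": 64.19,
    "TruthfulQA (0-shot)": 79.86,
    "Winogrande (5-shot)": 85.48,
    "GSM8k (5-shot)": 66.49,
}

# Unweighted mean, rounded to two decimals as on the leaderboard
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 76.48
```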