---
license: mit
model-index:
- name: Medium
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 44.06
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dnhkng/Medium
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 47.73
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dnhkng/Medium
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 7.78
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dnhkng/Medium
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.4
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dnhkng/Medium
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.73
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dnhkng/Medium
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 36.96
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dnhkng/Medium
      name: Open LLM Leaderboard
---
This model was produced with a new kind of model optimization; a paper describing the technique is currently being written.
This research was supported with hardware from the [appliedAI Institute](https://www.appliedai-institute.de/en/), whose goal is to generate and communicate high-quality knowledge about trustworthy AI.
## Quickstart
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_id = "dnhkng/Medium"

# Load the model onto the GPU; torch_dtype="auto" picks the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,  # greedy decoding; temperature is ignored when sampling is disabled
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
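If the full-precision weights do not fit on your GPU, loading the model quantized is one option. The following is a minimal sketch rather than part of the official instructions; it assumes the `bitsandbytes` package is installed and that 4-bit NF4 quantization is acceptable for your use case:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "dnhkng/Medium"

# Assumption: bitsandbytes is installed and the GPU supports 4-bit kernels.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# device_map="auto" spreads the quantized weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

Quantization trades some output quality for memory, so verify results against the full-precision model where that matters.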
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dnhkng__Medium)
| Metric |Value|
|-------------------|----:|
|Avg. |25.94|
|IFEval (0-Shot) |44.06|
|BBH (3-Shot) |47.73|
|MATH Lvl 5 (4-Shot)| 7.78|
|GPQA (0-shot) |10.40|
|MuSR (0-shot) | 8.73|
|MMLU-PRO (5-shot) |36.96|
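The reported average is the unweighted arithmetic mean of the six benchmark scores above, which can be checked directly:

```python
# Per-benchmark scores copied from the table above.
scores = {
    "IFEval (0-Shot)": 44.06,
    "BBH (3-Shot)": 47.73,
    "MATH Lvl 5 (4-Shot)": 7.78,
    "GPQA (0-shot)": 10.40,
    "MuSR (0-shot)": 8.73,
    "MMLU-PRO (5-shot)": 36.96,
}

average = sum(scores.values()) / len(scores)
print(f"Avg. = {average:.2f}")  # 25.94
```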
___________________________________
# *SHAMELESS ADVERTISING BREAK*
I'm on the hunt for new challenges and a chance to dive into some exciting research opportunities. Oh, and did I mention I just snagged a top spot on the Open LLM leaderboard?
#### Profile
Innovation enthusiast, AI strategist, and interdisciplinary-tech nerd: that's me! With over a decade of experience in research and project management, my professional journey has been shaped largely by my passion for artificial intelligence and its potential to transform various industries. A solid background in artificial intelligence and machine learning, coupled with a knack for innovation and problem-solving (and a healthy dose of curiosity), has me excited to bring my skills to a new team.
Originally from Australia, where I earned my degrees in Organic Chemistry and Biochemistry, I moved to Germany in 2004. My academic pursuit continued with a PhD in Chemistry at the Max Planck Institute of Biochemistry. Today, I leverage my robust educational background and diverse industry experience to drive AI innovations in a wide range of applications. Hobbies? Lots: I've also built the world's most powerful espresso machine and am working to bring [GLaDOS to life](https://github.com/dnhkng/GlaDOS).
___________________________________
I'm based out of Munich, Germany, but I would be interested in working remotely for a team with more compute than my 2x 4090s.
#### Reach out via [LinkedIn - Dr David Noel Ng](https://www.linkedin.com/in/dnhkng)