---
language:
- de
- en
- it
- fr
- pt
- nl
- ar
- es
license: apache-2.0
tags:
- spectrum
- sft
- dpo
- llama-cpp
- gguf-my-repo
base_model: VAGOsolutions/SauerkrautLM-v2-14b-DPO
datasets:
- VAGOsolutions/SauerkrautLM-Fermented-GER-DPO
- VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO
model-index:
- name: SauerkrautLM-v2-14b-DPO
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 74.12
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 50.93
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 27.34
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.28
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.78
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.75
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
---

# NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF
This model was converted to GGUF format from [`VAGOsolutions/SauerkrautLM-v2-14b-DPO`](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) for more details on the model.

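If you would rather fetch the quantized file yourself instead of letting llama.cpp download it, one option is the `huggingface-cli` tool from `huggingface_hub`. A minimal sketch; the target directory `./models` is just an illustrative choice:

```bash
# Download the single Q5_K_S GGUF file from this repo into ./models
# (./models is an arbitrary example path).
huggingface-cli download NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF \
  sauerkrautlm-v2-14b-dpo-q5_k_s.gguf --local-dir ./models
```
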
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF --hf-file sauerkrautlm-v2-14b-dpo-q5_k_s.gguf -p "The meaning to life and the universe is"
```
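
The same CLI call accepts the usual llama.cpp generation options; for instance, capping the number of generated tokens with `-n`. A sketch, since defaults and flag behavior can vary between llama.cpp versions:

```bash
# As above, but limit generation to 256 new tokens with -n.
llama-cli --hf-repo NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF \
  --hf-file sauerkrautlm-v2-14b-dpo-q5_k_s.gguf -n 256 -p "The meaning to life and the universe is"
```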

### Server:
```bash
llama-server --hf-repo NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF --hf-file sauerkrautlm-v2-14b-dpo-q5_k_s.gguf -c 2048
```
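
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A hedged sketch of a chat request with `curl`, assuming the default bind address `127.0.0.1:8080`; adjust the host and port if you started the server differently:

```bash
# Send a chat completion request to the local llama-server instance.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain GGUF quantization in one sentence."}
        ],
        "max_tokens": 128
      }'
```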

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
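
For illustration, a build with the CUDA flag mentioned above enabled might look like the following; treat it as a sketch, since the exact flag set depends on your llama.cpp version and toolchain:

```bash
# Example: build with CURL support and CUDA offload on Linux with an Nvidia GPU
# (assumes the CUDA toolkit is installed).
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```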

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF --hf-file sauerkrautlm-v2-14b-dpo-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF --hf-file sauerkrautlm-v2-14b-dpo-q5_k_s.gguf -c 2048
```
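
If you built with GPU support, layers can be offloaded to the GPU with `-ngl` (number of GPU layers). A sketch, where `-ngl 99` simply requests offloading as many of the model's layers as possible:

```bash
# Run the CLI with all layers offloaded to the GPU (requires a GPU-enabled build).
./llama-cli --hf-repo NikolayKozloff/SauerkrautLM-v2-14b-DPO-Q5_K_S-GGUF \
  --hf-file sauerkrautlm-v2-14b-dpo-q5_k_s.gguf -ngl 99 -p "The meaning to life and the universe is"
```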