mattritchey committed
Commit 65f8a48
Parent: 2b0e1d8

Upload README.md with huggingface_hub

Files changed (1): README.md added (+195 −0)
---
language:
- en
license: apache-2.0
tags:
- edu
- continual pretraining
- llama-cpp
- gguf-my-repo
base_model: BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
datasets:
- HuggingFaceFW/fineweb-edu
metrics:
- accuracy
inference:
  parameters:
    max_new_tokens: 64
    do_sample: true
    temperature: 0.8
    repetition_penalty: 1.05
    no_repeat_ngram_size: 4
    eta_cutoff: 0.0006
    renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
  example_title: El Microondas
- text: Kennesaw State University is a public
  example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
    developing the award winning Halo series of video games. They also made Destiny.
    The studio was founded
  example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
  example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
  example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
    have water, but no fish. What am I?

    Answer:'
  example_title: Riddle
- text: The process of photosynthesis involves the conversion of
  example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
    and a loaf of bread. When she got home, she realized she forgot
  example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
    and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
    they meet if the distance between the stations is 300 miles?

    To determine'
  example_title: Math Problem
- text: In the context of computer programming, an algorithm is
  example_title: Algorithm Definition
pipeline_tag: text-generation
model-index:
- name: smol_llama-220M-GQA-fineweb_edu
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 19.88
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 2.31
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.0
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.23
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.26
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu
      name: Open LLM Leaderboard
---

# mattritchey/smol_llama-220M-GQA-fineweb_edu-Q4_K_M-GGUF
This model was converted to GGUF format from [`BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu`](https://huggingface.co/BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo mattritchey/smol_llama-220M-GQA-fineweb_edu-Q4_K_M-GGUF --hf-file smol_llama-220m-gqa-fineweb_edu-q4_k_m.gguf -p "The meaning to life and the universe is"
```
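
The card metadata above also records the generation settings this model was tuned to use (`max_new_tokens: 64`, `temperature: 0.8`, `repetition_penalty: 1.05`). A minimal sketch of how those settings could map onto llama-cli sampling flags; `no_repeat_ngram_size` and `eta_cutoff` have no direct llama-cli equivalents, and the prompt is just one of the card's example prompts. The command is built as a string so you can inspect it before running:

```shell
# Map the card's inference parameters onto llama-cli sampling flags:
#   max_new_tokens: 64       -> -n 64
#   temperature: 0.8         -> --temp 0.8
#   repetition_penalty: 1.05 -> --repeat-penalty 1.05
cmd='llama-cli --hf-repo mattritchey/smol_llama-220M-GQA-fineweb_edu-Q4_K_M-GGUF --hf-file smol_llama-220m-gqa-fineweb_edu-q4_k_m.gguf -p "The process of photosynthesis involves the conversion of" -n 64 --temp 0.8 --repeat-penalty 1.05'
echo "$cmd"   # inspect, then run with: eval "$cmd"
```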

### Server:
```bash
llama-server --hf-repo mattritchey/smol_llama-220M-GQA-fineweb_edu-Q4_K_M-GGUF --hf-file smol_llama-220m-gqa-fineweb_edu-q4_k_m.gguf -c 2048
```
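
Once llama-server is running, completions can be requested over HTTP. A minimal sketch, assuming the server's default address (`http://localhost:8080`) and the standard llama.cpp `/completion` endpoint; the sampling fields mirror this card's inference metadata. The payload is validated locally, and the actual request is left commented so you can send it once the server is up:

```shell
# JSON payload for llama-server's /completion endpoint; the sampling values
# mirror the card metadata (temperature 0.8, repetition penalty 1.05).
payload='{
  "prompt": "The process of photosynthesis involves the conversion of",
  "n_predict": 64,
  "temperature": 0.8,
  "repeat_penalty": 1.05
}'

# Sanity-check that the payload is valid JSON before sending it.
printf '%s' "$payload" | python3 -m json.tool

# With llama-server running on the default port:
# curl -s http://localhost:8080/completion -H 'Content-Type: application/json' -d "$payload"
```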

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo mattritchey/smol_llama-220M-GQA-fineweb_edu-Q4_K_M-GGUF --hf-file smol_llama-220m-gqa-fineweb_edu-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo mattritchey/smol_llama-220M-GQA-fineweb_edu-Q4_K_M-GGUF --hf-file smol_llama-220m-gqa-fineweb_edu-q4_k_m.gguf -c 2048
```