brittlewis12 committed
Commit 7999b89
Parent: f27cbbe

Create README.md

Files changed (1): README.md (+132 lines)

README.md ADDED
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
pipeline_tag: text-generation
language:
- en
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
model_creator: meta-llama
model_name: Meta-Llama-3-8B-Instruct
model_type: llama
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
quantized_by: brittlewis12
---

# Meta-Llama-3-8B-Instruct GGUF

**Original model**: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

**Model creator**: [Meta](https://huggingface.co/meta-llama)

This repo contains GGUF format model files for Meta’s Llama-3-8B-Instruct,
**updated as of 2024-04-20** to handle the `<|eot_id|>` special token as an EOS token.

Learn more on Meta’s [Llama 3 page](https://llama.meta.com/llama3).
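As a minimal sketch of fetching one of these files programmatically, the snippet below uses `huggingface_hub`. The `repo_id` and quant `filename` are illustrative assumptions, not taken from this card; substitute the exact filename listed in this repo's Files tab.

```python
# Minimal sketch: download a single GGUF file with huggingface_hub.
# NOTE: repo_id and filename are assumptions for illustration only --
# check this repo's file list for the actual quant filenames.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="brittlewis12/Meta-Llama-3-8B-Instruct-GGUF",  # assumed repo id
    filename="meta-llama-3-8b-instruct.Q4_K_M.gguf",       # hypothetical quant filename
)
print(model_path)  # local cache path of the downloaded GGUF file
```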

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format,
introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted with llama.cpp build 2700 (revision [aed82f6](https://github.com/ggerganov/llama.cpp/commit/aed82f6837a3ea515f4d50201cfc77effc7d41b4)),
using [autogguf](https://github.com/brittlewis12/autogguf).
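The converted files work with any llama.cpp-based runtime. As one possible way to run them (not specific to this repo), here is a rough sketch using the `llama-cpp-python` bindings; the model path and settings are placeholders.

```python
# Sketch: load a converted GGUF with the llama-cpp-python bindings.
# The model path is a placeholder; point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="meta-llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,       # Llama 3's 8K context window
    n_gpu_layers=-1,  # offload all layers to GPU/Metal when available
)

# create_chat_completion applies the chat template stored in the GGUF
# metadata when one is present, and handles stop tokens for you.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Say hello in five words."},
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```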

### Prompt template

```
<|start_header_id|>system<|end_header_id|>

{{system_prompt}}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{prompt}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>


```
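If you drive a raw completion endpoint instead of a chat API, the template above can be filled in as a plain string. The sketch below assumes the `llm` object from the earlier `llama-cpp-python` example and stops generation on `<|eot_id|>`, matching the EOS handling noted above.

```python
# Sketch: fill in the prompt template above by hand for a raw completion call.
# Assumes `llm` is the Llama instance from the earlier example, and that the
# bindings parse special tokens in the prompt text.
def format_llama3_prompt(system_prompt: str, prompt: str) -> str:
    # Mirrors the template shown above: system turn, user turn, then an
    # opened assistant turn for the model to complete.
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

text = format_llama3_prompt(
    "You are a helpful assistant.",
    "What is GGUF in one sentence?",
)
out = llm(
    text,
    max_tokens=128,
    stop=["<|eot_id|>"],  # the token these updated files treat as end-of-sequence
)
print(out["choices"][0]["text"])
```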

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluation

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B |
|---|---|---|---|
| MMLU (5-shot) | **68.4** | 34.1 | 47.8 |
| GPQA (0-shot) | **34.2** | 21.7 | 22.3 |
| HumanEval (0-shot) | **62.2** | 7.9 | 14.0 |
| GSM-8K (8-shot, CoT) | **79.6** | 25.7 | 77.4 |
| MATH (4-shot, CoT) | **30.0** | 3.8 | 6.7 |