mradermacher committed (verified)
Commit c21e2db · 1 Parent(s): 0703238

auto-patch README.md

Files changed (1)
  1. README.md +79 -0
README.md CHANGED
@@ -1,6 +1,85 @@
+---
+base_model: one-man-army/UNA-34Beagles-32K-bf16-v1
+datasets:
+- allenai/ai2_arc
+- unalignment/spicy-3.1
+- codeparrot/apps
+- facebook/belebele
+- boolq
+- jondurbin/cinematika-v0.1
+- drop
+- lmsys/lmsys-chat-1m
+- TIGER-Lab/MathInstruct
+- cais/mmlu
+- Muennighoff/natural-instructions
+- openbookqa
+- piqa
+- Vezora/Tested-22k-Python-Alpaca
+- cakiki/rosetta-code
+- Open-Orca/SlimOrca
+- spider
+- squad_v2
+- migtissera/Synthia-v1.3
+- datasets/winogrande
+- nvidia/HelpSteer
+- Intel/orca_dpo_pairs
+- unalignment/toxic-dpo-v0.1
+- jondurbin/truthy-dpo-v0.1
+- allenai/ultrafeedback_binarized_cleaned
+- Squish42/bluemoon-fandom-1-1-rp-cleaned
+- LDJnr/Capybara
+- JULIELab/EmoBank
+- kingbri/PIPPA-shareGPT
+language:
+- en
+library_name: transformers
+license: apache-2.0
+quantized_by: mradermacher
+---
+## About
+
 <!-- ### quantize_version: 2 -->
 <!-- ### output_tensor_quantised: 1 -->
 <!-- ### convert_type: hf -->
 <!-- ### vocab_type: -->
 <!-- ### tags: nicoboss -->
 weighted/imatrix quants of https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1
+
+<!-- provided-files -->
+static quants are available at https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
+
+<!-- end -->
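
As a concrete companion to the Usage section in the diff above, here is a minimal sketch of downloading and running the i1-Q4_K_S file from the quant table. It assumes `huggingface_hub` and `llama-cpp-python` are installed; only the repo name and filename are taken from the table, and everything else (context size, prompt, token budget) is an illustrative placeholder:

```python
# Minimal sketch: fetch one of the listed single-file quants and run it.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and filename come from the "Provided Quants" table above.
# Multi-part quants would need to be concatenated into a single .gguf
# first (see the linked TheBloke README); these quants are single files.
model_path = hf_hub_download(
    repo_id="mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF",
    filename="UNA-34Beagles-32K-bf16-v1.i1-Q4_K_S.gguf",
)

# n_ctx is an assumed context window; n_gpu_layers=-1 offloads all layers
# if llama-cpp-python was built with GPU support (use 0 for CPU only).
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm("Explain what an importance matrix (imatrix) is.", max_tokens=128)
print(out["choices"][0]["text"])
```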