morriszms commited on
Commit
5f296d1
1 Parent(s): 483ac80

Upload folder using huggingface_hub

Browse files
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ starchat2-15b-v0.1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ base_model: HuggingFaceH4/starchat2-15b-v0.1
+ tags:
+ - alignment-handbook
+ - generated_from_trainer
+ - TensorBlock
+ - GGUF
+ datasets:
+ - HuggingFaceH4/ultrafeedback_binarized
+ - HuggingFaceH4/orca_dpo_pairs
+ model-index:
+ - name: starchat2-15b-v0.1
+   results: []
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram group</a>, and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## HuggingFaceH4/starchat2-15b-v0.1 - GGUF
+
+ This repo contains GGUF-format model files for [HuggingFaceH4/starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1).
+
+ The files were quantized on machines provided by [TensorBlock](https://tensorblock.co/) and are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
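For reference, the template above can be filled in programmatically before passing the prompt to an inference runtime. A minimal sketch in Python; the `build_prompt` helper is illustrative and not part of this repo:

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Format a system/user pair using the ChatML-style template above.

    The returned string ends after the assistant header, where the model
    is expected to start generating.
    """
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example usage
print(build_prompt("You are a helpful coding assistant.", "Write hello world in C."))
```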
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [starchat2-15b-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q2_K.gguf) | Q2_K | 5.768 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [starchat2-15b-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q3_K_S.gguf) | Q3_K_S | 6.507 GB | very small, high quality loss |
+ | [starchat2-15b-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q3_K_M.gguf) | Q3_K_M | 7.492 GB | very small, high quality loss |
+ | [starchat2-15b-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q3_K_L.gguf) | Q3_K_L | 8.350 GB | small, substantial quality loss |
+ | [starchat2-15b-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q4_0.gguf) | Q4_0 | 8.443 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [starchat2-15b-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q4_K_S.gguf) | Q4_K_S | 8.532 GB | small, greater quality loss |
+ | [starchat2-15b-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q4_K_M.gguf) | Q4_K_M | 9.183 GB | medium, balanced quality - recommended |
+ | [starchat2-15b-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q5_0.gguf) | Q5_0 | 10.265 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [starchat2-15b-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q5_K_S.gguf) | Q5_K_S | 10.265 GB | large, low quality loss - recommended |
+ | [starchat2-15b-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q5_K_M.gguf) | Q5_K_M | 10.646 GB | large, very low quality loss - recommended |
+ | [starchat2-15b-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q6_K.gguf) | Q6_K | 12.201 GB | very large, extremely low quality loss |
+ | [starchat2-15b-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/starchat2-15b-v0.1-GGUF/tree/main/starchat2-15b-v0.1-Q8_0.gguf) | Q8_0 | 15.800 GB | very large, extremely low quality loss - not recommended |
+
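As a rough guide for choosing among the files above, the sketch below picks the largest quant whose file size (taken from the table) fits within a given budget. The helper is illustrative and not part of this repo; note that runtime memory use is somewhat higher than the file size (KV cache and runtime overhead).

```python
# File sizes in GB, copied from the model file specification table above.
QUANT_SIZES_GB = {
    "Q2_K": 5.768, "Q3_K_S": 6.507, "Q3_K_M": 7.492, "Q3_K_L": 8.350,
    "Q4_0": 8.443, "Q4_K_S": 8.532, "Q4_K_M": 9.183, "Q5_0": 10.265,
    "Q5_K_S": 10.265, "Q5_K_M": 10.646, "Q6_K": 12.201, "Q8_0": 15.800,
}

def largest_quant_under(budget_gb: float):
    """Return the quant type with the largest file size not exceeding budget_gb,
    or None if even the smallest file does not fit."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_quant_under(10.0))  # Q4_K_M (9.183 GB) fits; Q5_0 (10.265 GB) does not
```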
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/starchat2-15b-v0.1-GGUF --include "starchat2-15b-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), run:
+
+ ```shell
+ huggingface-cli download tensorblock/starchat2-15b-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
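The same download can also be done from Python via the `huggingface_hub` API. A minimal sketch; the `gguf_filename` helper is illustrative, and the download itself is kept behind the main guard since it fetches a multi-gigabyte file:

```python
REPO_ID = "tensorblock/starchat2-15b-v0.1-GGUF"

def gguf_filename(quant: str) -> str:
    """Build the GGUF filename in this repo for a quant type, e.g. 'Q4_K_M'."""
    return f"starchat2-15b-v0.1-{quant}.gguf"

if __name__ == "__main__":
    # Requires: pip install -U "huggingface_hub[cli]"
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id=REPO_ID,
        filename=gguf_filename("Q2_K"),
        local_dir="MY_LOCAL_DIR",
    )
    print(path)  # local path of the downloaded file
```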
starchat2-15b-v0.1-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:328fb9e88368e741ed342d6e2e353553ad872b20e3bf38a9e9103f885b5597bd
+ size 6192972544
starchat2-15b-v0.1-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aff2e897c36b19c776613ee967e1ca5444781e8aeee65b591eebff6280f09446
+ size 8965343200
starchat2-15b-v0.1-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c225da6c0eb41ce278a77b57a3cdd7ea3de268357aea96ed0c147bd15ae6536d
+ size 8044431328
starchat2-15b-v0.1-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d61476225099dcd598a53f41bd20611a92e9f6bcb237d31ff3a7759342011bfe
+ size 6986483680
starchat2-15b-v0.1-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91ce067a2b22f1cfb4349d79fdb2305d942bf87b7b3a718adf5c6606fedab15f
+ size 9065418304
starchat2-15b-v0.1-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b350a4c413dde8e753e71ba37e54c2561a684c101a5d7dfaedbf820a949b4048
+ size 9860206144
starchat2-15b-v0.1-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82fef285a8e2d784975d8dced6a9b4617e4865321eb4ddfc2877d23be338f305
+ size 9161363008
starchat2-15b-v0.1-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a127d04a6ecb517836325f774c193f976e1693a26dd0afe21f30c9f764671255
+ size 11022062656
starchat2-15b-v0.1-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47ecf6cdaa682ce67fbd0d6a56792d68746a43730f2e4fcd01fb2e07062ed3a1
+ size 11431498816
starchat2-15b-v0.1-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15f990fa567095673543081131cab522c5429b9641a652abfd508af3aab74b8d
+ size 11022062656
starchat2-15b-v0.1-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52fdd5866c8dedd7b03b2c7aca9d781ca4a746aa9bfc121c215195d94f314d11
+ size 13100997280
starchat2-15b-v0.1-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4edeee34d77f841cb3e0aeba7748613317cfa58f09d52555e155522fbd529276
+ size 16965136864