m8than committed
Commit 863f1ea · verified · 1 Parent(s): 66bfbd8

Upload folder using huggingface_hub

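The commit message states the folder was pushed with `huggingface_hub`. As a minimal sketch only (the exact call, local folder name, and token handling are not shown in this commit and are assumed here), such an upload can be done with the library's `upload_folder` API:

```python
# Sketch: pushing a folder of quantized GGUF files with huggingface_hub.
# The local folder path and commit message below are assumptions for illustration.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` / HF_TOKEN
api.upload_folder(
    folder_path="./Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF",  # local dir with the .gguf files
    repo_id="featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```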
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Dampfinchen-Llama-3-8B-Ultra-Instruct-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ featherless-quants.png filter=lfs diff=lfs merge=lfs -text
Dampfinchen-Llama-3-8B-Ultra-Instruct-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7cfb9c770c3ee3569cc6acdf88b25b2227b8cb230edf1fcfb0b932344e5e522
+ size 4484364320
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bf1ffad289bfed9eec86820e1984ff8811b0624fa15ed6154d38b6fd1a87cf2
+ size 3179132960
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d856d99f1cca984845f934b313e85183a2beefa0531a842d2ec491d0801b03a
+ size 4321957920
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47ae1bd992eddd99c8ba0d211ada5fed85c40c5e4e7eb893f727256c6f3f252c
+ size 4018919456
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d684bdf1eb2ec66c08bbd547ec85a7d062fab8e221dd0d4c79147c4f767f6a2
+ size 3664500768
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfed02ff5029a497395a841f6f42a62f0c85c9bab9fea657890322f22ff3a5d9
+ size 4920735776
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bea13fdff2603a7f4704fa9674251d6028184b049802a4e948511df3eae8bbd
+ size 4692670496
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85142c27f562e3067da7de7b819f8f07f7441207a14fa8669aa5dc869c9b7699
+ size 5732988960
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61a9000af4239a9ae1cb38f182881211167e3b50eb7fdf83e11a214873bfcaaf
+ size 5599295520
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b765c845f51a747d4ec525e7620481010b0e00c996e066884c90e5948754dc0c
+ size 6596007968
Dampfinchen-Llama-3-8B-Ultra-Instruct-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddea61bd6bf1adafafb32778f37c65c25a58beb79a0f9042f45ba2b014d764a5
+ size 8540772384
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ base_model: Dampfinchen/Llama-3-8B-Ultra-Instruct
+ pipeline_tag: text-generation
+ quantized_by: featherless-ai-quants
+ ---
+
+ # Dampfinchen/Llama-3-8B-Ultra-Instruct GGUF Quantizations 🚀
+
+ ![Featherless AI Quants](./featherless-quants.png)
+
+ *Optimized GGUF quantization files for enhanced model performance*
+
+ ---
+
+ ## Available Quantizations 📊
+
+ | Quantization Type | File | Size |
+ |-------------------|------|------|
+ | Q8_0 | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q8_0.gguf) | 8145.12 MB |
+ | Q4_K_S | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_S.gguf) | 4475.28 MB |
+ | Q2_K | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q2_K.gguf) | 3031.86 MB |
+ | Q6_K | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q6_K.gguf) | 6290.44 MB |
+ | Q3_K_M | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_M.gguf) | 3832.74 MB |
+ | Q3_K_S | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_S.gguf) | 3494.74 MB |
+ | Q3_K_L | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q3_K_L.gguf) | 4121.74 MB |
+ | Q4_K_M | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf) | 4692.78 MB |
+ | Q5_K_S | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_S.gguf) | 5339.90 MB |
+ | Q5_K_M | [Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-Q5_K_M.gguf) | 5467.40 MB |
+ | IQ4_XS | [Dampfinchen-Llama-3-8B-Ultra-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-IQ4_XS.gguf) | 4276.62 MB |
+
+
+ ---
+
+ ## ⚡ Powered by [Featherless AI](https://featherless.ai)
+
+ ### Key Features
+
+ - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
+ - 🛠️ **Zero Infrastructure** - No server setup or maintenance required
+ - 📚 **Vast Compatibility** - Support for 2400+ models and counting
+ - 💎 **Affordable Pricing** - Starting at just $10/month
+
+ ---
+
+ **Links:**
+ [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
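The README table above only lists the files; as a minimal sketch of actually pulling one of them and loading it locally (the Q4_K_M choice, the context size, and the use of `llama-cpp-python` are assumptions for illustration, not part of this repo):

```python
# Sketch: download one of the listed GGUF quants and run it with llama-cpp-python.
# Picking Q4_K_M and n_ctx=8192 are assumptions; adjust to your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-GGUF",
    filename="Dampfinchen-Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=8192)
out = llm("Q: What does GGUF quantization trade off? A:", max_tokens=64)
print(out["choices"][0]["text"])
```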
featherless-quants.png ADDED

Git LFS Details

  • SHA256: 2e1b4d66c8306c7b0614089381fdf86ea4efb02dffb78d22767a084cb8b88d6b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.61 MB