m8than committed on
Commit d625d48 · verified · 1 Parent(s): 3af17c7

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Kukedlc-Ramakrishna-7b-v3-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ featherless-quants.png filter=lfs diff=lfs merge=lfs -text
Kukedlc-Ramakrishna-7b-v3-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52e2d9136b9cad387c5d55a5d182c8bd5650588226e01e458dca2615239ce22a
+ size 3944390304
Kukedlc-Ramakrishna-7b-v3-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4fdf1070f217d30c34f8de9f300d646db291132df211f1c148cb9cc9062ff8b
+ size 2719243936
Kukedlc-Ramakrishna-7b-v3-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:413a7d2c0480d785697fc6641407c82ec8848fca72732618bd3230a40248cf58
+ size 3822026400
Kukedlc-Ramakrishna-7b-v3-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7384d79af262e15be1902c28928b5df763286b663dcc6b10129c72144aaf4f14
+ size 3518987936
Kukedlc-Ramakrishna-7b-v3-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45e6969a26c7915fd2f8e4bd20d722d1eae772ab2e796dd8e1d47db8a783f6a5
+ size 3164569248
Kukedlc-Ramakrishna-7b-v3-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07c1e0b1d78384e0cf3809539e42e5cc6fbafa2b37750358571a33c7019f7a70
+ size 4368440992
Kukedlc-Ramakrishna-7b-v3-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2b95d382eedbf55e2de3abf6f7657e25e75b68c4142f97afdc54222cc53cbd6
+ size 4140375712
Kukedlc-Ramakrishna-7b-v3-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0291784aee0e9b6f8fd3cb5d27075d42c3d4bfc40ef568d9a547bfdd5c056137
+ size 5131411104
Kukedlc-Ramakrishna-7b-v3-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60e229ec1528236b2686af5d8097a9344ad31fe5442f0a488023db8807ef555b
+ size 4997717664
Kukedlc-Ramakrishna-7b-v3-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:843028a14ea35f0be7e36be798fab44cf52ce0ff1ae2074efc551b60ee6ee661
+ size 5942066848
Kukedlc-Ramakrishna-7b-v3-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79857f5b8dd6a51197db6bfa56eb82d31491df7a8ee639c0c7b2f161a84f7692
+ size 7695859360
README.md ADDED
@@ -0,0 +1,47 @@
+ ---
+ base_model: Kukedlc/Ramakrishna-7b-v3
+ pipeline_tag: text-generation
+ quantized_by: featherless-ai-quants
+ ---
+
+ # Kukedlc/Ramakrishna-7b-v3 GGUF Quantizations 🚀
+
+ ![Featherless AI Quants](./featherless-quants.png)
+
+ *Optimized GGUF quantization files for enhanced model performance*
+
+ > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
+ ---
+
+ ## Available Quantizations 📊
+
+ | Quantization Type | File | Size |
+ |-------------------|------|------|
+ | Q8_0 | [Kukedlc-Ramakrishna-7b-v3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q8_0.gguf) | 7339.34 MB |
+ | Q4_K_S | [Kukedlc-Ramakrishna-7b-v3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q4_K_S.gguf) | 3948.57 MB |
+ | Q2_K | [Kukedlc-Ramakrishna-7b-v3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q2_K.gguf) | 2593.27 MB |
+ | Q6_K | [Kukedlc-Ramakrishna-7b-v3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q6_K.gguf) | 5666.80 MB |
+ | Q3_K_M | [Kukedlc-Ramakrishna-7b-v3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q3_K_M.gguf) | 3355.97 MB |
+ | Q3_K_S | [Kukedlc-Ramakrishna-7b-v3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q3_K_S.gguf) | 3017.97 MB |
+ | Q3_K_L | [Kukedlc-Ramakrishna-7b-v3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q3_K_L.gguf) | 3644.97 MB |
+ | Q4_K_M | [Kukedlc-Ramakrishna-7b-v3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q4_K_M.gguf) | 4166.07 MB |
+ | Q5_K_S | [Kukedlc-Ramakrishna-7b-v3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q5_K_S.gguf) | 4766.19 MB |
+ | Q5_K_M | [Kukedlc-Ramakrishna-7b-v3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-Q5_K_M.gguf) | 4893.69 MB |
+ | IQ4_XS | [Kukedlc-Ramakrishna-7b-v3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF/blob/main/Kukedlc-Ramakrishna-7b-v3-IQ4_XS.gguf) | 3761.66 MB |
+
+
+ ---
+
+ ## ⚡ Powered by [Featherless AI](https://featherless.ai)
+
+ ### Key Features
+
+ - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
+ - 🛠️ **Zero Infrastructure** - No server setup or maintenance required
+ - 📚 **Vast Compatibility** - Support for 2400+ models and counting
+ - 💎 **Affordable Pricing** - Starting at just $10/month
+
+ ---
+
+ **Links:**
+ [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
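
Usage note: the README above only catalogs the files, so here is a minimal sketch of how one of these quants could be fetched and run locally, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the Q4_K_M file and the context size are arbitrary illustrative choices, not recommendations from the repo.

```python
# Sketch: download one GGUF quant from this repo and run a short completion.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the chosen quant into the local HF cache and get its path.
model_path = hf_hub_download(
    repo_id="featherless-ai-quants/Kukedlc-Ramakrishna-7b-v3-GGUF",
    filename="Kukedlc-Ramakrishna-7b-v3-Q4_K_M.gguf",  # any file from the table works
)

# Load the GGUF file; n_ctx is an arbitrary context-window choice for this example.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Q: What does GGUF quantization trade off? A:", max_tokens=128)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_*) fit in less memory at some quality cost, while Q6_K and Q8_0 stay closer to the original weights but need more RAM; swapping the `filename` argument is all that changes between them.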
featherless-quants.png ADDED

Git LFS Details

  • SHA256: 2e1b4d66c8306c7b0614089381fdf86ea4efb02dffb78d22767a084cb8b88d6b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.61 MB