MaziyarPanahi committed
Commit 54eb67e
1 Parent(s): 8f69555

Upload folder using huggingface_hub (#1)


- 3e2893ad9182ffc3d26756db638978feb0f7c020f3659ad89d99db5085c59948 (a2d05af3ba93b38bf8e5788759d3fd67ae236724)
- 5a2f6ac9664a91fb3ea03b88f41a437d41deaf6b15aa4e56f66d63ff13ea4d2f (4b2b5d7e2b979a79a40c4b11f9b5d753372c030a)
- 38a45e630e3a2d25399c2061fa34dbe90f339f789a3c2a345986a536a5ad2397 (e6113d8bee9f47118396fee80b65cf5f93efe911)
- caee57d3c5b014e78a8b089a7cb71dc88b290dc9923a24e07be4ee4d4fd4a65e (fb3e3ddbf60bcfe9476a883ab675b1218f1c5ca4)
- cf9a7ecd56b642b93a01534518c08fd56b7df2c8d02f73b882aa447d3dac8b3c (61c77d25064803ea196cb79f4ba6bf11ef6c7bfb)
- 8ec019b711ee2d7aebe6fc37ef2b92e4ce28828b35cac6a84251296713b9494e (fc4e1d9291ba8379f47717714b921fa6d21ffa97)
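
The commit message above references huggingface_hub's folder-upload flow. As a hedged illustration (not taken from this page), a call of roughly this shape produces such a commit; the local folder path is an assumption, and only the repo id comes from this commit:

```python
# Minimal sketch: upload a local folder of GGUF files to the Hub.
# Assumptions: huggingface_hub is installed and you are logged in
# (e.g. via `huggingface-cli login`); the folder path is hypothetical.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./Llama-3.2-3B-Apex-GGUF",          # hypothetical local folder
    repo_id="MaziyarPanahi/Llama-3.2-3B-Apex-GGUF",  # repo shown in this commit
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```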

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-Apex.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-Apex.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-Apex.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-Apex.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-Apex.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Llama-3.2-3B-Apex-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Llama-3.2-3B-Apex-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d5e8fc05c20beac65a949774ee6bbd3d77894dec0efd82f42973b1fc80b469f
+ size 2988366
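
Each of the added entries here and below is a Git LFS pointer file: three lines giving the pointer spec version, the SHA-256 digest of the real payload (`oid`), and its size in bytes. As an illustrative sketch, not part of the commit, a downloaded file can be checked against these two fields; the local path here is hypothetical:

```python
# Minimal sketch: verify a downloaded file against the LFS pointer above.
import hashlib
import os

POINTER_OID = "1d5e8fc05c20beac65a949774ee6bbd3d77894dec0efd82f42973b1fc80b469f"
POINTER_SIZE = 2988366
LOCAL_PATH = "Llama-3.2-3B-Apex-GGUF_imatrix.dat"  # hypothetical download location

def verify(path: str, oid: str, size: int) -> bool:
    """Check file size and SHA-256 digest against the LFS pointer fields."""
    if os.path.getsize(path) != size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == oid

print(verify(LOCAL_PATH, POINTER_OID, POINTER_SIZE))
```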
Llama-3.2-3B-Apex.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4603ab0aaf01584d836a521cf7a1caec6bfdd7d0000ade86d9b3959234520f0
+ size 2593545568
Llama-3.2-3B-Apex.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d436f535a2c1f3a5630b6d0ebd2710775c86e2aaf271ecc5064ffb61a5f48b5c
+ size 2540903776
Llama-3.2-3B-Apex.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2aca2b8aa67c11ac4391e26bebfd9bf6e789d5cb67518cff153ecfa9766b016
+ size 2967573856
Llama-3.2-3B-Apex.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:168dd4bb49afbbd962cb0e4c2b5c888a5dbc20e21cec8ba324f118aa744908f1
+ size 3841041760
Llama-3.2-3B-Apex.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82f1f30e8698687304ff30dcf427aa36640cd5b2de57766f2b34f2225f0ab814
+ size 7222207584
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ model_name: Llama-3.2-3B-Apex-GGUF
+ base_model: bunnycore/Llama-3.2-3B-Apex
+ inference: false
+ model_creator: bunnycore
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/Llama-3.2-3B-Apex-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Apex-GGUF)
+ - Model creator: [bunnycore](https://huggingface.co/bunnycore)
+ - Original model: [bunnycore/Llama-3.2-3B-Apex](https://huggingface.co/bunnycore/Llama-3.2-3B-Apex)
+
+ ## Description
+ [MaziyarPanahi/Llama-3.2-3B-Apex-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Apex-GGUF) contains GGUF-format model files for [bunnycore/Llama-3.2-3B-Apex](https://huggingface.co/bunnycore/Llama-3.2-3B-Apex).
+
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source, locally running GUI, supporting Windows, Linux, and macOS, with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
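
Beyond the committed README, here is a minimal usage sketch tying the pieces above together. It assumes `huggingface_hub` and `llama-cpp-python` (both referenced above) are installed and that the machine can hold the ~2.6 GB Q5_K_M quant; the generation parameters are illustrative, not prescribed by this repo:

```python
# Minimal sketch: download one quant from this repo and run a chat completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one GGUF file added in this commit (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Llama-3.2-3B-Apex-GGUF",
    filename="Llama-3.2-3B-Apex.Q5_K_M.gguf",
)

# Load the model; n_ctx and n_gpu_layers are illustrative, tune for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```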