Update: Transition Notice - GGML to GGUF
#1 by dhirajlochib - opened
README.md
CHANGED
# Airoboros Llama 2 13B GPT4 1.4.1 - GGML
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros Llama 2 13B GPT4 1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1)

## Repository Overview

This repository contains GGML-format model files for [Jon Durbin's Airoboros Llama 2 13B GPT4 1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1).
### Critical Update on GGML Files

As of August 21st, 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries may continue to work with GGML files for a time, but support is likely to be dropped across the wider ecosystem.

We strongly recommend moving to GGUF models to stay compatible with the latest tooling.
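If you are unsure which format an existing file uses, the file header makes it easy to tell: GGUF files begin with the four ASCII bytes `GGUF`, while older GGML-family files use different magic values. A minimal sketch (the helper name `is_gguf` is ours, not part of any library):

```python
def is_gguf(path):
    """Return True if the file at `path` starts with the GGUF magic bytes.

    GGUF files begin with the four ASCII bytes b"GGUF"; legacy
    GGML-family files use different magic values, so this check
    distinguishes the two.
    """
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

A client could run this over a model directory to flag files that still need converting.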
### Embracing GGUF

GGUF is the successor to GGML and the format llama.cpp now develops against. It is a concrete improvement rather than a rename: the format is extensible and carries its own metadata.

**Why GGUF?**

- Extensible: new metadata fields can be added without breaking existing files
- Self-describing: tokenizer and model metadata live inside the file, reducing out-of-band configuration
- Supported: llama.cpp and the surrounding ecosystem target GGUF going forward
### Preparing for the Future

While GGML support may persist in some third-party tools for a while, GGUF is the format that will keep receiving updates and fixes. Migrating now avoids being stranded on an unsupported format.
### Navigating the Transition

To switch, look for GGUF repositories of the models you use; they typically offer the same model at a range of quantization levels for different use cases, so you keep compatibility while gaining access to current tooling.
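When a repository or local cache contains files in both formats, a client can filter the listing down to GGUF files before choosing a quantization. A small sketch; the filenames are invented for illustration:

```python
def pick_gguf_files(filenames):
    """Keep only GGUF model files from a file listing, sorted by name."""
    return sorted(name for name in filenames if name.lower().endswith(".gguf"))

# Example listing with both formats mixed together (invented names):
listing = [
    "airoboros-13b.ggmlv3.q4_K_M.bin",
    "airoboros-13b.Q4_K_M.gguf",
    "airoboros-13b.Q5_K_M.gguf",
    "README.md",
]
print(pick_gguf_files(listing))
# → ['airoboros-13b.Q4_K_M.gguf', 'airoboros-13b.Q5_K_M.gguf']
```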
### Your Path Forward

Choosing GGUF keeps your setup aligned with where the ecosystem is heading: new llama.cpp features and client improvements target GGUF, so GGUF users benefit from them first.
### Acknowledgments

Thanks to the community for its ongoing support and to the contributors who make these advancements possible.

For detailed instructions, compatibility notes, and discussion, see the resources in this repository. Your continued use of [Airoboros Llama 2 13B GPT4 1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1) is now best served by GGUF.
### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: