---
library_name: transformers
base_model: OEvortex/lite-hermes
inference: false
language:
- en
license: mit
tags:
- HelpingAI
- lite
- code
---
|
|
|
### Description
|
|
|
This repository provides GGUF-format model files for [HelpingAI-unvelite](https://huggingface.co/OEvortex/HelpingAI-unvelite).
|
Please subscribe to my YouTube channel, [OEvortex](https://youtube.com/@OEvortex).
|
### GGUF Technical Specifications |
|
|
|
GGUF is a file format that builds on the earlier GGJT format. It is designed for extensibility and ease of use, and introduces the following features:
|
|
|
**Single-file deployment:** A model is distributed and loaded as one file; no external files are needed for supplementary information.
|
|
|
**Extensibility:** New features can be added to GGML-based executors, and new information can be attached to models, without breaking compatibility with existing models.
|
|
|
**mmap compatibility:** Models can be loaded and saved via mmap, for fast, memory-efficient loading.
|
|
|
**Ease of use:** Models can be loaded and saved with a small amount of code, in any programming language, without depending on external libraries.
|
|
|
**Full information:** A single file contains everything needed to load the model; the user does not have to supply any additional data.
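To make the mmap point concrete, here is a minimal sketch (not the actual GGUF loader) of how memory-mapped loading reads weights in place instead of copying the whole file into memory; it writes a few float32 values to a temporary file and decodes one of them directly from the mapping:

```python
# Minimal sketch of mmap-style loading (illustrative; not the real GGUF
# loader): read a value from a weight file without copying the file.
import mmap
import os
import struct
import tempfile

# Write four float32 "weights" to a temporary file.
weights = [0.5, -1.25, 3.0, 2.75]
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(struct.pack("<4f", *weights))

# Map the file and decode the third value in place.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        third = struct.unpack_from("<f", mm, 2 * 4)[0]

os.remove(path)
print(third)
```

Because the mapping is backed by the file itself, only the pages actually touched are paged in, which is why GGUF files load quickly even when they are many gigabytes.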
|
|
|
The key difference between GGJT and GGUF is that hyperparameters (now referred to as metadata) are stored as a key-value structure rather than an untyped list of values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model carry additional information useful for inference or for identifying the model.
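The sketch below illustrates the idea of typed key-value metadata in the spirit of GGUF (this is *not* the real GGUF wire format): each entry carries a length-prefixed key, a type tag, and a typed value, so a reader can interpret known keys and skip unknown ones without breaking:

```python
# Illustrative typed key-value metadata block (NOT the real GGUF layout).
import struct

TYPE_U32, TYPE_STR = 0, 1

def pack_entry(key: str, value) -> bytes:
    """Serialize one entry: u32 key length, key bytes, type tag, value."""
    k = key.encode("utf-8")
    out = struct.pack("<I", len(k)) + k
    if isinstance(value, int):
        return out + struct.pack("<BI", TYPE_U32, value)
    v = value.encode("utf-8")
    return out + struct.pack("<BI", TYPE_STR, len(v)) + v

def unpack_entries(buf: bytes) -> dict:
    """Decode a concatenation of entries back into a dict."""
    entries, off = {}, 0
    while off < len(buf):
        (klen,) = struct.unpack_from("<I", buf, off); off += 4
        key = buf[off:off + klen].decode("utf-8"); off += klen
        (tag,) = struct.unpack_from("<B", buf, off); off += 1
        if tag == TYPE_U32:
            (val,) = struct.unpack_from("<I", buf, off); off += 4
        else:
            (vlen,) = struct.unpack_from("<I", buf, off); off += 4
            val = buf[off:off + vlen].decode("utf-8"); off += vlen
        entries[key] = val
    return entries

blob = pack_entry("general.name", "lite-hermes") + pack_entry("context_length", 2048)
meta = unpack_entries(blob)
print(meta)
```

Because every value is tagged with its type, a future key can be added without renegotiating the format, which is exactly the compatibility property the metadata change was designed for.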
|
|
|
|
|
### Quantization Methods
|
|
|
| Method | Quantization | Advantages | Trade-offs |
|---|---|---|---|
| q2_k | 2-bit integers | Smallest files | Significant quality loss |
| q3_k_l | 3-bit integers | Larger than the other 3-bit variants, better quality | Substantial quality loss |
| q3_k_m | 3-bit integers | Very small files | High quality loss |
| q3_k_s | 3-bit integers | Smallest 3-bit files | Highest quality loss of the 3-bit variants |
| q4_0 | 4-bit integers | Small files; the original 4-bit method | Legacy format; k-quants generally preserve quality better |
| q4_1 | 4-bit integers | Slightly more accurate than q4_0 | Legacy format; larger than q4_0 |
| q4_k_m | 4-bit integers | Good balance of size and quality; a common default choice | Some quality loss versus higher-bit variants |
| q4_k_s | 4-bit integers | Smaller than q4_k_m | Greater quality loss than q4_k_m |
| q5_0 | 5-bit integers | Higher accuracy than the 4-bit methods | Legacy format; larger files |
| q5_1 | 5-bit integers | Slightly more accurate than q5_0 | Legacy format; larger than q5_0 |
| q5_k_m | 5-bit integers | Very low quality loss | Larger files than the 4-bit variants |
| q5_k_s | 5-bit integers | Low quality loss; smaller than q5_k_m | Slightly lower quality than q5_k_m |
| q6_k | 6-bit integers | Quality close to the original weights | Large files |
| q8_0 | 8-bit integers | Near-lossless | Largest quantized files; smallest size savings |
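The common idea behind these methods can be sketched as block quantization: each block of weights shares one floating-point scale, and individual weights are stored as small integers. The example below is a simplified q8_0-style round trip (the real llama.cpp kernels differ in block size and details):

```python
# Simplified q8_0-style block quantization (illustrative only).
def quantize_block(block):
    """Map floats to (scale, int8 values) with one shared per-block scale."""
    scale = max(abs(x) for x in block) / 127.0 or 1.0  # avoid scale == 0
    return scale, [round(x / scale) for x in block]

def dequantize_block(scale, qs):
    """Recover approximate floats from the quantized representation."""
    return [q * scale for q in qs]

weights = [0.12, -0.5, 0.33, 1.27, -1.0, 0.0, 0.77, -0.25]
scale, qs = quantize_block(weights)
restored = dequantize_block(scale, qs)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max abs error: {err:.6f}")
```

Fewer bits per weight means a coarser grid of representable values and hence a larger rounding error, which is the size-versus-quality trade-off the table above summarizes.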
|
|
|
|