---
pipeline_tag: text-generation
tags:
- llama-2
- chat
- GGUF
- 7b
---
This model was converted to GGUF from `NousResearch/Llama-2-7b-chat-hf` and quantized to `Q2_K` using the `llama.cpp` library.
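Below is a minimal usage sketch with the `llama-cpp-python` bindings (`pip install llama-cpp-python`). The GGUF filename is an assumption; adjust it to the actual file in this repository.

```python
from llama_cpp import Llama

# Load the quantized GGUF model (the filename below is assumed, not confirmed by this card)
llm = Llama(
    model_path="llama-2-7b-chat.Q2_K.gguf",
    n_ctx=2048,  # context window size
)

# Run a simple completion
output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```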