---
base_model: meta-llama/Llama-2-7b-chat-hf
---

16-bit GGUF version of https://huggingface.co/meta-llama/Llama-2-7b-chat-hf

For quantized versions, see https://huggingface.co/models?search=thebloke/llama-2-7b-chat