napopoa32-swallow-hermes-st-v1-gguf

napopoa32ใ•ใ‚“ใŒๅ…ฌ้–‹ใ—ใฆใ„ใ‚‹swallow-hermes-st-v1ใฎggufใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆๅค‰ๆ›็‰ˆใงใ™ใ€‚ ใ“ใกใ‚‰ใฏใƒ™ใƒผใ‚นใƒขใƒ‡ใƒซใซใชใ‚Šใพใ™ใ€‚

Usage

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'napopoa32-swallow-hermes-st-v1-q4_0.gguf' -p "<|im_start|>system\nYou are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>\n<|im_start|>user\n้ข็™ฝใ„้’ๆ˜ฅใฎ็‰ฉ่ชžใ‚’ๆ›ธใ„ใฆใใ ใ•ใ„ใ€‚<|im_end|>\n<|im_start|>assistant" -n 128 
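
The prompt in the command above follows the ChatML template (the <|im_start|> / <|im_end|> markers). If you prefer to call the model from Python, the sketch below does the same thing with the llama-cpp-python bindings; this is a minimal illustration, and the context size and sampling settings are assumptions rather than values from the original card.

from llama_cpp import Llama

# Load the quantized model; chat_format="chatml" matches the <|im_start|> template
# used in the CLI example above. n_ctx and max_tokens are illustrative assumptions.
llm = Llama(
    model_path="napopoa32-swallow-hermes-st-v1-q4_0.gguf",
    n_ctx=2048,
    chat_format="chatml",
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
        {"role": "user",
         "content": "้ข็™ฝใ„้’ๆ˜ฅใฎ็‰ฉ่ชžใ‚’ๆ›ธใ„ใฆใใ ใ•ใ„ใ€‚"},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])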
GGUF

Model size: 7.33B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
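
Each quantization level corresponds to a separate .gguf file in the repository. As a minimal sketch, a single file can be fetched with the huggingface_hub library as shown below; the repo_id is a placeholder assumption and should be replaced with this model's actual path on the Hub.

from huggingface_hub import hf_hub_download

# Download one quantized file instead of cloning the whole repository.
# repo_id is a placeholder; substitute the actual Hub path of this model.
path = hf_hub_download(
    repo_id="<owner>/napopoa32-swallow-hermes-st-v1-gguf",
    filename="napopoa32-swallow-hermes-st-v1-q4_0.gguf",
)
print(path)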
