lightblue-suzume-llama-3-8B-japanese-gguf

This is a gguf-format conversion of suzume-llama-3-8B-japanese, published by lightblue.

The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.
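As a rough sketch of how such an importance matrix is typically produced and applied with llama.cpp's imatrix and quantize tools (the f16 source file and calibration file names below are illustrative assumptions, not taken from this card):

# build llama.cpp first (see Usage below), then compute the importance matrix
# from a calibration text file (assumed name: calibration_ja.txt)
./imatrix -m lightblue-suzume-llama-3-8B-japanese-f16.gguf -f calibration_ja.txt -o imatrix.dat
# apply the resulting imatrix when quantizing to a low-bit format, e.g. Q4_K_M
./quantize --imatrix imatrix.dat lightblue-suzume-llama-3-8B-japanese-f16.gguf lightblue-suzume-llama-3-8B-japanese-Q4_K_M.gguf Q4_K_M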

Other models

mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf
mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf
mmnga/lightblue-suzume-llama-3-8B-japanese-gguf
mmnga/lightblue-ao-karasu-72B-gguf
mmnga/lightblue-karasu-1.1B-gguf
mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf
mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf

Usage

# fetch and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
# run inference with the quantized model, using the Llama 3 chat prompt format
./main -m 'lightblue-suzume-llama-3-8B-japanese-Q4_0.gguf' -p "<|begin_of_text|><|start_header_id|>user <|end_header_id|>\n\nこんにちわ<|eot_id|><|start_header_id|>assistant <|end_header_id|>\n\n" -n 128
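
If the quantized file is not already present locally, it can be fetched from the Hugging Face Hub first. A minimal sketch using huggingface-cli, assuming the Q4_0 file referenced in the command above:

# download the Q4_0 gguf file from this repository into the current directory
huggingface-cli download mmnga/lightblue-suzume-llama-3-8B-japanese-gguf lightblue-suzume-llama-3-8B-japanese-Q4_0.gguf --local-dir .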
GGUF

Model size: 8.03B params
Architecture: llama
Quantization levels: 1-bit through 8-bit files are provided

