---
license: llama3
language:
- en
- ja
tags:
- llama3
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# lightblue-suzume-llama-3-8B-japanese-gguf 
This is a gguf-format conversion of [lightblue's suzume-llama-3-8B-japanese](https://huggingface.co/lightblue/suzume-llama-3-8B-japanese).

The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
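
For reference, below is a minimal sketch of how an imatrix-based quantization is typically produced with llama.cpp. The file names and the Q4_0 type are assumptions for illustration, not the exact commands used for this repository:

```
# compute an importance matrix from the calibration text (assumed file names)
./imatrix -m suzume-llama-3-8B-japanese-f16.gguf -f imatrix-dataset-for-japanese-llm.txt -o imatrix.dat

# quantize the f16 model using the importance matrix (Q4_0 chosen only as an example)
./quantize --imatrix imatrix.dat suzume-llama-3-8B-japanese-f16.gguf lightblue-suzume-llama-3-8B-japanese-Q4_0.gguf Q4_0
```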

## Other models
[mmnga/lightblue-suzume-llama-3-8B-japanese-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-japanese-gguf)  
[mmnga/lightblue-ao-karasu-72B-gguf](https://huggingface.co/mmnga/lightblue-ao-karasu-72B-gguf)  
[mmnga/lightblue-karasu-1.1B-gguf](https://huggingface.co/mmnga/lightblue-karasu-1.1B-gguf)  
[mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf)  
[mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf)  

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
# run inference with the Llama 3 chat template (-e makes llama.cpp interpret the \n escapes)
./main -m 'lightblue-suzume-llama-3-8B-japanese-Q4_0.gguf' -e -p "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nこんにちわ<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -n 128
```
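
If you prefer to download the quantized file directly instead of converting it yourself, one option is `huggingface-cli` (this assumes the Q4_0 file name used above is published in this repository):

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download mmnga/lightblue-suzume-llama-3-8B-japanese-gguf lightblue-suzume-llama-3-8B-japanese-Q4_0.gguf --local-dir .
```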