apepkuss79
committed on
Update README.md
README.md
CHANGED
@@ -41,33 +41,63 @@ tags:
- Prompt template

  - Prompt type for chat: `llama-3-chat`

  - Prompt string

    ```text
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

    {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    ```

  - Prompt type for tool use: `llama-3-tool`

  - Prompt string

    ```text
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    {system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>

    Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

    Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

    [{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","description":"The temperature unit to use. Infer this from the users location.","enum":["celsius","fahrenheit"]}},"required":["location","unit"]}}}]

    Question: {user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    ```

- Context size: `128000`

- Run as LlamaEdge service

  - Chat

    ```bash
    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template llama-3-chat \
      --ctx-size 128000 \
      --model-name Llama-3.2-3b
    ```
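
    A quick way to exercise the running service is its OpenAI-compatible chat completions endpoint. This is a minimal sketch, assuming the server listens on the default port `8080`; adjust the host and port if you start it differently:

    ```bash
    # Send a single chat turn to the llama-api-server instance started above.
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "Llama-3.2-3b",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
    ```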

  - Tool use

    ```bash
    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template llama-3-tool \
      --ctx-size 128000 \
      --model-name Llama-3.2-3b
    ```
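
    As a sketch of what a tool-use request against the same endpoint might look like (again assuming the default port `8080` and OpenAI-style `tools` support in the server), you could pass the function definition shown in the prompt string above:

    ```bash
    # Ask a question that should trigger a get_current_weather function call.
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "Llama-3.2-3b",
        "messages": [
          {"role": "user", "content": "What is the weather like in San Francisco, CA?"}
        ],
        "tools": [
          {
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "description": "Get the current weather in a given location",
              "parameters": {
                "type": "object",
                "properties": {
                  "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                  "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location", "unit"]
              }
            }
          }
        ]
      }'
    ```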

- Run as LlamaEdge command app
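
  A rough sketch of the usual LlamaEdge command-app invocation follows, running the chat app against the same GGUF file; the `llama-chat.wasm` file name and the flags below are assumptions, not taken from this README:

  ```bash
  # Sketch only: llama-chat.wasm and its flags follow the common LlamaEdge
  # layout and are not shown in this excerpt.
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
    llama-chat.wasm \
    --prompt-template llama-3-chat \
    --ctx-size 128000
  ```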