---
license: cc-by-nc-4.0
model_name: xLAM-8x22b-r
base_model: Salesforce/xLAM-8x22b-r
inference: false
model_creator: Salesforce
pipeline_tag: text-generation
quantized_by: Second State Inc.
language:
- en
tags:
- function-calling
- LLM Agent
- tool-use
- mistral
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# xLAM-8x22b-r-GGUF

## Original Model

[Salesforce/xLAM-8x22b-r](https://huggingface.co/Salesforce/xLAM-8x22b-r)

## Run with LlamaEdge

- LlamaEdge version: coming soon

<!-- - LlamaEdge version: [v0.14.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.0) and above

- Prompt template

  - Prompt type: `llama-3-chat`

  - Prompt string

    ```text
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

    {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    ```

- Context size: `64000`

- Run as LlamaEdge service

  - Chat

    ```bash
    wasmedge --dir .:. --nn-preload default:GGML:AUTO:xLAM-8x22b-r-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template llama-3-chat \
      --ctx-size 64000 \
      --model-name xLAM-8x22b-r
    ```

  - Tool use

    ```bash
    wasmedge --dir .:. --nn-preload default:GGML:AUTO:xLAM-8x22b-r-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template llama-3-tool \
      --ctx-size 64000 \
      --model-name xLAM-8x22b-r
    ```

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:xLAM-8x22b-r-Q5_K_M.gguf \
    llama-chat.wasm \
    --prompt-template llama-3-chat \
    --ctx-size 64000
  ``` -->

*Quantized with llama.cpp b3613*
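
The `llama-3-chat` prompt string shown above can also be assembled by hand, which is handy for sanity-checking what the runtime sends to the model. A minimal sketch (the `build_prompt` helper and the example messages are illustrative, not part of LlamaEdge):

```bash
# Assemble a single-turn llama-3-chat prompt from a system message and a
# user message. The prompt ends with the assistant header, leaving the
# model to generate the reply.
build_prompt() {
  local system_prompt="$1"
  local user_message="$2"
  printf '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n%s<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n%s<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
    "$system_prompt" "$user_message"
}

build_prompt "You are a helpful assistant." "What is the capital of France?"
```

Multi-turn prompts follow the same pattern: each prior assistant reply is appended after its header, terminated with `<|eot_id|>`, before the next user header.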