---
license: other
license_name: katanemo-research
license_link: >-
  https://huggingface.co/katanemolabs/Arch-Function-7B.gguf/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-7B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# katanemolabs/Arch-Function-7B

## Overview
The Katanemo Arch-Function collection of large language models (LLMs) is a collection of state-of-the-art (SOTA) LLMs specifically designed for **function calling** tasks. The models are designed to understand complex function signatures, identify required parameters, and produce accurate function call outputs based on natural language prompts. Achieving performance on par with GPT-4, these models set a new benchmark in the domain of function-oriented tasks, making them suitable for scenarios where automated API interaction and function execution are crucial.

In summary, the Katanemo Arch-Function collection demonstrates:
- **State-of-the-art performance** in function calling
- **Accurate parameter identification and suggestion**, even with ambiguous or incomplete inputs
- **High generalization** across multiple function calling use cases, from API interactions to automated backend tasks
- Optimized **low-latency, high-throughput** performance, making the models suitable for real-time, production environments


## Key Features
<table>
<tr style="text-align: left; vertical-align: middle; font-weight: bold;">
<td>Functionality</td>
<td>Definition</td>
</tr>
<tr style="text-align: left; vertical-align: middle;">
<td>Single Function Calling</td>
<td>Call only one function per user query</td>
</tr>
<tr style="text-align: left; vertical-align: middle;">
<td>Parallel Function Calling</td>
<td>Call the same function multiple times but with different sets of parameter values</td>
</tr>
<tr style="text-align: left; vertical-align: middle;">
<td>Multiple Function Calling</td>
<td>Call different functions per user query</td>
</tr>
<tr style="text-align: left; vertical-align: middle;">
<td>Parallel & Multiple</td>
<td>Perform both parallel and multiple function calling</td>
</tr>
</table>
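
For example, in the parallel case the model can emit one `<tool_call>` block per call. An illustrative sketch of what such an output might look like, using the `get_weather` tool defined in the usage example below (the exact output format is described in the "How to use" section):
````python
<tool_call>
{"name": "get_weather", "arguments": {"location": "Seattle"}}
</tool_call>
<tool_call>
{"name": "get_weather", "arguments": {"location": "Portland"}}
</tool_call>
````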


## Training Details
The Katanemo Arch-Function collection is built on top of the [Qwen 2.5](https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e) family of models. A blog post with the technical details behind our models will be published soon.


## Performance Benchmarks
We evaluate the Katanemo Arch-Function series on the [Berkeley Function-Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard). For each model family, we select the one with the highest rank. The results are shown below:

<table>
<tr style="text-align: center; vertical-align: middle; font-weight: bold;">
<td rowspan=2>Rank</td>
<td rowspan=2>Model</td>
<td rowspan=2>Overall</td>
<td colspan=3>Single Turn</td>
<td rowspan=1>Multi Turn</td>
<td colspan=2>Hallucination</td>
</tr>
<tr style="text-align: center; vertical-align: middle; font-weight: bold;">
<td>Non-live (AST)</td>
<td>Non-live (Exec)</td>
<td>Live (AST)</td>
<td>Overall</td>
<td>Relevance</td>
<td>Irrelevance</td>
</tr>
<tr style="text-align: center; vertical-align: middle;">
<td>1</td>
<td>GPT-4-turbo-2024-04-09</td>
<td>59.49%</td>
<td>82.65%</td>
<td>83.80%</td>
<td>73.39%</td>
<td>21.62%</td>
<td>70.73%</td>
<td>79.79%</td>
</tr>
<tr style="text-align: center; vertical-align: middle;">
<td>3</td>
<td>xLAM-8x22b-r</td>
<td>59.13%</td>
<td>89.75%</td>
<td>89.32%</td>
<td>72.81%</td>
<td>15.62%</td>
<td>97.56%</td>
<td>75.23%</td>
</tr>
<tr style="text-align: center; vertical-align: middle; font-weight: bold;">
<td> </td>
<td>Arch-Function-7B</td>
<td>57.48%</td>
<td>87.50%</td>
<td>86.80%</td>
<td>72.19%</td>
<td>13.75%</td>
<td>82.93%</td>
<td>79.54%</td>
</tr>
<tr style="text-align: center; vertical-align: middle; font-weight: bold;">
<td> </td>
<td>Arch-Function-3B</td>
<td>56.23%</td>
<td>85.10%</td>
<td>89.16%</td>
<td>70.72%</td>
<td>12.28%</td>
<td>90.24%</td>
<td>73.98%</td>
</tr>
<tr style="text-align: center; vertical-align: middle;">
<td>7</td>
<td>mistral-large-2407</td>
<td>55.82%</td>
<td>84.12%</td>
<td>83.09%</td>
<td>67.17%</td>
<td>20.50%</td>
<td>78.05%</td>
<td>48.93%</td>
</tr>
<tr style="text-align: center; vertical-align: middle;">
<td>9</td>
<td>Claude-3.5-Sonnet-20240620</td>
<td>54.83%</td>
<td>70.35%</td>
<td>66.34%</td>
<td>71.39%</td>
<td>23.50%</td>
<td>63.41%</td>
<td>75.91%</td>
</tr>
<tr style="text-align: center; vertical-align: middle; font-weight: bold;">
<td> </td>
<td>Arch-Function-1.5B</td>
<td>53.61%</td>
<td>82.60%</td>
<td>87.36%</td>
<td>68.19%</td>
<td>8.62%</td>
<td>87.80%</td>
<td>75.90%</td>
</tr>
<tr style="text-align: center; vertical-align: middle;">
<td>11</td>
<td>o1-mini-2024-09-12</td>
<td>53.43%</td>
<td>75.48%</td>
<td>76.86%</td>
<td>71.17%</td>
<td>11.00%</td>
<td>46.34%</td>
<td>88.07%</td>
</tr>
<tr style="text-align: center; vertical-align: middle;">
<td>12</td>
<td>Gemini-1.5-Flash-Preview-0514</td>
<td>53.01%</td>
<td>77.10%</td>
<td>71.23%</td>
<td>71.17%</td>
<td>13.12%</td>
<td>60.98%</td>
<td>76.15%</td>
</tr>
</table>


# Requirements
The code of Arch-Function-7B is included in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install "transformers>=4.37.0"
```


# How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with our provided prompt format, which allows us to extract JSON output similar to the [function-calling mode of ChatGPT](https://platform.openai.com/docs/guides/function-calling).


### Single Turn Example
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "katanemolabs/Arch-Function-7B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided prompt for best performance
TASK_PROMPT = """
You are a helpful assistant.
""".strip()

TOOL_PROMPT = """
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{tool_text}
</tools>
""".strip()

FORMAT_PROMPT = """
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
""".strip()

# Define available tools
get_weather_api = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "str",
                    "description": "The city and state, e.g. San Francisco, New York",
                },
                "unit": {
                    "type": "str",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "The unit of temperature to return",
                },
            },
            "required": ["location"],
        },
    },
}

openai_format_tools = [get_weather_api]


# Helper function to render the tool specifications as JSON lines
def convert_tools(tools: List[Dict[str, Any]]):
    return "\n".join([json.dumps(tool) for tool in tools])


# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tool_text = convert_tools(tools)

    return (
        TASK_PROMPT
        + "\n\n"
        + TOOL_PROMPT.format(tool_text=tool_text)
        + "\n\n"
        + FORMAT_PROMPT
        + "\n"
    )


system_prompt = format_prompt(openai_format_tools)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the weather in Seattle?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=False,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

response = tokenizer.decode(outputs[0][len(inputs[0]) :], skip_special_tokens=True)
print(response)
````
290
+
291
+ Then you should be able to see the following output string in JSON format:
292
+ ````python
293
+ <tool_call>
294
+ {"name": "get_weather", "arguments": {"location": "Seattle"}}
295
+ </tool_call>
296
+ ````
297
+
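To act on this response programmatically, you can parse the `<tool_call>` blocks back into Python objects before invoking the corresponding functions. A minimal sketch, assuming the `response` string produced above (the `parse_tool_calls` helper and its regex are illustrative, not part of the model's API):
````python
import json
import re
from typing import Any, Dict, List


# Illustrative helper: extract each <tool_call> block and parse its JSON payload
def parse_tool_calls(response: str) -> List[Dict[str, Any]]:
    pattern = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)
    return [json.loads(block) for block in pattern.findall(response)]


tool_calls = parse_tool_calls(response)
# [{'name': 'get_weather', 'arguments': {'location': 'Seattle'}}]
````
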
### Multi Turn Example
Upon getting results from functions, you can add them to the `messages` list as a `user` message and pass it back to the model to generate a response for the user.

````python
# Suppose we receive the following result from the function:
get_weather_api_result = {'name': 'get_weather', 'results': {'temperature': '62°', 'unit': 'fahrenheit'}}
execution_results = [get_weather_api_result]


# Append each execution result to the conversation inside <tool_response> tags
def add_execution_results(messages: List[Dict[str, Any]], execution_results: List[Dict[str, Any]]):
    content = "\n".join([f"<tool_response>\n{json.dumps(result)}</tool_response>" for result in execution_results])
    messages.append({"role": "user", "content": content})
    return messages


messages = add_execution_results(messages, execution_results)

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=False,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

response = tokenizer.decode(outputs[0][len(inputs[0]) :], skip_special_tokens=True)
print(response)
````

Then you should be able to see the following output:
```
The current temperature in Seattle is 62 degrees in Fahrenheit.
```


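In a real application, the execution results above would come from actually running the requested functions rather than being constructed by hand. A minimal dispatch sketch connecting the `tool_calls` parsed in the single-turn example to concrete implementations (the local `get_weather` function and `TOOL_REGISTRY` mapping are hypothetical stand-ins):
````python
from typing import Any, Dict


# Hypothetical local implementation backing the get_weather tool
def get_weather(location: str, unit: str = "fahrenheit") -> Dict[str, Any]:
    # In practice, query a real weather service here
    return {"temperature": "62°", "unit": unit}


# Map tool names from <tool_call> outputs to their implementations
TOOL_REGISTRY = {"get_weather": get_weather}

execution_results = [
    {"name": call["name"], "results": TOOL_REGISTRY[call["name"]](**call["arguments"])}
    for call in tool_calls  # parsed earlier with parse_tool_calls
]
````
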
# License
The Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemolabs/Arch-Function-7B.gguf/blob/main/LICENSE).