# MiniMax-M1 Function Call Guide
[FunctionCall中文使用指南](./function_call_guide_cn.md)
## 📖 Introduction
The MiniMax-M1 model supports function calling capabilities, enabling the model to identify when external functions need to be called and output function call parameters in a structured format. This document provides detailed instructions on how to use the function calling feature of MiniMax-M1.
## 🚀 Quick Start
### Using Chat Template
MiniMax-M1 uses a specific chat template format to handle function calls. The chat template is defined in `tokenizer_config.json` and is applied automatically when you call `tokenizer.apply_chat_template` in your code.
```python
from transformers import AutoTokenizer
def get_default_tools():
    return [
        {
            "name": "get_current_weather",
            "description": "Get the latest weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "A certain city, such as Beijing, Shanghai"
                    }
                },
                "required": ["location"]
            }
        }
    ]

# Load the tokenizer (model_id is the local path or Hugging Face repo id of MiniMax-M1)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "What's the weather like in Shanghai today?"
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by Minimax based on MiniMax-M1 model."}]},
    {"role": "user", "content": [{"type": "text", "text": prompt}]},
]

# Enable function call tools
tools = get_default_tools()

# Apply chat template and add tool definitions
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    tools=tools
)
```
## 🛠️ Function Call Definition
### Function Structure
Function calls need to be defined in the `tools` field of the request body. Each function consists of the following components:
```json
{
  "tools": [
    {
      "name": "search_web",
      "description": "Search function.",
      "parameters": {
        "properties": {
          "query_list": {
            "description": "Keywords for search, with list element count of 1.",
            "items": { "type": "string" },
            "type": "array"
          },
          "query_tag": {
            "description": "Classification of the query",
            "items": { "type": "string" },
            "type": "array"
          }
        },
        "required": ["query_list", "query_tag"],
        "type": "object"
      }
    }
  ]
}
```
**Field Descriptions:**
- `name`: Function name
- `description`: Function description
- `parameters`: Function parameter definition
  - `properties`: Parameter property definitions, where the key is the parameter name and the value contains the detailed parameter description
  - `required`: List of required parameters
  - `type`: Parameter type (usually `"object"`)
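Before executing a call, it is worth checking the arguments the model produced against the schema's `required` list. The helper below is a minimal sketch of such a check; the function name `validate_arguments` is our own illustration, not part of any MiniMax API:

```python
def validate_arguments(tool_schema: dict, arguments: dict) -> list:
    """Return the names of required parameters missing from arguments (empty list = valid)."""
    required = tool_schema.get("parameters", {}).get("required", [])
    return [name for name in required if name not in arguments]

# Schema for the search_web tool defined above
search_web_schema = {
    "name": "search_web",
    "parameters": {
        "properties": {
            "query_list": {"type": "array", "items": {"type": "string"}},
            "query_tag": {"type": "array", "items": {"type": "string"}}
        },
        "required": ["query_list", "query_tag"],
        "type": "object"
    }
}

print(validate_arguments(search_web_schema, {"query_list": ["OpenAI"]}))
# → ['query_tag']
```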
### Internal Model Processing Format
When processed internally by the model, function definitions are converted to a special format and concatenated to the input text:
```
]~!b[]~b]system ai_setting=MiniMax AI
MiniMax AI is an AI assistant independently developed by MiniMax. [e~[
]~b]system tool_setting=tools
You are provided with these tools:
<tools>
{"name": "search_web", "description": "Search function.", "parameters": {"properties": {"query_list": {"description": "Keywords for search, with list element count of 1.", "items": {"type": "string"}, "type": "array"}, "query_tag": {"description": "Classification of the query", "items": {"type": "string"}, "type": "array"}}, "required": ["query_list", "query_tag"], "type": "object"}}
</tools>
If you need to call tools, please respond with <tool_calls></tool_calls> XML tags, and provide tool-name and json-object of arguments, following the format below:
<tool_calls>
{"name": <tool-name>, "arguments": <args-json-object>}
...
</tool_calls>[e~[
]~b]user name=User
When were the most recent launch events for OpenAI and Gemini?[e~[
]~b]ai name=MiniMax AI
```
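For illustration, the tool-definition block of this concatenation can be approximated in plain Python. The sketch below only mimics the `tool_setting=tools` segment shown above; the chat template in `tokenizer_config.json` remains the authoritative implementation, and `build_tool_prompt` is a hypothetical helper name:

```python
import json

def build_tool_prompt(tools: list) -> str:
    """Render the tool-definition system block in the internal format shown above (illustrative only)."""
    tool_lines = "\n".join(json.dumps(t, ensure_ascii=False) for t in tools)
    return (
        "]~b]system tool_setting=tools\n"
        "You are provided with these tools:\n"
        "<tools>\n"
        f"{tool_lines}\n"
        "</tools>\n\n"
        "If you need to call tools, please respond with <tool_calls></tool_calls> "
        "XML tags, and provide tool-name and json-object of arguments, following "
        "the format below:\n"
        "<tool_calls>\n"
        '{"name": <tool-name>, "arguments": <args-json-object>}\n'
        "...\n"
        "</tool_calls>[e~[\n"
    )

tools = [{"name": "search_web", "description": "Search function.",
          "parameters": {"type": "object", "properties": {}}}]
print(build_tool_prompt(tools))
```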
### Model Output Format
The model outputs function calls in the following format:
```xml
<think>
Okay, I will search for the OpenAI and Gemini latest release.
</think>
<tool_calls>
{"name": "search_web", "arguments": {"query_tag": ["technology", "events"], "query_list": ["\"OpenAI\" \"latest\" \"release\""]}}
{"name": "search_web", "arguments": {"query_tag": ["technology", "events"], "query_list": ["\"Gemini\" \"latest\" \"release\""]}}
</tool_calls>
```
## 📥 Function Call Result Processing
### Parsing Function Calls
You can use the following code to parse function calls from the model output:
```python
import re
import json
def parse_function_calls(content: str):
    """
    Parse function calls from model output
    """
    function_calls = []

    # Match content within <tool_calls> tags
    tool_calls_pattern = r"<tool_calls>(.*?)</tool_calls>"
    tool_calls_match = re.search(tool_calls_pattern, content, re.DOTALL)
    if not tool_calls_match:
        return function_calls

    tool_calls_content = tool_calls_match.group(1).strip()

    # Parse each function call (one JSON object per line)
    for line in tool_calls_content.split('\n'):
        line = line.strip()
        if not line:
            continue
        try:
            # Parse JSON format function call
            call_data = json.loads(line)
            function_name = call_data.get("name")
            arguments = call_data.get("arguments", {})
            function_calls.append({
                "name": function_name,
                "arguments": arguments
            })
            print(f"Function call: {function_name}, Arguments: {arguments}")
        except json.JSONDecodeError as e:
            print(f"Parameter parsing failed: {line}, Error: {e}")

    return function_calls

# Example: Handle weather query function
def execute_function_call(function_name: str, arguments: dict):
    """
    Execute function call and return result
    """
    if function_name == "get_current_weather":
        location = arguments.get("location", "Unknown location")
        # Build function execution result
        return {
            "role": "tool",
            "name": function_name,
            "content": json.dumps({
                "location": location,
                "temperature": "25",
                "unit": "celsius",
                "weather": "Sunny"
            }, ensure_ascii=False)
        }
    elif function_name == "search_web":
        query_list = arguments.get("query_list", [])
        query_tag = arguments.get("query_tag", [])
        # Simulate search results
        return {
            "role": "tool",
            "name": function_name,
            "content": f"Search keywords: {query_list}, Categories: {query_tag}\nSearch results: Relevant information found"
        }
    return None
```
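As a standalone demonstration, the snippet below applies the same regex-and-JSON parsing to the sample model output shown earlier; the parsing logic is inlined so the example runs on its own:

```python
import json
import re

# Sample model output, as in the "Model Output Format" section above
model_output = """<think>
Okay, I will search for the OpenAI and Gemini latest release.
</think>
<tool_calls>
{"name": "search_web", "arguments": {"query_tag": ["technology"], "query_list": ["\\"OpenAI\\" \\"latest\\" \\"release\\""]}}
</tool_calls>"""

# Extract the <tool_calls> block, then parse one JSON object per line
match = re.search(r"<tool_calls>(.*?)</tool_calls>", model_output, re.DOTALL)
calls = [json.loads(line) for line in match.group(1).strip().splitlines() if line.strip()]

for call in calls:
    print(call["name"], call["arguments"]["query_list"])
# → search_web ['"OpenAI" "latest" "release"']
```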
### Returning Function Execution Results to the Model
After successfully parsing function calls, you should add the function execution results to the conversation history so that the model can access and utilize this information in subsequent interactions.
#### Single Result
If the model decides to call `search_web`, we suggest returning the function result in the following format, with the `name` field set to the specific tool name.
```json
{
"data": [
{
"role": "tool",
"name": "search_web",
"content": "search_result"
}
]
}
```
Corresponding model input format:
```
]~b]tool name=search_web
search_result[e~[
```
#### Multiple Results
If the model decides to call `search_web` and `get_current_weather` at the same time, we suggest returning the function results in the following format, with the `name` field set to `"tools"` and the `content` field containing all of the results.
```json
{
"data": [
{
"role": "tool",
"name": "tools",
"content": "Tool name: search_web\nTool result: test_result1\n\nTool name: get_current_weather\nTool result: test_result2"
}
]
}
```
Corresponding model input format:
```
]~b]tool name=tools
Tool name: search_web
Tool result: test_result1

Tool name: get_current_weather
Tool result: test_result2[e~[
```
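Building the combined `content` string for multiple results can be automated. The helper below is merely a suggestion that follows the format above; the name `merge_tool_results` is our own:

```python
def merge_tool_results(results: list) -> dict:
    """Combine several (tool_name, result) pairs into one 'tools' message."""
    content = "\n\n".join(
        f"Tool name: {name}\nTool result: {result}" for name, result in results
    )
    return {"role": "tool", "name": "tools", "content": content}

message = merge_tool_results([
    ("search_web", "test_result1"),
    ("get_current_weather", "test_result2"),
])
print(message["content"])
```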
While we suggest following the formats above, as long as the model input is easy to understand, the specific values of `name` and `content` are entirely up to the caller.