---
license: cc
tags:
- function-call
- mistral
---

# Fine-tuned Mistral 7B Instruct v0.2 with OpenAI Function Call Support

A fine-tuned version of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) with support for direct function calling. This capability mirrors the function-calling interface of OpenAI's models, enabling Mistral 7B Instruct v0.2 to interact with external data sources and perform more complex tasks, such as fetching real-time information or integrating with custom databases for richer AI-powered applications.

## Features

- **Direct Function Calls**: Mistral 7B Instruct v0.2 now supports structured function calls, allowing for the integration of external APIs and databases directly into the conversational flow. This makes it possible to execute custom searches, retrieve data from the web or specific databases, and even summarize or explain content in depth.
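Once the model emits a structured tool call, the application is responsible for executing it and returning the result. A minimal sketch of that dispatch step, assuming a hypothetical local `get_current_weather` implementation and a simple name-to-callable registry (neither is part of this model; the real tool would call an external API):

```python
import json

# Hypothetical local implementation backing the "get_current_weather" tool.
def get_current_weather(location: str, format: str) -> str:
    # A real application would query a weather API here.
    return f"Sunny, 21 degrees {format} in {location}"

# Map tool names to callables so a parsed tool call can be dispatched.
TOOL_REGISTRY = {"get_current_weather": get_current_weather}

def dispatch_tool_call(tool_call_json: str) -> str:
    """Execute a tool call emitted by the model as a JSON string."""
    call = json.loads(tool_call_json)
    func = TOOL_REGISTRY[call["name"]]
    return func(**call["arguments"])

result = dispatch_tool_call(
    '{"arguments": {"location": "San Francisco, CA", "format": "celsius"}, '
    '"name": "get_current_weather"}'
)
print(result)  # Sunny, 21 degrees celsius in San Francisco, CA
```

The tool's result would then typically be appended to the conversation as a new message so the model can compose a natural-language answer.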

## Usage
### Importing Libraries
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
```

### Initializing Model and Tokenizer
```python
# Fall back to CPU when no CUDA device is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("InterSync/Mistral-7B-Instruct-v0.2-Function-Calling")
tokenizer = AutoTokenizer.from_pretrained("InterSync/Mistral-7B-Instruct-v0.2-Function-Calling")
```

### Creating the Text Streamer
```python
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
```

### Defining Tools
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the user's location.",
                    },
                },
                "required": ["location", "format"],
            },
        }
    }
]
```

### Setting up the Messages
```python
messages = [
    {
        "role": "user",
        "content": (
            "You are Mistral with function-calling supported. You are provided with function signatures within <tools></tools> XML tags. "
            "You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. "
            "Here are the available tools:\n"
            "<tools>\n"
            f"{tools}\n"
            "</tools>\n\n"
            "For each function call, return a JSON object with the function name and arguments within <tool_call></tool_call> XML tags as follows:\n"
            "<tool_call>\n"
            "{'arguments': <args-dict>, 'name': <function-name>}\n"
            "</tool_call>"
        )
    },
    {
        "role": "assistant",
        "content": "How can I help you today?"
    },
    {
        "role": "user",
        "content": "What is the current weather in San Francisco?"
    },
]
```

### Preparing Model Inputs
```python
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
model_inputs = inputs.to(device)
```

### Generating the Response
```python
model.to(device)
generate_ids = model.generate(model_inputs, streamer=streamer, do_sample=True, max_length=4096)
decoded = tokenizer.batch_decode(generate_ids)
```

### Expected Output
```xml
<tool_call>
{"arguments": {"location": "San Francisco, CA", "format": "celsius"}, "name": "get_current_weather"}
</tool_call>
```
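The decoded text can be post-processed to recover the structured call. A minimal sketch, assuming the model wraps each JSON payload in `<tool_call></tool_call>` tags exactly as shown above (the `extract_tool_calls` helper is illustrative, not part of this model's API):

```python
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    """Pull every JSON payload out of <tool_call>...</tool_call> tags."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(m) for m in pattern.findall(text)]

decoded_sample = (
    "<tool_call>\n"
    '{"arguments": {"location": "San Francisco, CA", "format": "celsius"}, '
    '"name": "get_current_weather"}\n'
    "</tool_call>"
)
calls = extract_tool_calls(decoded_sample)
print(calls[0]["name"])  # get_current_weather
```

If the model's output ever contains malformed JSON, `json.loads` will raise `json.JSONDecodeError`, so production code should wrap the parse in error handling and optionally retry generation.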