Finetuned version of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
- **Direct Function Calls**: Mistral 7B Instruct v0.2 now supports structured function calls, allowing external APIs and databases to be integrated directly into the conversational flow. This makes it possible to execute custom searches, retrieve data from the web or specific databases, and even summarize or explain content in depth.

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("InterSync/Mistral-7B-Instruct-v0.2-Function-Calling")
tokenizer = AutoTokenizer.from_pretrained("InterSync/Mistral-7B-Instruct-v0.2-Function-Calling")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Function signatures the model may call, described as JSON Schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the user's location.",
                    },
                },
                "required": ["location", "format"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_n_day_weather_forecast",
            "description": "Get an N-day weather forecast",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the user's location.",
                    },
                    "num_days": {
                        "type": "integer",
                        "description": "The number of days to forecast",
                    },
                },
                "required": ["location", "format", "num_days"],
            },
        },
    },
]

messages = [
    {
        "role": "user",
        "content": f"""
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>
{tools}
</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{{'arguments': <args-dict>, 'name': <function-name>}}
</tool_call>
""",
    },
    {
        "role": "assistant",
        "content": "How can I help you today?",
    },
    {
        "role": "user",
        "content": "What is the current weather in San Francisco? And can you forecast it for the next 10 days?",
    },
]

# Build the prompt with the chat template and move everything to the GPU.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
model_inputs = inputs.to(device)
model.to(device)

# Stream the generated tool calls to stdout while also capturing the full output.
generate_ids = model.generate(model_inputs, streamer=streamer, do_sample=True, max_length=4096)
decoded = tokenizer.batch_decode(generate_ids)
```

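The prompt above asks the model to wrap each call in `<tool_call>` tags containing a single-quoted dict. A minimal sketch of extracting those calls from the generated text follows; the helper `extract_tool_calls` and the sample `reply` string are illustrative, not part of this repository:

```python
import ast
import re

# A hypothetical model reply following the <tool_call> format requested in the prompt.
reply = """<tool_call>
{'arguments': {'location': 'San Francisco, CA', 'format': 'celsius'}, 'name': 'get_current_weather'}
</tool_call>
<tool_call>
{'arguments': {'location': 'San Francisco, CA', 'format': 'celsius', 'num_days': 10}, 'name': 'get_n_day_weather_forecast'}
</tool_call>"""

def extract_tool_calls(text):
    """Pull every {'arguments': ..., 'name': ...} dict out of <tool_call> blocks."""
    calls = []
    for block in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        # The prompt requests single-quoted dicts, so parse each block as a Python literal.
        calls.append(ast.literal_eval(block))
    return calls

for call in extract_tool_calls(reply):
    print(call["name"], call["arguments"])
```

Each extracted dict can then be dispatched to the matching local function and the result fed back to the model as a follow-up message.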
## Quantization Models
- Updating