fakezeta committed on
Commit b90302f
1 Parent(s): fddd869

Update README.md

Add function calling configuration

Files changed (1): README.md (+69 −1)
README.md CHANGED

@@ -32,7 +32,7 @@ widget:
# OpenVINO IR model with int8 quantization of Hermes-2-Pro-Llama-3-8B

Model definition for LocalAI:
- ```
+ ```yaml
name: hermes-2-pro-llama3
backend: transformers
parameters:
@@ -43,6 +43,74 @@ template:
use_tokenizer_template: true
```
 
+ LocalAI configuration for function calling:
+ ```yaml
+ name: hermes-2-pro-llama3
+ backend: transformers
+ parameters:
+   model: fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
+ context_size: 8192
+ type: OVModelForCausalLM
+ function:
+   # disable injecting the "answer" tool
+   disable_no_action: true
+   # This allows the grammar to also return messages
+   grammar_message: true
+   # Prefix to add to the grammar
+   grammar_prefix: '<tool_call>\n'
+   return_name_in_function_response: true
+   # Without grammar, uncomment the lines below.
+   # Warning: this relies solely on the LLM model's ability
+   # to generate the correct function call.
+   # no_grammar: true
+   # json_regex_match: "(?s)<tool_call>(.*?)</tool_call>"
+   replace_results:
+     "<tool_call>": ""
+     "\'": "\""
+
+ template:
+   chat_message: |
+     <|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}}
+     {{- if .FunctionCall }}
+     <tool_call>
+     {{- else if eq .RoleName "tool" }}
+     <tool_response>
+     {{- end }}
+     {{- if .Content}}
+     {{.Content }}
+     {{- end }}
+     {{- if .FunctionCall}}
+     {{toJson .FunctionCall}}
+     {{- end }}
+     {{- if .FunctionCall }}
+     </tool_call>
+     {{- else if eq .RoleName "tool" }}
+     </tool_response>
+     {{- end }}<|im_end|>
+   # https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF#prompt-format-for-function-calling
+   function: |
+     <|im_start|>system
+     You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
+     <tools>
+     {{range .Functions}}
+     {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
+     {{end}}
+     </tools>
+     Use the following pydantic model json schema for each tool call you will make:
+     {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}
+     For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
+     <tool_call>
+     {'arguments': <args-dict>, 'name': <function-name>}
+     </tool_call><|im_end|>
+     {{.Input -}}
+     <|im_start|>assistant
+     <tool_call>
+   chat: |
+     {{.Input -}}
+     <|im_start|>assistant
+   completion: |
+     {{.Input}}
+ ```

# Hermes 2 Pro - Llama-3 8B
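The commented-out `json_regex_match` and the `replace_results` rules in the added configuration describe how a tool call can be recovered from raw model output when grammar enforcement is disabled: extract the span between the `<tool_call>` tags, then rewrite single quotes to double quotes so the fragment parses as JSON. A minimal Python sketch of that post-processing, assuming a hypothetical raw completion shaped like the prompt template's `<tool_call>` block:

```python
import json
import re

# Hypothetical raw completion in the Hermes-2-Pro tool-call format.
raw = "<tool_call>\n{'arguments': {'location': 'Rome'}, 'name': 'get_weather'}\n</tool_call>"

# Same pattern as the json_regex_match option in the configuration above.
match = re.search(r"(?s)<tool_call>(.*?)</tool_call>", raw)
payload = match.group(1).strip()

# Mirror the replace_results rule: single quotes -> double quotes,
# turning the Python-style dict literal into valid JSON.
payload = payload.replace("'", '"')

call = json.loads(payload)
print(call["name"], call["arguments"])  # get_weather {'location': 'Rome'}
```

This is only a sketch of the transformation the config options perform; in a running LocalAI instance the backend applies these rules itself before returning the structured function call.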