Commit b2682bc (verified) by andthattoo · 1 parent: 41a7605

Update README.md
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- chat
- qwen
- qwen-coder
- agent
---

# Tiny-Agent-α

## Introduction

***Tiny-Agent-α*** is an extension of Dria-Agent-a, trained on top of the [Qwen2.5-Coder](https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f) series for use on edge devices. These models are carefully fine-tuned with quantization-aware training to minimize performance degradation after quantization.

Tiny-Agent-α employs ***Pythonic function calling***: the LLM uses blocks of Python code to interact with the provided tools and output its actions. This method was inspired by much previous work, including but not limited to [DynaSaur](https://arxiv.org/pdf/2411.01747), [RLEF](https://arxiv.org/pdf/2410.02089), [ADAS](https://arxiv.org/pdf/2408.08435) and [CAMEL](https://arxiv.org/pdf/2303.17760). This way of function calling has a few advantages over traditional JSON-based function calling methods:

1. **One-shot Parallel Multiple Function Calls:** The model can utilize many synchronous processes in one chat turn to arrive at a solution, where other function calling models would require multiple turns of conversation.
2. **Free-form Reasoning and Actions:** The model provides its reasoning traces freely in natural language, and its actions in between `` ```python ... ``` `` blocks, as it already tends to do without special prompting or tuning. This mitigates the potential performance loss caused by imposing specific formats on LLM outputs, as discussed in [Let Me Speak Freely?](https://arxiv.org/pdf/2408.02442).
3. **On-the-fly Complex Solution Generation:** The solution provided by the model is essentially a Python program, excluding some "risky" builtins like `exec`, `eval` and `compile` (see the full list in **Quickstart** below). This enables the model to implement custom, complex logic with conditionals and synchronous pipelines (using the output of one function in the next function's arguments), which is not possible with current JSON-based function calling methods (as far as we know).

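The first and third points can be sketched with a single action block. The snippet below chains two hypothetical tools in one turn, feeding the output of one call into the next (`get_timezone` and `convert_time` are illustrative stand-ins, not part of the model or package):

```python
# Hypothetical tools the agent might be given (illustrative stand-ins).
def get_timezone(city: str) -> str:
    # Mock lookup table standing in for a real timezone tool
    return {"London": "Europe/London", "Tokyo": "Asia/Tokyo"}[city]

def convert_time(time_24h: str, tz_from: str, tz_to: str) -> str:
    # Mock conversion: Tokyo is 9 hours ahead of London
    hour = int(time_24h.split(":")[0])
    if tz_from == "Europe/London" and tz_to == "Asia/Tokyo":
        return f"{(hour + 9) % 24:02d}:00"
    return time_24h

# One Pythonic action block: multiple tool calls chained in a single turn,
# where JSON-based function calling would need one conversation turn per call.
tz_london = get_timezone("London")
tz_tokyo = get_timezone("Tokyo")
meeting_in_tokyo = convert_time("09:00", tz_london, tz_tokyo)
```

A JSON-based caller would have to return each tool call to the runtime, wait for the result, and be prompted again before the next call; here the whole pipeline executes in one turn.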
## Quickstart

You can use tiny-agents easily with the `dria_agent` package:

```bash
pip install dria_agent
```

This package handles models, tools, code execution, and backends (it supports mlx, ollama, and transformers).

### Usage

Decorate functions with `@tool` to expose them to the agent.

````python
from dria_agent import tool

@tool
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """
    Checks if a given time slot is available.

    :param day: The date in "YYYY-MM-DD" format.
    :param start_time: The start time of the desired slot (HH:MM format, 24-hour).
    :param end_time: The end time of the desired slot (HH:MM format, 24-hour).
    :return: True if the slot is available, otherwise False.
    """
    # Mock implementation
    if start_time == "12:00" and end_time == "13:00":
        return False
    return True
````

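Since the schema the model sees is plain Python source, a decorator like `@tool` can in principle derive it from the function's signature and docstring alone. The sketch below uses only the standard `inspect` module and is not dria_agent's actual implementation; it merely shows one way to render a function into a stub of the form used in the system prompt later in this README:

```python
import inspect

def render_schema(fn) -> str:
    # Render a function as a Python stub: signature, docstring, and a pass body.
    sig = inspect.signature(fn)
    doc = inspect.getdoc(fn) or ""
    indented = "\n".join("    " + line for line in doc.splitlines())
    return (
        f"def {fn.__name__}{sig}:\n"
        f'    """\n{indented}\n    """\n'
        f"    pass"
    )

def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """Checks if a given time slot is available."""
    return True

schema = render_schema(check_availability)
```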
Create an agent with custom tools:

```python
from dria_agent import ToolCallingAgent

agent = ToolCallingAgent(
    tools=[check_availability]
)
```

Use `agent.run(query)` to execute tasks with tools.

```python
execution = agent.run("Check my calendar for tomorrow noon", print_results=True)
```

The output is:

````
Let me help you check your availability for a 1-hour meditation session
starting at noon tomorrow.

Step-by-step reasoning:
1. We need to check availability for a specific time slot (noon)
2. The duration is 1 hour, so we'll use the same start and end times
3. Since it's tomorrow, we should format the date as "YYYY-MM-DD"
4. Use the check_availability() function with these parameters

Here's the code to check your availability:

```python
tomorrow = (datetime.now() + timedelta(days=1)).strftime("%Y-%m-%d")
start_time = "12:00"  # Noon in 24-hour format
end_time = "13:00"    # One hour after noon

availability = check_availability(tomorrow, start_time, end_time)
```

The code will:
- Calculate tomorrow's date using datetime and timedelta
- Set the time slot to noon (12:00) for a 1 hour duration
- Check if this time slot is available using the check_availability function

The availability variable will contain True if you're available, or False if
not.
````

If you are using dria_agent, system prompts and tooling work out of the box; no extra setup is needed.

You can also use Tiny-Agent-a-3B directly with transformers:

````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "driaforall/Tiny-Agent-a-3B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# System prompt for optimal performance
SYSTEM_PROMPT = """
You are an expert AI assistant that specializes in providing Python code to solve the task/problem at hand provided by the user.

You can use Python code freely, including the following available functions:

<|functions_schema|>
{{functions_schema}}
<|end_functions_schema|>

The following dangerous builtins are restricted for security:
- exec
- eval
- execfile
- compile
- importlib
- input
- exit

Think step by step and provide your reasoning, outside of function calls.
You can write Python code and use the available functions. Provide all your Python code in a SINGLE markdown code block.

DO NOT use print() statements AT ALL. Avoid mutating variables whenever possible.
""".strip()

# Example function schema (define the functions available to the model).
# Single-quoted outer string so the stub's triple-double-quoted docstring nests safely.
FUNCTIONS_SCHEMA = '''
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """
    Checks if a given time slot is available.

    :param day: The date in "YYYY-MM-DD" format.
    :param start_time: The start time of the desired slot (HH:MM format, 24-hour).
    :param end_time: The end time of the desired slot (HH:MM format, 24-hour).
    :return: True if the slot is available, otherwise False.
    """
    pass
'''

# Format system prompt
system_prompt = SYSTEM_PROMPT.replace("{{functions_schema}}", FUNCTIONS_SCHEMA)

# Example user query
USER_QUERY = "Check if I'm available for an hour long meditation at tomorrow noon."

# Format messages for the model
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": USER_QUERY},
]

# Prepare input for the model
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Extract only the newly generated tokens
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

# Decode response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````

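The response above contains free-form reasoning plus a single markdown code block. As a minimal, illustrative sketch of an executor (dria_agent ships its own; the helper below is an assumption, not its actual internals), you can extract that block and run it in a namespace with the restricted builtins removed and the tools injected:

```python
import builtins
import re

RESTRICTED = {"exec", "eval", "execfile", "compile", "importlib", "input", "exit"}

def run_agent_code(response: str, tools: dict) -> dict:
    # Extract the single ```python block from a model response.
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    if match is None:
        return {}
    # Build a builtins table with the dangerous names stripped out.
    safe_builtins = {
        name: getattr(builtins, name)
        for name in dir(builtins)
        if name not in RESTRICTED
    }
    namespace = {"__builtins__": safe_builtins, **tools}
    # The executor itself uses exec; the restriction applies to model code.
    exec(match.group(1), namespace)
    # Return the variables the model's code defined, so the caller can read results.
    return {k: v for k, v in namespace.items()
            if k not in tools and not k.startswith("__")}

def check_availability(day: str, start_time: str, end_time: str) -> bool:
    return not (start_time == "12:00" and end_time == "13:00")

response = ('Checking noon:\n```python\n'
            'free = check_availability("2025-01-02", "12:00", "13:00")\n```')
result = run_agent_code(response, {"check_availability": check_availability})
```

Here `result` maps the variable names assigned by the model's code to their values, which is enough to read off an answer like the `availability` flag in the earlier example.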
## Evaluation & Performance

We evaluate the model on the **Dria-Pythonic-Agent-Benchmark ([DPAB](https://github.com/firstbatchxyz/function-calling-eval))**, a benchmark we curated through synthetic data generation, model-based validation, filtering, and manual selection to evaluate LLMs on their Pythonic function calling ability across multiple scenarios and tasks. See the [blog post](https://huggingface.co/blog/andthattoo/dpab-a) for more information.

Below are the DPAB results.

Current benchmark results for various models **(strict)**:

| Model Name                      | Pythonic | JSON |
|---------------------------------|----------|------|
| **Closed Models**               |          |      |
| Claude 3.5 Sonnet               | 87       | 45   |
| gpt-4o-2024-11-20               | 60       | 30   |
| **Open Models**                 |          |      |
| **> 100B Parameters**           |          |      |
| DeepSeek V3 (685B)              | 63       | 33   |
| MiniMax-01                      | 62       | 40   |
| Llama-3.1-405B-Instruct         | 60       | 38   |
| **> 30B Parameters**            |          |      |
| Qwen-2.5-Coder-32b-Instruct     | 68       | 32   |
| Qwen-2.5-72b-instruct           | 65       | 39   |
| Llama-3.3-70b-Instruct          | 59       | 40   |
| QwQ-32b-Preview                 | 47       | 21   |
| **< 20B Parameters**            |          |      |
| Phi-4 (14B)                     | 55       | 35   |
| Qwen2.5-Coder-7B-Instruct       | 44       | 39   |
| Qwen-2.5-7B-Instruct            | 47       | 34   |
| **Tiny-Agent-a-3B**             | **72**   | 34   |
| Qwen2.5-Coder-3B-Instruct       | 26       | 37   |
| **Tiny-Agent-a-1.5B**           | **73**   | 30   |

#### Citation

```bibtex
@misc{Dria-Agent-a,
  url={https://huggingface.co/blog/andthattoo/dria-agent-a},
  title={Dria-Agent-a},
  author={andthattoo and Atakan Tekparmak}
}
```