---
license: llama3.1
pipeline_tag: text-generation
---

**Llama3.1-Typhoon2-70B**: Thai Large Language Model (Instruct)

**Llama3.1-Typhoon2-70B-instruct** is an instruct Thai 🇹🇭 large language model with 70 billion parameters, based on Llama3.1-70B.

| Model | IFEval - TH | IFEval - EN | MT-Bench TH | MT-Bench EN | Thai Code-Switching (t=0.7) | Thai Code-Switching (t=1.0) | FunctionCall-TH | FunctionCall-EN | GSM8K-TH | GSM8K-EN | MATH-TH | MATH-EN | HumanEval-TH | HumanEval-EN | MBPP-TH | MBPP-EN |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Typhoon2 Llama3.1 70B Instruct** | **81.45%** | 88.72% | **7.3626** | 8.8562 | **98.8%** | **94.8%** | **70.8%** | 65.7% | **88.79%** | **93.43%** | **59.60%** | 64.96% | 79.9% | 83.5% | 86.0% | 84.9% |
| **Llama3.3 70B Instruct** | 81.01% | **91.51%** | 6.7967 | 8.8343 | 72.6% | 39.2% | 50.3% | 56.3% | 61.63% | 87.71% | 44.37% | **73.58%** | 81.7% | 84.1% | 84.9% | 87.3% |
| **OpenThaiGPT 1.5 72B** | 80.37% | 84.56% | 7.3131 | **9.0893** | 95.6% | 50.4% | 67.1% | **74.6%** | 79.15% | 89.91% | 43.65% | 81.8% | **81.7%** | **84.8%** | **88.9%** | **89.7%** |


# TODO add image - general / domain specific / long context

For the release post, please see our [blog](...).
*To acknowledge Meta's effort in creating the foundation model and to comply with the license, we explicitly include "llama-3.1" in the model name.

## **Model Description**

- **Model type**: A 70B instruct decoder-only model based on the Llama architecture.
- **Requirement**: transformers 4.45.0 or newer (see the install sketch below).
- **Context length**: 90k
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

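For reference, a minimal environment setup sketch (pick a `torch` build that matches your CUDA toolkit; `accelerate` is included here because the examples below use `device_map="auto"`):

```bash
pip install "transformers>=4.45.0" torch accelerate
```
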
## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "scb10x/llama3.1-typhoon2-70b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Typhoon, an AI assistant created by SCB 10X, designed to be helpful, harmless, and honest. Typhoon assists with analysis, answering questions, math, coding, creative writing, teaching, role-play, discussions, and more. Typhoon responds directly without affirmations or filler phrases (e.g., “Certainly,” “Of course”). Responses do not start with “Certainly” in any form. Typhoon adheres to these rules in all languages and always replies in the user's language or as requested. Communicate in fluid, conversational prose, showing genuine interest, empathy, and presenting information clearly and visually."},
    {"role": "user", "content": "ขอสูตรไก่ย่าง"},  # "Please give me a grilled-chicken recipe"
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
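
For interactive use, the same generation call can stream tokens as they are produced. A minimal sketch using transformers' `TextStreamer`, reusing `model`, `tokenizer`, `input_ids`, and `terminators` from above:

```python
from transformers import TextStreamer

# Print decoded text to stdout as tokens arrive; skip_prompt avoids
# echoing the chat template back to the console
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.9,
    streamer=streamer,
)
```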

## Inference Server Hosting Example
```bash
pip install vllm
# a 70B model typically needs multiple GPUs, e.g. add --tensor-parallel-size 4
vllm serve scb10x/llama3.1-typhoon2-70b-instruct
# see more information at https://docs.vllm.ai/
```
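
Once the server is running, it exposes an OpenAI-compatible API (by default at http://localhost:8000/v1). A minimal client sketch, assuming the `openai` Python package is installed; the API key is a placeholder since vLLM does not require one by default:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="scb10x/llama3.1-typhoon2-70b-instruct",
    messages=[{"role": "user", "content": "ขอสูตรไก่ย่าง"}],  # "Please give me a grilled-chicken recipe"
    max_tokens=512,
    temperature=0.4,
)
print(response.choices[0].message.content)
```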


## Function-Call Example
```python
import ast
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "scb10x/llama3.1-typhoon2-70b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

get_weather_api = {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, New York",
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The unit of temperature to return",
            },
        },
        "required": ["location"],
    },
}


search_api = {
    "name": "search",
    "description": "Search for information on the internet",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query, e.g. 'latest news on AI'",
            }
        },
        "required": ["query"],
    },
}

get_stock = {
    "name": "get_stock_price",
    "description": "Get the stock price",
    "parameters": {
        "type": "object",
        "properties": {
            "symbol": {
                "type": "string",
                "description": "The stock symbol, e.g. AAPL, GOOG",
            }
        },
        "required": ["symbol"],
    },
}
# Tool definitions use the same format as OpenAI tools
openai_format_tools = [get_weather_api, search_api, get_stock]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "ขอราคาหุ้น Tesla (TLS) และ Amazon (AMZ) ?"},  # "What are the stock prices of Tesla (TLS) and Amazon (AMZ)?"
]

# (optional) render the prompt as plain text for inspection
final_prompt = tokenizer.apply_chat_template(
    messages, tools=openai_format_tools, add_generation_prompt=True, tokenize=False
)

inputs = tokenizer.apply_chat_template(
    messages, tools=openai_format_tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=1,
    eos_token_id=[tokenizer.eos_token_id, 128009],  # 128009 is the <|eot_id|> token
)
# Decode only the newly generated tokens (the prompt length comes from `inputs`)
response = outputs[0][inputs.shape[-1]:]

print("Output:", tokenizer.decode(response, skip_special_tokens=True))


# Decoding function utility
def resolve_ast_by_type(value):
    if isinstance(value, ast.Constant):
        if value.value is Ellipsis:
            output = "..."
        else:
            output = value.value
    elif isinstance(value, ast.UnaryOp):
        output = -value.operand.value
    elif isinstance(value, ast.List):
        output = [resolve_ast_by_type(v) for v in value.elts]
    elif isinstance(value, ast.Dict):
        output = {
            resolve_ast_by_type(k): resolve_ast_by_type(v)
            for k, v in zip(value.keys, value.values)
        }
    elif isinstance(value, ast.NameConstant):  # handle boolean values
        output = value.value
    elif isinstance(value, ast.BinOp):  # handle expressions as arguments
        output = eval(ast.unparse(value))
    elif isinstance(value, ast.Name):
        output = value.id
    elif isinstance(value, ast.Call):
        if len(value.keywords) == 0:
            output = ast.unparse(value)
        else:
            output = resolve_ast_call(value)
    elif isinstance(value, ast.Tuple):
        output = tuple(resolve_ast_by_type(v) for v in value.elts)
    elif isinstance(value, ast.Lambda):
        # keep the lambda's source text (a Lambda node has no statement list to eval)
        output = ast.unparse(value)
    elif isinstance(value, ast.Ellipsis):
        output = "..."
    elif isinstance(value, ast.Subscript):
        output = ast.unparse(value.value) + "[" + ast.unparse(value.slice) + "]"
    else:
        raise Exception(f"Unsupported AST type: {type(value)}")
    return output


def resolve_ast_call(elem):
    # Rebuild a dotted function name from nested Attribute nodes
    func_parts = []
    func_part = elem.func
    while isinstance(func_part, ast.Attribute):
        func_parts.append(func_part.attr)
        func_part = func_part.value
    if isinstance(func_part, ast.Name):
        func_parts.append(func_part.id)
    func_name = ".".join(reversed(func_parts))
    args_dict = {}
    for arg in elem.keywords:
        output = resolve_ast_by_type(arg.value)
        args_dict[arg.arg] = output
    return {func_name: args_dict}


def ast_parse(input_str, language="Python"):
    if language == "Python":
        cleaned_input = input_str.strip("[]'")
        parsed = ast.parse(cleaned_input, mode="eval")
        extracted = []
        if isinstance(parsed.body, ast.Call):
            extracted.append(resolve_ast_call(parsed.body))
        else:
            for elem in parsed.body.elts:
                assert isinstance(elem, ast.Call)
                extracted.append(resolve_ast_call(elem))
        return extracted
    else:
        raise NotImplementedError(f"Unsupported language: {language}")


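# For intuition, a hypothetical parse (the call string below is illustrative,
# not a live model response):
# ast_parse('[get_weather(location="Bangkok", unit="celsius")]')
#   -> [{'get_weather': {'location': 'Bangkok', 'unit': 'celsius'}}]

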
def parse_nested_value(value):
    """
    Parse a potentially nested value from the AST output.

    Args:
        value: The value to parse, which could be a nested dictionary, which includes another function call, or a simple value.

    Returns:
        str: A string representation of the value, handling nested function calls and nested dictionary function arguments.
    """
    if isinstance(value, dict):
        # Check if the dictionary represents a function call (i.e., the value is another dictionary or complex structure)
        if all(isinstance(v, dict) for v in value.values()):
            func_name = list(value.keys())[0]
            args = value[func_name]
            args_str = ", ".join(
                f"{k}={parse_nested_value(v)}" for k, v in args.items()
            )
            return f"{func_name}({args_str})"
        else:
            # If it's a simple dictionary, treat it as key-value pairs
            return (
                "{"
                + ", ".join(f"'{k}': {parse_nested_value(v)}" for k, v in value.items())
                + "}"
            )
    return repr(value)


def decoded_output_to_execution_list(decoded_output):
    """
    Convert decoded output to a list of executable function calls.

    Args:
        decoded_output (list): A list of dictionaries representing function calls.

    Returns:
        list: A list of strings, each representing an executable function call.
    """
    execution_list = []
    for function_call in decoded_output:
        for key, value in function_call.items():
            args_str = ", ".join(
                f"{k}={parse_nested_value(v)}" for k, v in value.items()
            )
            execution_list.append(f"{key}({args_str})")
    return execution_list


def default_decode_ast_prompting(result, language="Python"):
    # Normalize the raw model text into a bracketed list of calls before parsing
    result = result.strip("`\n ")
    if not result.startswith("["):
        result = "[" + result
    if not result.endswith("]"):
        result = result + "]"
    decoded_output = ast_parse(result, language)
    return decoded_output


fc_result = default_decode_ast_prompting(tokenizer.decode(response, skip_special_tokens=True))
print(fc_result)  # [{'Function': {'arguments': '{"symbol": "TLS"}', 'name': 'get_stock_price'}}, {'Function': {'arguments': '{"symbol": "AMZ"}', 'name': 'get_stock_price'}}]
```
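
The `decoded_output_to_execution_list` helper defined above is not exercised in the walkthrough. A minimal sketch of what it produces, using a hypothetical decoded output in the `{func_name: {arg: value}}` shape that `ast_parse` returns, rather than a live model response:

```python
decoded = [{"get_stock_price": {"symbol": "TLS"}}, {"get_stock_price": {"symbol": "AMZ"}}]
print(decoded_output_to_execution_list(decoded))
# ["get_stock_price(symbol='TLS')", "get_stock_price(symbol='AMZ')"]
```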

## **Intended Uses & Limitations**

This model is an instruct model. However, it is still under development. It incorporates some level of guardrails, but it may still produce answers that are inaccurate, biased, or otherwise objectionable in response to user prompts. We recommend that developers assess these risks in the context of their use case.

## **Follow us**

**https://twitter.com/opentyphoon**

## **Support**

**https://discord.gg/CqyBscMFpg**

## **Citation**

- If you find Typhoon2 useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
  title={Typhoon: Thai Large Language Models},
  author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
  year={2023},
  journal={arXiv preprint arXiv:2312.13951},
  url={https://arxiv.org/abs/2312.13951}
}
```