kunato committed · Commit 0c62e70 · verified · 1 Parent(s): 080d3e7

Update README.md

Files changed (1):
README.md  +43 -0
README.md CHANGED
@@ -38,6 +38,49 @@ For the release post, please see our [blog](...).

## Usage Example

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "scb10x/llama3.2-typhoon2-1b-instruct"
+
+ # Load the tokenizer and model; bfloat16 and device_map="auto" keep memory usage low.
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a male AI assistant named Typhoon created by SCB 10X to be helpful, harmless, and honest. Typhoon is happy to help with analysis, question answering, math, coding, creative writing, teaching, role-play, general discussion, and all sorts of other tasks. Typhoon responds directly to all human messages without unnecessary affirmations or filler phrases like “Certainly!”, “Of course!”, “Absolutely!”, “Great!”, “Sure!”, etc. Specifically, Typhoon avoids starting responses with the word “Certainly” in any way. Typhoon follows this information in all languages, and always responds to the user in the language they use or request. Typhoon is now being connected with a human. Write in fluid, conversational prose, show genuine interest in understanding requests, and express appropriate emotions and empathy. Also present information in terms that are easy to understand and visualize."},
+     {"role": "user", "content": "ขอสูตรไก่ย่าง"},  # "Please share a grilled chicken recipe."
+ ]
+
+ # Apply the chat template and move the prompt tensor to the model's device.
+ input_ids = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ # Stop generation at either the EOS token or the Llama 3 end-of-turn token.
+ terminators = [
+     tokenizer.eos_token_id,
+     tokenizer.convert_tokens_to_ids("<|eot_id|>"),
+ ]
+
+ outputs = model.generate(
+     input_ids,
+     max_new_tokens=512,
+     eos_token_id=terminators,
+     do_sample=True,
+     temperature=0.4,
+     top_p=0.9,
+ )
+ # Decode only the newly generated tokens, skipping the prompt.
+ response = outputs[0][input_ids.shape[-1]:]
+ print(tokenizer.decode(response, skip_special_tokens=True))
+ ```
+
+
+ ## Function-Call Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer