JW17 committed (verified)

Commit d2f8801 · Parent(s): 64e37fe

Update README.md

Files changed (1): README.md (+29, −5)

README.md CHANGED
@@ -53,10 +53,34 @@ model-index:
 |Llama-2-Chat |7B|RLHF|6.27|71.37|4.96|
 |Llama-2-Chat |13B|RLHF|6.65|81.09|7.70|
 
 
-## Chat Template
-```
-<|user|>
-Hi! How are you doing?</s>
-<|assistant|>
+## MT-Bench
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6415c043486c7c9a5d151583/zZvNuUFVLp6jeSvYIbdDN.png)
+
+## Inference
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("kaist-ai/mistral-orpo-alpha")
+tokenizer = AutoTokenizer.from_pretrained("kaist-ai/mistral-orpo-alpha")
+
+# Apply chat template
+query = [{'role': 'user', 'content': 'Hi! How are you doing?'}]
+prompt = tokenizer.apply_chat_template(query, tokenize=False, add_generation_prompt=True)
+inputs = tokenizer(prompt, return_tensors='pt')
+
+# Generation with specific configurations
+output = model.generate(
+    **inputs,
+    max_new_tokens=128,
+    do_sample=True,
+    temperature=0.7
+)
+response = tokenizer.batch_decode(output)
+
+#<|user|>
+#Hi! How are you doing?</s>
+#<|assistant|>
+#I'm doing well, thank you! How are you?</s>
 ```
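
The removed "Chat Template" section and the commented sample output show the prompt layout this model expects. As a minimal dependency-free sketch of what `tokenizer.apply_chat_template` produces under that assumed template (the `build_prompt` helper is hypothetical; the tokenizer's own method is authoritative):

```python
# Hypothetical re-implementation of the chat template shown in the diff,
# for illustration only. Each turn is rendered as "<|role|>\n{content}</s>\n",
# and add_generation_prompt appends an open assistant turn.
def build_prompt(messages, add_generation_prompt=True):
    prompt = ""
    for message in messages:
        prompt += f"<|{message['role']}|>\n{message['content']}</s>\n"
    if add_generation_prompt:
        prompt += "<|assistant|>\n"
    return prompt

query = [{'role': 'user', 'content': 'Hi! How are you doing?'}]
print(build_prompt(query))
# <|user|>
# Hi! How are you doing?</s>
# <|assistant|>
```

This mirrors the `#<|user|> ... #<|assistant|>` transcript in the README's inference example; the real template string lives in the model's tokenizer config.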