Commit 81c3370
Siming Huang committed
1 Parent(s): c46e4b7

Update README.md

Files changed (1):
  README.md  +1 -1
README.md CHANGED
@@ -81,7 +81,7 @@ messages=[
 
 inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
 
-outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1)
+outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
 
 result = tokenizer.decode(outputs[0], skip_special_tokens=True)
 print(result)
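
For context, the edited lines sit inside the README's quick-start generation example. Below is a minimal sketch of how that snippet is typically wired up with the transformers library; the model id and the user prompt are placeholders, not taken from this repository. The dropped num_return_sequences=1 argument is already the default for generate(), so the commit does not change the greedy-decoding output.

# Minimal sketch of the surrounding README example (assumed context;
# the model id and prompt below are placeholders, not from this repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "org/model-name"  # placeholder repo id
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [{"role": "user", "content": "Write a quick sort algorithm in Python."}]

# Build the prompt with the model's chat template and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# num_return_sequences defaults to 1, so dropping it (as in this commit)
# leaves the behavior unchanged.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)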