mfromm committed · commit 6c94e76 · verified · 1 parent: 8f4c21d

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -119,7 +119,7 @@ prediction = model.generate(
     temperature=0.7,
     num_return_sequences=1,
 )
-prediction_text = tokenizer.decode(prediction[0])
+prediction_text = tokenizer.decode(prediction[0].tolist())
 print(prediction_text)
 ```
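The fix converts the generated tensor row to a plain Python list before decoding. A minimal sketch of the idea, using a hypothetical stub tokenizer and a fake tensor row (not the real `transformers` or `torch` classes): decode logic that iterates over token ids and looks them up in a vocabulary works uniformly on `list[int]`, so `.tolist()` makes the call robust regardless of what array type `model.generate()` returns.

```python
# Hypothetical stub illustrating the commit's change; the real
# transformers tokenizer and torch tensor are not used here.

class StubTokenizer:
    """Toy stand-in for a tokenizer's decode(): maps int ids to tokens."""
    def __init__(self, vocab):
        self.id_to_token = {i: tok for i, tok in enumerate(vocab)}

    def decode(self, token_ids):
        # Expects an iterable of plain int ids.
        return " ".join(self.id_to_token[int(i)] for i in token_ids)


class FakeTensorRow(list):
    """Stand-in for one row of a generation output tensor."""
    def tolist(self):
        # Mirrors torch.Tensor.tolist(): tensor -> list of Python ints.
        return [int(x) for x in self]


tokenizer = StubTokenizer(["hello", "world", "!"])
prediction_row = FakeTensorRow([0, 1, 2])  # like prediction[0]

# As in the patched README: convert to list[int] before decoding.
prediction_text = tokenizer.decode(prediction_row.tolist())
print(prediction_text)  # hello world !
```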