yoonniverse committed
Commit
f32dcdf
1 Parent(s): ec2ffbe

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -56,7 +56,7 @@ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
 del inputs['token_type_ids']
 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
 output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
-output_text = tokenizer.decode(output[0], skip_prompt=True, skip_special_tokens=True)
+output_text = tokenizer.decode(output[0], skip_special_tokens=True)
 ```
 
 **Our model can handle >10k input tokens thanks to the `rope_scaling` option.**
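
For context, the change drops `skip_prompt=True` from `tokenizer.decode()`: `skip_prompt` is a `TextStreamer` option, not a `decode()` keyword, so it does not belong on that call. Below is a minimal sketch of the surrounding README snippet after this commit. The model id and prompt are placeholders (the actual repository name is not part of this hunk), and the `rope_scaling` behaviour is assumed to come from the checkpoint's `config.json`, which `from_pretrained` loads automatically.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Placeholder repo id -- substitute the model repository this README belongs to.
model_id = "your-org/your-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# If the checkpoint's config.json carries a rope_scaling entry (the source of the
# >10k-token context claim), it is picked up here without extra arguments.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Question: ...\n### Answer:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
del inputs["token_type_ids"]  # the model's forward() does not accept token_type_ids

# skip_prompt / skip_special_tokens here control what the streamer prints.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float("inf"))

# After this commit: decode() keeps skip_special_tokens only;
# skip_prompt is a TextStreamer argument that decode() does not use.
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```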