yoonniverse committed on
Commit
a60a273
1 Parent(s): 76a162e

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -59,7 +59,7 @@ output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tok
 output_text = tokenizer.decode(output[0], skip_prompt=True, skip_special_tokens=True)
 ```
 
-Our model can handle >10k tokens thanks to the rope_scaling option.
+Our model can handle >10k input tokens thanks to the rope_scaling option.
 
 ## Hardware and Software
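For context, rope_scaling is a model-config option in 🤗 Transformers that stretches the usable context window of RoPE-based models. The sketch below shows how such a setting is typically applied at load time; the model id, scaling type, and factor are illustrative assumptions, not values taken from this repository's config.

```python
# Minimal sketch: loading a RoPE-based causal LM with rope_scaling enabled.
# The "dynamic" type and factor of 4.0 are illustrative assumptions, not
# the values used by this repository.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # hypothetical placeholder

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "dynamic", "factor": 4.0}  # extends the effective context length

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
```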