Update README.md

README.md CHANGED

````diff
@@ -38,7 +38,8 @@ pipeline_tag: text-generation
 {Assistant}
 ```
 
-
+## Usage
+
 - Tested on A100 80GB
 - Our model can handle up to 10k input tokens, thanks to the `rope_scaling` option
 
@@ -64,7 +65,6 @@ output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tok
 output_text = tokenizer.decode(output[0], skip_special_tokens=True)
 ```
 
-
 ## Hardware and Software
 
 * **Hardware**: We utilized an A100x8 * 4 for training our model
````
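The bullet above credits the 10k-token input length to the `rope_scaling` option. As an illustration only (the scaling factor, dimension, and helper function below are hypothetical, not taken from this model's config), linear RoPE scaling divides position indices by a factor so that positions beyond the trained context map back into the angle range the model saw during training:

```python
import math

# Illustrative sketch of linear RoPE scaling (not this model's actual code).
# Each rotary-embedding angle is position / base^(2i/dim); linear scaling
# first divides the position index by a fixed factor.
def rope_angles(position, dim=8, base=10000.0, scaling_factor=1.0):
    """Rotary angles for one position, with optional linear scaling."""
    pos = position / scaling_factor
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With a (hypothetical) factor of 2.5, position 10000 in the extended
# context produces the same angles as position 4000 did during training.
scaled = rope_angles(10000, scaling_factor=2.5)
unscaled = rope_angles(4000, scaling_factor=1.0)
```

The trade-off of this scheme is a coarser effective position resolution, which is why models using it are typically fine-tuned briefly at the longer context.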