optimum sample code
README.md CHANGED
@@ -52,7 +52,7 @@ This model is light and performs very fast. No matter on GPU or CPU, it always s
 
 Click the following link to open the model's demo:<br/> https://huggingface.co/spaces/Rahmat82/RHM-text-summarizer-light
 
-###
+### Use model:
 ```python
 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
 
@@ -76,7 +76,7 @@ within a decade. That is a shockingly fast decline." He added that koalas risk
 pipe = pipeline("summarization",model = model, tokenizer=tokenizer)
 print(pipe(text_to_summarize)[0]["summary_text"])
 ```
-###
+### Use model with optimum/onnxruntime - super fast:
 ```python
 #!pip install -q transformers accelerate optimum onnxruntime onnx
 
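For context, the middle of the first snippet (README lines 58-75, with the model loading and the koala example text) is elided between the two hunks. A minimal self-contained sketch of what the "Use model:" section amounts to; the repo id `Rahmat82/RHM-text-summarizer-light` is an assumption inferred from the demo Space URL, and the input text is a placeholder:

```python
# Hypothetical end-to-end sketch of the "Use model:" snippet; the model id and
# the sample text are assumptions, not taken from the diff.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_id = "Rahmat82/RHM-text-summarizer-light"  # assumed repo id (matches the demo Space)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text_to_summarize = "..."  # placeholder; the README's koala example text is elided in the diff

pipe = pipeline("summarization", model=model, tokenizer=tokenizer)
print(pipe(text_to_summarize)[0]["summary_text"])
```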
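The second hunk is truncated right after the pip install line, so the optimum/onnxruntime code this commit adds is not visible here. A minimal sketch of what that section could look like, using optimum's `ORTModelForSeq2SeqLM` to export the checkpoint to ONNX and run it through the same pipeline API; the model id is again an assumption:

```python
#!pip install -q transformers accelerate optimum onnxruntime onnx
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer, pipeline

model_id = "Rahmat82/RHM-text-summarizer-light"  # assumed repo id

# export=True converts the checkpoint to ONNX on the fly so inference
# runs through onnxruntime instead of plain PyTorch
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline("summarization", model=model, tokenizer=tokenizer)
print(pipe("...")[0]["summary_text"])  # placeholder input, as above
```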