Rahmat82 committed on
Commit 6c07abc · verified · 1 parent: ac99e65

optimum sample code

Files changed (1):
  1. README.md +2 -2
  1. README.md +2 -2
README.md CHANGED
@@ -52,7 +52,7 @@ This model is light and performs very fast. No matter on GPU or CPU, it always s
 
 Click the following link to open the model's demo:<br/> https://huggingface.co/spaces/Rahmat82/RHM-text-summarizer-light
 
-### Code without using optimum:
+### Use model:
 ```python
 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
 
@@ -76,7 +76,7 @@ within a decade. That is a shockingly fast decline." He added that koalas risk
 pipe = pipeline("summarization",model = model, tokenizer=tokenizer)
 print(pipe(text_to_summarize)[0]["summary_text"])
 ```
-### Code for using optimum/onnxruntime - super fast:
+### Use model with optimum/onnxruntime - super fast:
 ```python
 #!pip install -q transformers accelerate optimum onnxruntime onnx