---
language:
- en
tags:
- fluency
license: apache-2.0
---
|

This model is an ONNX-optimized version of the original [parrot_fluency_model](https://huggingface.co/prithivida/parrot_fluency_model). It is tailored for GPU execution and may perform differently when run on CPU.

## How to use
|
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/parrot_fluency_model_onnx')

# prepare the pipeline and generate inferences
# device=0 selects the first GPU; use device=-1 to fall back to CPU
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")

# user_inputs is a list of texts to score
res = pipe(user_inputs, batch_size=64, truncation="only_first")
```
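
For example, `user_inputs` can be a plain list of sentences. The sentences and the printed output shape below are illustrative assumptions; the exact label names come from the model's `id2label` config:

```python
# hypothetical example inputs; any list of strings works
user_inputs = [
    "He are going to the market yesterday.",
    "The quick brown fox jumps over the lazy dog.",
]

res = pipe(user_inputs, batch_size=64, truncation="only_first")

# res holds one dict per input, roughly of the form
# [{'label': '<fluency label>', 'score': 0.98}, ...]
print(res)
```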