---
language:
- en
tags:
- fluency
license: apache-2.0
---

This model is an ONNX-optimized version of the original [parrot_fluency_model](https://huggingface.co/prithivida/parrot_fluency_model).

It has been specifically tailored for GPUs and may perform differently when run on CPUs.

## Dependencies

Please install the following dependency before you begin working with the model:

```sh
pip install optimum[onnxruntime-gpu]
```

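Since the model targets GPU execution, it can be worth confirming that the GPU-enabled ONNX Runtime build is actually visible in your environment before loading the model. The snippet below is illustrative; it simply lists the execution providers exposed by the installed `onnxruntime` package:

```python
import onnxruntime

# 'CUDAExecutionProvider' should appear here if the GPU build installed correctly
print(onnxruntime.get_available_providers())
```
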
## How to use

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/parrot_fluency_model_onnx')

# prepare the pipeline and generate inferences
user_inputs = ['Natural language processing is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence.',
               'Pass on what you have learned. Strength, mastery, hmm… but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.',
               'Whispering dreams, forgotten desires, chaotic thoughts, dance with words, meaning elusive, swirling amidst.']

# device=0 runs the pipeline on the first GPU; use device=-1 to fall back to CPU
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")
res = pipe(user_inputs, batch_size=64, truncation="only_first")
```
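
The pipeline returns one prediction per input in the usual `text-classification` format: a dict with a `label` and a `score`. A small sketch of how the results could be inspected (variable names follow the example above):

```python
# print the predicted label and confidence for each input text
for text, prediction in zip(user_inputs, res):
    print(f"{prediction['label']} ({prediction['score']:.3f}) <- {text[:60]}")
```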