---
language:
- en
tags:
- fluency
license: apache-2.0
---

This model is an ONNX-optimized version of the original [parrot_fluency_model](https://huggingface.co/prithivida/parrot_fluency_model).
It is tailored for GPU execution and may show different performance characteristics when run on CPUs.

## Dependencies

Please install the following dependency before you begin working with the model:
```sh
pip install optimum[onnxruntime-gpu]
```

## How to use
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/parrot_fluency_model_onnx')

# prepare the pipeline and generate inferences
user_inputs = ['Natural language processing is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence.',
               'Pass on what you have learned. Strength, mastery, hmm… but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.',
               'Whispering dreams, forgotten desires, chaotic thoughts, dance with words, meaning elusive, swirling amidst.']
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")  # device=0 selects the first GPU
res = pipe(user_inputs, batch_size=64, truncation="only_first")
```
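Each pipeline call returns one dict per input with a `label` and a `score`, the standard `transformers` text-classification output. A minimal post-processing sketch, assuming the model emits `LABEL_0`/`LABEL_1` labels (the dicts below are illustrative, not real model scores):

```python
# Hypothetical pipeline output for three inputs (illustrative scores only)
res = [
    {"label": "LABEL_1", "score": 0.97},
    {"label": "LABEL_1", "score": 0.88},
    {"label": "LABEL_0", "score": 0.73},
]

# keep only inputs scored as fluent with reasonable confidence
fluent = [r for r in res if r["label"] == "LABEL_1" and r["score"] >= 0.8]
print(len(fluent))  # → 2
```

The confidence threshold (`0.8` here) is an application-level choice, not part of the model.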