Added architecture reminder in code sample
README.md
CHANGED
@@ -20,7 +20,6 @@ The Tiny-Toxic-Detector achieves an impressive 90.26% on the Toxigen benchmark a
 | --------------------------------- | ----------------- | ----------- | ---------- |
 | lmsys/toxicchat-t5-large-v1.0     | 738M              | 72.67       | 88.82      |
 | s-nlp/roberta toxicity classifier | 124M              | 88.41       | **94.92**  |
-| unitary/toxic-bert                | 109M              | 49.50       | 89.70      |
 | mohsenfayyaz/toxicity-classifier  | 109M              | 81.50       | 83.31      |
 | martin-ha/toxic-comment-model     | 67M               | 68.02       | 91.56      |
 | **Tiny-toxic-detector**           | **2M**            | **90.26**   | 87.34      |
@@ -194,6 +193,7 @@ if __name__ == "__main__":
 
 Please note that to predict toxicity you can use the following example:
 ```python
+# Define architecture before this!
 inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128, padding="max_length").to(device)
 if "token_type_ids" in inputs:
     del inputs["token_type_ids"]
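For context on what "define architecture before this" means: the README snippet assumes that `tokenizer`, `model`, and `device` already exist. Below is a minimal sketch of that setup; `TinyTransformer`, its hyperparameters, the `bert-base-uncased` tokenizer, and the checkpoint filename are all illustrative placeholders, not the repository's actual definitions.

```python
# Hypothetical setup sketch -- the real architecture, tokenizer, and weights
# come from the Tiny-Toxic-Detector repository; every name below is a
# placeholder used only to make the README snippet runnable end to end.
import torch
import torch.nn as nn
from transformers import AutoTokenizer

class TinyTransformer(nn.Module):
    """Stand-in for the ~2M-parameter classifier; not the published architecture."""
    def __init__(self, vocab_size=30522, dim=64, n_heads=2, n_layers=4, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.pos = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, 1)  # single toxicity logit

    def forward(self, input_ids, attention_mask=None):
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        x = self.embed(input_ids) + self.pos(positions)
        pad_mask = (attention_mask == 0) if attention_mask is not None else None
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.head(x.mean(dim=1)).squeeze(-1)

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder tokenizer
model = TinyTransformer().to(device).eval()
# model.load_state_dict(torch.load("tiny_toxic_detector.pt", map_location=device))  # hypothetical checkpoint

# With the architecture defined, the README snippet works verbatim:
text = "This is a test sentence."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128, padding="max_length").to(device)
if "token_type_ids" in inputs:
    del inputs["token_type_ids"]
with torch.no_grad():
    logit = model(**inputs)
print("Toxic" if torch.sigmoid(logit).item() > 0.5 else "Not toxic")
```

The `token_type_ids` guard in the README matters in this sketch because BERT-style tokenizers emit that field, while a small custom encoder like the one above does not accept it.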