boryana committed
Commit 3e67532 · verified · 1 Parent(s): 8d07ed6

Update README.md

Files changed (1): README.md (+26 -4)

README.md CHANGED
@@ -68,14 +68,36 @@ pip install transformers torch accelerate
 
 Then the model can be downloaded and used for inference:
 ```py
+import torch
 from transformers import AutoModelForSequenceClassification, AutoTokenizer
 
-model = AutoModelForSequenceClassification.from_pretrained("identrics/wasper_propaganda_classifier_en", num_labels=5)
+labels = [
+    "Legitimisation Techniques",
+    "Rhetorical Devices",
+    "Logical Fallacies",
+    "Self-Identification Techniques",
+    "Defamation Techniques",
+]
+
+model = AutoModelForSequenceClassification.from_pretrained(
+    "identrics/wasper_propaganda_classifier_en", num_labels=5
+)
 tokenizer = AutoTokenizer.from_pretrained("identrics/wasper_propaganda_classifier_en")
 
-tokens = tokenizer("Our country is the most powerful country in the world!", return_tensors="pt")
-output = model(**tokens)
-print(output.logits)
+text = "Gas is cheap, American nuclear fuel is cheap, photovoltaics everywhere, and yet electricity is up 30%. Why?"
+
+inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
+
+with torch.no_grad():
+    outputs = model(**inputs)
+    logits = outputs.logits
+
+probabilities = torch.sigmoid(logits).cpu().numpy().flatten()
+
+# Format predictions
+
+predictions = {labels[i]: probabilities[i] for i in range(len(labels))}
+print(predictions)
 ```
 
 
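
As a follow-up usage example (not part of the committed README), the `predictions` dictionary produced by the updated snippet can be reduced to the single highest-scoring technique. A minimal sketch, assuming the `predictions` name from the snippet above:

```py
# Pick the label with the highest sigmoid score.
# Assumes `predictions` ({label: probability}) from the README snippet above.
top_label = max(predictions, key=predictions.get)
print(f"Most likely technique: {top_label} ({predictions[top_label]:.3f})")
```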