---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
license: apache-2.0
language:
- en
tags:
- propaganda
---

# Model Card for identrics/wasper_propaganda_classifier_en




## Model Details

- **Developed by:** [`Identrics`](https://identrics.ai/)
- **Language:** English
- **License:** apache-2.0
- **Finetuned from model:** [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Context window:** 8192 tokens

## Model Description

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 for propaganda detection. It is a multilabel classifier that determines whether a given English text contains any of five predefined propaganda types.


This model was created by [`Identrics`](https://identrics.ai/) as part of the WASPer project. The detailed taxonomy of the full pipeline can be found [here](https://github.com/Identrics/wasper/).


## Propaganda taxonomy

The propaganda techniques identifiable with this model are classified into five categories:

1. **Self-Identification Techniques**:
These techniques exploit the audience's feelings of association (or desire to be associated) with a larger group. They suggest that the audience should feel united, motivated, or threatened by the same factors that unite, motivate, or threaten that group.


2. **Defamation Techniques**:
These techniques represent direct or indirect attacks against an entity's reputation and worth.

3. **Legitimisation Techniques**:
These techniques attempt to prove and legitimise the propagandist's statements by using arguments that cannot be falsified because they are based on moral values or personal experiences.

4. **Logical Fallacies**:
These techniques appeal to the audience's reason and masquerade as objective and factual arguments, but in reality, they exploit distractions and flawed logic.

5. **Rhetorical Devices**:
These techniques seek to influence the audience and control the conversation by using linguistic methods.




## Uses

Use this model as a multilabel classifier to identify whether an English text sample contains one or more of the five propaganda techniques described above.

### Example




First install direct dependencies:
```
pip install transformers torch accelerate
```

Then the model can be downloaded and used for inference:
```py
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Label names, in the order of the model's output dimensions
labels = [
    "Legitimisation Techniques",
    "Rhetorical Devices",
    "Logical Fallacies",
    "Self-Identification Techniques",
    "Defamation Techniques",
]

model = AutoModelForSequenceClassification.from_pretrained(
    "identrics/wasper_propaganda_classifier_en", num_labels=5
)
tokenizer = AutoTokenizer.from_pretrained("identrics/wasper_propaganda_classifier_en")

text = "Our country is the most powerful country in the world!"

inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Sigmoid (not softmax): each label gets an independent probability in a multilabel setup
probabilities = torch.sigmoid(logits).cpu().numpy().flatten()

# Format predictions as {label: probability}
predictions = {labels[i]: float(probabilities[i]) for i in range(len(labels))}
print(predictions)
```
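
To turn the per-label probabilities into yes/no decisions, apply a threshold to each label independently. The 0.5 cut-off below is an illustrative choice, not a value tuned for this model:

```py
# Continues the example above; 0.5 is an arbitrary illustrative threshold.
threshold = 0.5
detected = [label for label, prob in predictions.items() if prob >= threshold]
print("Detected techniques:", detected if detected else "none")
```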














## Training Details


During training, the objective was to develop a multi-label classifier that identifies different types of propaganda, using a dataset containing both real and artificially generated samples.

The data was carefully annotated by domain experts based on a predefined taxonomy covering five primary categories. Some examples are assigned to a single category, while others fall into multiple categories, reflecting the nuanced nature of propaganda, where several techniques can appear within a single text.
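
For illustration, a multi-label annotation of this kind is commonly encoded as a multi-hot vector with one position per taxonomy category. The sketch below assumes the label order used in the inference example above; the actual training data format is not published here:

```py
# Illustrative sketch only: the project's real training data format is not published here.
labels = [
    "Legitimisation Techniques",
    "Rhetorical Devices",
    "Logical Fallacies",
    "Self-Identification Techniques",
    "Defamation Techniques",
]

def to_multi_hot(assigned_categories):
    """Encode an annotated example as a 0/1 vector with one slot per category."""
    return [1.0 if label in assigned_categories else 0.0 for label in labels]

# An example annotated with two techniques at once
print(to_multi_hot({"Rhetorical Devices", "Defamation Techniques"}))
# -> [0.0, 1.0, 0.0, 0.0, 1.0]
```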


During training, the model reached a weighted F1 score of **0.464**.
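
For reference, a weighted F1 score over multi-hot labels can be computed with scikit-learn as sketched below; the arrays here are made-up placeholders, not the project's evaluation data:

```py
# Metric sketch only; y_true / y_pred are made-up placeholders.
from sklearn.metrics import f1_score

y_true = [[0, 1, 0, 0, 1], [1, 0, 0, 0, 0]]  # gold multi-hot annotations
y_pred = [[0, 1, 0, 0, 0], [1, 0, 0, 0, 0]]  # thresholded model predictions

print(f1_score(y_true, y_pred, average="weighted"))
```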


## Compute Infrastructure

This model was fine-tuned on **2× NVIDIA Tesla V100 32 GB GPUs**.

## Citation [this section is to be updated soon] 

If you find our work useful, please consider citing WASPer:

```
@article{...2024wasper,
  title={WASPer: Propaganda Detection in Bulgarian and English}, 
  author={....},
  journal={arXiv preprint arXiv:...},
  year={2024}
}
```