# PaReS-sentimenTw-political-PL

This model is a fine-tuned version of [dkleczek/bert-base-polish-cased-v1](https://huggingface.co/dkleczek/bert-base-polish-cased-v1) that predicts three-class sentiment.
It was fine-tuned on a sample of 1,000 manually annotated Polish tweets.

Mapping (`id2label`):

```python
mapping = {
    0: 'negative',
    1: 'neutral',
    2: 'positive',
}
```
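As a quick illustration, the mapping above turns a class index (e.g. the argmax of the model's logits) into a human-readable label. The logit values below are made up for the example; in practice they come from the fine-tuned model:

```python
# The id2label mapping from this model card.
mapping = {0: 'negative', 1: 'neutral', 2: 'positive'}

# Hypothetical raw logits for one tweet, one score per class id 0..2.
logits = [-1.2, 0.3, 2.1]

# Predicted class id is the index of the largest logit.
pred_id = max(range(len(logits)), key=lambda i: logits[i])
print(mapping[pred_id])  # -> 'positive'
```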

## Intended uses & limitations

Sentiment detection in Polish text (fine-tuned on tweets from the political domain).
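A minimal usage sketch with the 🤗 Transformers `pipeline` API. The model id below is an assumption (this card does not state the Hub namespace); substitute the actual repository path:

```python
from transformers import pipeline

# NOTE: hypothetical repository path -- replace with the real model id on the Hub.
classifier = pipeline(
    "text-classification",
    model="PaReS-sentimenTw-political-PL",
)

# Example Polish tweet (translation: "This is great news!").
print(classifier("To jest świetna wiadomość!"))
```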

## Training and evaluation data

Trained for 3 epochs with a mini-batch size of 8.
Final training loss: 0.1359

## Evaluation procedure

Results on the held-out test set (10% of the data):

- Num examples = 100
- Batch size = 8
- Accuracy = 0.950
- F1-macro = 0.944

Classification report:

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 (negative) | 0.960 | 0.980 | 0.970 | 49 |
| 1 (neutral) | 0.958 | 0.885 | 0.920 | 26 |
| 2 (positive) | 0.923 | 0.960 | 0.941 | 25 |
| accuracy | | | 0.950 | 100 |
| macro avg | 0.947 | 0.941 | 0.944 | 100 |
| weighted avg | 0.950 | 0.950 | 0.950 | 100 |
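As a sanity check, the macro-averaged F1 above is the unweighted mean of the three per-class f1-scores from the report:

```python
# Per-class f1-scores for classes 0, 1, 2 (from the report above).
per_class_f1 = [0.970, 0.920, 0.941]

# Macro average: plain mean, ignoring class support.
macro_f1 = sum(per_class_f1) / len(per_class_f1)
print(round(macro_f1, 3))  # -> 0.944
```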