eevvgg committed on
Commit 149c9e6 · 1 Parent(s): 8d7c991

update readme 1

Files changed (1): README.md (+32 -7)
README.md CHANGED
@@ -1,3 +1,30 @@
+---
+language:
+- pl
+
+tags:
+- text
+- sentiment
+- political
+
+metrics:
+- accuracy
+- f1
+
+model-index:
+- name: PaReS-sentimenTw-political-PL
+  results:
+  - task:
+      type: sentiment-classification # Required. Example: automatic-speech-recognition
+      name: Text Classification # Optional. Example: Speech Recognition
+    dataset:
+      type: tweets # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
+      name: tweets_2020_electionsPL # Required. A pretty name for the dataset. Example: Common Voice (French)
+    metrics:
+    - type: f1 # Required. Example: wer. Use metric id from https://hf.co/metrics
+      value: 94.4 # Required. Example: 20.90
+
+---
 
 # PaReS-sentimenTw-political-PL
 
@@ -5,6 +32,7 @@ This model is a fine-tuned version of [dkleczek/bert-base-polish-cased-v1](https
 Fine-tuned on 1k sample of manually annotated Twitter data.
 
 
+
 Mapping (id2label):
 mapping = {
 0:'negative',
@@ -30,10 +58,10 @@ Training results: loss: 0.1358926964368792
 
 It achieves the following results on the test set (10%):
 
-Num examples = 100
-Batch size = 8
-Accuracy = 0.950
-F1-macro = 0.944
+Num examples = 100 \n
+Batch size = 8 \n
+Accuracy = 0.950 \n
+F1-macro = 0.944 \n
 
 precision recall f1-score support
 
@@ -41,7 +69,4 @@ It achieves the following results on the test set (10%):
 1 0.958 0.885 0.920 26
 2 0.923 0.960 0.941 25
 
-accuracy 0.950 100
-macro avg 0.947 0.941 0.944 100
-weighted avg 0.950 0.950 0.950 100
 
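The main change in this commit is the new YAML front matter, which the Hugging Face Hub reads as card metadata (language, tags, and a model-index entry carrying the F1 result). A quick way to sanity-check that the block parses is a minimal sketch like the one below; reading from a local README.md and using PyYAML are assumptions for illustration, not part of the commit.

```python
# Minimal sketch: verify the front matter added in this commit is valid YAML.
# Assumes README.md is in the working directory and PyYAML is installed.
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

# The metadata block sits between the first two '---' markers.
_, front_matter, _ = text.split("---", 2)
card = yaml.safe_load(front_matter)

print(card["language"])                # ['pl']
print(card["model-index"][0]["name"])  # 'PaReS-sentimenTw-political-PL'
```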
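The per-class table and the summary rows removed in the last hunk (accuracy, macro avg, weighted avg) match the standard output of scikit-learn's classification_report. A toy reproduction of that formatting is sketched below; the labels are made up, since the card's actual 100-tweet test split is not part of this diff.

```python
from sklearn.metrics import classification_report

# Illustrative stand-in labels only; not the card's real test data.
y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 2, 2, 0]

# digits=3 matches the three-decimal formatting used in the card's table.
print(classification_report(y_true, y_pred, digits=3))
```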
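For trying the updated model, a minimal inference sketch with the transformers pipeline follows. The repo id eevvgg/PaReS-sentimenTw-political-PL is an assumption inferred from this commit's author and model name, and the Polish example tweet is purely illustrative.

```python
from transformers import pipeline

# Repo id assumed from this commit's namespace; adjust if the model
# is hosted elsewhere.
classifier = pipeline(
    "text-classification",
    model="eevvgg/PaReS-sentimenTw-political-PL",
)

# Illustrative Polish tweet ("This is great news for Poland!"); predictions
# follow the card's id2label mapping (0 -> 'negative', ...).
print(classifier("To jest świetna wiadomość dla Polski!"))
```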