larkkin committed · Commit 1653b72 · 1 Parent(s): 8044721

Update readme in app preview
Files changed (1): app.py (+14 -3)

app.py CHANGED
@@ -39,13 +39,24 @@ markdown_text = '''
 <br>
 <br>
 This space provides a Gradio demo and an easy-to-run wrapper of the pre-trained model for structured sentiment analysis in Norwegian, pre-trained on the [NoReC dataset](https://huggingface.co/datasets/norec).
-This model is an implementation of the paper "Direct parsing to sentiment graphs" (Samuel _et al._, ACL 2022). The main repository that also contains the scripts for training the model, can be found on the project [github](https://github.com/jerbarnes/direct_parsing_to_sent_graph).
+This space contains an implementation of the method described in "Direct parsing to sentiment graphs" (Samuel _et al._, ACL 2022). The main repository, which also contains the scripts for training the model, can be found on the project [github](https://github.com/jerbarnes/direct_parsing_to_sent_graph).
 
-The current model uses the 'labeled-edge' graph encoding, and achieves the following results on the NoReC dataset:
+The proposed method suggests three different ways to encode the sentiment graph: "node-centric", "labeled-edge", and "opinion-tuple". The current model uses the "labeled-edge" graph encoding and achieves the following results on the held-out set of the NoReC dataset:
 
 | Unlabeled sentiment tuple F1 | Target F1 | Relative polarity precision |
 |:----------------------------:|:----------:|:---------------------------:|
-| 0.393 | 0.468 | 0.939 |
+| 0.434 | 0.541 | 0.926 |
+
+
+In "Word Substitution with Masked Language Models as Data Augmentation for Sentiment Analysis", we analyzed data augmentation strategies for improving the performance of the model. Using masked language modeling (MLM), we augmented sentences with MLM-substituted words inside, outside, or both inside and outside the actual sentiment tuples. The results below show that augmentation may improve model performance. This space, however, runs the original model trained without augmentation.
+
+|                | Augmentation rate | Unlabeled sentiment tuple F1 | Target F1 | Relative polarity precision |
+|----------------|-------------------|------------------------------|-----------|-----------------------------|
+| Baseline       | 0%                | 43.39                        | 54.13     | 92.59                       |
+| Outside        | 59%               | **45.08**                    | 56.18     | 92.95                       |
+| Inside         | 9%                | 43.38                        | 55.62     | 92.49                       |
+| Inside+Outside | 27%               | 44.12                        | **56.44** | **93.19**                   |
+
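The word-substitution recipe in the added paragraph can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the setup used for the paper's experiments: the fill-mask model (`bert-base-multilingual-cased`), the one-token-at-a-time masking policy, and the toy span indices are all choices made for this example.

```python
# Minimal sketch of MLM-based word substitution for data augmentation,
# assuming a generic fill-mask model; the paper's experiments used their
# own setup, which this commit does not show.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

def substitute(tokens: list[str], idx: int) -> list[str]:
    """Mask tokens[idx], take the MLM's top replacement, return a new list."""
    masked = list(tokens)
    masked[idx] = fill_mask.tokenizer.mask_token
    best = fill_mask(" ".join(masked), top_k=1)[0]  # highest-scoring filler
    out = list(tokens)
    out[idx] = best["token_str"].strip()
    return out

# "Outside" augmentation: substitute only tokens not covered by a sentiment
# tuple. The sentence and span indices below are toy values.
tokens = "Maten var utmerket men servicen var treg".split()
tuple_indices = {0, 2, 4, 6}  # tokens inside annotated sentiment tuples
outside = [i for i in range(len(tokens)) if i not in tuple_indices]
print(substitute(tokens, random.choice(outside)))
```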
 
 
 The model can be easily used for predicting sentiment tuples as follows:
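The usage snippet itself falls outside this hunk. As a rough illustration of the output such a wrapper produces, the sketch below shows the shape of a predicted sentiment tuple; the field names and example values are assumptions for this sketch, not the wrapper's confirmed schema.

```python
# Illustration only: the (holder, target, polar expression, polarity) shape
# of a NoReC-style sentiment tuple. Field names and example values are
# assumed for this sketch, not the wrapper's confirmed output schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SentimentTuple:
    holder: Optional[str]   # who expresses the sentiment (often implicit)
    target: str             # what the sentiment is about
    expression: str         # the polar expression itself
    polarity: str           # "Positive" or "Negative"

# "Maten var utmerket, men servicen var treg."
# ("The food was excellent, but the service was slow.") -> two tuples:
print(SentimentTuple(None, "Maten", "utmerket", "Positive"))
print(SentimentTuple(None, "servicen", "treg", "Negative"))
```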