table_id_paper
stringlengths
15
15
caption
stringlengths
14
1.88k
row_header_level
int32
1
9
row_headers
large_stringlengths
15
1.75k
column_header_level
int32
1
6
column_headers
large_stringlengths
7
1.01k
contents
large_stringlengths
18
2.36k
metrics_loc
stringclasses
2 values
metrics_type
large_stringlengths
5
532
target_entity
large_stringlengths
2
330
table_html_clean
large_stringlengths
274
7.88k
table_name
stringclasses
9 values
table_id
stringclasses
9 values
paper_id
stringlengths
8
8
page_no
int32
1
13
dir
stringclasses
8 values
description
large_stringlengths
103
3.8k
class_sentence
stringlengths
3
120
sentences
large_stringlengths
110
3.92k
header_mention
stringlengths
12
1.8k
valid
int32
0
1
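The list-valued fields in the schema above (`row_headers`, `column_headers`, `contents`) are stored as Python-literal strings in each record. A minimal sketch of rebuilding the full table grid from those three fields, assuming a record is held as a plain dict; `rebuild_table` and the `" || "` flattening are hypothetical conventions mirroring the cleaned-HTML cells shown in the records below, not part of any published loader:

```python
import ast

def rebuild_table(record):
    """Rebuild a table grid from the list-valued fields of one record.

    row_headers, column_headers, and contents are stored as
    Python-literal strings, so they are parsed with ast.literal_eval.
    """
    row_headers = ast.literal_eval(record["row_headers"])     # e.g. [['System', 'Baseline'], ...]
    col_headers = ast.literal_eval(record["column_headers"])  # e.g. [['BLEU'], ...]
    contents = ast.literal_eval(record["contents"])           # one list of cell values per data row

    # Header row: one empty stub cell, then the flattened column headers.
    header = [""] + [" || ".join(h) for h in col_headers]
    # Body rows: a flattened row header followed by that row's cell values.
    body = [[" || ".join(rh)] + vals for rh, vals in zip(row_headers, contents)]
    return [header] + body

# Hypothetical record built from the D19-1548 table_4 values in this dump.
record = {
    "row_headers": "[['System', 'Baseline'], ['System', 'Our approach']]",
    "column_headers": "[['BLEU']]",
    "contents": "[['27.43'], ['31.82']]",
}
grid = rebuild_table(record)
# grid[0] is ['', 'BLEU']; grid[1] is ['System || Baseline', '27.43']
```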
D19-1543table_2
Performance improvement of the neural semantic parser on Spider with different hardness levels.
2
[['Method', 'SyntaxSQLNet (Yu et al., 2018b)'], ['Method', 'SyntaxSQLNet + DAE'], ['Method', 'SyntaxSQLNetAug (Yu et al., 2018b)'], ['Method', 'SyntaxSQLNetAug + DAE']]
1
[['Easy (%)'], ['Medium (%)'], ['Hard (%)'], ['Extra Hard (%)'], ['All (%)']]
[['38.4', '15.0', '16.1', '3.5', '18.9'], ['39.6(+1.2)', '18.2(+3.2)', '20.7(+4.6)', '7.6(+4.1)', '22.1(+3.2)'], ['44.4', '23.0', '23.0', '2.9', '24.9'], ['44.8(+0.4)', '27.0(+4.0)', '24.1(+1.1)', '5.9(+3.0)', '27.4(+2.5)']]
column
['Easy (%)', 'Medium (%)', 'Hard (%)', 'Extra Hard (%)', 'All (%)']
['SyntaxSQLNet + DAE', 'SyntaxSQLNetAug + DAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Easy (%)</th> <th>Medium (%)</th> <th>Hard (%)</th> <th>Extra Hard (%)</th> <th>All (%)</th> </tr> </thead> <tbody> <tr> <td>Method || SyntaxSQLNet (Yu et al., 2018b)</td> <td>38.4</td> <td>15.0</td> <td>16.1</td> <td>3.5</td> <td>18.9</td> </tr> <tr> <td>Method || SyntaxSQLNet + DAE</td> <td>39.6(+1.2)</td> <td>18.2(+3.2)</td> <td>20.7(+4.6)</td> <td>7.6(+4.1)</td> <td>22.1(+3.2)</td> </tr> <tr> <td>Method || SyntaxSQLNetAug (Yu et al., 2018b)</td> <td>44.4</td> <td>23.0</td> <td>23.0</td> <td>2.9</td> <td>24.9</td> </tr> <tr> <td>Method || SyntaxSQLNetAug + DAE</td> <td>44.8(+0.4)</td> <td>27.0(+4.0)</td> <td>24.1(+1.1)</td> <td>5.9(+3.0)</td> <td>27.4(+2.5)</td> </tr> </tbody></table>
Table 2
table_2
D19-1543
7
emnlp2019
On Spider, the performance is evaluated by exact-match accuracy on different difficulty levels of SQL queries, i.e., easy, medium, hard and extra hard. (Yu et al., 2018c). Table 2 shows the results. First, the overall accuracy can be improved by 3.2% and 2.5% respectively. Furthermore, performances on medium, hard and extra hard SQL queries achieve more improvement than that on easy SQL queries, indicating that our approach is more helpful for solving complicated cases.
[2, 1, 1, 1]
['On Spider, the performance is evaluated by exact-match accuracy on different difficulty levels of SQL queries, i.e., easy, medium, hard and extra hard. (Yu et al., 2018c).', 'Table 2 shows the results.', 'First, the overall accuracy can be improved by 3.2% and 2.5% respectively.', 'Furthermore, performances on medium, hard and extra hard SQL queries achieve more improvement than that on easy SQL queries, indicating that our approach is more helpful for solving complicated cases.']
[None, None, ['SyntaxSQLNet + DAE', 'SyntaxSQLNetAug + DAE'], ['Medium (%)', 'Hard (%)', 'Extra Hard (%)', 'Easy (%)']]
1
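Cells in the `contents` field of records like the one above can carry a parenthesized improvement, e.g. '39.6(+1.2)'. A minimal sketch of separating the value from its delta and checking the delta against the base row; `split_delta` is a hypothetical helper, and the sample cells are taken from the D19-1543 table_2 record:

```python
import re

def split_delta(cell):
    """Split a cell like '39.6(+1.2)' into (value, delta); delta is None if absent."""
    m = re.fullmatch(r"([\d.]+)\(([+-][\d.]+)\)", cell)
    if m:
        return float(m.group(1)), float(m.group(2))
    return float(cell), None

value, delta = split_delta("39.6(+1.2)")   # SyntaxSQLNet + DAE, Easy (%)
base, _ = split_delta("38.4")              # SyntaxSQLNet, Easy (%)
# The reported delta matches the gap to the base row: 39.6 - 38.4 == +1.2
assert round(value - base, 1) == delta
```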
D19-1543table_5
Performances of different anonymization models on WikiSQL.
2
[['Method', 'TypeSQL (Yu et al., 2018a)'], ['Method', 'AnnotatedSeq2Seq (Wang et al., 2018b)'], ['Method', 'DAE']]
2
[['Dev (%)', 'ACCSC'], ['Dev (%)', 'ACCOC'], ['Dev (%)', 'ACCCE'], ['Test (%)', 'ACCSC'], ['Test (%)', 'ACCOC'], ['Test (%)', 'ACCCE']]
[['75.9', '92.9', '−', '76.0', '92.9', '−'], ['88.8', '64.6', '−', '88.8', '63.6', '−'], ['92.6', '93.6', '86.7', '92.0', '93.7', '86.2']]
column
['ACCSC', 'ACCOC', 'ACCCE', 'ACCSC', 'ACCOC', 'ACCCE']
['DAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev (%) || ACCSC</th> <th>Dev (%) || ACCOC</th> <th>Dev (%) || ACCCE</th> <th>Test (%) || ACCSC</th> <th>Test (%) || ACCOC</th> <th>Test (%) || ACCCE</th> </tr> </thead> <tbody> <tr> <td>Method || TypeSQL (Yu et al., 2018a)</td> <td>75.9</td> <td>92.9</td> <td>−</td> <td>76.0</td> <td>92.9</td> <td>−</td> </tr> <tr> <td>Method || AnnotatedSeq2Seq (Wang et al., 2018b)</td> <td>88.8</td> <td>64.6</td> <td>−</td> <td>88.8</td> <td>63.6</td> <td>−</td> </tr> <tr> <td>Method || DAE</td> <td>92.6</td> <td>93.6</td> <td>86.7</td> <td>92.0</td> <td>93.7</td> <td>86.2</td> </tr> </tbody></table>
Table 5
table_5
D19-1543
8
emnlp2019
Table 5 shows that DAE significantly outperforms TypeSQL and AnnotatedSeq2Seq on all the evaluation metrics. First, for ACCSC, DAE outperforms TypeSQL and AnotatedSeq2Seq by 16% and 3.5% on test data; for ACCOC, DAE outperforms TypeSQL and AnnotatedSeq2Seq by 0.8% and 28% on test data. Moreover, DAE can achieve around 86% for ACCCE, while other methods fail to recognize cells when the table content is not available due to the privacy problem.
[1, 1, 1]
['Table 5 shows that DAE significantly outperforms TypeSQL and AnnotatedSeq2Seq on all the evaluation metrics.', 'First, for ACCSC, DAE outperforms TypeSQL and AnotatedSeq2Seq by 16% and 3.5% on test data; for ACCOC, DAE outperforms TypeSQL and AnnotatedSeq2Seq by 0.8% and 28% on test data.', 'Moreover, DAE can achieve around 86% for ACCCE, while other methods fail to recognize cells when the table content is not available due to the privacy problem.']
[['DAE', 'TypeSQL (Yu et al., 2018a)', 'AnnotatedSeq2Seq (Wang et al., 2018b)'], ['DAE', 'TypeSQL (Yu et al., 2018a)', 'ACCOC', 'AnnotatedSeq2Seq (Wang et al., 2018b)'], ['ACCCE', 'DAE']]
1
D19-1544table_5
Labeled F1 score (including senses) for all languages on the CoNLL-2009 in-domain test sets. For previous best result, Japanese (Ja) is from Watanabe et al. (2010), Catalan (Ca) is from Zhao et al. (2009), Spanish (Es) and German (De) are from Roth and Lapata (2016), Czech (Cz) is from Henderson et al. (2013), Chinese (Zh) is from Cai et al. (2018) and English (En) is from Li et al. (2019b).
2
[['Model', 'Previous Best Single Model'], ['Model', 'Baseline Model'], ['Model', 'CapsuleNet SRL (This Work)']]
1
[['Ja'], ['Es'], ['Ca'], ['De'], ['Cz'], ['Zh'], ['En'], ['Avg.']]
[['78.69', '80.50', '80.32', '80.10', '86.02', '84.30', '90.40', '82.90'], ['80.12', '81.0', '81.39', '76.01', '87.79', '81.05', '90.49', '82.55'], ['81.26', '81.32', '81.65', '76.44', '88.08', '81.65', '91.06', '83.07']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['CapsuleNet SRL (This Work)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ja</th> <th>Es</th> <th>Ca</th> <th>De</th> <th>Cz</th> <th>Zh</th> <th>En</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Model || Previous Best Single Model</td> <td>78.69</td> <td>80.50</td> <td>80.32</td> <td>80.10</td> <td>86.02</td> <td>84.30</td> <td>90.40</td> <td>82.90</td> </tr> <tr> <td>Model || Baseline Model</td> <td>80.12</td> <td>81.0</td> <td>81.39</td> <td>76.01</td> <td>87.79</td> <td>81.05</td> <td>90.49</td> <td>82.55</td> </tr> <tr> <td>Model || CapsuleNet SRL (This Work)</td> <td>81.26</td> <td>81.32</td> <td>81.65</td> <td>76.44</td> <td>88.08</td> <td>81.65</td> <td>91.06</td> <td>83.07</td> </tr> </tbody></table>
Table 5
table_5
D19-1544
8
emnlp2019
Table 5 gives the results of the proposed CapsuleNet SRL (with global node) on the in-domain test sets of all languages from CoNLL-2009. As shown in Table 5, the proposed model consistently outperforms the non-refinement baseline model and achieves state-of-the-art performance on Catalan (Ca), Czech (Cz), English (En), Japanese (Jp) and Spanish (Es). Interestingly, the effectiveness of the refinement method does not seem to be dependent on the dataset size: the improvements on the smallest (Japanese) and the largest datasets (English) are among the largest.
[1, 2, 1]
['Table 5 gives the results of the proposed CapsuleNet SRL (with global node) on the in-domain test sets of all languages from CoNLL-2009.', 'As shown in Table 5, the proposed model consistently outperforms the non-refinement baseline model and achieves state-of-the-art performance on Catalan (Ca), Czech (Cz), English (En), Japanese (Jp) and Spanish (Es).', 'Interestingly, the effectiveness of the refinement method does not seem to be dependent on the dataset size: the improvements on the smallest (Japanese) and the largest datasets (English) are among the largest.']
[['CapsuleNet SRL (This Work)'], ['CapsuleNet SRL (This Work)', 'Baseline Model', 'Ca', 'Cz', 'En', 'Es'], None]
1
D19-1545table_1
Exact Match and BLEU scores for our simplified model (Iyer-Simp) with and without idioms, compared with results from Iyer et al. (2018)† on the test (validation) set of CONCODE. Iyer-Simp achieves significantly better EM and BLEU score and reduces training time from 40 hours to 27 hours. Augmenting the decoding process with 200 code idioms further pushes up BLEU and reduces training time to 13 hours.
2
[['Model', 'Seq2Seq'], ['Model', 'Seq2Prod'], ['Model', 'Iyer et al. (2018)†'], ['Model', 'Iyer-Simp'], ['Model', 'Iyer-Simp + 200 idioms']]
1
[['Exact'], ['BLEU']]
[['3.2 (2.9)', '23.5 (21.0)'], ['6.7 (5.6)', '21.3 (20.6)'], ['8.6 (7.1)', '22.1 (21.3)'], ['12.5 (9.8)', '24.4 (23.2)'], ['12.2 (9.8)', '26.6 (24.0)']]
column
['Exact', 'BLEU']
['Iyer-Simp + 200 idioms', 'Iyer-Simp']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>3.2 (2.9)</td> <td>23.5 (21.0)</td> </tr> <tr> <td>Model || Seq2Prod</td> <td>6.7 (5.6)</td> <td>21.3 (20.6)</td> </tr> <tr> <td>Model || Iyer et al. (2018)†</td> <td>8.6 (7.1)</td> <td>22.1 (21.3)</td> </tr> <tr> <td>Model || Iyer-Simp</td> <td>12.5 (9.8)</td> <td>24.4 (23.2)</td> </tr> <tr> <td>Model || Iyer-Simp + 200 idioms</td> <td>12.2 (9.8)</td> <td>26.6 (24.0)</td> </tr> </tbody></table>
Table 1
table_1
D19-1545
6
emnlp2019
7 Results and Discussion. Table 1 presents exact match and BLEU scores on the original CONCODE train/validation/test split. Iyer-Simp yields a large improvement of 3.9 EM and 2.2 BLEU over the best model of Iyer et al. (2018), while also being significantly faster (27 hours for 30 training epochs as compared to 40 hours). Using a reduced BPE vocabulary makes the model memory efficient, which allows us to use a larger batch size that in turn speeds up training. Furthermore, using 200 code idioms further improves BLEU by 2.2% while maintaining comparable EM accuracy. Using the top-200 idioms results in a target AST compression of more than 50%, which results in fewer decoder RNN steps being performed. This reduces training time further by more than 50%, from 27 hours to 13 hours.
[2, 1, 1, 2, 1, 2, 2]
['7 Results and Discussion.', 'Table 1 presents exact match and BLEU scores on the original CONCODE train/validation/test split.', 'Iyer-Simp yields a large improvement of 3.9 EM and 2.2 BLEU over the best model of Iyer et al. (2018), while also being significantly faster (27 hours for 30 training epochs as compared to 40 hours).', 'Using a reduced BPE vocabulary makes the model memory efficient, which allows us to use a larger batch size that in turn speeds up training.', 'Furthermore, using 200 code idioms further improves BLEU by 2.2% while maintaining comparable EM accuracy.', 'Using the top-200 idioms results in a target AST compression of more than 50%, which results in fewer decoder RNN steps being performed.', 'This reduces training time further by more than 50%, from 27 hours to 13 hours.']
[None, ['Exact', 'BLEU'], ['Iyer-Simp', 'Exact', 'BLEU', 'Iyer et al. (2018)†'], None, ['Iyer-Simp + 200 idioms', 'BLEU', 'Exact'], ['Iyer-Simp + 200 idioms'], None]
1
D19-1547table_5
Human evaluation on 100 random examples for MISP-SQL agents based on SQLNet, SQLova and SyntaxSQLNet, respectively.
3
[['System', 'SQLNet', 'no interaction'], ['System', 'SQLNet', 'MISP-SQL (simulation)'], ['System', 'SQLNet', 'MISP-SQL (real user)'], ['System', 'SQLova', 'no interaction'], ['System', 'SQLova', 'MISP-SQL (simulation)'], ['System', 'SQLova', 'MISP-SQL (real user)'], ['System', 'SQLova', '+ w/ full info.'], ['System', 'SyntaxSQLNet', 'no interaction'], ['System', 'SyntaxSQLNet', 'MISP-SQL (simulation)'], ['System', 'SyntaxSQLNet', 'MISP-SQL (real user)']]
1
[['Accqm/em'], ['Accex'], ['Avg. #q']]
[['0.580', '0.660', 'N/A'], ['0.770', '0.810', '1.800'], ['0.633', '0.717', '1.510'], ['0.830', '0.890', 'N/A'], ['0.920', '0.950', '0.550'], ['0.837', '0.880', '0.533'], ['0.907', '0.937', '0.547'], ['0.180', 'N/A', 'N/A'], ['0.290', 'N/A', '2.730'], ['0.230', 'N/A', '2.647']]
column
['Accqm/em', 'Accex', 'Avg. #q']
['MISP-SQL (real user)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accqm/em</th> <th>Accex</th> <th>Avg. #q</th> </tr> </thead> <tbody> <tr> <td>System || SQLNet || no interaction</td> <td>0.580</td> <td>0.660</td> <td>N/A</td> </tr> <tr> <td>System || SQLNet || MISP-SQL (simulation)</td> <td>0.770</td> <td>0.810</td> <td>1.800</td> </tr> <tr> <td>System || SQLNet || MISP-SQL (real user)</td> <td>0.633</td> <td>0.717</td> <td>1.510</td> </tr> <tr> <td>System || SQLova || no interaction</td> <td>0.830</td> <td>0.890</td> <td>N/A</td> </tr> <tr> <td>System || SQLova || MISP-SQL (simulation)</td> <td>0.920</td> <td>0.950</td> <td>0.550</td> </tr> <tr> <td>System || SQLova || MISP-SQL (real user)</td> <td>0.837</td> <td>0.880</td> <td>0.533</td> </tr> <tr> <td>System || SQLova || + w/ full info.</td> <td>0.907</td> <td>0.937</td> <td>0.547</td> </tr> <tr> <td>System || SyntaxSQLNet || no interaction</td> <td>0.180</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>System || SyntaxSQLNet || MISP-SQL (simulation)</td> <td>0.290</td> <td>N/A</td> <td>2.730</td> </tr> <tr> <td>System || SyntaxSQLNet || MISP-SQL (real user)</td> <td>0.230</td> <td>N/A</td> <td>2.647</td> </tr> </tbody></table>
Table 5
table_5
D19-1547
8
emnlp2019
Table 5 shows the results. In all settings, MISP-SQL improves the base parser’s performance, demonstrating the benefit of involving human interaction. However, we also notice that the gain is not as large as in simulation, especially on SQLova. Through interviews with the human evaluators, we found that the major reason is that they sometimes had difficulties understanding the true intent of some test questions that are ambiguous, vague, or contain entities they are not familiar with. We believe this reflects a general challenge of setting up human evaluation for semantic parsing that is close to the real application setting, and thus set forth the following discussion.
[1, 1, 1, 2, 2]
['Table 5 shows the results.', 'In all settings, MISP-SQL improves the base parser’s performance, demonstrating the benefit of involving human interaction.', 'However, we also notice that the gain is not as large as in simulation, especially on SQLova.', 'Through interviews with the human evaluators, we found that the major reason is that they sometimes had difficulties understanding the true intent of some test questions that are ambiguous, vague, or contain entities they are not familiar with.', 'We believe this reflects a general challenge of setting up human evaluation for semantic parsing that is close to the real application setting, and thus set forth the following discussion.']
[None, ['MISP-SQL (real user)'], ['MISP-SQL (real user)', 'SQLova'], None, None]
1
D19-1548table_2
Ablation results of our baseline system on the LDC2015E86 development set.
2
[['Model', 'Baseline'], ['Model', '-BPE'], ['Model', '-Share Vocab.'], ['Model', '-Both']]
1
[['BLEU'], ['Meteor'], ['CHRF++']]
[['24.93', '33.2', '60.3'], ['23.02', '31.6', '58.09'], ['23.24', '31.78', '58.43'], ['18.77', '28.04', '51.88']]
column
['BLEU', 'Meteor', 'CHRF++']
['-Both']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>Meteor</th> <th>CHRF++</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>24.93</td> <td>33.2</td> <td>60.3</td> </tr> <tr> <td>Model || -BPE</td> <td>23.02</td> <td>31.6</td> <td>58.09</td> </tr> <tr> <td>Model || -Share Vocab.</td> <td>23.24</td> <td>31.78</td> <td>58.43</td> </tr> <tr> <td>Model || -Both</td> <td>18.77</td> <td>28.04</td> <td>51.88</td> </tr> </tbody></table>
Table 2
table_2
D19-1548
5
emnlp2019
3.2 Experimental Results. We first show the performance of our baseline system. As mentioned before, BPE and sharing vocabulary are two techniques we applied to relieving data sparsity. Table 2 presents the results of the ablation test on the development set of LDC2015E86 by either removing BPE, or vocabulary sharing, or both of them from the baseline system. From the results we can see that BPE and vocabulary sharing are critical to building our baseline system (an improvement from 18.77 to 24.93 in BLEU), revealing the fact that they are two effective ways to address the issue of data sparseness for AMR-to-text generation.
[2, 2, 2, 1, 1]
['3.2 Experimental Results.', 'We first show the performance of our baseline system.', 'As mentioned before, BPE and sharing vocabulary are two techniques we applied to relieving data sparsity.', 'Table 2 presents the results of the ablation test on the development set of LDC2015E86 by either removing BPE, or vocabulary sharing, or both of them from the baseline system.', 'From the results we can see that BPE and vocabulary sharing are critical to building our baseline system (an improvement from 18.77 to 24.93 in BLEU), revealing the fact that they are two effective ways to address the issue of data sparseness for AMR-to-text generation.']
[None, None, None, ['-BPE', '-Share Vocab.', '-Both'], ['-Both', 'Baseline']]
1
D19-1548table_3
Comparison results of our approaches and related studies on the test sets of LDC2015E86 and LDC2017T10. #P indicates the size of parameters in millions. ∗ indicates seq2seq-based systems while † for graph-based models, and ‡ for other models. All our proposed systems are significant over the baseline at 0.01, tested by bootstrap resampling (Koehn, 2004).
3
[['System', 'Baseline', 'Baseline'], ['System', 'Our Approach', 'feature-based'], ['System', 'Our Approach', 'avg-based'], ['System', 'Our Approach', 'sum-based'], ['System', 'Our Approach', 'SA-based'], ['System', 'Our Approach', 'CNN-based'], ['System', 'Previous works with single models', 'Konstas et al. (2017)'], ['System', 'Previous works with single models', 'Cao and Clark (2019)'], ['System', 'Previous works with single models', 'Song et al. (2018)'], ['System', 'Previous works with single models', 'Beck et al. (2018)'], ['System', 'Previous works with single models', 'Damonte and Cohen (2019)'], ['System', 'Previous works with single models', 'Guo et al. (2019)'], ['System', 'Previous works with single models', 'Song et al. (2016)'], ['System', 'Previous works with either ensemble models or unlabelled data or both', 'Konstas et al. (2017)'], ['System', 'Previous works with either ensemble models or unlabelled data or both', 'Song et al. (2018)'], ['System', 'Previous works with either ensemble models or unlabelled data or both', 'Beck et al. (2018)'], ['System', 'Previous works with either ensemble models or unlabelled data or both', 'Guo et al. (2019)']]
2
[['LDC2015E86', 'BLEU'], ['LDC2015E86', 'Meteor'], ['LDC2015E86', 'CHRF++'], ['LDC2015E86', '#P (M)'], ['LDC2017T10', 'BLEU'], ['LDC2017T10', 'Meteor'], ['LDC2017T10', 'CHRF++']]
[['25.50', '33.16', '59.88', '49.1', '27.43', '34.62', '61.85'], ['27.23', '34.53', '61.55', '49.4', '30.18', '35.83', '63.20'], ['28.37', '35.10', '62.29', '49.1', '29.56', '35.24', '62.86'], ['28.69', '34.97', '62.05', '49.1', '29.92', '35.68', '63.04'], ['29.66', '35.45', '63.00', '49.3', '31.54', '36.02', '63.84'], ['29.10', '35.00', '62.10', '49.2', '31.82', '36.38', '64.05'], ['22.00', '-', '-', '-', '-', '-', '-'], ['23.5', '-', '-', '-', '26.8', '-', '-'], ['23.30', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '23.3', '-', '50.4'], ['24.40', '23.60', '-', '-', '24.54', '24.07', '-'], ['25.7', '-', '-', '-', '27.6', '-', '57.3'], ['22.44', '-', '-', '-', '-', '-', '-'], ['33.8', '-', '-', '-', '-', '-', '-'], ['33.0', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '27.5', '-', '53.5'], ['35.3', '-', '-', '-', '-', '-', '-']]
column
['BLEU', 'Meteor', 'CHRF++', '#P (M)', 'BLEU', 'Meteor', 'CHRF++']
['Our Approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LDC2015E86 || BLEU</th> <th>LDC2015E86 || Meteor</th> <th>LDC2015E86 || CHRF++</th> <th>LDC2015E86 || #P (M)</th> <th>LDC2017T10 || BLEU</th> <th>LDC2017T10 || Meteor</th> <th>LDC2017T10 || CHRF++</th> </tr> </thead> <tbody> <tr> <td>System || Baseline || Baseline</td> <td>25.50</td> <td>33.16</td> <td>59.88</td> <td>49.1</td> <td>27.43</td> <td>34.62</td> <td>61.85</td> </tr> <tr> <td>System || Our Approach || feature-based</td> <td>27.23</td> <td>34.53</td> <td>61.55</td> <td>49.4</td> <td>30.18</td> <td>35.83</td> <td>63.20</td> </tr> <tr> <td>System || Our Approach || avg-based</td> <td>28.37</td> <td>35.10</td> <td>62.29</td> <td>49.1</td> <td>29.56</td> <td>35.24</td> <td>62.86</td> </tr> <tr> <td>System || Our Approach || sum-based</td> <td>28.69</td> <td>34.97</td> <td>62.05</td> <td>49.1</td> <td>29.92</td> <td>35.68</td> <td>63.04</td> </tr> <tr> <td>System || Our Approach || SA-based</td> <td>29.66</td> <td>35.45</td> <td>63.00</td> <td>49.3</td> <td>31.54</td> <td>36.02</td> <td>63.84</td> </tr> <tr> <td>System || Our Approach || CNN-based</td> <td>29.10</td> <td>35.00</td> <td>62.10</td> <td>49.2</td> <td>31.82</td> <td>36.38</td> <td>64.05</td> </tr> <tr> <td>System || Previous works with single models || Konstas et al. (2017)</td> <td>22.00</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Previous works with single models || Cao and Clark (2019)</td> <td>23.5</td> <td>-</td> <td>-</td> <td>-</td> <td>26.8</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Previous works with single models || Song et al. (2018)</td> <td>23.30</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Previous works with single models || Beck et al. 
(2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>23.3</td> <td>-</td> <td>50.4</td> </tr> <tr> <td>System || Previous works with single models || Damonte and Cohen (2019)</td> <td>24.40</td> <td>23.60</td> <td>-</td> <td>-</td> <td>24.54</td> <td>24.07</td> <td>-</td> </tr> <tr> <td>System || Previous works with single models || Guo et al. (2019)</td> <td>25.7</td> <td>-</td> <td>-</td> <td>-</td> <td>27.6</td> <td>-</td> <td>57.3</td> </tr> <tr> <td>System || Previous works with single models || Song et al. (2016)</td> <td>22.44</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Previous works with either ensemble models or unlabelled data or both || Konstas et al. (2017)</td> <td>33.8</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Previous works with either ensemble models or unlabelled data or both || Song et al. (2018)</td> <td>33.0</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Previous works with either ensemble models or unlabelled data or both || Beck et al. (2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>27.5</td> <td>-</td> <td>53.5</td> </tr> <tr> <td>System || Previous works with either ensemble models or unlabelled data or both || Guo et al. (2019)</td> <td>35.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 3
table_3
D19-1548
6
emnlp2019
Table 3 presents the comparison of our approach and related works on the test sets of LDC2015E86 and LDC2017T10. From the results we can see that the Transformer-based baseline outperforms most of graph-to-sequence models and is comparable with the latest work by Guo et al. (2019). The strong performance of the baseline is attributed to the capability of the Transformer to encode global and implicit structural information in AMR graphs. By comparing the five methods of learning graph structure representations, we have the following observations. All of them achieve significant improvements over the baseline: the biggest improvements are 4.16 and 4.39 BLEU scores on LDC2015E86 and LDC2017T10, respectively. Methods using continuous representations (such as SA-based and CNN-based) outperform the methods using discrete representations (such as feature-based). Compared to the baseline, the methods have very limited affect on the sizes of model parameters (see the column of #P (M) in Table 3). Finally, our best-performing models are the best among all the single and supervised models.
[1, 1, 2, 2, 1, 1, 1, 1]
['Table 3 presents the comparison of our approach and related works on the test sets of LDC2015E86 and LDC2017T10.', 'From the results we can see that the Transformer-based baseline outperforms most of graph-to-sequence models and is comparable with the latest work by Guo et al. (2019).', 'The strong performance of the baseline is attributed to the capability of the Transformer to encode global and implicit structural information in AMR graphs.', 'By comparing the five methods of learning graph structure representations, we have the following observations.', 'All of them achieve significant improvements over the baseline: the biggest improvements are 4.16 and 4.39 BLEU scores on LDC2015E86 and LDC2017T10, respectively.', 'Methods using continuous representations (such as SA-based and CNN-based) outperform the methods using discrete representations (such as feature-based).', 'Compared to the baseline, the methods have very limited affect on the sizes of model parameters (see the column of #P (M) in Table 3).', 'Finally, our best-performing models are the best among all the single and supervised models.']
[['Our Approach', 'LDC2015E86', 'LDC2017T10'], ['Our Approach', 'Previous works with single models', 'Previous works with either ensemble models or unlabelled data or both', 'Guo et al. (2019)'], ['Baseline'], None, ['Our Approach', 'Baseline', 'BLEU', 'LDC2015E86', 'LDC2017T10'], ['SA-based', 'CNN-based', 'feature-based'], ['Baseline', 'Our Approach', '#P (M)'], ['SA-based']]
1
D19-1548table_4
Performance on the test set of our approach with or without modeling structural information of indirectly connected concept pairs.
2
[['System', 'Baseline'], ['System', 'Our approach'], ['System', 'No indirectly connected concept pairs']]
1
[['BLEU']]
[['27.43'], ['31.82'], ['29.92']]
column
['BLEU']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>System || Baseline</td> <td>27.43</td> </tr> <tr> <td>System || Our approach</td> <td>31.82</td> </tr> <tr> <td>System || No indirectly connected concept pairs</td> <td>29.92</td> </tr> </tbody></table>
Table 4
table_4
D19-1548
6
emnlp2019
Table 4 compares the performance of our approach with or without modeling structural information of indirectly connected concept pairs. It shows that by modeling structural information of indirectly connected concept pairs, our approach improves the performance on the test set from 29.92 to 31.82 in BLEU scores. It also shows that even without modeling structural information of indirectly connected concept pairs, our approach achieves better performance than the baseline.
[1, 1, 1]
['Table 4 compares the performance of our approach with or without modeling structural information of indirectly connected concept pairs.', 'It shows that by modeling structural information of indirectly connected concept pairs, our approach improves the performance on the test set from 29.92 to 31.82 in BLEU scores.', 'It also shows that even without modeling structural information of indirectly connected concept pairs, our approach achieves better performance than the baseline.']
[['Our approach', 'No indirectly connected concept pairs'], ['No indirectly connected concept pairs', 'Our approach'], ['Our approach', 'Baseline']]
1
D19-1554table_7
Effects of different replacement actions. As we can see, neighbor and synonymy words contribute most to the performance.
2
[['Setting', 'all words (RNN)'], ['Setting', '-super words'], ['Setting', '-subordinate words'], ['Setting', '-synonymy words'], ['Setting', '-neighbor words'], ['Setting', 'all words (CNN)'], ['Setting', '-super words'], ['Setting', '-subordinate words'], ['Setting', '-synonymy words'], ['Setting', '-neighbor words']]
1
[['SST-2'], ['SST-5'], ['RT'], ['Average']]
[['81.60', '41.14', '75.76', '66.17'], ['80.72', '41.17', '75.02', '65.64'], ['80.56', '41.67', '75.48', '65.90'], ['80.45', '41.99', '76.12', '66.19'], ['80.56', '40.91', '74.66', '65.38'], ['80.18', '41.86', '74.84', '65.63'], ['79.96', '41.67', '76.22', '65.95'], ['81.58', '41.49', '75.39', '66.15'], ['79.24', '41.67', '74.74', '65.22'], ['81.33', '40.68', '74.47', '65.49']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['-neighbor words']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-2</th> <th>SST-5</th> <th>RT</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>Setting || all words (RNN)</td> <td>81.60</td> <td>41.14</td> <td>75.76</td> <td>66.17</td> </tr> <tr> <td>Setting || -super words</td> <td>80.72</td> <td>41.17</td> <td>75.02</td> <td>65.64</td> </tr> <tr> <td>Setting || -subordinate words</td> <td>80.56</td> <td>41.67</td> <td>75.48</td> <td>65.90</td> </tr> <tr> <td>Setting || -synonymy words</td> <td>80.45</td> <td>41.99</td> <td>76.12</td> <td>66.19</td> </tr> <tr> <td>Setting || -neighbor words</td> <td>80.56</td> <td>40.91</td> <td>74.66</td> <td>65.38</td> </tr> <tr> <td>Setting || all words (CNN)</td> <td>80.18</td> <td>41.86</td> <td>74.84</td> <td>65.63</td> </tr> <tr> <td>Setting || -super words</td> <td>79.96</td> <td>41.67</td> <td>76.22</td> <td>65.95</td> </tr> <tr> <td>Setting || -subordinate words</td> <td>81.58</td> <td>41.49</td> <td>75.39</td> <td>66.15</td> </tr> <tr> <td>Setting || -synonymy words</td> <td>79.24</td> <td>41.67</td> <td>74.74</td> <td>65.22</td> </tr> <tr> <td>Setting || -neighbor words</td> <td>81.33</td> <td>40.68</td> <td>74.47</td> <td>65.49</td> </tr> </tbody></table>
Table 7
table_7
D19-1554
7
emnlp2019
4.5 Analysis. What is the effect of each action?. To show the effect of different actions, we take RNN and CNN on SST-2, SST-5, and RT as examples and conduct experiments by dropping one action at a time. As Table 7 shows, only some actions are useful for the robustness improvement. The average performance becomes better after dropping subordinate words under two different settings. Surprisingly, the neighbor words largely contribute to the performance. Without neighbor words, the average accuracies are dropped from 66.17 to 65.38 for RNN, and from 65.63 to 65.22 for CNN. Neighbor words share the same super word, like “fox” and “wolf”. In WordNet, neighbor words usually have similar semantic meanings. By replacing a word with its similar word, the semantic diversity can be largely enhanced. Furthermore, the semantic similarity can reduce the risk of changing the label when modifying the input text. Therefore, it is a good choice to include this relation in models unless the replacement exactly impacts the original label in specific tasks.
[2, 2, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
['4.5 Analysis.', 'What is the effect of each action?.', 'To show the effect of different actions, we take RNN and CNN on SST-2, SST-5, and RT as examples and conduct experiments by dropping one action at a time.', 'As Table 7 shows, only some actions are useful for the robustness improvement.', 'The average performance becomes better after dropping subordinate words under two different settings.', 'Surprisingly, the neighbor words largely contribute to the performance.', 'Without neighbor words, the average accuracies are dropped from 66.17 to 65.38 for RNN, and from 65.63 to 65.22 for CNN.', 'Neighbor words share the same super word, like “fox” and “wolf”.', 'In WordNet, neighbor words usually have similar semantic meanings.', 'By replacing a word with its similar word, the semantic diversity can be largely enhanced.', 'Furthermore, the semantic similarity can reduce the risk of changing the label when modifying the input text.', 'Therefore, it is a good choice to include this relation in models unless the replacement exactly impacts the original label in specific tasks.']
[None, None, ['all words (RNN)', 'all words (CNN)', 'SST-2', 'SST-5', 'RT'], None, ['Average'], ['-neighbor words'], ['-neighbor words', 'all words (RNN)', 'all words (CNN)'], ['-neighbor words'], ['-neighbor words'], None, None, None]
1
D19-1555table_1
Summary of the evaluation datasets.
2
[['Statistic', '# of reviews'], ['Statistic', '# of sentences'], ['Statistic', 'Sentence/Review'], ['Statistic', 'Words/Sentence']]
1
[['HotelUser'], ['ResUser'], ['HotelType'], ['HotelLoc']]
[['28165', '23873', '22984', '136446'], ['362153', '276008', '302920', '1428722'], ['13', '12', '13', '10'], ['8', '7', '7', '7']]
row
['# of reviews', '# of sentences', 'Sentence/Review', 'Words/Sentence']
['# of reviews']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HotelUser</th> <th>ResUser</th> <th>HotelType</th> <th>HotelLoc</th> </tr> </thead> <tbody> <tr> <td>Statistic || # of reviews</td> <td>28165</td> <td>23873</td> <td>22984</td> <td>136446</td> </tr> <tr> <td>Statistic || # of sentences</td> <td>362153</td> <td>276008</td> <td>302920</td> <td>1428722</td> </tr> <tr> <td>Statistic || Sentence/Review</td> <td>13</td> <td>12</td> <td>13</td> <td>10</td> </tr> <tr> <td>Statistic || Words/Sentence</td> <td>8</td> <td>7</td> <td>7</td> <td>7</td> </tr> </tbody></table>
Table 1
table_1
D19-1555
5
emnlp2019
To assess Trait's effectiveness, we select the hotel and restaurant domains and prepare four review datasets associated with three attributes: author, trip type, and location. HotelUser, HotelType, and HotelLoc are sets of hotel reviews collected from TripAdvisor. HotelUser contains 28165 reviews posted by 202 randomly selected reviewers, each of whom contributes at least 100 hotel reviews. HotelType contains reviews associated with five trip types including business, couple, family, friend, and solo. HotelLoc contains a total of 136446 reviews about seven US cities, split approximately equally. ResUser is a set of restaurant reviews from Yelp Dataset Challenge (2019). It contains 23874 restaurant reviews posted by 144 users, each of whom contributes at least 100 reviews. Table 1 summarizes our datasets. Datasets and source code are available for research purposes (Trait, 2019).
[2, 2, 1, 2, 1, 2, 1, 1, 2]
["To assess Trait's effectiveness, we select the hotel and restaurant domains and prepare four review datasets associated with three attributes: author, trip type, and location.", 'HotelUser, HotelType, and HotelLoc are sets of hotel reviews collected from TripAdvisor.', 'HotelUser contains 28165 reviews posted by 202 randomly selected reviewers, each of whom contributes at least 100 hotel reviews.', 'HotelType contains reviews associated with five trip types including business, couple, family, friend, and solo.', 'HotelLoc contains a total of 136446 reviews about seven US cities, split approximately equally.', 'ResUser is a set of restaurant reviews from Yelp Dataset Challenge (2019).', 'It contains 23874 restaurant reviews posted by 144 users, each of whom contributes at least 100 reviews.', 'Table 1 summarizes our datasets.', 'Datasets and source code are available for research purposes (Trait, 2019).']
[None, ['HotelUser', 'HotelType', 'HotelLoc'], ['HotelUser', '# of reviews'], ['HotelType'], ['HotelLoc', '# of reviews'], ['ResUser'], ['ResUser', '# of reviews'], None, None]
1
D19-1557table_3
This table reports performance (Accuracy) on the MR and SST data sets. Results with * indicate that the performance metric is reported on the test dataset after training on a subset of the original data set. Bold face indicates the best performing algorithm
2
[['Algorithm', 'BoW (Generic)'], ['Algorithm', 'BoW (DA embeddings)'], ['Algorithm', 'Vanilla CNN'], ['Algorithm', 'Vanilla BiLSTM'], ['Algorithm', 'LR-Bi-LSTM'], ['Algorithm', 'Self-attention'], ['Algorithm', 'Adapted CNN'], ['Algorithm', 'Adapted BiLSTM'], ['Algorithm', 'BERT']]
1
[['MR'], ['SST']]
[['75.7*', '48.9*'], ['77.0*', '49.2*'], ['72.5*', '49.06*'], ['81.8*', '50.3*'], ['82.1', '50.6'], ['81.7', '48.9'], ['80.8*', '50.0*'], ['83.1*', '51.2'], ['74.4*', '51.5']]
column
['accuracy', 'accuracy']
['Adapted BiLSTM', 'Adapted CNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>SST</th> </tr> </thead> <tbody> <tr> <td>Algorithm || BoW (Generic)</td> <td>75.7*</td> <td>48.9*</td> </tr> <tr> <td>Algorithm || BoW (DA embeddings)</td> <td>77.0*</td> <td>49.2*</td> </tr> <tr> <td>Algorithm || Vanilla CNN</td> <td>72.5*</td> <td>49.06*</td> </tr> <tr> <td>Algorithm || Vanilla BiLSTM</td> <td>81.8*</td> <td>50.3*</td> </tr> <tr> <td>Algorithm || LR-Bi-LSTM</td> <td>82.1</td> <td>50.6</td> </tr> <tr> <td>Algorithm || Self-attention</td> <td>81.7</td> <td>48.9</td> </tr> <tr> <td>Algorithm || Adapted CNN</td> <td>80.8*</td> <td>50.0*</td> </tr> <tr> <td>Algorithm || Adapted BiLSTM</td> <td>83.1*</td> <td>51.2</td> </tr> <tr> <td>Algorithm || BERT</td> <td>74.4*</td> <td>51.5</td> </tr> </tbody></table>
Table 3
table_3
D19-1557
8
emnlp2019
4.4 Results. Table 2 presents results on the LibCon and the balanced (B) and imbalanced (I) Beauty, Book and Music data sets and Table 3 presents results on the SST and MR data sets. The performance metric reported in both tables is accuracy, with additional micro f-scores reported in Table 2. From Table 2, it is observed that on the LibCon data set, where we have a considerable difference in language use between the two groups of users, the adapted BiLSTM and adapted CNN perform much better than the vanilla baselines. Furthermore, our proposed adaptation layer improves the performance of the Vanilla BiLSTM to surpass the performance of fine-tuned BERT.
[2, 1, 2, 1, 1]
['4.4 Results.', 'Table 2 presents results on the LibCon and the balanced (B) and imbalanced (I) Beauty, Book and Music data sets and Table 3 presents results on the SST and MR data sets.', 'The performance metric reported in both tables is accuracy, with additional micro f-scores reported in Table 2.', 'From Table 2, it is observed that on the LibCon data set, where we have a considerable difference in language use between the two groups of users, the adapted BiLSTM and adapted CNN perform much better than the vanilla baselines.', 'Furthermore, our proposed adaptation layer improves the performance of the Vanilla BiLSTM to surpass the performance of fine-tuned BERT.']
[None, ['MR', 'SST'], None, ['Adapted BiLSTM', 'Adapted CNN', 'Vanilla CNN', 'Vanilla BiLSTM'], ['Adapted BiLSTM', 'BERT']]
1
D19-1563table_2
Experimental results on the Chinese dataset. Superscript ∗ indicates the results are reported in (Gui et al., 2017) and the rest are reprinted from the corresponding publications (p <0.001).
2
[['Method', 'RB*'], ['Method', 'CB*'], ['Method', 'SVM*'], ['Method', 'Word2vec*'], ['Method', 'Multi-kernel*'], ['Method', 'LambdaMART*'], ['Method', 'CNN*'], ['Method', 'ConvMS-Memnet*'], ['Method', 'CANN'], ['Method', 'HCS'], ['Method', 'MANN'], ['Method', 'RHNN']]
1
[['P'], ['R'], ['F']]
[['0.6747', '0.4287', '0.5243'], ['0.2672', '0.713', '0.3887'], ['0.42', '0.4375', '0.4285'], ['0.4301', '0.4233', '0.4136'], ['0.6588', '0.6972', '0.6752'], ['0.772', '0.7499', '0.7608'], ['0.6472', '0.5493', '0.5915'], ['0.7076', '0.6838', '0.6955'], ['0.7721', '0.6891', '0.7266'], ['0.7388', '0.7154', '0.7269'], ['0.7843', '0.7587', '0.7706'], ['0.8112', '0.7725', '0.7914']]
column
['P', 'R', 'F']
['RB*', 'CB*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Method || RB*</td> <td>0.6747</td> <td>0.4287</td> <td>0.5243</td> </tr> <tr> <td>Method || CB*</td> <td>0.2672</td> <td>0.713</td> <td>0.3887</td> </tr> <tr> <td>Method || SVM*</td> <td>0.42</td> <td>0.4375</td> <td>0.4285</td> </tr> <tr> <td>Method || Word2vec*</td> <td>0.4301</td> <td>0.4233</td> <td>0.4136</td> </tr> <tr> <td>Method || Multi-kernel*</td> <td>0.6588</td> <td>0.6972</td> <td>0.6752</td> </tr> <tr> <td>Method || LambdaMART*</td> <td>0.772</td> <td>0.7499</td> <td>0.7608</td> </tr> <tr> <td>Method || CNN*</td> <td>0.6472</td> <td>0.5493</td> <td>0.5915</td> </tr> <tr> <td>Method || ConvMS-Memnet*</td> <td>0.7076</td> <td>0.6838</td> <td>0.6955</td> </tr> <tr> <td>Method || CANN</td> <td>0.7721</td> <td>0.6891</td> <td>0.7266</td> </tr> <tr> <td>Method || HCS</td> <td>0.7388</td> <td>0.7154</td> <td>0.7269</td> </tr> <tr> <td>Method || MANN</td> <td>0.7843</td> <td>0.7587</td> <td>0.7706</td> </tr> <tr> <td>Method || RHNN</td> <td>0.8112</td> <td>0.7725</td> <td>0.7914</td> </tr> </tbody></table>
Table 2
table_2
D19-1563
6
emnlp2019
5.1 Main Results . The experimental results on both datasets are shown in Table 2 and Table 3, respectively. RB yields high precision but with low recall. CB has an opposite scenario from RB. A possible reason is that these linguistic-based methods depend on some cue words to identify the emotion cause, different rules or common sense may contain different cue words.
[2, 1, 1, 1, 2]
['5.1 Main Results .', 'The experimental results on both datasets are shown in Table 2 and Table 3, respectively.', 'RB yields high precision but with low recall.', 'CB has an opposite scenario from RB.', 'A possible reason is that these linguistic-based methods depend on some cue words to identify the emotion cause, different rules or common sense may contain different cue words.']
[None, None, ['RB*', 'P', 'R'], ['CB*', 'RB*'], None]
1
D19-1565table_6
Fragment-level experiments (FLC task). Shown are two evaluations: (i) Spans checks only whether the model has identified the fragment spans correctly, while (ii) Full task is evaluation wrt the actual task of identifying the spans and also assigning the correct propaganda technique for each span.
3
[['Model', 'Metrics', 'BERT'], ['Model', 'Metrics', 'Joint'], ['Model', 'Metrics', 'Granu'], ['Model', 'Multi-Granularity', 'ReLU'], ['Model', 'Multi-Granularity', 'Sigmoid']]
2
[['Spans', 'P'], ['Spans', 'R'], ['Spans', 'F1'], ['Full Task', 'P'], ['Full Task', 'R'], ['Full Task', 'F1']]
[['39.57', '36.42', '37.9', '21.48', '21.39', '21.39'], ['39.26', '35.48', '37.25', '20.11', '19.74', '19.92'], ['43.08', '33.98', '37.93', '23.85', '20.14', '21.8'], ['43.29', '34.74', '38.28', '23.98', '20.33', '21.82'], ['44.12', '35.01', '38.98', '24.42', '21.05', '22.58']]
column
['P', 'R', 'F1', 'P', 'R', 'F1']
['Multi-Granularity']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Spans || P</th> <th>Spans || R</th> <th>Spans || F1</th> <th>Full Task || P</th> <th>Full Task || R</th> <th>Full Task || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Metrics || BERT</td> <td>39.57</td> <td>36.42</td> <td>37.9</td> <td>21.48</td> <td>21.39</td> <td>21.39</td> </tr> <tr> <td>Model || Metrics || Joint</td> <td>39.26</td> <td>35.48</td> <td>37.25</td> <td>20.11</td> <td>19.74</td> <td>19.92</td> </tr> <tr> <td>Model || Metrics || Granu</td> <td>43.08</td> <td>33.98</td> <td>37.93</td> <td>23.85</td> <td>20.14</td> <td>21.8</td> </tr> <tr> <td>Model || Multi-Granularity || ReLU</td> <td>43.29</td> <td>34.74</td> <td>38.28</td> <td>23.98</td> <td>20.33</td> <td>21.82</td> </tr> <tr> <td>Model || Multi-Granularity || Sigmoid</td> <td>44.12</td> <td>35.01</td> <td>38.98</td> <td>24.42</td> <td>21.05</td> <td>22.58</td> </tr> </tbody></table>
Table 6
table_6
D19-1565
8
emnlp2019
Table 6 shows that joint learning (BERT-Joint) hurts the performance compared to single-task BERT. However, using additional information from the sentence-level for the token-level classification (BERT-Granularity) yields small improvements. The multi-granularity models outperform all baselines thanks to their higher precision. This shows the effect of the model excluding sentences that it determined to be non-propagandistic from being considered for token-level classification.
[1, 1, 1, 2]
['Table 6 shows that joint learning (BERT-Joint) hurts the performance compared to single-task BERT.', 'However, using additional information from the sentence-level for the token-level classification (BERT-Granularity) yields small improvements.', 'The multi-granularity models outperform all baselines thanks to their higher precision.', 'This shows the effect of the model excluding sentences that it determined to be non-propagandistic from being considered for token-level classification.']
[['Joint', 'BERT'], ['Granu', 'BERT'], ['Multi-Granularity', 'P'], None]
1
D19-1569table_2
Performance comparison on different models on the benchmark datasets. The best performance are bold-typed.
2
[['Model', 'SOTA'], ['Model', 'CNN+Position'], ['Model', 'LSTM+Position'], ['Model', 'CNN+ATT'], ['Model', 'Tnet (Li et al., 2018a)'], ['Model', 'PRET+MULT (He et al., 2018b)'], ['Model', 'SA-LSTM-P (Wang and Lu, 2018)'], ['Model', 'LSTM+SynATT+TarRep (He et al., 2018a)'], ['Model', 'MGAN (Fan et al., 2018b)'], ['Model', 'MGAN (Li et al., 2018b)'], ['Model', 'HSCN (Li et al., 2018b)'], ['Model', 'ASP-BiLSTM'], ['Model', 'ASP-GCN'], ['Model', 'CDT']]
2
[['Rest14', 'ACC'], ['Rest14', 'F1'], ['Laptop', 'ACC'], ['Laptop', 'F1'], ['Twitter', 'ACC'], ['Twitter', 'F1'], ['Rest16', 'ACC'], ['Rest16', 'F1']]
[['81.6', '71.91', '76.54', '71.75', '74.97', '73.6', '85.58', '69.76'], ['79.37', '68.64', '72.73', '68.28', '72.69', '70.92', '84.63', '64.75'], ['77.59', '67.05', '70.06', '64.46', '71.39', '69.45', '83.47', '62.69'], ['79.46', '69.44', '70.53', '64.27', '73.12', '71.01', '84.28', '60.86'], ['80.79', '70.84', '76.54', '71.75', '74.97', '73.6', '-', '-'], ['79.11', '69.73', '71.15', '67.46', '-', '-', '85.58', '69.76'], ['81.6', '-', '75.1', '-', '69', '-', '-', '-'], ['80.63', '71.32', '71.94', '69.23', '-', '-', '84.61', '67.45'], ['81.25', '71.94', '75.39', '72.47', '72.54', '70.81', '-', '-'], ['81.49', '71.48', '76.21', '71.42', '74.62', '73.53', '-', '-'], ['77.8', '70.2', '76.1', '72.5', '69.6', '66.1', '-', '-'], ['80.95', '72.38', '74.22', '69.35', '73.66', '72.32', '85.12', '66.92'], ['81.3', '73.18', '74.53', '69.78', '70.91', '69.07', '81.85', '61.2'], ['82.3', '74.02', '77.19', '72.99', '74.66', '73.66', '85.58', '69.93']]
column
['ACC', 'F1', 'ACC', 'F1', 'ACC', 'F1', 'ACC', 'F1']
['ASP-GCN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || ACC</th> <th>Rest14 || F1</th> <th>Laptop || ACC</th> <th>Laptop || F1</th> <th>Twitter || ACC</th> <th>Twitter || F1</th> <th>Rest16 || ACC</th> <th>Rest16 || F1</th> </tr> </thead> <tbody> <tr> <td>Model || SOTA</td> <td>81.6</td> <td>71.91</td> <td>76.54</td> <td>71.75</td> <td>74.97</td> <td>73.6</td> <td>85.58</td> <td>69.76</td> </tr> <tr> <td>Model || CNN+Position</td> <td>79.37</td> <td>68.64</td> <td>72.73</td> <td>68.28</td> <td>72.69</td> <td>70.92</td> <td>84.63</td> <td>64.75</td> </tr> <tr> <td>Model || LSTM+Position</td> <td>77.59</td> <td>67.05</td> <td>70.06</td> <td>64.46</td> <td>71.39</td> <td>69.45</td> <td>83.47</td> <td>62.69</td> </tr> <tr> <td>Model || CNN+ATT</td> <td>79.46</td> <td>69.44</td> <td>70.53</td> <td>64.27</td> <td>73.12</td> <td>71.01</td> <td>84.28</td> <td>60.86</td> </tr> <tr> <td>Model || Tnet (Li et al., 2018a)</td> <td>80.79</td> <td>70.84</td> <td>76.54</td> <td>71.75</td> <td>74.97</td> <td>73.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || PRET+MULT (He et al., 2018b)</td> <td>79.11</td> <td>69.73</td> <td>71.15</td> <td>67.46</td> <td>-</td> <td>-</td> <td>85.58</td> <td>69.76</td> </tr> <tr> <td>Model || SA-LSTM-P (Wang and Lu, 2018)</td> <td>81.6</td> <td>-</td> <td>75.1</td> <td>-</td> <td>69</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LSTM+SynATT+TarRep (He et al., 2018a)</td> <td>80.63</td> <td>71.32</td> <td>71.94</td> <td>69.23</td> <td>-</td> <td>-</td> <td>84.61</td> <td>67.45</td> </tr> <tr> <td>Model || MGAN (Fan et al., 2018b)</td> <td>81.25</td> <td>71.94</td> <td>75.39</td> <td>72.47</td> <td>72.54</td> <td>70.81</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || MGAN (Li et al., 2018b)</td> <td>81.49</td> <td>71.48</td> <td>76.21</td> <td>71.42</td> <td>74.62</td> <td>73.53</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || HSCN (Li et al., 2018b)</td> <td>77.8</td> <td>70.2</td> 
<td>76.1</td> <td>72.5</td> <td>69.6</td> <td>66.1</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || ASP-BiLSTM</td> <td>80.95</td> <td>72.38</td> <td>74.22</td> <td>69.35</td> <td>73.66</td> <td>72.32</td> <td>85.12</td> <td>66.92</td> </tr> <tr> <td>Model || ASP-GCN</td> <td>81.3</td> <td>73.18</td> <td>74.53</td> <td>69.78</td> <td>70.91</td> <td>69.07</td> <td>81.85</td> <td>61.2</td> </tr> <tr> <td>Model || CDT</td> <td>82.3</td> <td>74.02</td> <td>77.19</td> <td>72.99</td> <td>74.66</td> <td>73.66</td> <td>85.58</td> <td>69.93</td> </tr> </tbody></table>
Table 2
table_2
D19-1569
6
emnlp2019
From Table 2 and Figure 4, it is clear that GCN complements the BiLSTM to improve model performance. This means that the BiLSTM can identify opinion words within the context with respect to a specific aspect. However, in some complicated contexts, it might perform poorly. But the GCN can build upon BiLSTM to attend to the correct opinion words by leveraging the dependencies among words.
[1, 1, 1, 2]
['From Table 2 and Figure 4, it is clear that GCN complements the BiLSTM to improve model performance.', 'This means that the BiLSTM can identify opinion words within the context with respect to a specific aspect.', 'However, in some complicated contexts, it might perform poorly.', 'But the GCN can build upon BiLSTM to attend to the correct opinion words by leveraging the dependencies among words.']
[['ASP-GCN'], ['ASP-GCN'], ['ASP-GCN'], ['ASP-GCN']]
1
D19-1570table_1
Main BLEU results (CTC=0.62).
2
[['Method', 'Baseline'], ['Method', 'RAML'], ['Method', 'SO'], ['Method', 'ST'], ['Method', 'TA'], ['Method', 'BT']]
1
[['Fr®En'], ['En®Fr'], ['Zh®En'], ['En®De']]
[['38.38 (5)', '38.88 (6)', '17.25 (6)', '26.19 (4)'], ['+0.22 (3)', '+0.67 (3)', '+0.23 (4)', '-0.16 (6)'], ['+0.01 (4)', '+0.62 (4)', '+0.02 (5)', '-0.15 (5)'], ['-0.13 (6)', '+0.46 (5)', '+1.51 (2)', '+0.83 (2)'], ['+0.62 (2)', '+1.13 (1)', '+2.41 (1)', '+1.01 (1)'], ['+0.82 (1)', '+0.99 (2)', '+1.06 (3)', '+0.39 (3)']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['RAML', 'SO', 'ST', 'TA', 'BT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fr®En</th> <th>En®Fr</th> <th>Zh®En</th> <th>En®De</th> </tr> </thead> <tbody> <tr> <td>Method || Baseline</td> <td>38.38 (5)</td> <td>38.88 (6)</td> <td>17.25 (6)</td> <td>26.19 (4)</td> </tr> <tr> <td>Method || RAML</td> <td>+0.22 (3)</td> <td>+0.67 (3)</td> <td>+0.23 (4)</td> <td>-0.16 (6)</td> </tr> <tr> <td>Method || SO</td> <td>+0.01 (4)</td> <td>+0.62 (4)</td> <td>+0.02 (5)</td> <td>-0.15 (5)</td> </tr> <tr> <td>Method || ST</td> <td>-0.13 (6)</td> <td>+0.46 (5)</td> <td>+1.51 (2)</td> <td>+0.83 (2)</td> </tr> <tr> <td>Method || TA</td> <td>+0.62 (2)</td> <td>+1.13 (1)</td> <td>+2.41 (1)</td> <td>+1.01 (1)</td> </tr> <tr> <td>Method || BT</td> <td>+0.82 (1)</td> <td>+0.99 (2)</td> <td>+1.06 (3)</td> <td>+0.39 (3)</td> </tr> </tbody></table>
Table 1
table_1
D19-1570
2
emnlp2019
CTC Table 1 shows the main BLEU results of different methods on the test set. However, we cannot identify the best DA method because their rankings across the four translation tasks vary a bit. To measure the degree of consistency, we use a correlation measure called Kendall's coefficient of concordance (Kendall and Smith, 1939; Mazurek, 2011) to evaluate the correlation of the rankings produced on the four translation tasks (appx.C). The value shows strong consistency (correlation) of different rankings when it is close to 1. We call the correlation value Cross-Task Consistency measure or CTC. The CTC for the BLEU measure is 0.62, which is of weak consistency. This phenomenon might be a result of the intrinsic nature of using a single specific test as a substitute for the whole data population for evaluation. In the next section, we introduce two measures that are more consistent (with close-to-1 CTC value). They to some extent reflect the model generalization and are easy-to-compute as well.
[1, 1, 2, 2, 2, 1, 2, 2, 2]
['CTC Table 1 shows the main BLEU results of different methods on the test set.', 'However, we cannot identify the best DA method because their rankings across the four translation tasks vary a bit.', "To measure the degree of consistency, we use a correlation measure called Kendall's coefficient of concordance (Kendall and Smith, 1939; Mazurek, 2011) to evaluate the correlation of the rankings produced on the four translation tasks (appx.C).", 'The value shows strong consistency (correlation) of different rankings when it is close to 1.', 'We call the correlation value Cross-Task Consistency measure or CTC.', 'The CTC for the BLEU measure is 0.62, which is of weak consistency.', 'This phenomenon might be a result of the intrinsic nature of using a single specific test as a substitute for the whole data population for evaluation.', 'In the next section, we introduce two measures that are more consistent (with close-to-1 CTC value).', 'They to some extent reflect the model generalization and are easy-to-compute as well.']
[None, None, None, None, None, None, None, None, None]
1
D19-1571table_1
Online decoding accuracy for a direct model (DIR), ensembling two direct models (DIR ENS) and the channel approach (CH+DIR+LM). We ablate the impact of using per word scores. Results are on WMT De-En. Table 4 in the appendix shows standard deviations.
1
[['DIR'], ['DIR ENS'], ['DIR+LM'], ['CH+DIR+LM'], [' - per word scores']]
1
[['news2016'], ['news2017']]
[['39', '34.3'], ['40', '35.3'], ['39.8', '35.2'], ['41', '36.2'], ['40', '35.1']]
column
['accuracy', 'accuracy']
['DIR+LM', 'CH+DIR+LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>news2016</th> <th>news2017</th> </tr> </thead> <tbody> <tr> <td>DIR</td> <td>39</td> <td>34.3</td> </tr> <tr> <td>DIR ENS</td> <td>40</td> <td>35.3</td> </tr> <tr> <td>DIR+LM</td> <td>39.8</td> <td>35.2</td> </tr> <tr> <td>CH+DIR+LM</td> <td>41</td> <td>36.2</td> </tr> <tr> <td>- per word scores</td> <td>40</td> <td>35.1</td> </tr> </tbody></table>
Table 1
table_1
D19-1571
4
emnlp2019
Next, we evaluate online decoding with a noisy channel setup compared to just a direct model (DIR) as well as an ensemble of two direct models (DIR ENS). Table 1 shows that adding a language model to DIR (DIR+LM) gives a good improvement (Gulcehre et al., 2015) over a single direct model but ensembling two direct models is slightly more effective (DIR ENS). The noisy channel approach (CH+DIR+LM) improves by 1.9 BLEU over DIR on news2017 and by 0.9 BLEU over the ensemble. Without per word scores, accuracy drops because the direct model and the channel model are not balanced and their weight shifts throughout decoding. Our simple approach outperforms strong online ensembles which illustrates the advantage over incremental architectures (Yu et al., 2017) that do not match vanilla seq2seq models by themselves.
[1, 1, 1, 1, 1]
['Next, we evaluate online decoding with a noisy channel setup compared to just a direct model (DIR) as well as an ensemble of two direct models (DIR ENS).', 'Table 1 shows that adding a language model to DIR (DIR+LM) gives a good improvement (Gulcehre et al., 2015) over a single direct model but ensembling two direct models is slightly more effective (DIR ENS).', 'The noisy channel approach (CH+DIR+LM) improves by 1.9 BLEU over DIR on news2017 and by 0.9 BLEU over the ensemble.', 'Without per word scores, accuracy drops because the direct model and the channel model are not balanced and their weight shifts throughout decoding.', 'Our simple approach outperforms strong online ensembles which illustrates the advantage over incremental architectures (Yu et al., 2017) that do not match vanilla seq2seq models by themselves.']
[['DIR', 'DIR ENS'], ['DIR+LM', 'DIR ENS'], ['CH+DIR+LM'], [' - per word scores', 'DIR', 'CH+DIR+LM'], ['DIR ENS', 'CH+DIR+LM']]
1
D19-1576table_3
Comparison of the recent state-of-the-art approaches and G/G+I. Avg: Average DDA over 15 languages.
2
[['CODE', 'ET'], ['CODE', 'FI'], ['CODE', 'NL'], ['CODE', 'EN'], ['CODE', 'DE'], ['CODE', 'NO'], ['CODE', 'GRC'], ['CODE', 'HI'], ['CODE', 'JA'], ['CODE', 'FR'], ['CODE', 'IT'], ['CODE', 'LA'], ['CODE', 'BG'], ['CODE', 'SL'], ['CODE', 'EU'], ['Metrics', 'Avg']]
1
[['Convex MST'], ['LC-DMV'], ['D-J'], ['G'], ['G+I']]
[['49.4', '31.8', '44', '56', '56.4'], ['44.7', '26.9', '43.5', '50.7', '49.3'], ['45.3', '34.1', '43.5', '50.4', '50.6'], ['54', '56', '60.1', '51.7', '52.7'], ['51.4', '50.5', '55.7', '59.6', '61.4'], ['55.3', '45.5', '60.8', '61', '61.3'], ['43.4', '33.1', '44.9', '46.8', '46.2'], ['56.8', '54.2', '60', '47.4', '46.8'], ['44.8', '43.8', '45.8', '43.4', '44.2'], ['62', '48.6', '57', '58.4', '60.1'], ['69.1', '71.1', '70.3', '64.4', '65.9'], ['38.8', '38.6', '42.2', '45.1', '45'], ['61.6', '62.4', '73.8', '71.3', '71.3'], ['54', '49.5', '69.6', '68.3', '68.6'], ['50', '45.4', '55.7', '54.2', '53.6'], ['52', '46.1', '55.1', '55.3', '55.6']]
column
['DDA', 'DDA', 'DDA', 'DDA', 'DDA']
['G+I']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Convex MST</th> <th>LC-DMV</th> <th>D-J</th> <th>G</th> <th>G+I</th> </tr> </thead> <tbody> <tr> <td>CODE || ET</td> <td>49.4</td> <td>31.8</td> <td>44</td> <td>56</td> <td>56.4</td> </tr> <tr> <td>CODE || FI</td> <td>44.7</td> <td>26.9</td> <td>43.5</td> <td>50.7</td> <td>49.3</td> </tr> <tr> <td>CODE || NL</td> <td>45.3</td> <td>34.1</td> <td>43.5</td> <td>50.4</td> <td>50.6</td> </tr> <tr> <td>CODE || EN</td> <td>54</td> <td>56</td> <td>60.1</td> <td>51.7</td> <td>52.7</td> </tr> <tr> <td>CODE || DE</td> <td>51.4</td> <td>50.5</td> <td>55.7</td> <td>59.6</td> <td>61.4</td> </tr> <tr> <td>CODE || NO</td> <td>55.3</td> <td>45.5</td> <td>60.8</td> <td>61</td> <td>61.3</td> </tr> <tr> <td>CODE || GRC</td> <td>43.4</td> <td>33.1</td> <td>44.9</td> <td>46.8</td> <td>46.2</td> </tr> <tr> <td>CODE || HI</td> <td>56.8</td> <td>54.2</td> <td>60</td> <td>47.4</td> <td>46.8</td> </tr> <tr> <td>CODE || JA</td> <td>44.8</td> <td>43.8</td> <td>45.8</td> <td>43.4</td> <td>44.2</td> </tr> <tr> <td>CODE || FR</td> <td>62</td> <td>48.6</td> <td>57</td> <td>58.4</td> <td>60.1</td> </tr> <tr> <td>CODE || IT</td> <td>69.1</td> <td>71.1</td> <td>70.3</td> <td>64.4</td> <td>65.9</td> </tr> <tr> <td>CODE || LA</td> <td>38.8</td> <td>38.6</td> <td>42.2</td> <td>45.1</td> <td>45</td> </tr> <tr> <td>CODE || BG</td> <td>61.6</td> <td>62.4</td> <td>73.8</td> <td>71.3</td> <td>71.3</td> </tr> <tr> <td>CODE || SL</td> <td>54</td> <td>49.5</td> <td>69.6</td> <td>68.3</td> <td>68.6</td> </tr> <tr> <td>CODE || EU</td> <td>50</td> <td>45.4</td> <td>55.7</td> <td>54.2</td> <td>53.6</td> </tr> <tr> <td>Metrics || Avg</td> <td>52</td> <td>46.1</td> <td>55.1</td> <td>55.3</td> <td>55.6</td> </tr> </tbody></table>
Table 3
table_3
D19-1576
4
emnlp2019
To measure the statistical significance of the advantage of our method, we performed the nonparametric Friedman's test to support/reject the claim (null hypothesis): there is no difference between the G+I model and the NDMV model in a multilingual setting. Based on the above sample data, the P-value 7.8911 x 10^-4 would result in rejection of the claim at the 0.05 significance level, thus showing the significance in our performance gain. In Table 3 we compare our method with recent state-of-the-art approaches on the UD Treebank dataset: Convex-MST (Grave and Elhadad, 2015), LC-DMV (Noji et al., 2016) and D-J (Jiang et al., 2017). For the three approaches we use the results reported by Jiang et al. (2017). Our G+I model performs better than Convex-MST and LC-DMV on average, even though additional priors and delicate biases are integrated into the two methods (e.g., the universal linguistic prior for Convex-MST and the limited center-embedding for LC-DMV). Our method also slightly outperforms D-J on average, even though D-J combines Convex-MST and LC-DMV and therefore utilizes even more linguistic prior knowledge.
[0, 0, 1, 2, 1, 1]
["To measure the statistical significance of the advantage of our method, we performed the nonparametric Friedman's test to support/reject the claim (null hypothesis): there is no difference between the G+I model and the NDMV model in a multilingual setting.", 'Based on the above sample data, the P-value 7.8911 x 10-4 would result in rejection of the claim at the 0.05 significance level, thus showing the significance in our performance gain.', 'In Table 3 we compare our method with recent state-of-the-art approaches on the UD Treebank dataset: Convex-MST (Grave and Elhadad, 2015), LC-DMV (Noji et al., 2016) and D-J (Jiang et al., 2017).', 'For the three approaches we use the results reported by Jiang et al. (2017).', 'Our G+I model performs better than Convex-MST and LC-DMV on average, even though additional priors and delicate biases are integrated into the two methods (e.g, the universal linguistic prior for ConvexMST and the limited center-embedding for LCDMV).', 'Our method also slightly outperforms D-J on average, even though D-J combines ConvexMST and LC-DMV and therefore utilizes even more linguistic prior knowledge.']
[None, None, ['LC-DMV', 'D-J'], ['LC-DMV', 'D-J'], ['G+I', 'LC-DMV'], ['G+I', 'D-J']]
1
D19-1581table_3
Performance of various models on the ACP test set.
4
[['Training dataset', 'AL', 'Encoder', 'BiGRU'], ['Training dataset', 'AL', 'Encoder', 'BERT'], ['Training dataset', 'AL+CA+CO', 'Encoder', 'BiGRU'], ['Training dataset', 'AL+CA+CO', 'Encoder', 'BERT'], ['Training dataset', 'ACP', 'Encoder', 'BiGRU'], ['Training dataset', 'ACP', 'Encoder', 'BERT'], ['Training dataset', 'ACP+AL+CA+CO', 'Encoder', 'BiGRU'], ['Training dataset', 'ACP+AL+CA+CO', 'Encoder', 'BERT'], ['Training dataset', 'Random', 'Encoder', 'Metrics'], ['Training dataset', 'Random+Seed', 'Encoder', 'Metrics']]
1
[['Acc']]
[['0.843'], ['0.863'], ['0.866'], ['0.835'], ['0.919'], ['0.933'], ['0.917'], ['0.913'], ['0.5'], ['0.503']]
column
['Acc']
['Random+Seed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td>Training dataset || AL || Encoder || BiGRU</td> <td>0.843</td> </tr> <tr> <td>Training dataset || AL || Encoder || BERT</td> <td>0.863</td> </tr> <tr> <td>Training dataset || AL+CA+CO || Encoder || BiGRU</td> <td>0.866</td> </tr> <tr> <td>Training dataset || AL+CA+CO || Encoder || BERT</td> <td>0.835</td> </tr> <tr> <td>Training dataset || ACP || Encoder || BiGRU</td> <td>0.919</td> </tr> <tr> <td>Training dataset || ACP || Encoder || BERT</td> <td>0.933</td> </tr> <tr> <td>Training dataset || ACP+AL+CA+CO || Encoder || BiGRU</td> <td>0.917</td> </tr> <tr> <td>Training dataset || ACP+AL+CA+CO || Encoder || BERT</td> <td>0.913</td> </tr> <tr> <td>Training dataset || Random || Encoder || Metrics</td> <td>0.5</td> </tr> <tr> <td>Training dataset || Random+Seed || Encoder || Metrics</td> <td>0.503</td> </tr> </tbody></table>
Table 3
table_3
D19-1581
6
emnlp2019
4.3 Results and Discussion . Table 3 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
[2, 1, 2, 2, 1]
['4.3 Results and Discussion .', 'Table 3 shows accuracy.', 'As the Random baseline suggests, positive and negative labels were distributed evenly.', "The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon.", 'We can see that the seed lexicon itself had practically no impact on prediction.']
[None, ['Acc'], ['Random'], ['Random+Seed'], ['Random+Seed']]
1
D19-1582table_1
Overall performance of different methods on the test set with gold-standard entities. † indicates that the method uses dependency arcs.
2
[['Method', 'Cross Event'], ['Method', 'DMCNN'], ['Method', 'JRNN'], ['Method', 'DEEB-RNN'], ['Method', 'dbRNN'], ['Method', 'GCN-ED'], ['Method', 'JMEE'], ['Method', 'MOGANED']]
1
[['P'], ['R'], ['F1']]
[['68.7', '68.9', '68.8'], ['75.6', '63.6', '69.1'], ['66', '73.9', '69.3'], ['72.3', '75.8', '74'], ['74.1', '69.8', '71.9'], ['77.9', '68.8', '73.1'], ['76.3', '71.3', '73.7'], ['79.5', '72.3', '75.7']]
column
['P', 'R', 'F1']
['MOGANED']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Cross Event</td> <td>68.7</td> <td>68.9</td> <td>68.8</td> </tr> <tr> <td>Method || DMCNN</td> <td>75.6</td> <td>63.6</td> <td>69.1</td> </tr> <tr> <td>Method || JRNN</td> <td>66</td> <td>73.9</td> <td>69.3</td> </tr> <tr> <td>Method || DEEB-RNN</td> <td>72.3</td> <td>75.8</td> <td>74</td> </tr> <tr> <td>Method || dbRNN</td> <td>74.1</td> <td>69.8</td> <td>71.9</td> </tr> <tr> <td>Method || GCN-ED</td> <td>77.9</td> <td>68.8</td> <td>73.1</td> </tr> <tr> <td>Method || JMEE</td> <td>76.3</td> <td>71.3</td> <td>73.7</td> </tr> <tr> <td>Method || MOGANED</td> <td>79.5</td> <td>72.3</td> <td>75.7</td> </tr> </tbody></table>
Table 1
table_1
D19-1582
4
emnlp2019
Table 1 presents the performance comparison between different methods. We can see that MOGANED achieves 1.6% and 1.7% improvement on precision and F1-measure, respectively, compared with the best baselines. MOGANED reaches a lower recall than two sequence based methods, JRNN and DEEB-RNN.
[1, 1, 1]
['Table 1 presents the performance comparison between different methods.', 'We can see that MOGANED achieves 1.6% and 1.7% improvement on precision and F1-measure, respectively, compared with the best baselines.', 'MOGANED reaches a lower recall than two sequence based methods, JRNN and DEEB-RNN.']
[None, ['MOGANED', 'P', 'F1'], ['MOGANED', 'JRNN', 'DEEB-RNN', 'R']]
1
D19-1585table_1
DYGIE++ achieves state-of-the-art results. Test set F1 scores of best model, on all tasks and datasets. We define the following notations for events: Trig: Trigger, Arg: argument, ID: Identification, C: Classification. * indicates the use of a 4-model ensemble for trigger detection. See Appendix E for details. The results of the single model are reported in Table 2 (c). We ran significance tests on a subset of results in Appendix D. All were statistically significant except Arg-C and Arg-ID on ACE05-Event. and event roles are also correct, respectively. Model Variations We perform experiments with the following variants of our model architecture. BERT + LSTM feeds pretrained BERT embeddings to a bi-directional LSTM layer, and the LSTM parameters are trained together with task specific layers. BERT Finetune uses supervised fine-tuning of BERT on the end-task. For each variation, we study the effect of integrating different task-specific message propagation approaches. Comparisons For entity and relation extraction, we compare DYGIE++ against the DYGIE system it extends. DYGIE is a system based on ELMo (Peters et al., 2018) that uses dynamic span graphs to propagate global context. For event extraction, we compare against the method of Zhang et al. (2019), which is also an ELMo-based approach that relies on inverse reinforcement learning to focus the model on more difficult-to-detect events. Implementation Details Our model is implemented using AllenNLP (Gardner et al., 2017). We use BERTBASE for entity and relation extraction tasks and use BERTLARGE for event extraction. For BERT finetuning, we use BertAdam with the learning rates of 1 × 10−3 for the task specific layers, and 5.0 × 10−5 for BERT. We use a longer warmup period for BERT than the warmup period for task specific-layers and perform linear decay of the learning rate following the warmup
4
[['Dataset', 'ACE05', 'Task', 'Entity'], ['Dataset', 'ACE06', 'Task', 'Relation'], ['Dataset', 'ACE05-Event*', 'Task', 'Entity'], ['Dataset', 'ACE05-Event*', 'Task', 'Trig-ID'], ['Dataset', 'ACE05-Event*', 'Task', 'Trig-C'], ['Dataset', 'ACE05-Event*', 'Task', 'Arg-ID'], ['Dataset', 'ACE05-Event*', 'Task', 'Arg-C'], ['Dataset', 'SciERC', 'Task', 'Entity'], ['Dataset', 'SciERC', 'Task', 'Relation'], ['Dataset', 'GENIA', 'Task', 'Entity'], ['Dataset', 'WLPC', 'Task', 'Entity'], ['Dataset', 'WLPC', 'Task', 'Relation']]
1
[['SOTA'], ['Ours'], ['D%']]
[['88.4', '88.6', '1.7'], ['63.2', '63.4', '0.5'], ['87.1', '90.7', '27.9'], ['73.9', '76.5', '9.6'], ['72', '73.6', '5.7'], ['57.2', '55.4', '-4.2'], ['52.4', '52.5', '0.2'], ['65.2', '67.5', '6.6'], ['41.6', '48.4', '11.6'], ['76.2', '77.9', '7.1'], ['79.5', '79.7', '1'], ['64.1', '65.9', '5']]
column
['F1', 'F1', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SOTA</th> <th>Ours</th> <th>D%</th> </tr> </thead> <tbody> <tr> <td>Dataset || ACE05 || Task || Entity</td> <td>88.4</td> <td>88.6</td> <td>1.7</td> </tr> <tr> <td>Dataset || ACE06 || Task || Relation</td> <td>63.2</td> <td>63.4</td> <td>0.5</td> </tr> <tr> <td>Dataset || ACE05-Event* || Task || Entity</td> <td>87.1</td> <td>90.7</td> <td>27.9</td> </tr> <tr> <td>Dataset || ACE05-Event* || Task || Trig-ID</td> <td>73.9</td> <td>76.5</td> <td>9.6</td> </tr> <tr> <td>Dataset || ACE05-Event* || Task || Trig-C</td> <td>72</td> <td>73.6</td> <td>5.7</td> </tr> <tr> <td>Dataset || ACE05-Event* || Task || Arg-ID</td> <td>57.2</td> <td>55.4</td> <td>-4.2</td> </tr> <tr> <td>Dataset || ACE05-Event* || Task || Arg-C</td> <td>52.4</td> <td>52.5</td> <td>0.2</td> </tr> <tr> <td>Dataset || SciERC || Task || Entity</td> <td>65.2</td> <td>67.5</td> <td>6.6</td> </tr> <tr> <td>Dataset || SciERC || Task || Relation</td> <td>41.6</td> <td>48.4</td> <td>11.6</td> </tr> <tr> <td>Dataset || GENIA || Task || Entity</td> <td>76.2</td> <td>77.9</td> <td>7.1</td> </tr> <tr> <td>Dataset || WLPC || Task || Entity</td> <td>79.5</td> <td>79.7</td> <td>1</td> </tr> <tr> <td>Dataset || WLPC || Task || Relation</td> <td>64.1</td> <td>65.9</td> <td>5</td> </tr> </tbody></table>
Table 1
table_1
D19-1585
3
emnlp2019
4 Results and Analyses . State-of-the-art Results . Table 1 shows test set F1 on the entity, relation and event extraction tasks. Our framework establishes a new state-of-the-art on all three high-level tasks, and on all subtasks except event argument identification. Relative error reductions range from 0.2 - 27.9% over previous state of the art models. Benefits of Graph Propagation Table 2 shows that Coreference propagation (CorefProp) improves named entity recognition performance across all three domains. The largest gains are on the computer science research abstracts of SciERC, which make frequent use of long-range coreferences, acronyms and abbreviations. CorefProp also improves relation extraction on SciERC.
[2, 2, 1, 1, 1, 0, 0, 0]
['4 Results and Analyses .', 'State-of-the-art Results .', 'Table 1 shows test set F1 on the entity, relation and event extraction tasks.', 'Our framework establishes a new state-of-the-art on all three high-level tasks, and on all subtasks except event argument identification.', 'Relative error reductions range from 0.2 - 27.9% over previous state of the art models.', 'Benefits of Graph Propagation Table 2 shows that Coreference propagation (CorefProp) improves named entity recognition performance across all three domains.', 'The largest gains are on the computer science research abstracts of SciERC, which make frequent use of long-range coreferences, acronyms and abbreviations.', 'CorefProp also improves relation extraction on SciERC.']
[None, None, None, ['Ours'], ['Ours', 'SOTA', 'D%'], None, None, None]
1
D19-1588table_1
OntoNotes: BERT improves the c2f-coref model on English by 0.9% and 3.9% respectively for base and large variants. The main evaluation is the average F1 of three metrics – MUC, B3, and CEAFφ4 on the test set.
1
[['Martschat and Strube (2015)'], ['(Clark and Manning, 2015)'], ['(Wiseman et al., 2015)'], ['Wiseman et al. (2016)'], ['Clark and Manning (2016)'], ['e2e-coref (Lee et al., 2017)'], ['c2f-coref (Lee et al., 2018)'], ['Fei et al. (2019)'], ['EE (Kantor and Globerson, 2019)'], ['BERT-base + c2f-coref (independent)'], ['BERT-base + c2f-coref (overlap)'], ['BERT-large + c2f-coref (independent)'], ['BERT-large + c2f-coref (overlap)']]
2
[['MUC', 'P'], ['MUC', 'R'], ['MUC', 'F1'], ['B3', 'P'], ['B3', 'R'], ['B3', 'F1'], ['CEAFφ4', 'P'], ['CEAFφ4', 'R'], ['CEAFφ4', 'F1'], ['Metrics', 'Avg. F1']]
[['76.7', '68.1', '72.2', '66.1', '54.2', '59.6', '59.5', '52.3', '55.7', '62.5'], ['76.1', '69.4', '72.6', '65.6', '56', '60.4', '59.4', '53', '56', '63'], ['76.2', '69.3', '72.6', '66.2', '55.8', '60.5', '59.4', '54.9', '57.1', '63.4'], ['77.5', '69.8', '73.4', '66.8', '57', '61.5', '62.1', '53.9', '57.7', '64.2'], ['79.2', '70.4', '74.6', '69.9', '58', '63.4', '63.5', '55.5', '59.2', '65.7'], ['78.4', '73.4', '75.8', '68.6', '61.8', '65', '62.7', '59', '60.8', '67.2'], ['81.4', '79.5', '80.4', '72.2', '69.5', '70.8', '68.2', '67.1', '67.6', '73'], ['85.4', '77.9', '81.4', '77.9', '66.4', '71.7', '70.6', '66.3', '68.4', '73.8'], ['82.6', '84.1', '83.4', '73.3', '76.2', '74.7', '72.4', '71.1', '71.8', '76.6'], ['80.2', '82.4', '81.3', '69.6', '73.8', '71.6', '69', '68.6', '68.8', '73.9'], ['80.4', '82.3', '81.4', '69.6', '73.8', '71.7', '69', '68.5', '68.8', '73.9'], ['84.7', '82.4', '83.5', '76.5', '74', '75.3', '74.1', '69.8', '71.9', '76.9'], ['85.1', '80.5', '82.8', '77.5', '70.9', '74.1', '73.8', '69.3', '71.5', '76.1']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'Avg. F1']
['BERT-large + c2f-coref (overlap)', 'BERT-large + c2f-coref (independent)', 'BERT-base + c2f-coref (overlap)', 'BERT-base + c2f-coref (independent)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC || P</th> <th>MUC || R</th> <th>MUC || F1</th> <th>B3 || P</th> <th>B3 || R</th> <th>B3 || F1</th> <th>CEAFφ4 || P</th> <th>CEAFφ4 || R</th> <th>CEAFφ4 || F1</th> <th>Metrics || Avg. F1</th> </tr> </thead> <tbody> <tr> <td>Martschat and Strube (2015)</td> <td>76.7</td> <td>68.1</td> <td>72.2</td> <td>66.1</td> <td>54.2</td> <td>59.6</td> <td>59.5</td> <td>52.3</td> <td>55.7</td> <td>62.5</td> </tr> <tr> <td>(Clark and Manning, 2015)</td> <td>76.1</td> <td>69.4</td> <td>72.6</td> <td>65.6</td> <td>56</td> <td>60.4</td> <td>59.4</td> <td>53</td> <td>56</td> <td>63</td> </tr> <tr> <td>(Wiseman et al., 2015)</td> <td>76.2</td> <td>69.3</td> <td>72.6</td> <td>66.2</td> <td>55.8</td> <td>60.5</td> <td>59.4</td> <td>54.9</td> <td>57.1</td> <td>63.4</td> </tr> <tr> <td>Wiseman et al. (2016)</td> <td>77.5</td> <td>69.8</td> <td>73.4</td> <td>66.8</td> <td>57</td> <td>61.5</td> <td>62.1</td> <td>53.9</td> <td>57.7</td> <td>64.2</td> </tr> <tr> <td>Clark and Manning (2016)</td> <td>79.2</td> <td>70.4</td> <td>74.6</td> <td>69.9</td> <td>58</td> <td>63.4</td> <td>63.5</td> <td>55.5</td> <td>59.2</td> <td>65.7</td> </tr> <tr> <td>e2e-coref (Lee et al., 2017)</td> <td>78.4</td> <td>73.4</td> <td>75.8</td> <td>68.6</td> <td>61.8</td> <td>65</td> <td>62.7</td> <td>59</td> <td>60.8</td> <td>67.2</td> </tr> <tr> <td>c2f-coref (Lee et al., 2018)</td> <td>81.4</td> <td>79.5</td> <td>80.4</td> <td>72.2</td> <td>69.5</td> <td>70.8</td> <td>68.2</td> <td>67.1</td> <td>67.6</td> <td>73</td> </tr> <tr> <td>Fei et al. (2019)</td> <td>85.4</td> <td>77.9</td> <td>81.4</td> <td>77.9</td> <td>66.4</td> <td>71.7</td> <td>70.6</td> <td>66.3</td> <td>68.4</td> <td>73.8</td> </tr> <tr> <td>EE (Kantor and Globerson, 2019)</td> <td>82.6</td> <td>84.1</td> <td>83.4</td> <td>73.3</td> <td>76.2</td> <td>74.7</td> <td>72.4</td> <td>71.1</td> <td>71.8</td> <td>76.6</td> </tr> <tr> <td>BERT-base + c2f-coref (independent)</td> <td>80.2</td> <td>82.4</td> <td>81.3</td> <td>69.6</td> <td>73.8</td> <td>71.6</td> <td>69</td> <td>68.6</td> <td>68.8</td> <td>73.9</td> </tr> <tr> <td>BERT-base + c2f-coref (overlap)</td> <td>80.4</td> <td>82.3</td> <td>81.4</td> <td>69.6</td> <td>73.8</td> <td>71.7</td> <td>69</td> <td>68.5</td> <td>68.8</td> <td>73.9</td> </tr> <tr> <td>BERT-large + c2f-coref (independent)</td> <td>84.7</td> <td>82.4</td> <td>83.5</td> <td>76.5</td> <td>74</td> <td>75.3</td> <td>74.1</td> <td>69.8</td> <td>71.9</td> <td>76.9</td> </tr> <tr> <td>BERT-large + c2f-coref (overlap)</td> <td>85.1</td> <td>80.5</td> <td>82.8</td> <td>77.5</td> <td>70.9</td> <td>74.1</td> <td>73.8</td> <td>69.3</td> <td>71.5</td> <td>76.1</td> </tr> </tbody></table>
Table 1
table_1
D19-1588
3
emnlp2019
Table 1 shows that BERT-base offers an improvement of 0.9% over the ELMo-based c2fcoref model. Given how gains on coreference resolution have been hard to come by as evidenced by the table, this is still a considerable improvement. However, the magnitude of gains is relatively modest considering BERT's arguably better architecture and many more trainable parameters. This is in sharp contrast to how even the base variant of BERT has very substantially improved the state of the art in other tasks. BERT-large, however, improves c2f-coref by the much larger margin of 3.9%. We also observe that the overlap variant offers no improvement over independent.
[1, 1, 2, 2, 1, 1]
['Table 1 shows that BERT-base offers an improvement of 0.9% over the ELMo-based c2fcoref model.', 'Given how gains on coreference resolution have been hard to come by as evidenced by the table, this is still a considerable improvement.', "However, the magnitude of gains is relatively modest considering BERT's arguably better architecture and many more trainable parameters.", 'This is in sharp contrast to how even the base variant of BERT has very substantially improved the state of the art in other tasks.', 'BERT-large, however, improves c2f-coref by the much larger margin of 3.9%.', 'We also observe that the overlap variant offers no improvement over independent.']
[['BERT-base + c2f-coref (independent)'], None, None, None, ['BERT-large + c2f-coref (independent)'], ['BERT-large + c2f-coref (overlap)']]
1
D19-1589table_3
Comparison of different delta functions. Table 3 shows performance comparison among different delta operations: SUBTRACT, ADD, and
1
[['SUBTRACT'], ['ADD'], ['MLP']]
1
[['M'], ['VE']]
[['3.35', '67.2'], ['3.45', '65.35'], ['3.32', '62.97']]
column
['macro-averaged', 'macro-averaged']
['SUBTRACT', 'ADD', 'MLP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>M</th> <th>VE</th> </tr> </thead> <tbody> <tr> <td>SUBTRACT</td> <td>3.35</td> <td>67.2</td> </tr> <tr> <td>ADD</td> <td>3.45</td> <td>65.35</td> </tr> <tr> <td>MLP</td> <td>3.32</td> <td>62.97</td> </tr> </tbody></table>
Table 3
table_3
D19-1589
4
emnlp2019
Table 3 shows performance comparison among different delta operations: SUBTRACT, ADD, and MLP which is a multi-layer perceptron network. All scores are macro-averaged across datasets. While ADD shows good performance on METEOR, SUBTRACT does on the soft metric (i.e., VecExt), indicating that subtraction can help the model capture the better semantics than the other functions.
[1, 2, 1]
['Table 3 shows performance comparison among different delta operations: SUBTRACT, ADD, and MLP which is a multi-layer perceptron network.', 'All scores are macro-averaged across datasets.', 'While ADD shows good performance on METEOR, SUBTRACT does on the soft metric (i.e., VecExt), indicating that subtraction can help the model capture the better semantics than the other functions.']
[['SUBTRACT', 'ADD', 'MLP'], None, ['ADD', 'M', 'SUBTRACT', 'VE']]
1
D19-1590table_5
Results of ProposedRU and its variants
2
[['Model', 'ProposedRUwiki'], ['Model', 'ProposedRUweb'], ['Model', 'ProposedRU'], ['Model', 'ProposedRUweb+web'], ['Model', 'ProposedRUweb+pair'], ['Model', 'ProposedRU+BK']]
1
[['R'], ['P'], ['F'], ['Avg.P']]
[['57.4', '49.6', '53.2?', '53.3'], ['59', '50.9', '54.6?', '54.5'], ['64', '52', '57.4', '57.4'], ['62.5', '49', '54.9?', '54.8'], ['64.3', '48.2', '55.1?', '55.3'], ['67.4', '52.3', '58.9', '59.9']]
column
['R', 'P', 'F', 'Avg.P']
['ProposedRU+BK']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R</th> <th>P</th> <th>F</th> <th>Avg.P</th> </tr> </thead> <tbody> <tr> <td>Model || ProposedRUwiki</td> <td>57.4</td> <td>49.6</td> <td>53.2?</td> <td>53.3</td> </tr> <tr> <td>Model || ProposedRUweb</td> <td>59</td> <td>50.9</td> <td>54.6?</td> <td>54.5</td> </tr> <tr> <td>Model || ProposedRU</td> <td>64</td> <td>52</td> <td>57.4</td> <td>57.4</td> </tr> <tr> <td>Model || ProposedRUweb+web</td> <td>62.5</td> <td>49</td> <td>54.9?</td> <td>54.8</td> </tr> <tr> <td>Model || ProposedRUweb+pair</td> <td>64.3</td> <td>48.2</td> <td>55.1?</td> <td>55.3</td> </tr> <tr> <td>Model || ProposedRU+BK</td> <td>67.4</td> <td>52.3</td> <td>58.9</td> <td>59.9</td> </tr> </tbody></table>
Table 5
table_5
D19-1590
5
emnlp2019
in which text fragments embodying background knowledge are concatenated to the input sentence as explained in Section 2. Table 5 shows that ProposedRU+BK improved the average precision over ProposedRU by about 2.5% (i.e., ProposedRU+BK significantly outperformed the state-of-the-art method, MCNN, by about 5%), suggesting that background knowledge in the form of text fragments is still useful, at least in our current experimental setting. However, the usefulness might be lost when a model is appropriately pretrained with a larger amount of texts that covers even more background knowledge.
[2, 1, 2]
['in which text fragments embodying background knowledge are concatenated to the input sentence as explained in Section 2.', 'Table 5 shows that ProposedRU+BK improved the average precision over ProposedRU by about 2.5% (i.e., ProposedRU+BK significantly outperformed the state-of-the-art method, MCNN, by about 5%), suggesting that background knowledge in the form of text fragments is still useful, at least in our current experimental setting.', 'However, the usefulness might be lost when a model is appropriately pretrained with a larger amount of texts that covers even more background knowledge.']
[None, ['ProposedRU+BK', 'ProposedRU'], None]
1
D19-1596table_1
Performance comparison of baseline VQA trained on VQA2.0, baseline VQA finetuned on ConVQA, and VQA trained using our CTM. L-ConVQA is the human-cleaned Logical Consistent QA dataset, CS-ConVQA is the human annotated Common-sense Consistency Dataset and VG is Visual Genome. CTM-based training produces the best results in terms of overall accuracy and consistency. DATA denotes the data used to fine-tune VQA or seed the CTM question generator.
3
[['a) VQA', 'DATA', 'VQA2.0'], ['b) FineTune', 'DATA', 'CS-ConVQA'], ['c) FineTune', 'DATA', 'L/CS-ConVQA'], ['d) +CTM', 'DATA', 'L/CS-ConVQA'], ['e) FineTune', 'DATA', 'L/CS-ConVQA,VG'], ['f) +CTMvg', 'DATA', 'L/CS-ConVQA,VG']]
2
[['L-ConVQA', 'Perf Con'], ['L-ConVQA', 'Avg Con'], ['L-ConVQA', 'Top1'], ['CS-ConVQA', 'Perf Con'], ['CS-ConVQA', 'Avg Con'], ['CS-ConVQA', 'Top1'], ['CS-ConVQA', 'Yes/No'], ['CS-ConVQA', 'Num']]
[['36.25', '71.36', '70.34', '26.13', '59.61', '60.03', '65.49', '31.39'], ['34.54', '70.39', '69.48', '26.39', '59.65', '60.07', '65.8', '35.92'], ['54.68', '83.42', '83.16', '24.7', '59.3', '59.6', '65.14', '33.33'], ['54.6', '83.23', '82.79', '25.94', '60.39', '60.78', '66.63', '36.89'], ['36.4', '71.6', '70.94', '25.22', '59.19', '59.56', '65.3', '31.39'], ['51.41', '81.66', '81.37', '27.49', '59.75', '60.15', '66.41', '34.95']]
column
['Perf Con', 'Avg Con', 'Top1', 'Perf Con', 'Avg Con', 'Top1', 'Yes/No', 'Num']
['L-ConVQA', 'CS-ConVQA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>L-ConVQA || Perf Con</th> <th>L-ConVQA || Avg Con</th> <th>L-ConVQA || Top1</th> <th>CS-ConVQA || Perf Con</th> <th>CS-ConVQA || Avg Con</th> <th>CS-ConVQA || Top1</th> <th>CS-ConVQA || Yes/No</th> <th>CS-ConVQA || Num</th> </tr> </thead> <tbody> <tr> <td>a) VQA || DATA || VQA2.0</td> <td>36.25</td> <td>71.36</td> <td>70.34</td> <td>26.13</td> <td>59.61</td> <td>60.03</td> <td>65.49</td> <td>31.39</td> </tr> <tr> <td>b) FineTune || DATA || CS-ConVQA</td> <td>34.54</td> <td>70.39</td> <td>69.48</td> <td>26.39</td> <td>59.65</td> <td>60.07</td> <td>65.8</td> <td>35.92</td> </tr> <tr> <td>c) FineTune || DATA || L/CS-ConVQA</td> <td>54.68</td> <td>83.42</td> <td>83.16</td> <td>24.7</td> <td>59.3</td> <td>59.6</td> <td>65.14</td> <td>33.33</td> </tr> <tr> <td>d) +CTM || DATA || L/CS-ConVQA</td> <td>54.6</td> <td>83.23</td> <td>82.79</td> <td>25.94</td> <td>60.39</td> <td>60.78</td> <td>66.63</td> <td>36.89</td> </tr> <tr> <td>e) FineTune || DATA || L/CS-ConVQA,VG</td> <td>36.4</td> <td>71.6</td> <td>70.94</td> <td>25.22</td> <td>59.19</td> <td>59.56</td> <td>65.3</td> <td>31.39</td> </tr> <tr> <td>f) +CTMvg || DATA || L/CS-ConVQA,VG</td> <td>51.41</td> <td>81.66</td> <td>81.37</td> <td>27.49</td> <td>59.75</td> <td>60.15</td> <td>66.41</td> <td>34.95</td> </tr> </tbody></table>
Table 1
table_1
D19-1596
5
emnlp2019
Table 1 shows quantitative results on our LConVQA and CS-ConVQA datasets. We make a number of observations below. The state-of-the-art VQA has low consistency. The baseline VQA system (row a) retains similarly high top-1 accuracy on the ConVQA splits (63.58% on VQAv2 vs 70.34% / 60.03% on LConVQA / CS-ConVQA); however, it achieves only 26.13% perfect consistency on the human generated CS-ConVQA questions. Finetuning is an effective strategy for the synthetic L-ConVQA split. Finetuning on LConVQA train results in 18.43% gains in perfect consistency on L-ConVQA test (row c vs a). This is unsurprising given the templated questions and simple concepts in L-ConVQA; however, perfect consistency is low in absolute terms at 54.68%. Finetuning does not lead to significant gains in consistency for human-generated questions. Finetuning the VQA model on CS-ConVQA (row b) leads to an improvement in consistency of only 0.26%. Likewise, adding L-ConVQA (row c) and extra Visual Genome questions (row e) actually reduces consistency. CTM-based training preserves or improves consistency when leveraging additional data. When we apply CTM to the Finetuned L/CSConVQA model, we improve CS-ConVQA perfect consistency by 1.24% (row d vs c) while modestly improving other metrics. Extending to Visual Genome questions, the CTM augmented model improves perfect consistency in CS-ConVQA by 2.27% over the finetuned model (row f vs e). Interestingly, the CTM modules were never trained with the human-annotated CS-ConVQA questions and yet lead to this improvement on CSConVQA by acting as an intelligent data augmenter/regularizer.
[1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 1 shows quantitative results on our LConVQA and CS-ConVQA datasets.', 'We make a number of observations below.', 'The state-of-the-art VQA has low consistency.', 'The baseline VQA system (row a) retains similarly high top-1 accuracy on the ConVQA splits (63.58% on VQAv2 vs 70.34% / 60.03% on LConVQA / CS-ConVQA); however, it achieves only 26.13% perfect consistency on the human generated CS-ConVQA questions.', 'Finetuning is an effective strategy for the synthetic L-ConVQA split.', 'Finetuning on LConVQA train results in 18.43% gains in perfect consistency on L-ConVQA test (row c vs a).', 'This is unsurprising given the templated questions and simple concepts in L-ConVQA; however, perfect consistency is low in absolute terms at 54.68%.', 'Finetuning does not lead to significant gains in consistency for human-generated questions.', 'Finetuning the VQA model on CS-ConVQA (row b) leads to an improvement in consistency of only 0.26%.', 'Likewise, adding L-ConVQA (row c) and extra Visual Genome questions (row e) actually reduces consistency.', 'CTM-based training preserves or improves consistency when leveraging additional data.', 'When we apply CTM to the Finetuned L/CSConVQA model, we improve CS-ConVQA perfect consistency by 1.24% (row d vs c) while modestly improving other metrics.', 'Extending to Visual Genome questions, the CTM augmented model improves perfect consistency in CS-ConVQA by 2.27% over the finetuned model (row f vs e).', 'Interestingly, the CTM modules were never trained with the human-annotated CS-ConVQA questions and yet lead to this improvement on CSConVQA by acting as an intelligent data augmenter/regularizer.']
[['L-ConVQA', 'CS-ConVQA'], None, ['L/CS-ConVQA'], ['VQA2.0', 'L-ConVQA'], ['L-ConVQA'], ['L-ConVQA', 'L/CS-ConVQA'], ['L-ConVQA'], ['b) FineTune', 'c) FineTune', 'e) FineTune'], ['b) FineTune', 'c) FineTune', 'e) FineTune', 'CS-ConVQA'], ['b) FineTune', 'c) FineTune', 'e) FineTune', 'L/CS-ConVQA', 'Avg Con', 'L/CS-ConVQA,VG', 'f) +CTMvg'], ['d) +CTM', 'f) +CTMvg'], ['CS-ConVQA', 'd) +CTM', 'f) +CTMvg'], ['d) +CTM', 'f) +CTMvg', 'CS-ConVQA'], ['d) +CTM', 'f) +CTMvg', 'CS-ConVQA']]
1
D19-1599table_1
Results on the validation set of OpenSQuAD.
2
[['Model', 'Single-sentence'], ['Model', 'Length-50'], ['Model', 'Length-100'], ['Model', 'Length-200'], ['Model', 'w/o sliding-window (same as (3))'], ['Model', 'w/ sliding-window'], ['Model', 'w/o passage ranker (same as (6))'], ['Model', 'w/ passage ranker'], ['Model', 'w/ passage scores'], ['Model', 'BERT+QANet'], ['Model', 'BERT+QANet (fix BERT)'], ['Model', 'BERT+QANet (init. from (11))']]
1
[['EM'], ['F1']]
[['34.8', '44.4'], ['35.5', '45.2'], ['35.7', '45.7'], ['34.8', '44.7'], ['35.7', '45.7'], ['40.4', '49.8'], ['40.4', '49.8'], ['41.3', '51.7'], ['42.8', '53.4'], ['18.3', '27.8'], ['35.5', '45.9'], ['36.2', '46.4']]
column
['EM', 'F1']
['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Single-sentence</td> <td>34.8</td> <td>44.4</td> </tr> <tr> <td>Model || Length-50</td> <td>35.5</td> <td>45.2</td> </tr> <tr> <td>Model || Length-100</td> <td>35.7</td> <td>45.7</td> </tr> <tr> <td>Model || Length-200</td> <td>34.8</td> <td>44.7</td> </tr> <tr> <td>Model || w/o sliding-window (same as (3))</td> <td>35.7</td> <td>45.7</td> </tr> <tr> <td>Model || w/ sliding-window</td> <td>40.4</td> <td>49.8</td> </tr> <tr> <td>Model || w/o passage ranker (same as (6))</td> <td>40.4</td> <td>49.8</td> </tr> <tr> <td>Model || w/ passage ranker</td> <td>41.3</td> <td>51.7</td> </tr> <tr> <td>Model || w/ passage scores</td> <td>42.8</td> <td>53.4</td> </tr> <tr> <td>Model || BERT+QANet</td> <td>18.3</td> <td>27.8</td> </tr> <tr> <td>Model || BERT+QANet (fix BERT)</td> <td>35.5</td> <td>45.9</td> </tr> <tr> <td>Model || BERT+QANet (init. from (11))</td> <td>36.2</td> <td>46.4</td> </tr> </tbody></table>
Table 1
table_1
D19-1599
3
emnlp2019
Does explicit inter-sentence matching matter? . Almost all previous state-of-the-art QA and RC models find answers by matching passages with questions, aka inter-sentence matching (Wang and Jiang, 2017; Wang et al., 2016; Seo et al., 2017; Wang et al., 2017; Song et al., 2017). However, BERT model simply concatenates a passage with a question, and differentiates them by separating them with a delimiter token [SEP], and assigning different segment ids for them. Here, we aim to check whether explicit inter-sentence matching still matters for BERT. We employ a shared BERT model to encode a passage and a question individually, and a weighted sum of all BERT layers is used as the final tokenlevel representation for the question or passage, where weights for all BERT layers are trainable parameters. Then the passage and question representations are input into QANet (Yu et al., 2018) to perform inter-sentence matching, and predict the final answer. Model (10) in Table 1 shows the result of jointly training the BERT encoder and the QANet model. The result is very poor, likely because the parameters in BERT are catastrophically forgotten while training the QANet model. To tackle this issue, we fix parameters in BERT, and only update parameters for QANet. The result is listed as model (11). It works better than model (10), but still worse than multi-passage BERT in model (6). We design another model by starting from model (11), and then jointly fine-tuning the BERT encoder and QANet. Model (12) in Table 1 shows the result. It works better than model (11), but still has a big gap with multi-passage BERT in model (6) . Therefore, we conclude that the explicit inter-sentence matching is not helpful for multi-passage BERT. One possible reason is that the multi-head self-attention layers in BERT has already embedded the inter-sentence matching.
[2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Does explicit inter-sentence matching matter? .', 'Almost all previous state-of-the-art QA and RC models find answers by matching passages with questions, aka inter-sentence matching (Wang and Jiang, 2017; Wang et al., 2016; Seo et al., 2017; Wang et al., 2017; Song et al., 2017).', 'However, BERT model simply concatenates a passage with a question, and differentiates them by separating them with a delimiter token [SEP], and assigning different segment ids for them.', 'Here, we aim to check whether explicit inter-sentence matching still matters for BERT.', 'We employ a shared BERT model to encode a passage and a question individually, and a weighted sum of all BERT layers is used as the final tokenlevel representation for the question or passage, where weights for all BERT layers are trainable parameters.', 'Then the passage and question representations are input into QANet (Yu et al., 2018) to perform inter-sentence matching, and predict the final answer.', 'Model (10) in Table 1 shows the result of jointly training the BERT encoder and the QANet model.', 'The result is very poor, likely because the parameters in BERT are catastrophically forgotten while training the QANet model.', 'To tackle this issue, we fix parameters in BERT, and only update parameters for QANet.', 'The result is listed as model (11).', 'It works better than model (10), but still worse than multi-passage BERT in model (6).', 'We design another model by starting from model (11), and then jointly fine-tuning the BERT encoder and QANet.', 'Model (12) in Table 1 shows the result.', 'It works better than model (11), but still has a big gap with multi-passage BERT in model (6) .', 'Therefore, we conclude that the explicit inter-sentence matching is not helpful for multi-passage BERT.', 'One possible reason is that the multi-head self-attention layers in BERT has already embedded the inter-sentence matching.']
[None, None, ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet'], ['BERT+QANet'], ['BERT+QANet'], ['BERT+QANet (fix BERT)'], ['BERT+QANet', 'w/ sliding-window'], ['BERT+QANet (fix BERT)'], ['BERT+QANet (init. from (11))'], ['BERT+QANet (fix BERT)', 'w/ sliding-window'], None, None]
1
D19-1616table_2
Evaluation results of our models on development (dev) and testing (test) sets. The automatic evaluation scores in terms of Rouge (R1 F1, R2 F1, RL F1) and BLEU for the output summaries are shown in the table.
4
[['Setting', 'S1', 'Dataset', 'Dev'], ['Setting', 'S2', 'Dataset', 'Test'], ['Setting', 'S2', 'Dataset', 'Dev'], ['Setting', 'S3', 'Dataset', 'Test'], ['Setting', 'S3', 'Dataset', 'Dev'], ['Setting', 'S4', 'Dataset', 'Test']]
1
[['R1_F1'], ['R2_F1'], ['RL_F1'], ['BLEU']]
[['43.9', '28.5', '46.3', '12.6'], ['39.7', '22.9', '42.2', '9'], ['45.4', '29.8', '47.4', '14'], ['55.7', '41.8', '57.6', '20.8'], ['44.3', '28.5', '46.4', '13.1'], ['40', '23', '42.3', '9.4']]
column
['R1_F1', 'R2_F1', 'RL_F1', 'BLEU']
['Dev', 'Test']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1_F1</th> <th>R2_F1</th> <th>RL_F1</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Setting || S1 || Dataset || Dev</td> <td>43.9</td> <td>28.5</td> <td>46.3</td> <td>12.6</td> </tr> <tr> <td>Setting || S2 || Dataset || Test</td> <td>39.7</td> <td>22.9</td> <td>42.2</td> <td>9</td> </tr> <tr> <td>Setting || S2 || Dataset || Dev</td> <td>45.4</td> <td>29.8</td> <td>47.4</td> <td>14</td> </tr> <tr> <td>Setting || S3 || Dataset || Test</td> <td>55.7</td> <td>41.8</td> <td>57.6</td> <td>20.8</td> </tr> <tr> <td>Setting || S3 || Dataset || Dev</td> <td>44.3</td> <td>28.5</td> <td>46.4</td> <td>13.1</td> </tr> <tr> <td>Setting || S4 || Dataset || Test</td> <td>40</td> <td>23</td> <td>42.3</td> <td>9.4</td> </tr> </tbody></table>
Table 2
table_2
D19-1616
3
emnlp2019
5 Evaluation and Discussion . We evaluate the results for every 10,000 iterations on the dev and test set. The automatic evaluation results based on the dev and test set are shown in Table 2. To evaluate the proposed algorithms, we use ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score, which is a popular metric for the text summarization task, and has several variants like ROUGE-N and ROUGE-L, which measure the overlap of n-grams between the system and reference summary (Lin, 2004). We use ROUGE 1 F1 (R1 F1), ROUGE 2 F1 (R2 F1), and ROUGE L F1 (RL F1) for scoring the generated summary. In addition, we also use the SacreBLEU evaluation metric (Post, 2018). In terms of Rouge score, model S3 outperforms model S1 but performs worse than model S2.
[2, 2, 1, 2, 2, 2, 1]
['5 Evaluation and Discussion .', 'We evaluate the results for every 10,000 iterations on the dev and test set.', 'The automatic evaluation results based on the dev and test set are shown in Table 2.', 'To evaluate the proposed algorithms, we use ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score, which is a popular metric for the text summarization task, and has several variants like ROUGE-N and ROUGE-L, which measure the overlap of n-grams between the system and reference summary (Lin, 2004).', 'We use ROUGE 1 F1 (R1 F1), ROUGE 2 F1 (R2 F1), and ROUGE L F1 (RL F1) for scoring the generated summary.', 'In addition, we also use the SacreBLEU evaluation metric (Post, 2018).', 'In terms of Rouge score, model S3 outperforms model S1 but performs worse than model S2.']
[None, None, None, None, ['R1_F1', 'R2_F1', 'RL_F1'], ['BLEU'], ['R1_F1', 'R2_F1', 'RL_F1', 'S3', 'S1', 'S2']]
1
D19-1627table_4
The results on Word-in-Context (WiC) data.
2
[['Model', 'Lee and Chen (2017)'], ['Model', 'Neelakantan et al. (2015)'], ['Model', 'Mancini et al. (2016)'], ['Model', 'Guo et al. (2019)'], ['Model', 'Chang et al. (2018)'], ['Model', 'Pilehvar and Collier (2016)'], ['Model', 'Proposed (BERT-base)']]
1
[['Accuracy (%)']]
[['52.14'], ['54'], ['54.56'], ['55.27'], ['57'], ['58.55'], ['68.64']]
column
['Accuracy (%)']
['Proposed (BERT-base)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Lee and Chen (2017)</td> <td>52.14</td> </tr> <tr> <td>Model || Neelakantan et al. (2015)</td> <td>54</td> </tr> <tr> <td>Model || Mancini et al. (2016)</td> <td>54.56</td> </tr> <tr> <td>Model || Guo et al. (2019)</td> <td>55.27</td> </tr> <tr> <td>Model || Chang et al. (2018)</td> <td>57</td> </tr> <tr> <td>Model || Pilehvar and Collier (2016)</td> <td>58.55</td> </tr> <tr> <td>Model || Proposed (BERT-base)</td> <td>68.64</td> </tr> </tbody></table>
Table 4
table_4
D19-1627
5
emnlp2019
3.2 Word Sense Selection in Context. We further examine if the captured sense-specific cues help word sense disambiguation via Word-in-Context data (WiC) (Pilehvar and Camacho-Collados, 2018), in which each instance contains a pair of two contexts sharing a target word, and the task is to decide whether their word senses are the same. To justify that the models are capable of selecting senses encoded in the embeddings, for each pair, our model outputs 10 candidate definitions (top-10 nearest neighbors), and we output TRUE if any definition occurs in both candidate sets, otherwise FALSE. Table 4 shows that the proposed model with contextualized word embeddings outperforms all previous models. We conclude that contextualized word embeddings indeed capture sense-informative cues and our proposed model is capable of interpreting the corresponding senses via definition.
[2, 2, 2, 1, 2]
['3.2 Word Sense Selection in Context.', 'We further examine if the captured sense-specific cues help word sense disambiguation via Word-in-Context data (WiC) (Pilehvar and Camacho-Collados, 2018), in which each instance contains a pair of two contexts sharing a target word, and the task is to decide whether their word senses are the same.', 'To justify that the models are capable of selecting senses encoded in the embeddings, for each pair, our model outputs 10 candidate definitions (top-10 nearest neighbors), and we output TRUE if any definition occurs in both candidate sets, otherwise FALSE.', 'Table 4 shows that the proposed model with contextualized word embeddings outperforms all previous models.', 'We conclude that contextualized word embeddings indeed capture sense-informative cues and our proposed model is capable of interpreting the corresponding senses via definition.']
[None, None, None, ['Proposed (BERT-base)'], None]
1
D19-1634table_7
Comparison of copying accuracies.
2
[['System', 'MS UEDIN'], ['System', 'COPYNET'], ['System', 'Ours']]
1
[[' Accuracy']]
[['64.63'], ['64.72'], ['65.61']]
column
['Accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || MS UEDIN</td> <td>64.63</td> </tr> <tr> <td>System || COPYNET</td> <td>64.72</td> </tr> <tr> <td>System || Ours</td> <td>65.61</td> </tr> </tbody></table>
Table 7
table_7
D19-1634
8
emnlp2019
Table 7 shows the comparison of copying accuracies between MS UEDIN, COPYNET, and our approach. We find that our approach outperforms the two baselines. However, the copying accuracy of our approach is almost 20% lower than the prediction accuracy (i.e., 65.61% vs. 85.09%), indicating that it is much more challenging to place the copied words in correct positions.
[1, 1, 1]
['Table 7 shows the comparison of copying accuracies between MS UEDIN, COPYNET, and our approach.', 'We find that our approach outperforms the two baselines.', 'However, the copying accuracy of our approach is almost 20% lower than the prediction accuracy (i.e., 65.61% vs. 85.09%), indicating that it is much more challenging to place the copied words in correct positions.']
[['MS UEDIN', 'COPYNET', 'Ours'], ['Ours'], ['Ours', ' Accuracy']]
1
D19-1638table_5
Evaluation III: Informativeness: The values represent the percentage of times the instructions generated by the model are chosen by a human evaluator.
2
[['Method', 'Set2MultipleSeq'], ['Method', 'Set2MultipleSeq+opt'], ['Method', 'Ambigous']]
1
[['%']]
[['30'], ['63'], ['7']]
column
['%']
['Set2MultipleSeq+opt']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>%</th> </tr> </thead> <tbody> <tr> <td>Method || Set2MultipleSeq</td> <td>30</td> </tr> <tr> <td>Method || Set2MultipleSeq+opt</td> <td>63</td> </tr> <tr> <td>Method || Ambigous</td> <td>7</td> </tr> </tbody></table>
Table 5
table_5
D19-1638
6
emnlp2019
We conducted separate evaluations for two pairs: (Set2SingleSeq, Set2MultipleSeq) and (Set2SingleSeq, Set2MultipleSeq+Opt). The results are shown in Tables 5 and 6 respectively. Results shown in Table 5 explain that incorporating neural components for subset selection and content ordering helps in improving informative instruction generation. We observe that conducting content selection multiple times during each time step through content ordering RNN helps in generating a discrete set of instructions (Set2MultipleSeq). Table 6 shows that penalizing redundancy during beam search decoding reduces noise and helps in generating instructions with rich information density. Inter-evaluator agreement for the entire set of human evaluation is reasonably high: Cohen’s kappa coefficient is 0.79.
[1, 1, 1, 1, 0, 0]
['We conducted separate evaluations for two pairs: (Set2SingleSeq, Set2MultipleSeq) and (Set2SingleSeq, Set2MultipleSeq+Opt).', 'The results are shown in Tables 5 and 6 respectively.', 'Results shown in Table 5 explain that incorporating neural components for subset selection and content ordering helps in improving informative instruction generation.', 'We observe that conducting content selection multiple times during each time step through content ordering RNN helps in generating a discrete set of instructions (Set2MultipleSeq).', 'Table 6 shows that penalizing redundancy during beam search decoding reduces noise and helps in generating instructions with rich information density.', 'Inter-evaluator agreement for the entire set of human evaluation is reasonably high: Cohen’s kappa coefficient is 0.79.']
[['Set2MultipleSeq', 'Set2MultipleSeq+opt'], None, ['Set2MultipleSeq+opt'], ['Set2MultipleSeq+opt'], None, None]
1
D19-1647table_2
Performance of the three proposed approaches in comparison with the baseline.
2
[['approach', 'baseline'], ['approach', 'WD'], ['approach', 'NER'], ['approach', 'BLSTM']]
1
[[' prec'], [' rec'], [' f1']]
[['49.80%', ' —', ' —'], ['67.30%', '93.00%', '78.10%'], ['71.80%', '81.30%', '76.20%'], ['86.90%', '85.30%', '86.10%']]
column
['prec', 'rec', 'f1']
['BLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>prec</th> <th>rec</th> <th>f1</th> </tr> </thead> <tbody> <tr> <td>approach || baseline</td> <td>49.80%</td> <td>—</td> <td>—</td> </tr> <tr> <td>approach || WD</td> <td>67.30%</td> <td>93.00%</td> <td>78.10%</td> </tr> <tr> <td>approach || NER</td> <td>71.80%</td> <td>81.30%</td> <td>76.20%</td> </tr> <tr> <td>approach || BLSTM</td> <td>86.90%</td> <td>85.30%</td> <td>86.10%</td> </tr> </tbody></table>
Table 2
table_2
D19-1647
4
emnlp2019
5.2 Baseline VA Candidates . We cannot determine recall for our approaches; instead we compute precision, recall and f-score based on the baseline dataset (see Section 3), which is shown in Table 2, as well as the precision of the baseline dataset. The automated approaches using Wikidata and named entity recognition can boost the precision significantly with a moderate loss in recall. The BLSTM approach performs best. Although the loss in recall is higher than with the WD approach, the precision reaches almost 87% and is thus raised to a new level.
[2, 1, 1, 1, 1]
['5.2 Baseline VA Candidates .', 'We cannot determine recall for our approaches; instead we compute precision, recall and f-score based on the baseline dataset (see Section 3), which is shown in Table 2, as well as the precision of the baseline dataset.', 'The automated approaches using Wikidata and named entity recognition can boost the precision significantly with a moderate loss in recall.', 'The BLSTM approach performs best.', 'Although the loss in recall is higher than with the WD approach, the precision reaches almost 87% and is thus raised to a new level.']
[None, None, ['WD', 'NER', 'baseline', ' prec', ' rec'], ['BLSTM'], ['BLSTM', 'WD', ' rec', ' prec']]
1
D19-1648table_1
Experimental results. +P indicates consideration of paraphrases and -P does not.
2
[['Model', 'Baseline'], ['Model', 'VE-P'], ['Model', 'HanPaNE-P'], ['Model', 'VE+P'], ['Model', 'HanPaNE+P (Proposed)']]
1
[[' Precision'], [' Recall'], [' F-score']]
[['92.75', '92.15', '92.45'], ['93.11', '91.4', '92.25'], ['92.71', '91.94', '92.32'], ['93.15', '91.79', '92.47'], ['92.81', '92.33', '92.57']]
column
['Precision', 'Recall', 'F-score']
['HanPaNE+P (Proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F-score</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>92.75</td> <td>92.15</td> <td>92.45</td> </tr> <tr> <td>Model || VE-P</td> <td>93.11</td> <td>91.4</td> <td>92.25</td> </tr> <tr> <td>Model || HanPaNE-P</td> <td>92.71</td> <td>91.94</td> <td>92.32</td> </tr> <tr> <td>Model || VE+P</td> <td>93.15</td> <td>91.79</td> <td>92.47</td> </tr> <tr> <td>Model || HanPaNE+P (Proposed)</td> <td>92.81</td> <td>92.33</td> <td>92.57</td> </tr> </tbody></table>
Table 1
table_1
D19-1648
4
emnlp2019
3.2 Experimental Results. Table 1 shows the experimental results. We can see that HanPaNE+P showed the highest accuracy and HanPaNE+P and VE+P, with consideration of paraphrases, showed a higher accuracy than Baseline. In contrast, HanPaNE-P and VE-P, without consideration of paraphrases, did not. The results indicate that the use of paraphrases contributed to improved accuracy. We also conducted the following two types of hypothesis testing. The first one is a McNemar paired test on the labeling disagreements of words assigned by HanPaNE and the others as in (Sha and Pereira, 2003). All the results except for Baseline were significantly different (p < 0.01). The second one is a binomial test used in (Sasano and Kurohashi, 2008). For this test, the number of the entities correctly recognized by only HanPaNE and the number of entities correctly recognized by only the other method are counted. Then, based on the assumption that outputs have the binomial distribution, we apply a binomial test. All the results were significantly different for this test (p < 0.05). These results showed that HanPaNE works better than augmented training data.
[2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
['3.2 Experimental Results.', 'Table 1 shows the experimental results.', 'We can see that HanPaNE+P showed the highest accuracy and HanPaNE+P and VE+P, with consideration of paraphrases, showed a higher accuracy than Baseline.', 'In contrast, HanPaNE-P and VE-P, without consideration of paraphrases, did not.', 'The results indicate that the use of paraphrases contributed to improved accuracy.', 'We also conducted the following two types of hypothesis testing.', 'The first one is a McNemar paired test on the labeling disagreements of words assigned by HanPaNE and the others as in (Sha and Pereira, 2003).', 'All the results except for Baseline were significantly different (p < 0.01).', 'The second one is a binomial test used in (Sasano and Kurohashi, 2008).', 'For this test, the number of the entities correctly recognized by only HanPaNE and the number of entities correctly recognized by only the other method are counted.', 'Then, based on the assumption that outputs have the binomial distribution, we apply a binomial test.', 'All the results were significantly different for this test (p < 0.05). These results showed that HanPaNE works better than augmented training data.']
[None, None, ['HanPaNE+P (Proposed)', 'VE+P', 'Baseline'], ['HanPaNE-P', 'VE-P'], ['HanPaNE+P (Proposed)', 'VE+P'], None, ['HanPaNE-P', 'HanPaNE+P (Proposed)'], ['Baseline'], None, ['HanPaNE-P', 'HanPaNE+P (Proposed)'], None, ['HanPaNE-P', 'HanPaNE+P (Proposed)']]
1
D19-1653table_4
Performance of the average F1. Max score in bold and significant differences with p < 0.05 marked with *.
2
[['BLC variant', 'sentence-level + structural'], ['BLC variant', 'sentence-level'], ['BLC variant', 'post-level']]
2
[[' unit identification', 'B'], [' unit identification', ' I'], [' unit identification', ' O'], [' unit identification', ' macro'], ['unit classification', ' V'], ['unit classification', ' R'], ['unit classification', ' P'], ['unit classification', ' T'], ['unit classification', ' F'], ['unit classification', ' macro']]
[['75.4', '92.8', '62.1', ' *76.8', ' *80.7', '61.5', ' *16.0', '43.1', '33.2', ' *49.2'], ['75.6', '92.8', '61', '76.5', '80.2', '60.9', '13', '41.3', '31.3', '47.7'], ['67.8', '92.9', ' *64.7', '75.1', '79.9', '49.4', '2.1', '35.3', '32.2', '43.8']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['sentence-level + structural']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>unit identification || B</th> <th>unit identification || I</th> <th>unit identification || O</th> <th>unit identification || macro</th> <th>unit classification || V</th> <th>unit classification || R</th> <th>unit classification || P</th> <th>unit classification || T</th> <th>unit classification || F</th> <th>unit classification || macro</th> </tr> </thead> <tbody> <tr> <td>BLC variant || sentence-level + structural</td> <td>75.4</td> <td>92.8</td> <td>62.1</td> <td>*76.8</td> <td>*80.7</td> <td>61.5</td> <td>*16.0</td> <td>43.1</td> <td>33.2</td> <td>*49.2</td> </tr> <tr> <td>BLC variant || sentence-level</td> <td>75.6</td> <td>92.8</td> <td>61</td> <td>76.5</td> <td>80.2</td> <td>60.9</td> <td>13</td> <td>41.3</td> <td>31.3</td> <td>47.7</td> </tr> <tr> <td>BLC variant || post-level</td> <td>67.8</td> <td>92.9</td> <td>*64.7</td> <td>75.1</td> <td>79.9</td> <td>49.4</td> <td>2.1</td> <td>35.3</td> <td>32.2</td> <td>43.8</td> </tr> </tbody></table>
Table 4
table_4
D19-1653
5
emnlp2019
Results: Overall Performance . Table 4 shows that our proposed sentence-level BLC with structural information performs best regarding macro F1 in either boundary identification or unit classification tasks. The presence of structural features provides an excellent boost in classifying unit types. For the post-level BLC, the model yields a better score in predicting non-EU parts because post-level discrimination can capture the entire post and thus identify irrelevant boundaries, such as supplementary notes, in posts.
[2, 1, 2, 1]
['Results: Overall Performance .', 'Table 4 shows that our proposed sentence-level BLC with structural information performs best regarding macro F1 in either boundary identification or unit classification tasks.', 'The presence of structural features provides an excellent boost in classifying unit types.', 'For the post-level BLC, the model yields a better score in predicting non-EU parts because post-level discrimination can capture the entire post and thus identify irrelevant boundaries, such as supplementary notes, in posts.']
[None, ['sentence-level + structural', ' macro', ' unit identification', 'unit classification'], None, ['post-level']]
1
D19-1659table_1
Results on the Yelp and Amazon test sets.
2
[['Model', 'Shen et al. (2017)'], ['Model', 'Fu et al. (2018)'], ['Model', 'Li et al. (2018)'], ['Model', 'This work'], ['Model', 'w/o Lsentiment'], ['Model', 'w/o Lcontent'], ['Model', 'w/o Lalignment'], ['Model', 'only Lsentiment']]
2
[['Yelp', 'Acc'], ['Yelp', 'BLEU'], ['Amazon', 'Acc'], ['Amazon', 'BLEU']]
[['74.5', '6.79', '74.4', '1.57'], ['46.8', '11.24', '70.3', '7.87'], ['88.3', '12.61', '53.4', '27.12'], ['88.5', '12.13', '53.8', '15.95'], ['3.4', '24.06', '18.2', '42.65'], ['86.4', '10.08', '53.9', '14.77'], ['84.7', '11.94', '51.6', '16.51'], ['85.4', '10.05', '53.4', '14.76']]
column
['Acc', 'BLEU', 'Acc', 'BLEU']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yelp || Acc</th> <th>Yelp || BLEU</th> <th>Amazon || Acc</th> <th>Amazon || BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Shen et al. (2017)</td> <td>74.5</td> <td>6.79</td> <td>74.4</td> <td>1.57</td> </tr> <tr> <td>Model || Fu et al. (2018)</td> <td>46.8</td> <td>11.24</td> <td>70.3</td> <td>7.87</td> </tr> <tr> <td>Model || Li et al. (2018)</td> <td>88.3</td> <td>12.61</td> <td>53.4</td> <td>27.12</td> </tr> <tr> <td>Model || This work</td> <td>88.5</td> <td>12.13</td> <td>53.8</td> <td>15.95</td> </tr> <tr> <td>Model || w/o Lsentiment</td> <td>3.4</td> <td>24.06</td> <td>18.2</td> <td>42.65</td> </tr> <tr> <td>Model || w/o Lcontent</td> <td>86.4</td> <td>10.08</td> <td>53.9</td> <td>14.77</td> </tr> <tr> <td>Model || w/o Lalignment</td> <td>84.7</td> <td>11.94</td> <td>51.6</td> <td>16.51</td> </tr> <tr> <td>Model || only Lsentiment</td> <td>85.4</td> <td>10.05</td> <td>53.4</td> <td>14.76</td> </tr> </tbody></table>
Table 1
table_1
D19-1659
4
emnlp2019
Table 1 shows the results of various models. Our model based on the unified objective of Eq. (4) offers better balanced results compared to its variants. When removing Lsentiment, our model degrades to an input copy-like method, resulting in low classification accuracies but the highest BLEU scores. When removing Lcontent, the BLEU scores drop, indicating that the model cannot maintain a sufficient number of content words. Without Lalignment, we observe a reduction in both accuracy and BLEU on Yelp. However, this tendency is inconsistent on Amazon (i.e., -2.2 accuracy and +0.56 BLEU). When using only Lsentiment, our model falls back to the vanilla encoder-decoder model with a single loss, yielding poorer results on both datasets.
[1, 1, 1, 1, 1, 1, 1]
['Table 1 shows the results of various models.', 'Our model based on the unified objective of Eq. (4) offers better balanced results compared to its variants.', 'When removing Lsentiment, our model degrades to an input copy-like method, resulting in low classification accuracies but the highest BLEU scores.', 'When removing Lcontent, the BLEU scores drop, indicating that the model cannot maintain a sufficient number of content words.', 'Without Lalignment, we observe a reduction in both accuracy and BLEU on Yelp.', 'However, this tendency is inconsistent on Amazon (i.e., -2.2 accuracy and +0.56 BLEU).', 'When using only Lsentiment, our model falls back to the vanilla encoder-decoder model with a single loss, yielding poorer results on both datasets.']
[None, ['This work'], ['w/o Lsentiment', 'Acc', 'BLEU'], ['w/o Lcontent', 'BLEU'], ['w/o Lalignment', 'Acc', 'BLEU', 'Yelp'], ['Amazon', 'Acc', 'BLEU'], ['only Lsentiment']]
1
D19-1665table_1
Overall results for each dataset and model.
1
[['FT-BR'], ['MTL'], ['FT-LP'], ['MTL-LP'], ['MTL-XLD'], ['Sobhani (Seq2Seq)']]
1
[['BBC'], ['ETC'], ['MFTC']]
[['39.72', '52.24', '51.19'], ['48.57', '53.32', '53.97'], ['36.20', '53.57', '55.11'], ['55.60', '55.37', '62.98'], ['51.33', '52.22', '60.94'], ['NA', '54.81', 'NA']]
column
['accuracy', 'accuracy', 'accuracy']
['MTL-LP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BBC</th> <th>ETC</th> <th>MFTC</th> </tr> </thead> <tbody> <tr> <td>FT-BR</td> <td>39.72</td> <td>52.24</td> <td>51.19</td> </tr> <tr> <td>MTL</td> <td>48.57</td> <td>53.32</td> <td>53.97</td> </tr> <tr> <td>FT-LP</td> <td>36.20</td> <td>53.57</td> <td>55.11</td> </tr> <tr> <td>MTL-LP</td> <td>55.60</td> <td>55.37</td> <td>62.98</td> </tr> <tr> <td>MTL-XLD</td> <td>51.33</td> <td>52.22</td> <td>60.94</td> </tr> <tr> <td>Sobhani (Seq2Seq)</td> <td>NA</td> <td>54.81</td> <td>NA</td> </tr> </tbody></table>
Table 1
table_1
D19-1665
4
emnlp2019
In Table 1 we report the test set results for all models. The results for the MFTC dataset are averaged across the six discourse domains. Overall, MTL-LP is the best performing multilabel classification method across all the datasets. MTL-LP is also better than the best performing model Seq2Seq reported in Sobhani et al. (2019) for the ETC dataset. MTL-XLD improves on the baseline models for the BBC and MFTC datasets, but performs slightly worse than MTL on the ETC dataset. We note that our results for the BBC and MFTC datasets are not directly comparable with previous work on BBC (Simaki et al., 2017) and MFTC (Dehghani et al., 2019), since we consider the full set of labels, whereas previous work removed those that were sparser. Our reimplementation of the logistic regression model of Simaki et al. (2017), as an additional baseline, resulted in poor performance in the BBC dataset (20 in JSS) and we did not consider it further.
[1, 2, 1, 1, 1, 2, 2]
['In Table 1 we report the test set results for all models.', 'The results for the MFTC dataset are averaged across the six discourse domains.', 'Overall, MTL-LP is the best performing multilabel classification method across all the datasets.', 'MTL-LP is also better than the best performing model Seq2Seq reported in Sobhani et al. (2019) for the ETC dataset.', 'MTL-XLD improves on the baseline models for the BBC and MFTC datasets, but performs slightly worse than MTL on the ETC dataset.', 'We note that our results for the BBC and MFTC datasets are not directly comparable with previous work on BBC (Simaki et al., 2017) and MFTC (Dehghani et al., 2019), since we consider the full set of labels, whereas previous work removed those that were sparser.', 'Our reimplementation of the logistic regression model of Simaki et al. (2017), as an additional baseline, resulted in poor performance in the BBC dataset (20 in JSS) and we did not consider it further.']
[None, ['MFTC'], ['MTL-LP'], ['Sobhani (Seq2Seq)', 'MTL-LP'], ['MTL-XLD', 'BBC', 'MFTC', 'MTL', 'ETC'], ['BBC', 'MFTC'], ['BBC']]
1
D19-1667table_3
Results on total term prediction(%).
2
[['Model', 'CNN'], ['Model', 'RNN'], ['Model', 'RCNN'], ['Model', 'DGN']]
1
[['S'], ['EM'], ['[email protected]'], ['[email protected]']]
[['67.24', '8.41', '16.96', '35.58'], ['67.27', '8.04', '16.79', '35.11'], ['69.56', '8.54', '17.57', '35.75'], ['75.74', '8.64', '19.32', '40.43']]
column
['DGN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>EM</th> <th>[email protected]</th> <th>[email protected]</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>67.24</td> <td>8.41</td> <td>16.96</td> <td>35.58</td> </tr> <tr> <td>Model || RNN</td> <td>67.27</td> <td>8.04</td> <td>16.79</td> <td>35.11</td> </tr> <tr> <td>Model || RCNN</td> <td>69.56</td> <td>8.54</td> <td>17.57</td> <td>35.75</td> </tr> <tr> <td>Model || DGN</td> <td>75.74</td> <td>8.64</td> <td>19.32</td> <td>40.43</td> </tr> </tbody></table>
Table 3
table_3
D19-1667
4
emnlp2019
Table 3 presents the results of the total term prediction. Although our method is not directly trained to make the final prediction, the performance of our model surpasses all baselines, which confirms that the breakdown charge-based analysis can indeed help the total prison term prediction.
[1, 1]
['Table 3 presents the results of the total term prediction.', 'Although our method is not directly trained to make the final prediction, the performance of our model surpasses all baselines, which confirms that the breakdown charge-based analysis can indeed help the total prison term prediction.']
[None, ['DGN']]
1
P16-1111table_2
Results of parsing abstract structure
1
[['BACKGROUND'], ['OBJECTIVE'], ['DATA'], ['DESIGN'], ['METHOD'], ['RESULT'], ['CONCLUSION'], ['ALL']]
1
[['Precision'], ['Recall'], ['F-measure'], ['Accuracy']]
[['74.6', '77.2', '75.8', '-'], ['85.2', '81.8', '83.5', '-'], ['82.6', '76.8', '79.6', '-'], ['68', '64.8', '66.3', '-'], ['80.4', '80.1', '80.2', '-'], ['90.8', '93.3', '92', '-'], ['93.8', '92', '92.9', '-'], ['-', '-', '-', '86.6']]
column
['Precision', 'Recall', 'F-measure', 'Accuracy']
['BACKGROUND', 'OBJECTIVE', 'DATA', 'DESIGN', 'METHOD', 'RESULT', 'CONCLUSION']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F-measure</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>BACKGROUND</td> <td>74.6</td> <td>77.2</td> <td>75.8</td> <td>-</td> </tr> <tr> <td>OBJECTIVE</td> <td>85.2</td> <td>81.8</td> <td>83.5</td> <td>-</td> </tr> <tr> <td>DATA</td> <td>82.6</td> <td>76.8</td> <td>79.6</td> <td>-</td> </tr> <tr> <td>DESIGN</td> <td>68</td> <td>64.8</td> <td>66.3</td> <td>-</td> </tr> <tr> <td>METHOD</td> <td>80.4</td> <td>80.1</td> <td>80.2</td> <td>-</td> </tr> <tr> <td>RESULT</td> <td>90.8</td> <td>93.3</td> <td>92</td> <td>-</td> </tr> <tr> <td>CONCLUSION</td> <td>93.8</td> <td>92</td> <td>92.9</td> <td>-</td> </tr> <tr> <td>ALL</td> <td>-</td> <td>-</td> <td>-</td> <td>86.6</td> </tr> </tbody></table>
Table 2
table_2
P16-1111
6
acl2016
Table 2 shows the results obtained on testing the final classifier system on the Test subset of SL. We obtain an overall high accuracy of 86.6% at the sentence level. While RESULT and CONCLUSION obtained F-measures above 90%, OBJECTIVE and METHOD reported reasonable F-measures above 80%. DESIGN obtained the lowest precision, recall and F-measure. Overall, the performance we obtain is in the range of other reported results in similar tasks (Guo et al., 2013).
[1, 1, 1, 1, 0]
['Table 2 shows the results obtained on testing the final classifier system on the Test subset of SL.', 'We obtain an overall high accuracy of 86.6% at the sentence level.', 'While RESULT and CONCLUSION obtained F-measures above 90%, OBJECTIVE and METHOD reported reasonable F-measures above 80%.', 'DESIGN obtained the lowest precision, recall and F-measure.', 'Overall, the performance we obtain is in the range of other reported results in similar tasks (Guo et al., 2013).']
[None, ['ALL', 'Accuracy'], ['RESULT', 'CONCLUSION', 'OBJECTIVE', 'METHOD', 'F-measure'], ['DESIGN', 'Precision', 'Recall', 'F-measure'], None]
1
P16-1111table_3
Results on classifying trajectories
2
[['System', 'Random'], ['System', 'Majority'], ['System', 'LR'], ['System', 'LR - LD-R']]
1
[['ALL'], ['BIO'], ['PHY'], ['CHM'], ['NEU']]
[['50.3', '47.2', '47.8', '50.9', '51.2'], ['56.1', '56.3', '81.6', '74.3', '56.6'], ['74.2', '81', '83.3', '81.9', '74.8'], ['71.3', '77.7', '81.6', '73.1', '70.5']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['LR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ALL</th> <th>BIO</th> <th>PHY</th> <th>CHM</th> <th>NEU</th> </tr> </thead> <tbody> <tr> <td>System || Random</td> <td>50.3</td> <td>47.2</td> <td>47.8</td> <td>50.9</td> <td>51.2</td> </tr> <tr> <td>System || Majority</td> <td>56.1</td> <td>56.3</td> <td>81.6</td> <td>74.3</td> <td>56.6</td> </tr> <tr> <td>System || LR</td> <td>74.2</td> <td>81</td> <td>83.3</td> <td>81.9</td> <td>74.8</td> </tr> <tr> <td>System || LR - LD-R</td> <td>71.3</td> <td>77.7</td> <td>81.6</td> <td>73.1</td> <td>70.5</td> </tr> </tbody></table>
Table 3
table_3
P16-1111
8
acl2016
Table 3 shows the performance of our model on this task. As expected, a topic's label distribution over its entire life-time is very informative with respect to classifying the topic as growing or declining. We achieve a significant improvement over the baselines on the full dataset (32.3% relative improvement over majority prediction), and this trend holds across each field separately. The ratio feature proved to be extremely predictive in this task, i.e. relative increases/decreases in a topic being used in different functional roles are very predictive of the type of its trajectory.
[1, 1, 1, 2]
['Table 3 shows the performance of our model on this task.', "As expected, a topic's label distribution over its entire life-time is very informative with respect to classifying the topic as growing or declining.", 'We achieve a significant improvement over the baselines on the full dataset (32.3% relative improvement over majority prediction), and this trend holds across each field separately.', 'The ratio feature proved to be extremely predictive in this task, i.e. relative increases/decreases in a topic being used in different functional roles are very predictive of the type of its trajectory.']
[None, ['LR'], ['LR', 'ALL', 'Majority'], ['LR']]
1
P16-1111table_4
Results on predicting trajectory
2
[['System', 'LD-% + LD-delta'], ['System', 'LD-% only'], ['System', 'LD-delta only']]
1
[['Accuracy on ALL']]
[['72.1'], ['71'], ['60.4']]
column
['Accuracy on ALL']
['LD-% + LD-delta']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy on ALL</th> </tr> </thead> <tbody> <tr> <td>System || LD-% + LD-delta</td> <td>72.1</td> </tr> <tr> <td>System || LD-% only</td> <td>71</td> </tr> <tr> <td>System || LD-delta only</td> <td>60.4</td> </tr> </tbody></table>
Table 4
table_4
P16-1111
8
acl2016
Table 4 shows the performance of our model on this task. (The baseline performances are the same as in the classification task). These results show that we can accurately predict whether a topic will grow or decline using only a small amount of data. Moreover, we see that both percentage and delta features are necessary for this task.
[1, 2, 1, 1]
['Table 4 shows the performance of our model on this task.', '(The baseline performances are the same as in the classification task).', 'These results show that we can accurately predict whether a topic will grow or decline using only a small amount of data.', 'Moreover, we see that both percentage and delta features are necessary for this task.']
[None, None, ['Accuracy on ALL'], ['LD-% + LD-delta', 'LD-% only']]
1
P16-1112table_1
Performance of the CRF and alternative neural network structures on the public FCE dataset for token-level error detection in learner writing.
1
[['CRF'], ['CNN'], ['Deep CNN'], ['Bi-RNN'], ['Deep Bi-RNN'], ['Bi-LSTM'], ['Deep Bi-LSTM']]
2
[['Development', 'P'], ['Development', 'R'], ['Development', 'F0.5'], ['Test', 'predicted'], ['Test', 'correct'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F0.5']]
[['62.2', '13.6', '36.3', '914', '516', '56.5', '8.2', '25.9'], ['52.4', '24.9', '42.9', '3518', '1620', '46', '25.7', '39.8'], ['48.4', '26.2', '41.4', '3992', '1651', '41.4', '26.2', '37.1'], ['63.9', '18', '42.3', '2333', '1196', '51.3', '19', '38.2'], ['60.3', '17.6', '40.6', '2543', '1255', '49.4', '19.9', '38.1'], ['54.5', '28.2', '46', '3898', '1798', '46.1', '28.5', '41.1'], ['56.7', '21.3', '42.5', '2822', '1359', '48.2', '21.6', '38.6']]
column
['P', 'R', 'F0.5', 'predicted', 'correct', 'P', 'R', 'F0.5']
['CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Development || P</th> <th>Development || R</th> <th>Development || F0.5</th> <th>Test || predicted</th> <th>Test || correct</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F0.5</th> </tr> </thead> <tbody> <tr> <td>CRF</td> <td>62.2</td> <td>13.6</td> <td>36.3</td> <td>914</td> <td>516</td> <td>56.5</td> <td>8.2</td> <td>25.9</td> </tr> <tr> <td>CNN</td> <td>52.4</td> <td>24.9</td> <td>42.9</td> <td>3518</td> <td>1620</td> <td>46</td> <td>25.7</td> <td>39.8</td> </tr> <tr> <td>Deep CNN</td> <td>48.4</td> <td>26.2</td> <td>41.4</td> <td>3992</td> <td>1651</td> <td>41.4</td> <td>26.2</td> <td>37.1</td> </tr> <tr> <td>Bi-RNN</td> <td>63.9</td> <td>18</td> <td>42.3</td> <td>2333</td> <td>1196</td> <td>51.3</td> <td>19</td> <td>38.2</td> </tr> <tr> <td>Deep Bi-RNN</td> <td>60.3</td> <td>17.6</td> <td>40.6</td> <td>2543</td> <td>1255</td> <td>49.4</td> <td>19.9</td> <td>38.1</td> </tr> <tr> <td>Bi-LSTM</td> <td>54.5</td> <td>28.2</td> <td>46</td> <td>3898</td> <td>1798</td> <td>46.1</td> <td>28.5</td> <td>41.1</td> </tr> <tr> <td>Deep Bi-LSTM</td> <td>56.7</td> <td>21.3</td> <td>42.5</td> <td>2822</td> <td>1359</td> <td>48.2</td> <td>21.6</td> <td>38.6</td> </tr> </tbody></table>
Table 1
table_1
P16-1112
5
acl2016
Table 1 contains results for experiments comparing different composition architectures on the task of error detection. The CRF has the lowest F0.5 score compared to any of the neural models. It memorises frequent error sequences with high precision, but does not generalise sufficiently, resulting in low recall. The ability to condition on the previous label also does not provide much help on this task - there are only two possible labels and the errors are relatively sparse.
[1, 1, 1, 2]
['Table 1 contains results for experiments comparing different composition architectures on the task of error detection.', 'The CRF has the lowest F0.5 score compared to any of the neural models.', 'It memorises frequent error sequences with high precision, but does not generalise sufficiently, resulting in low recall.', 'The ability to condition on the previous label also does not provide much help on this task - there are only two possible labels and the errors are relatively sparse.']
[None, ['CRF', 'F0.5'], ['CRF', 'P', 'R'], None]
1
P16-1112table_2
Results on the public FCE test set when incrementally providing more training data to the error detection model.
2
[['Training Data', 'FCE-public'], ['Training Data', '+NUCLE A4'], ['Training Data', '+IELTS'], ['Training Data', '+FCE'], ['Training Data', '+CPE'], ['Training Data', '+CAE']]
2
[['Dev', 'F0.5'], ['Test', 'F0.5']]
[['46', '41.1'], ['39', '41'], ['45.6', '50.7'], ['57.2', '61.1'], ['59', '62.1'], ['60.7', '64.3']]
column
['F0.5', 'F0.5']
['Training Data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || F0.5</th> <th>Test || F0.5</th> </tr> </thead> <tbody> <tr> <td>Training Data || FCE-public</td> <td>46</td> <td>41.1</td> </tr> <tr> <td>Training Data || +NUCLE A4</td> <td>39</td> <td>41</td> </tr> <tr> <td>Training Data || +IELTS</td> <td>45.6</td> <td>50.7</td> </tr> <tr> <td>Training Data || +FCE</td> <td>57.2</td> <td>61.1</td> </tr> <tr> <td>Training Data || +CPE</td> <td>59</td> <td>62.1</td> </tr> <tr> <td>Training Data || +CAE</td> <td>60.7</td> <td>64.3</td> </tr> </tbody></table>
Table 2
table_2
P16-1112
6
acl2016
Table 2 contains results obtained by incrementally adding training data to the Bi-LSTM model. We found that incorporating the NUCLE dataset does not improve performance over using only the FCE-public dataset, which is likely due to the two corpora containing texts with different domains and writing styles. The texts in FCE are written by young intermediate students, in response to prompts eliciting letters, emails and reviews, whereas NUCLE contains mostly argumentative essays written by advanced adult learners. The differences in the datasets offset the benefits from additional training data, and the performance remains roughly the same.
[1, 1, 2, 1]
['Table 2 contains results obtained by incrementally adding training data to the Bi-LSTM model.', 'We found that incorporating the NUCLE dataset does not improve performance over using only the FCE-public dataset, which is likely due to the two corpora containing texts with different domains and writing styles.', 'The texts in FCE are written by young intermediate students, in response to prompts eliciting letters, emails and reviews, whereas NUCLE contains mostly argumentative essays written by advanced adult learners.', 'The differences in the datasets offset the benefits from additional training data, and the performance remains roughly the same.']
[['Training Data'], ['+NUCLE A4', 'FCE-public'], ['FCE-public', '+NUCLE A4'], ['Training Data']]
1
P16-1113table_7
Results (in percentage) on the CoNLL2009 test sets for Chinese, German and Spanish.
2
[['Chinese', 'PathLSTM'], ['Chinese', 'Bjorkelund et al. (2009)'], ['Chinese', 'Zhao et al. (2009)'], ['German', 'PathLSTM'], ['German', 'Bjorkelund et al. (2009)'], ['German', 'Che et al. (2009)'], ['Spanish', 'Zhao et al. (2009)'], ['Spanish', 'PathLSTM'], ['Spanish', 'Bjorkelund et al. (2009)']]
1
[['P'], ['R'], ['F1']]
[['83.2', '75.9', '79.4'], ['82.4', '75.1', '78.6'], ['80.4', '75.2', '77.7'], ['81.8', '78.5', '80.1'], ['81.2', '78.3', '79.7'], ['82.1', '75.4', '78.6'], ['83.1', '78', '80.5'], ['83.2', '77.4', '80.2'], ['78.9', '74.3', '76.5']]
column
['P', 'R', 'F1']
['PathLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Chinese || PathLSTM</td> <td>83.2</td> <td>75.9</td> <td>79.4</td> </tr> <tr> <td>Chinese || Bjorkelund et al. (2009)</td> <td>82.4</td> <td>75.1</td> <td>78.6</td> </tr> <tr> <td>Chinese || Zhao et al. (2009)</td> <td>80.4</td> <td>75.2</td> <td>77.7</td> </tr> <tr> <td>German || PathLSTM</td> <td>81.8</td> <td>78.5</td> <td>80.1</td> </tr> <tr> <td>German || Bjorkelund et al. (2009)</td> <td>81.2</td> <td>78.3</td> <td>79.7</td> </tr> <tr> <td>German || Che et al. (2009)</td> <td>82.1</td> <td>75.4</td> <td>78.6</td> </tr> <tr> <td>Spanish || Zhao et al. (2009)</td> <td>83.1</td> <td>78</td> <td>80.5</td> </tr> <tr> <td>Spanish || PathLSTM</td> <td>83.2</td> <td>77.4</td> <td>80.2</td> </tr> <tr> <td>Spanish || Bjorkelund et al. (2009)</td> <td>78.9</td> <td>74.3</td> <td>76.5</td> </tr> </tbody></table>
Table 7
table_7
P16-1113
8
acl2016
The results, summarized in Table 7, indicate that PathLSTM performs better than the system by Bjorkelund et al. (2009) in all cases. For German and Chinese, PathLSTM achieves the best overall F1-scores of 80.1% and 79.4%, respectively.
[1, 1]
['The results, summarized in Table 7, indicate that PathLSTM performs better than the system by Bjorkelund et al. (2009) in all cases.', 'For German and Chinese, PathLSTM achieves the best overall F1-scores of 80.1% and 79.4%, respectively.']
[['PathLSTM', 'Bjorkelund et al. (2009)'], ['PathLSTM', 'Chinese', 'German', 'F1']]
1
P16-1116table_1
Overall performance with gold-standard entities, timex, and values, the candidate arguments are annotated in ACE 2005. “ET” means the pattern balancing event type classifier, “Regu” means the regularization method
2
[['Method', 'JET'], ['Method', 'Cross-Event'], ['Method', 'Cross-Entity'], ['Method', 'Joint'], ['Method', 'DMCNN'], ['Method', 'RBPB(JET)'], ['Method', 'RBPB(JET) + ET'], ['Method', 'RBPB(JET) + Regu'], ['Method', 'RBPB(JET) + ET + Regu']]
2
[['Trigger Classification', 'P'], ['Trigger Classification', 'R'], ['Trigger Classification', 'F1'], ['Argument Identification', 'P'], ['Argument Identification', 'R'], ['Argument Identification', 'F1'], ['Argument Role', 'P'], ['Argument Role', 'R'], ['Argument Role', 'F1']]
[['67.6', '53.5', '59.7', '46.5', '37.2', '41.3', '41', '32.8', '36.5'], ['68.7', '68.9', '68.8', '50.9', '49.7', '50.3', '45.1', '44.1', '44.6'], ['72.9', '64.3', '68.3', '53.4', '52.9', '53.1', '51.6', '45.5', '48.3'], ['73.7', '62.3', '67.5', '69.8', '47.9', '56.8', '64.7', '44.4', '52.7'], ['75.6', '63.6', '69.1', '68.8', '51.9', '59.1', '62.2', '46.9', '53.5'], ['62.3', '59.9', '61.1', '50.4', '45.8', '48.0', '41.9', '36.5', '39.0'], ['66.7', '65.9', '66.3', '60.6', '56.7', '58.6', '49.2', '48.3', '48.7'], ['67.2', '61.7', '64.3', '62.8', '57.5', '60.0', '52.6', '48.4', '50.4'], ['70.3', '67.5', '68.9', '63.2', '59.4', '61.2', '54.1', '53.5', '53.8']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['RBPB(JET)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Classification || P</th> <th>Trigger Classification || R</th> <th>Trigger Classification || F1</th> <th>Argument Identification || P</th> <th>Argument Identification || R</th> <th>Argument Identification || F1</th> <th>Argument Role || P</th> <th>Argument Role || R</th> <th>Argument Role || F1</th> </tr> </thead> <tbody> <tr> <td>Method || JET</td> <td>67.6</td> <td>53.5</td> <td>59.7</td> <td>46.5</td> <td>37.2</td> <td>41.3</td> <td>41</td> <td>32.8</td> <td>36.5</td> </tr> <tr> <td>Method || Cross-Event</td> <td>68.7</td> <td>68.9</td> <td>68.8</td> <td>50.9</td> <td>49.7</td> <td>50.3</td> <td>45.1</td> <td>44.1</td> <td>44.6</td> </tr> <tr> <td>Method || Cross-Entity</td> <td>72.9</td> <td>64.3</td> <td>68.3</td> <td>53.4</td> <td>52.9</td> <td>53.1</td> <td>51.6</td> <td>45.5</td> <td>48.3</td> </tr> <tr> <td>Method || Joint</td> <td>73.7</td> <td>62.3</td> <td>67.5</td> <td>69.8</td> <td>47.9</td> <td>56.8</td> <td>64.7</td> <td>44.4</td> <td>52.7</td> </tr> <tr> <td>Method || DMCNN</td> <td>75.6</td> <td>63.6</td> <td>69.1</td> <td>68.8</td> <td>51.9</td> <td>59.1</td> <td>62.2</td> <td>46.9</td> <td>53.5</td> </tr> <tr> <td>Method || RBPB(JET)</td> <td>62.3</td> <td>59.9</td> <td>61.1</td> <td>50.4</td> <td>45.8</td> <td>48.0</td> <td>41.9</td> <td>36.5</td> <td>39.0</td> </tr> <tr> <td>Method || RBPB(JET) + ET</td> <td>66.7</td> <td>65.9</td> <td>66.3</td> <td>60.6</td> <td>56.7</td> <td>58.6</td> <td>49.2</td> <td>48.3</td> <td>48.7</td> </tr> <tr> <td>Method || RBPB(JET) + Regu</td> <td>67.2</td> <td>61.7</td> <td>64.3</td> <td>62.8</td> <td>57.5</td> <td>60.0</td> <td>52.6</td> <td>48.4</td> <td>50.4</td> </tr> <tr> <td>Method || RBPB(JET) + ET + Regu</td> <td>70.3</td> <td>67.5</td> <td>68.9</td> <td>63.2</td> <td>59.4</td> <td>61.2</td> <td>54.1</td> <td>53.5</td> <td>53.8</td> </tr> </tbody></table>
Table 1
table_1
P16-1116
7
acl2016
Table 1 shows the overall performance on the blind test set. We compare our results with the JET baseline as well as the Cross-Event, Cross-Entity, and joint methods. When adding the event type classifier, in the line titled “+ ET”, we see a significant increase in the three measures over the JET baseline in recall. Although our trigger’s precision is lower than RBPB(JET), it gains 5.2% improvement on the trigger’s F1 measure, 10.6% improvement on argument identification’s F1 measure and 9.7% improvement on argument classification’s F1 measure. Future work may be done to solve these two limitations. The line titled “+ Regu” in Table 1 represents the performance when we only use the regularization method. In Table 1, compared to the four baseline systems, the argument identification’s F1 measure of “+ Regu” is significantly higher. The complete approach is denoted as “RBPB” in Table 1. Remarkably, our approach performs comparably in trigger classification with the state-of-the-art methods: Cross-Event, Cross-Entity, Joint model, DMCNN, and performs significantly better than them in argument identification as well as classification although we did not use the cross-document, cross-event information or any global feature.
[1, 1, 1, 1, 2, 1, 1, 1, 1]
['Table 1 shows the overall performance on the blind test set.', 'We compare our results with the JET baseline as well as the Cross-Event, Cross-Entity, and joint methods.', 'When adding the event type classifier, in the line titled “+ ET”, we see a significant increase in the three measures over the JET baseline in recall.', 'Although our trigger’s precision is lower than RBPB(JET), it gains 5.2% improvement on the trigger’s F1 measure, 10.6% improvement on argument identification’s F1 measure and 9.7% improvement on argument classification’s F1 measure.', 'Future work may be done to solve these two limitations.', 'The line titled “+ Regu” in Table 1 represents the performance when we only use the regularization method.', 'In Table 1, compared to the four baseline systems, the argument identification’s F1 measure of “+ Regu” is significantly higher.', 'The complete approach is denoted as “RBPB” in Table 1.', 'Remarkably, our approach performs comparably in trigger classification with the state-of-the-art methods: Cross-Event, Cross-Entity, Joint model, DMCNN, and performs significantly better than them in argument identification as well as classification although we did not use the cross-document, cross-event information or any global feature.']
[None, ['Cross-Event', 'Cross-Entity', 'Joint'], ['JET', 'RBPB(JET) + ET', 'R'], ['RBPB(JET)', 'Trigger Classification', 'P', 'F1'], ['JET', 'RBPB(JET)'], ['RBPB(JET) + Regu'], ['Cross-Event', 'Cross-Entity', 'Joint', 'DMCNN', 'RBPB(JET) + Regu', 'F1'], ['RBPB(JET)'], ['Trigger Classification', 'RBPB(JET)', 'Cross-Event', 'Cross-Entity', 'Joint', 'DMCNN']]
1
P16-1116table_2
Overall performance with predicted entities, timex, and values, the candidate arguments are extracted by JET. “ET” is the pattern balancing event type classifier, “Regu” is the regularization method
2
[['Method', 'JET'], ['Method', 'Cross-Document'], ['Method', 'Joint'], ['Method', 'RBPB(JET)'], ['Method', 'RBPB(JET) + ET'], ['Method', 'RBPB(JET) + Regu'], ['Method', 'RBPB(JET) + ET + Regu']]
2
[['Trigger', 'F1'], ['Arg id', 'F1'], ['Arg id+cl', 'F1']]
[['59.7', '42.5', '36.6'], ['67.3', '46.2', '42.6'], ['65.6', '-', '41.8'], ['60.4', '44.3', '37.1'], ['66', '47.8', '39.7'], ['64.8', '54.6', '42'], ['67.8', '55.4', '43.8']]
column
['F1', 'F1', 'F1']
['RBPB(JET)', 'RBPB(JET) + ET', 'RBPB(JET) + Regu', 'RBPB(JET) + ET + Regu']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger || F1</th> <th>Arg id || F1</th> <th>Arg id+cl || F1</th> </tr> </thead> <tbody> <tr> <td>Method || JET</td> <td>59.7</td> <td>42.5</td> <td>36.6</td> </tr> <tr> <td>Method || Cross-Document</td> <td>67.3</td> <td>46.2</td> <td>42.6</td> </tr> <tr> <td>Method || Joint</td> <td>65.6</td> <td>-</td> <td>41.8</td> </tr> <tr> <td>Method || RBPB(JET)</td> <td>60.4</td> <td>44.3</td> <td>37.1</td> </tr> <tr> <td>Method || RBPB(JET) + ET</td> <td>66</td> <td>47.8</td> <td>39.7</td> </tr> <tr> <td>Method || RBPB(JET) + Regu</td> <td>64.8</td> <td>54.6</td> <td>42</td> </tr> <tr> <td>Method || RBPB(JET) + ET + Regu</td> <td>67.8</td> <td>55.4</td> <td>43.8</td> </tr> </tbody></table>
Table 2
table_2
P16-1116
8
acl2016
We test the performance with argument candidates automatically extracted by JET in Table 2; our approach “+ ET” again significantly outperforms the JET baseline. Remarkably, our result is comparable with the Joint model although we only use lexical features. The line titled “+ Regu” in Table 2 represents the performance when we only use the regularization method. In Table 2, the “+ Regu” again gains a higher F1 measure than the JET, Cross-Document, Joint model baselines and “+ ET”. The complete approach is denoted as “RBPB” in Table 2. Remarkably, our approach performs comparably in trigger classification with the state-of-the-art methods: Cross-Document, Joint model, and performs significantly better than them in argument identification as well as classification although we did not use the cross-document, cross-event information or any global feature.
[1, 1, 1, 1, 1, 1]
['We test the performance with argument candidates automatically extracted by JET in Table 2; our approach “+ ET” again significantly outperforms the JET baseline.', 'Remarkably, our result is comparable with the Joint model although we only use lexical features.', 'The line titled “+ Regu” in Table 2 represents the performance when we only use the regularization method.', 'In Table 2, the “+ Regu” again gains a higher F1 measure than the JET, Cross-Document, Joint model baselines and “+ ET”.', 'The complete approach is denoted as “RBPB” in Table 2.', 'Remarkably, our approach performs comparably in trigger classification with the state-of-the-art methods: Cross-Document, Joint model, and performs significantly better than them in argument identification as well as classification although we did not use the cross-document, cross-event information or any global feature.']
[['JET', 'RBPB(JET) + ET'], ['RBPB(JET) + ET', 'Joint'], ['RBPB(JET) + Regu'], ['RBPB(JET) + Regu', 'Trigger', 'F1', 'Arg id', 'Arg id+cl', 'JET', 'Cross-Document', 'Joint', 'RBPB(JET) + ET'], ['RBPB(JET)'], ['RBPB(JET)', 'RBPB(JET) + ET', 'RBPB(JET) + Regu', 'RBPB(JET) + ET + Regu', 'Trigger', 'F1', 'Cross-Document', 'Joint']]
1
P16-1118table_4
System performance on test data (* indicates statistical significance)
2
[['System', 'Lucene'], ['System', 'EDITS'], ['System', 'TIE'], ['System', 'ENT']]
2
[['Newswire', 'Precision'], ['Newswire', 'Recall'], ['Newswire', 'F-score'], ['Clinical', 'Precision'], ['Clinical', 'Recall'], ['Clinical', 'F-score']]
[['0.47', '0.48', '0.47*', '0.16', '0.22', '0.19'], ['0.22', '0.57', '0.32', '0.23', '0.21', '0.20'], ['0.66', '0.21', '0.31', '0.43', '0.01', '0.02'], ['0.77', '0.26', '0.39', '0.42', '0.15', '0.23*']]
column
['Precision', 'Recall', 'F-score', 'Precision', 'Recall', 'F-score']
['ENT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Newswire || Precision</th> <th>Newswire || Recall</th> <th>Newswire || F-score</th> <th>Clinical || Precision</th> <th>Clinical || Recall</th> <th>Clinical || F-score</th> </tr> </thead> <tbody> <tr> <td>System || Lucene</td> <td>0.47</td> <td>0.48</td> <td>0.47*</td> <td>0.16</td> <td>0.22</td> <td>0.19</td> </tr> <tr> <td>System || EDITS</td> <td>0.22</td> <td>0.57</td> <td>0.32</td> <td>0.23</td> <td>0.21</td> <td>0.20</td> </tr> <tr> <td>System || TIE</td> <td>0.66</td> <td>0.21</td> <td>0.31</td> <td>0.43</td> <td>0.01</td> <td>0.02</td> </tr> <tr> <td>System || ENT</td> <td>0.77</td> <td>0.26</td> <td>0.39</td> <td>0.42</td> <td>0.15</td> <td>0.23*</td> </tr> </tbody></table>
Table 4
table_4
P16-1118
6
acl2016
Table 4 summarizes the system performance on newswire and clinical data. We observe that systems that did well on RTE datasets were mediocre on the clinical dataset. We did not, however, put any effort into adaptation of TIE and EDITS to the clinical data. So the mediocre performance on clinical is understandable. It is interesting to see though that ENT did well (comparatively) on both domains. We note that our problem setting is most similar to the RTE-5 entailment search task. Of the 20 runs across eight teams that participated in RTE-5, the median F-Score was 0.30 and the best system (Mirkin et al., 2009) achieved an F-Score of 0.46. EDITS and TIE perform slightly above the median and ENT (with 0.39 F-score) would have ranked third in the challenge. The performance of all systems on the clinical data is noticeably low as compared to the newswire data.
[1, 1, 2, 2, 1, 2, 2, 2, 1]
['Table 4 summarizes the system performance on newswire and clinical data.', 'We observe that systems that did well on RTE datasets were mediocre on the clinical dataset.', 'We did not, however, put any effort into adaptation of TIE and EDITS to the clinical data.', 'So the mediocre performance on clinical is understandable.', 'It is interesting to see though that ENT did well (comparatively) on both domains.', 'We note that our problem setting is most similar to the RTE-5 entailment search task.', 'Of the 20 runs across eight teams that participated in RTE-5, the median F-Score was 0.30 and the best system (Mirkin et al., 2009) achieved an F-Score of 0.46.', 'EDITS and TIE perform slightly above the median and ENT (with 0.39 F-score) would have ranked third in the challenge.', 'The performance of all systems on the clinical data is noticeably low as compared to the newswire data.']
[['Newswire', 'Clinical'], ['Clinical'], ['TIE', 'EDITS', 'Clinical'], ['Clinical'], ['ENT', 'Newswire', 'Clinical'], None, ['F-score'], ['EDITS', 'TIE', 'ENT', 'F-score'], ['System', 'Clinical', 'Newswire']]
1
P16-1120table_3
Performance of Translation Extraction
2
[['Method', 'Cue(BiLDA)'], ['Method', 'Cue(BiSTM)'], ['Method', 'Cue(BiSTM+TS)'], ['Method', 'Liu(BiLDA)'], ['Method', 'Liu(BiSTM)'], ['Method', 'Liu(BiSTM+TS)']]
2
[['ACC1', 'K=100'], ['ACC1', 'K=400'], ['ACC1', 'K=2000'], ['ACC10', 'K=100'], ['ACC10', 'K=400'], ['ACC10', 'K=2000']]
[['0.024', '0.056', '0.101', '0.093', '0.170', '0.281'], ['0.055', '0.112', '0.184', '0.218', '0.286', '0.410'], ['0.052', '0.107', '0.176', '0.196', '0.274', '0.398'], ['0.206', '0.345', '0.426', '0.463', '0.550', '0.603'], ['0.287', '0.414', '0.479', '0.531', '0.625', '0.671'], ['0.283', '0.406', '0.467', '0.536', '0.612', '0.667']]
column
['ACC1', 'ACC1', 'ACC1', 'ACC10', 'ACC10', 'ACC10']
['Cue(BiSTM+TS)', 'Liu(BiSTM+TS)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ACC1 || K=100</th> <th>ACC1 || K=400</th> <th>ACC1 || K=2000</th> <th>ACC10 || K=100</th> <th>ACC10 || K=400</th> <th>ACC10 || K=2000</th> </tr> </thead> <tbody> <tr> <td>Method || Cue(BiLDA)</td> <td>0.024</td> <td>0.056</td> <td>0.101</td> <td>0.093</td> <td>0.170</td> <td>0.281</td> </tr> <tr> <td>Method || Cue(BiSTM)</td> <td>0.055</td> <td>0.112</td> <td>0.184</td> <td>0.218</td> <td>0.286</td> <td>0.410</td> </tr> <tr> <td>Method || Cue(BiSTM+TS)</td> <td>0.052</td> <td>0.107</td> <td>0.176</td> <td>0.196</td> <td>0.274</td> <td>0.398</td> </tr> <tr> <td>Method || Liu(BiLDA)</td> <td>0.206</td> <td>0.345</td> <td>0.426</td> <td>0.463</td> <td>0.550</td> <td>0.603</td> </tr> <tr> <td>Method || Liu(BiSTM)</td> <td>0.287</td> <td>0.414</td> <td>0.479</td> <td>0.531</td> <td>0.625</td> <td>0.671</td> </tr> <tr> <td>Method || Liu(BiSTM+TS)</td> <td>0.283</td> <td>0.406</td> <td>0.467</td> <td>0.536</td> <td>0.612</td> <td>0.667</td> </tr> </tbody></table>
Table 3
table_3
P16-1120
7
acl2016
We measured the performance of translation extraction with top N accuracy (ACCN), the number of test words whose top N translation candidates contain a correct translation over the total number of test words (7,930). Table 3 summarizes ACC1 and ACC10 for each model. As can be seen, Cue/Liu(BiSTM) and Cue/Liu(BiSTM+TS) significantly outperform Cue/Liu(BiLDA) (p < 0.01 in the sign test). This indicates that BiSTM and BiSTM+TS improve the performance of translation extraction for both the Cue and Liu methods by assigning more suitable topics.
[2, 1, 1, 1]
['We measured the performance of translation extraction with top N accuracy (ACCN), the number of test words whose top N translation candidates contain a correct translation over the total number of test words (7,930).', 'Table 3 summarizes ACC1 and ACC10 for each model.', 'As can be seen, Cue/Liu(BiSTM) and Cue/Liu(BiSTM+TS) significantly outperform Cue/Liu(BiLDA) (p < 0.01 in the sign test).', 'This indicates that BiSTM and BiSTM+TS improve the performance of translation extraction for both the Cue and Liu methods by assigning more suitable topics.']
[['ACC1', 'ACC10'], ['ACC1', 'ACC10'], ['Cue(BiSTM)', 'Liu(BiSTM)', 'Cue(BiSTM+TS)', 'Liu(BiSTM+TS)', 'Cue(BiLDA)', 'Liu(BiLDA)'], ['Cue(BiSTM+TS)', 'Liu(BiSTM+TS)']]
1
P16-1123table_3
Comparison with results published in the literature, where ‘∗’ refers to models from Nguyen and Grishman (2015).
3
[['Classifier', 'Manually Engineered Methods', 'SVM (Rink and Harabagiu 2010)'], ['Classifier', 'Dependency Methods', 'RNN (Socher et al. 2012)'], ['Classifier', 'Dependency Methods', 'MVRNN (Socher et al. 2012)'], ['Classifier', 'Dependency Methods', 'FCM (Yu et al. 2014)'], ['Classifier', 'Dependency Methods', 'Hybrid FCM (Yu et al. 2014)'], ['Classifier', 'Dependency Methods', 'SDP-LSTM (Xu et al. 2015b)'], ['Classifier', 'Dependency Methods', 'DRNNs (Xu et al. 2016)'], ['Classifier', 'Dependency Methods', 'SPTree (Miwa and Bansal 2016)'], ['Classifier', 'End-To-End Methods', 'CNN+ Softmax (Zeng et al. 2014)'], ['Classifier', 'End-To-End Methods', 'CR-CNN (dos Santos et al. 2015)'], ['Classifier', 'End-To-End Methods', 'DepNN (Liu et al. 2015)'], ['Classifier', 'End-To-End Methods', 'depLCNN+NS (Xu et al. 2015a)'], ['Classifier', 'End-To-End Methods', 'STACK-FORWARD*'], ['Classifier', 'End-To-End Methods', 'VOTE-BIDIRECT*'], ['Classifier', 'End-To-End Methods', 'VOTE-BACKWARD*'], ['Classifier', 'Our Architectures', 'Att-Input-CNN'], ['Classifier', 'Our Architectures', 'Att-Pooling-CNN']]
1
[['F1']]
[['82.2'], ['77.6'], ['82.4'], ['83'], ['83.4'], ['83.7'], ['85.8'], ['84.5'], ['82.7'], ['84.1'], ['83.6'], ['85.6'], ['83.4'], ['84.1'], ['84.1'], ['87.5'], ['88']]
column
['F1']
['Our Architectures']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Classifier || Manually Engineered Methods || SVM (Rink and Harabagiu 2010)</td> <td>82.2</td> </tr> <tr> <td>Classifier || Dependency Methods || RNN (Socher et al. 2012)</td> <td>77.6</td> </tr> <tr> <td>Classifier || Dependency Methods || MVRNN (Socher et al. 2012)</td> <td>82.4</td> </tr> <tr> <td>Classifier || Dependency Methods || FCM (Yu et al. 2014)</td> <td>83</td> </tr> <tr> <td>Classifier || Dependency Methods || Hybrid FCM (Yu et al. 2014)</td> <td>83.4</td> </tr> <tr> <td>Classifier || Dependency Methods || SDP-LSTM (Xu et al. 2015b)</td> <td>83.7</td> </tr> <tr> <td>Classifier || Dependency Methods || DRNNs (Xu et al. 2016)</td> <td>85.8</td> </tr> <tr> <td>Classifier || Dependency Methods || SPTree (Miwa and Bansal 2016)</td> <td>84.5</td> </tr> <tr> <td>Classifier || End-To-End Methods || CNN+ Softmax (Zeng et al. 2014)</td> <td>82.7</td> </tr> <tr> <td>Classifier || End-To-End Methods || CR-CNN (dos Santos et al. 2015)</td> <td>84.1</td> </tr> <tr> <td>Classifier || End-To-End Methods || DepNN (Liu et al. 2015)</td> <td>83.6</td> </tr> <tr> <td>Classifier || End-To-End Methods || depLCNN+NS (Xu et al. 2015a)</td> <td>85.6</td> </tr> <tr> <td>Classifier || End-To-End Methods || STACK-FORWARD*</td> <td>83.4</td> </tr> <tr> <td>Classifier || End-To-End Methods || VOTE-BIDIRECT*</td> <td>84.1</td> </tr> <tr> <td>Classifier || End-To-End Methods || VOTE-BACKWARD*</td> <td>84.1</td> </tr> <tr> <td>Classifier || Our Architectures || Att-Input-CNN</td> <td>87.5</td> </tr> <tr> <td>Classifier || Our Architectures || Att-Pooling-CNN</td> <td>88</td> </tr> </tbody></table>
Table 3
table_3
P16-1123
7
acl2016
Table 3 provides a detailed comparison of our Multi-Level Attention CNN model with previous approaches. We observe that our novel attention-based architecture achieves new state-of-the-art results on this relation classification dataset. Att-Input-CNN relies only on the primal attention at the input level, performing standard max-pooling after the convolution layer to generate the network output w^O, in which the new objective function is utilized. With Att-Input-CNN, we achieve an F1-score of 87.5%, thus already outperforming not only the original winner of the SemEval task, an SVM-based approach (82.2%), but also the well-known CR-CNN model (84.1%) with a relative improvement of 4.04%, and the newly released DRNNs (85.8%) with a relative improvement of 2.0%, although the latter approach depends on the Stanford parser to obtain dependency parse information. Our full dual attention model Att-Pooling-CNN achieves an even more favorable F1-score of 88%.
[1, 1, 2, 1, 1]
['Table 3 provides a detailed comparison of our Multi-Level Attention CNN model with previous approaches.', 'We observe that our novel attention-based architecture achieves new state-of-the-art results on this relation classification dataset.', 'Att-Input-CNN relies only on the primal attention at the input level, performing standard max-pooling after the convolution layer to generate the network output w^O, in which the new objective function is utilized.', 'With Att-Input-CNN, we achieve an F1-score of 87.5%, thus already outperforming not only the original winner of the SemEval task, an SVM-based approach (82.2%), but also the well-known CR-CNN model (84.1%) with a relative improvement of 4.04%, and the newly released DRNNs (85.8%) with a relative improvement of 2.0%, although the latter approach depends on the Stanford parser to obtain dependency parse information.', 'Our full dual attention model Att-Pooling-CNN achieves an even more favorable F1-score of 88%.']
[None, ['Our Architectures'], ['Att-Input-CNN'], ['Att-Input-CNN', 'F1', 'SVM (Rink and Harabagiu 2010)', 'CR-CNN (dos Santos et al. 2015)', 'DRNNs (Xu et al. 2016)'], ['Att-Pooling-CNN', 'F1']]
1
P16-1123table_4
Comparison between the main model and variants.
2
[['Classifier', 'Att-Input-CNN (Main)'], ['Classifier', 'Att-Input-CNN (Variant-1)'], ['Classifier', 'Att-Input-CNN (Variant-2)']]
1
[['F1']]
[['87.5'], ['87.2'], ['87.3']]
column
['F1']
['Att-Input-CNN (Main)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Classifier || Att-Input-CNN (Main)</td> <td>87.5</td> </tr> <tr> <td>Classifier || Att-Input-CNN (Variant-1)</td> <td>87.2</td> </tr> <tr> <td>Classifier || Att-Input-CNN (Variant-2)</td> <td>87.3</td> </tr> </tbody></table>
Table 4
table_4
P16-1123
7
acl2016
Table 4 provides the experimental results for the two variants of the model given by Eqs.(7) and (8) in Section 3.3. Our main model outperforms the other variants on this dataset, although the variants may still prove useful when applied to other tasks.
[1, 1]
['Table 4 provides the experimental results for the two variants of the model given by Eqs.(7) and (8) in Section 3.3.', 'Our main model outperforms the other variants on this dataset, although the variants may still prove useful when applied to other tasks.']
[None, ['Att-Input-CNN (Main)', 'Att-Input-CNN (Variant-1)', 'Att-Input-CNN (Variant-2)']]
1
P16-1126table_3
Precision/Recall/F1 for the three models. The three starred categories resulted from the decomposition of the original Other category, which is excluded here. Categories are ordered in this table in descending order by frequency in the dataset.
2
[['Category', 'First Party Collection/Use'], ['Category', 'Third Party Sharing/Collection'], ['Category', 'User Choice/Control'], ['Category', 'Introductory/Generic*'], ['Category', 'Data Security'], ['Category', 'Internat’l and Specific Audiences'], ['Category', 'Privacy Contact Information*'], ['Category', 'User Access, Edit, and Deletion'], ['Category', 'Practice Not Covered*'], ['Category', 'Policy Change'], ['Category', 'Data Retention'], ['Category', 'Do Not Track'], ['Micro-Average', '-']]
2
[['LR', 'P'], ['LR', 'R'], ['LR', 'F'], ['SVM', 'P'], ['SVM', 'R'], ['SVM', 'F'], ['HMM', 'P'], ['HMM', 'R'], ['HMM', 'F']]
[['0.73', '0.67', '0.7', '0.76', '0.73', '0.75', '0.69', '0.76', '0.72'], ['0.64', '0.63', '0.63', '0.67', '0.73', '0.7', '0.63', '0.61', '0.62'], ['0.45', '0.62', '0.52', '0.65', '0.58', '0.61', '0.47', '0.33', '0.39'], ['0.51', '0.5', '0.5', '0.58', '0.49', '0.53', '0.54', '0.49', '0.51'], ['0.48', '0.75', '0.59', '0.66', '0.67', '0.67', '0.67', '0.53', '0.59'], ['0.49', '0.69', '0.57', '0.7', '0.7', '0.7', '0.67', '0.66', '0.66'], ['0.34', '0.72', '0.46', '0.6', '0.68', '0.64', '0.48', '0.59', '0.53'], ['0.47', '0.71', '0.57', '0.67', '0.56', '0.61', '0.48', '0.42', '0.45'], ['0.2', '0.47', '0.28', '0.19', '0.26', '0.22', '0.15', '0.12', '0.13'], ['0.59', '0.83', '0.69', '0.66', '0.88', '0.75', '0.52', '0.68', '0.59'], ['0.1', '0.35', '0.16', '0.12', '0.12', '0.12', '0.08', '0.12', '0.09'], ['0.45', '1', '0.62', '1', '1', '1', '0.45', '0.4', '0.41'], ['0.53', '0.65', '0.58', '0.66', '0.66', '0.66', '0.6', '0.59', '0.6']]
column
['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F']
['HMM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LR || P</th> <th>LR || R</th> <th>LR || F</th> <th>SVM || P</th> <th>SVM || R</th> <th>SVM || F</th> <th>HMM || P</th> <th>HMM || R</th> <th>HMM || F</th> </tr> </thead> <tbody> <tr> <td>Category || First Party Collection/Use</td> <td>0.73</td> <td>0.67</td> <td>0.7</td> <td>0.76</td> <td>0.73</td> <td>0.75</td> <td>0.69</td> <td>0.76</td> <td>0.72</td> </tr> <tr> <td>Category || Third Party Sharing/Collection</td> <td>0.64</td> <td>0.63</td> <td>0.63</td> <td>0.67</td> <td>0.73</td> <td>0.7</td> <td>0.63</td> <td>0.61</td> <td>0.62</td> </tr> <tr> <td>Category || User Choice/Control</td> <td>0.45</td> <td>0.62</td> <td>0.52</td> <td>0.65</td> <td>0.58</td> <td>0.61</td> <td>0.47</td> <td>0.33</td> <td>0.39</td> </tr> <tr> <td>Category || Introductory/Generic*</td> <td>0.51</td> <td>0.5</td> <td>0.5</td> <td>0.58</td> <td>0.49</td> <td>0.53</td> <td>0.54</td> <td>0.49</td> <td>0.51</td> </tr> <tr> <td>Category || Data Security</td> <td>0.48</td> <td>0.75</td> <td>0.59</td> <td>0.66</td> <td>0.67</td> <td>0.67</td> <td>0.67</td> <td>0.53</td> <td>0.59</td> </tr> <tr> <td>Category || Internat’l and Specific Audiences</td> <td>0.49</td> <td>0.69</td> <td>0.57</td> <td>0.7</td> <td>0.7</td> <td>0.7</td> <td>0.67</td> <td>0.66</td> <td>0.66</td> </tr> <tr> <td>Category || Privacy Contact Information*</td> <td>0.34</td> <td>0.72</td> <td>0.46</td> <td>0.6</td> <td>0.68</td> <td>0.64</td> <td>0.48</td> <td>0.59</td> <td>0.53</td> </tr> <tr> <td>Category || User Access, Edit, and Deletion</td> <td>0.47</td> <td>0.71</td> <td>0.57</td> <td>0.67</td> <td>0.56</td> <td>0.61</td> <td>0.48</td> <td>0.42</td> <td>0.45</td> </tr> <tr> <td>Category || Practice Not Covered*</td> <td>0.2</td> <td>0.47</td> <td>0.28</td> <td>0.19</td> <td>0.26</td> <td>0.22</td> <td>0.15</td> <td>0.12</td> <td>0.13</td> </tr> <tr> <td>Category || Policy Change</td> <td>0.59</td> <td>0.83</td> <td>0.69</td> 
<td>0.66</td> <td>0.88</td> <td>0.75</td> <td>0.52</td> <td>0.68</td> <td>0.59</td> </tr> <tr> <td>Category || Data Retention</td> <td>0.1</td> <td>0.35</td> <td>0.16</td> <td>0.12</td> <td>0.12</td> <td>0.12</td> <td>0.08</td> <td>0.12</td> <td>0.09</td> </tr> <tr> <td>Category || Do Not Track</td> <td>0.45</td> <td>1</td> <td>0.62</td> <td>1</td> <td>1</td> <td>1</td> <td>0.45</td> <td>0.4</td> <td>0.41</td> </tr> <tr> <td>Micro-Average || -</td> <td>0.53</td> <td>0.65</td> <td>0.58</td> <td>0.66</td> <td>0.66</td> <td>0.66</td> <td>0.6</td> <td>0.59</td> <td>0.6</td> </tr> </tbody></table>
Table 3
table_3
P16-1126
9
acl2016
We split the set of 115 policies into subsets of 75 for training and 40 for testing. The number of clusters in the HMM approach is set to 100 and the results are shown in Table 3 as means across 10 runs. The standard deviations for these performance figures are generally between 0.01 and 0.05; the one exception is Do Not Track (the least frequent category) with a standard deviation of 0.2. As the table shows, although the HMM does not reach the same performance as SVM, it performs similarly to logistic regression and meets or exceeds its F1-score for five categories.
[2, 1, 2, 1]
['We split the set of 115 policies into subsets of 75 for training and 40 for testing.', 'The number of clusters in the HMM approach is set to 100 and the results are shown in Table 3 as means across 10 runs.', 'The standard deviations for these performance figures are generally between 0.01 and 0.05; the one exception is Do Not Track (the least frequent category) with a standard deviation of 0.2.', 'As the table shows, although the HMM does not reach the same performance as SVM, it performs similarly to logistic regression and meets or exceeds its F1-score for five categories.']
[None, ['HMM'], None, ['HMM', 'SVM', 'LR', 'F']]
1
P16-1127table_2
Test results over different domains on SPO dataset. The numbers reported correspond to the proportion of cases in which the predicted LF is interpretable against the KB and returns the correct answer. LFP = Logical Form Prediction, CFP = Canonical Form Prediction, DSP = Derivation Sequence Prediction, DSP-C = Derivation Sequence constrained using grammatical knowledge, DSP-CL = Derivation Sequence using a loss function constrained by grammatical knowledge.
1
[['SPO'], ['LFP'], ['CFP'], ['DSP'], ['DSP-C'], ['DSP-CL']]
1
[['Basketball'], ['Social'], ['Publication'], ['Blocks'], ['Calendar'], ['Housing'], ['Restaurants'], ['Avg']]
[['46.3', '48.2', '59', '41.9', '74.4', '54', '75.9', '57.1'], ['73.1', '70.2', '72', '55.4', '71.4', '61.9', '76.5', '68.6'], ['80.3', '79.5', '70.2', '54.1', '73.2', '63.5', '71.1', '70.3'], ['71.6', '67.5', '64', '53.9', '64.3', '55', '76.8', '64.7'], ['80.5', '80', '75.8', '55.6', '75', '61.9', '80.1', '72.7'], ['80.6', '77.6', '70.2', '53.1', '75', '59.3', '74.4', '70']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['LFP', 'CFP', 'DSP', 'DSP-C', 'DSP-CL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Basketball</th> <th>Social</th> <th>Publication</th> <th>Blocks</th> <th>Calendar</th> <th>Housing</th> <th>Restaurants</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>SPO</td> <td>46.3</td> <td>48.2</td> <td>59</td> <td>41.9</td> <td>74.4</td> <td>54</td> <td>75.9</td> <td>57.1</td> </tr> <tr> <td>LFP</td> <td>73.1</td> <td>70.2</td> <td>72</td> <td>55.4</td> <td>71.4</td> <td>61.9</td> <td>76.5</td> <td>68.6</td> </tr> <tr> <td>CFP</td> <td>80.3</td> <td>79.5</td> <td>70.2</td> <td>54.1</td> <td>73.2</td> <td>63.5</td> <td>71.1</td> <td>70.3</td> </tr> <tr> <td>DSP</td> <td>71.6</td> <td>67.5</td> <td>64</td> <td>53.9</td> <td>64.3</td> <td>55</td> <td>76.8</td> <td>64.7</td> </tr> <tr> <td>DSP-C</td> <td>80.5</td> <td>80</td> <td>75.8</td> <td>55.6</td> <td>75</td> <td>61.9</td> <td>80.1</td> <td>72.7</td> </tr> <tr> <td>DSP-CL</td> <td>80.6</td> <td>77.6</td> <td>70.2</td> <td>53.1</td> <td>75</td> <td>59.3</td> <td>74.4</td> <td>70</td> </tr> </tbody></table>
Table 2
table_2
P16-1127
8
acl2016
Results on test data: Table 2 shows the test results of SPO and our different systems over the seven domains. It can be seen that all of our sequence-based systems perform better than SPO by a large margin on these tests. When averaging over the seven domains, our ‘worst’ system DSP scores at 64.7% compared to SPO at 57.1%. We note that these positive results hold despite the fact that DSP has the handicap that it may generate ungrammatical sequences relative to the underlying grammar, which do not lead to interpretable LFs. The LFP and CFP models, with higher performance than DSP, also may generate ungrammatical sequences.
[1, 1, 1, 2, 1]
['Results on test data: Table 2 shows the test results of SPO and our different systems over the seven domains.', 'It can be seen that all of our sequence-based systems perform better than SPO by a large margin on these tests.', 'When averaging over the seven domains, our ‘worst’ system DSP scores at 64.7% compared to SPO at 57.1%.', 'We note that these positive results hold despite the fact that DSP has the handicap that it may generate ungrammatical sequences relative to the underlying grammar, which do not lead to interpretable LFs.', 'The LFP and CFP models, with higher performance than DSP, also may generate ungrammatical sequences.']
[['SPO', 'LFP', 'CFP', 'DSP', 'DSP-C', 'DSP-CL'], ['LFP', 'CFP', 'DSP', 'DSP-C', 'DSP-CL', 'SPO'], ['DSP', 'SPO'], ['DSP'], ['LFP', 'CFP', 'DSP']]
1
P16-1129table_3
Results of feature validation
2
[['Method', 'RF'], ['Method', 'RF - w/o novel'], ['Method', 'RF - w/o trad.']]
1
[['R-1'], ['R-2'], ['R-SU4']]
[['0.38559', '0.11887', '0.14907'], ['0.37297', '0.10964', '0.14021'], ['0.36314', '0.0991', '0.13102']]
column
['R-1', 'R-2', 'R-SU4']
['RF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-SU4</th> </tr> </thead> <tbody> <tr> <td>Method || RF</td> <td>0.38559</td> <td>0.11887</td> <td>0.14907</td> </tr> <tr> <td>Method || RF - w/o novel</td> <td>0.37297</td> <td>0.10964</td> <td>0.14021</td> </tr> <tr> <td>Method || RF - w/o trad.</td> <td>0.36314</td> <td>0.0991</td> <td>0.13102</td> </tr> </tbody></table>
Table 3
table_3
P16-1129
7
acl2016
Different groups of features may play different roles in the LTR models. In order to validate the impact of both the traditional features and the novel task-specific features, we conduct experiments with different combinations by removing each group of features respectively. Table 3 shows the results, with w/o denoting experiments without the corresponding group of features. We can observe that both the traditional features and the novel features contribute useful information for learning to rank models.
[2, 2, 1, 1]
['Different groups of features may play different roles in the LTR models.', 'In order to validate the impact of both the traditional features and the novel task-specific features, we conduct experiments with different combinations by removing each group of features respectively.', 'Table 3 shows the results, with w/o denoting experiments without the corresponding group of features.', 'We can observe that both the traditional features and the novel features contribute useful information for learning to rank models.']
[None, None, None, ['RF']]
1
P16-1130table_2
Translation results. The bold numbers stand for the best systems.
1
[['Base'], ['MERS'], ['CSRS'], ['MERS-MINI'], ['CSRS-MINI']]
1
[['ED'], ['EF'], ['EC'], ['EJ']]
[['15', '26.76', '29.42', '37.1'], ['15.62', '27.33', '29.75', '37.76'], ['16.15', '28.05', '30.12', '37.83'], ['15.77', '28.13', '30.53', '38.14'], ['16.49', '28.3', '31.63', '38.32']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['CSRS', 'MERS', 'CSRS-MINI', 'MERS-MINI']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ED</th> <th>EF</th> <th>EC</th> <th>EJ</th> </tr> </thead> <tbody> <tr> <td>Base</td> <td>15</td> <td>26.76</td> <td>29.42</td> <td>37.1</td> </tr> <tr> <td>MERS</td> <td>15.62</td> <td>27.33</td> <td>29.75</td> <td>37.76</td> </tr> <tr> <td>CSRS</td> <td>16.15</td> <td>28.05</td> <td>30.12</td> <td>37.83</td> </tr> <tr> <td>MERS-MINI</td> <td>15.77</td> <td>28.13</td> <td>30.53</td> <td>38.14</td> </tr> <tr> <td>CSRS-MINI</td> <td>16.49</td> <td>28.3</td> <td>31.63</td> <td>38.32</td> </tr> </tbody></table>
Table 2
table_2
P16-1130
6
acl2016
Table 2 shows the translation results using bootstrap resampling (Koehn, 2004). Base stands for the baseline system without any. MERS, CSRS, MERS-MINI and CSRS-MINI mean that the outputs of the baseline system were reranked using features from the MERS, CSRS, MERS-MINI and CSRS-MINI models respectively. Generally, the CSRS model outperformed the MERS model and the CSRS-MINI model outperformed the MERS-MINI model on different translation tasks.
[1, 2, 2, 1]
['Table 2 shows the translation results using bootstrap resampling (Koehn, 2004).', 'Base stands for the baseline system without any.', 'MERS, CSRS, MERS-MINI and CSRS-MINI mean that the outputs of the baseline system were reranked using features from the MERS, CSRS, MERS-MINI and CSRS-MINI models respectively.', 'Generally, the CSRS model outperformed the MERS model and the CSRS-MINI model outperformed the MERS-MINI model on different translation tasks.']
[None, ['Base'], ['MERS', 'CSRS', 'MERS-MINI', 'CSRS-MINI'], ['CSRS', 'MERS', 'CSRS-MINI', 'MERS-MINI']]
1
P16-1131table_2
Comparisons of results on the test sets.
3
[['Methods', 'Graph-NN:proposed', 'o3-adding'], ['Methods', 'Graph-NN:proposed', 'o3-perceptron'], ['Methods', 'Graph-NN:others', 'Pei et al. (2015)'], ['Methods', 'Graph-NN:others', 'Fonseca and Aluísio (2015)'], ['Methods', 'Graph-NN:others', 'Zhang and Zhao (2015)'], ['Methods', 'Graph-Linear', 'Koo and Collins (2010)'], ['Methods', 'Graph-Linear', 'Martins et al. (2013)'], ['Methods', 'Graph-Linear', 'Ma and Zhao (2015)'], ['Methods', 'Transition-NN', 'Chen and Manning (2014)'], ['Methods', 'Transition-NN', 'Dyer et al. (2015)'], ['Methods', 'Transition-NN', 'Weiss et al. (2015)'], ['Methods', 'Transition-NN', 'Zhou et al. (2015)']]
2
[['PTB-Y&M', 'UAS'], ['PTB-Y&M', 'LAS'], ['PTB-Y&M', 'CM'], ['PTB-SD', 'UAS'], ['PTB-SD', 'LAS'], ['PTB-SD', 'CM'], ['PTB-LTH', 'UAS'], ['PTB-LTH', 'LAS'], ['PTB-LTH', 'CM'], ['CTB', 'UAS'], ['CTB', 'LAS'], ['CTB', 'CM']]
[['93.20', '92.12', '48.92', '93.42', '91.29', '50.37', '93.14', '90.07', '43.38', '87.55', '86.19', '35.65'], ['93.31', '92.23', '50.00', '93.42', '91.26', '49.92', '93.12', '89.53', '43.83', '87.65', '86.17', '36.07'], ['93.29', '92.13', '–', '–', '–', '–', '–', '–', '–', '–', '–', '–'], ['–', '–', '–', '–', '–', '–', '91.6', '88.9', '–', '–', '–', '–'], ['–', '–', '–', '–', '–', '–', '92.52', '–', '41.10', '86.01', '–', '31.88'], ['93.04', '–', '–', '–', '–', '–', '–', '–', '–', '–', '–', '–'], ['93.07', '–', '–', '92.82', '–', '–', '–', '–', '–', '–', '–', '–'], ['93.0', '–', '48.8', '–', '–', '–', '–', '–', '–', '87.2', '–', '37.0'], ['–', '–', '–', '91.8', '89.6', '–', '92.0', '90.7', '–', '83.9', '82.4', '–'], ['–', '–', '–', '93.1', '90.9', '–', '–', '–', '–', '–', '–', '–'], ['–', '–', '–', '93.99', '92.05', '–', '–', '–', '–', '–', '–', '–'], ['93.28', '92.35', '–', '–', '–', '–', '–', '–', '–', '–', '–', '–']]
column
['UAS', 'LAS', 'CM', 'UAS', 'LAS', 'CM', 'UAS', 'LAS', 'CM', 'UAS', 'LAS', 'CM']
['Graph-NN:proposed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PTB-Y&amp;M || UAS</th> <th>PTB-Y&amp;M || LAS</th> <th>PTB-Y&amp;M || CM</th> <th>PTB-SD || UAS</th> <th>PTB-SD || LAS</th> <th>PTB-SD || CM</th> <th>PTB-LTH || UAS</th> <th>PTB-LTH || LAS</th> <th>PTB-LTH || CM</th> <th>CTB || UAS</th> <th>CTB || LAS</th> <th>CTB || CM</th> </tr> </thead> <tbody> <tr> <td>Methods || Graph-NN:proposed || o3-adding</td> <td>93.20</td> <td>92.12</td> <td>48.92</td> <td>93.42</td> <td>91.29</td> <td>50.37</td> <td>93.14</td> <td>90.07</td> <td>43.38</td> <td>87.55</td> <td>86.19</td> <td>35.65</td> </tr> <tr> <td>Methods || Graph-NN:proposed || o3-perceptron</td> <td>93.31</td> <td>92.23</td> <td>50.00</td> <td>93.42</td> <td>91.26</td> <td>49.92</td> <td>93.12</td> <td>89.53</td> <td>43.83</td> <td>87.65</td> <td>86.17</td> <td>36.07</td> </tr> <tr> <td>Methods || Graph-NN:others || Pei et al. (2015)</td> <td>93.29</td> <td>92.13</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Methods || Graph-NN:others || Fonseca and Aluísio (2015)</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>91.6</td> <td>88.9</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Methods || Graph-NN:others || Zhang and Zhao (2015)</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>92.52</td> <td>–</td> <td>41.10</td> <td>86.01</td> <td>–</td> <td>31.88</td> </tr> <tr> <td>Methods || Graph-Linear || Koo and Collins (2010)</td> <td>93.04</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Methods || Graph-Linear || Martins et al. (2013)</td> <td>93.07</td> <td>–</td> <td>–</td> <td>92.82</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Methods || Graph-Linear || Ma and Zhao (2015)</td> <td>93.0</td> <td>–</td> <td>48.8</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>87.2</td> <td>–</td> <td>37.0</td> </tr> <tr> <td>Methods || Transition-NN || Chen and Manning (2014)</td> <td>–</td> <td>–</td> <td>–</td> <td>91.8</td> <td>89.6</td> <td>–</td> <td>92.0</td> <td>90.7</td> <td>–</td> <td>83.9</td> <td>82.4</td> <td>–</td> </tr> <tr> <td>Methods || Transition-NN || Dyer et al. (2015)</td> <td>–</td> <td>–</td> <td>–</td> <td>93.1</td> <td>90.9</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Methods || Transition-NN || Weiss et al. (2015)</td> <td>–</td> <td>–</td> <td>–</td> <td>93.99</td> <td>92.05</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Methods || Transition-NN || Zhou et al. (2015)</td> <td>93.28</td> <td>92.35</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> </tbody></table>
Table 2
table_2
P16-1131
8
acl2016
We show the results of two of the best proposed parsers: third-order adding (o3-adding) and third-order perceptron (o3-perceptron) methods, and compare with the reported results of some previous work in Table 2. We compare with three categories of models: other Graph-based NN (neural network) models, traditional Graph-based Linear models and Transition-based NN models. For PTB, there have been several different dependency converters which lead to different sets of dependencies, and we choose three of the most popular ones for more comprehensive comparisons. Since not all work reports results on all of these dependencies, some of the entries might not be available. From the comparison, we see that the proposed parser has produced competitive performance for different dependency conversion conventions and treebanks. Compared with traditional graph-based linear models, neural models may benefit from better feature representations and more general non-linear transformations. The results and comparisons in Table 2 demonstrate that the proposed models can obtain comparable accuracies, which shows the effectiveness of combining local and global features through window-based and convolutional neural networks.
[1, 1, 2, 2, 1, 2, 1]
['We show the results of two of the best proposed parsers: third-order adding (o3-adding) and third-order perceptron (o3-perceptron) methods, and compare with the reported results of some previous work in Table 2.', 'We compare with three categories of models: other Graph-based NN (neural network) models, traditional Graph-based Linear models and Transition-based NN models.', 'For PTB, there have been several different dependency converters which lead to different sets of dependencies, and we choose three of the most popular ones for more comprehensive comparisons.', 'Since not all work reports results on all of these dependencies, some of the entries might not be available.', 'From the comparison, we see that the proposed parser has produced competitive performance for different dependency conversion conventions and treebanks.', 'Compared with traditional graph-based linear models, neural models may benefit from better feature representations and more general non-linear transformations.', 'The results and comparisons in Table 2 demonstrate that the proposed models can obtain comparable accuracies, which shows the effectiveness of combining local and global features through window-based and convolutional neural networks.']
[['Graph-NN:proposed', 'o3-adding', 'o3-perceptron'], ['Graph-NN:others', 'Graph-Linear', 'Transition-NN'], None, None, ['Graph-NN:proposed'], ['Graph-NN:proposed', 'Graph-Linear'], ['Graph-NN:proposed']]
1
P16-1134table_3
Results of grSemi-CRF with external information, measured in F1 score. None = no external information, Emb = Senna embeddings, Brown = Brown clusters, Gaz = gazetteers and All = Emb + Brown + Gaz. NYT and RCV1 in the parenthesis denote the corpus used to generate Brown clusters. “–” means no results. Notice that gazetteers are only applied to NER.
2
[['Input Features', 'None'], ['Input Features', 'Brown(NYT)'], ['Input Features', 'Brown(RCV1)']]
1
[['CONLL 2000'], ['CONLL 2003']]
[['93.92', '84.66'], ['94.18', '86.57'], ['94.05', '88.22']]
column
['F1', 'F1']
['Brown(NYT)', 'Brown(RCV1)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CONLL 2000</th> <th>CONLL 2003</th> </tr> </thead> <tbody> <tr> <td>Input Features || None</td> <td>93.92</td> <td>84.66</td> </tr> <tr> <td>Input Features || Brown(NYT)</td> <td>94.18</td> <td>86.57</td> </tr> <tr> <td>Input Features || Brown(RCV1)</td> <td>94.05</td> <td>88.22</td> </tr> </tbody></table>
Table 3
table_3
P16-1134
7
acl2016
As Table 3 shows, external information improves the performance of grSemi-CRFs for both tasks. Compared to text chunking, we can find that external information plays an extremely important role in NER, which coincides with the general idea that NER is a knowledge-intensive task (Ratinov and Roth, 2009). Another interesting thing is that Brown clusters generated from the NYT corpus perform better on the CONLL 2000 task while those generated from the Reuters RCV1 dataset perform better on the CONLL 2003 task. The reason is that the CONLL 2000 dataset is a subset of the Wall Street Journal (WSJ) part of the Penn Treebank II Corpus (Marcus et al., 1993) while the CONLL 2003 dataset is a subset of the Reuters RCV1 dataset. Perhaps the writing styles of NYT and WSJ are more similar than those of RCV1 and WSJ.
[1, 1, 1, 2, 2]
['As Table 3 shows, external information improves the performance of grSemi-CRFs for both tasks.', 'Compared to text chunking, we can find that external information plays an extremely important role in NER, which coincides with the general idea that NER is a knowledge-intensive task (Ratinov and Roth, 2009).', 'Another interesting thing is that Brown clusters generated from the NYT corpus perform better on the CONLL 2000 task while those generated from the Reuters RCV1 dataset perform better on the CONLL 2003 task.', 'The reason is that the CONLL 2000 dataset is a subset of the Wall Street Journal (WSJ) part of the Penn Treebank II Corpus (Marcus et al., 1993) while the CONLL 2003 dataset is a subset of the Reuters RCV1 dataset.', 'Perhaps the writing styles of NYT and WSJ are more similar than those of RCV1 and WSJ.']
[['Input Features'], ['Input Features'], ['Brown(NYT)', 'CONLL 2000', 'Brown(RCV1)', 'CONLL 2003'], ['CONLL 2000', 'CONLL 2003'], ['CONLL 2000', 'CONLL 2003']]
1
P16-1134table_4
F1 scores of grSemi-CRF with scalar or vectorial gating coefficients.
2
[['Gating Coefficients', 'Scalars'], ['Gating Coefficients', 'Vectors']]
1
[['CONLL 2000'], ['CONLL 2003']]
[['94.47', '89.27'], ['95.01', '89.44']]
column
['F1', 'F1']
['Gating Coefficients']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CONLL 2000</th> <th>CONLL 2003</th> </tr> </thead> <tbody> <tr> <td>Gating Coefficients || Scalars</td> <td>94.47</td> <td>89.27</td> </tr> <tr> <td>Gating Coefficients || Vectors</td> <td>95.01</td> <td>89.44</td> </tr> </tbody></table>
Table 4
table_4
P16-1134
8
acl2016
4.4.2 Impact of Vectorial Gating Coefficients. As Table 4 shows, a grSemi-CRF using vectorial gating coefficients (i.e., Eq. (7)) performs better than one using scalar gating coefficients (i.e., Eq. (6)), which provides evidence for the theoretical intuition that vectorial gating coefficients can model the combinations of segment-level latent features in more detail and thus perform better than scalar gating coefficients.
[2, 1]
['4.4.2 Impact of Vectorial Gating Coefficients.', 'As Table 4 shows, a grSemi-CRF using vectorial gating coefficients (i.e., Eq. (7)) performs better than one using scalar gating coefficients (i.e., Eq. (6)), which provides evidence for the theoretical intuition that vectorial gating coefficients can model the combinations of segment-level latent features in more detail and thus perform better than scalar gating coefficients.']
[None, ['Gating Coefficients', 'Scalars', 'Vectors']]
1
P16-1137table_4
Loss function runtime comparison (seconds per epoch) of the DNN models.
1
[['random'], ['mix'], ['max']]
2
[['DNN AVG', 'CE'], ['DNN AVG', 'hinge'], ['DNN LSTM', 'CE'], ['DNN LSTM', 'hinge']]
[['124', '230', '710', '783'], ['20755', '21045', '25928', '26380'], ['39338', '41867', '49583', '49427']]
column
['runtime', 'runtime', 'runtime', 'runtime']
['random', 'mix', 'max']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DNN AVG || CE</th> <th>DNN AVG || hinge</th> <th>DNN LSTM || CE</th> <th>DNN LSTM || hinge</th> </tr> </thead> <tbody> <tr> <td>random</td> <td>124</td> <td>230</td> <td>710</td> <td>783</td> </tr> <tr> <td>mix</td> <td>20755</td> <td>21045</td> <td>25928</td> <td>26380</td> </tr> <tr> <td>max</td> <td>39338</td> <td>41867</td> <td>49583</td> <td>49427</td> </tr> </tbody></table>
Table 4
table_4
P16-1137
7
acl2016
Table 4 shows a runtime comparison of the losses and sampling strategies. We find random sampling to be orders of magnitude faster than the others while also performing the best.
[1, 1]
['Table 4 shows a runtime comparison of the losses and sampling strategies.', 'We find random sampling to be orders of magnitude faster than the others while also performing the best.']
[None, ['random']]
1
P16-1140table_4
Comparison of original and shuffled character-based word representation on decoding POS tag.
2
[['Lan.', 'Russian'], ['Lan.', 'Slovenian']]
1
[['Raw'], ['Shuf.']]
[['0.906', '0.671'], ['0.8', '0.653']]
column
['correlation', 'correlation']
['Raw', 'Shuf.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Raw</th> <th>Shuf.</th> </tr> </thead> <tbody> <tr> <td>Lan. || Russian</td> <td>0.906</td> <td>0.671</td> </tr> <tr> <td>Lan. || Slovenian</td> <td>0.8</td> <td>0.653</td> </tr> </tbody></table>
Table 4
table_4
P16-1140
7
acl2016
To prove that word form can provide informative and explicit cues for grammatical functions, we train another shuffled character-based word representation, which means that the autoencoder inputs shuffled letters and outputs the shuffled letters again. We use the hidden layer of the shuffled autoencoder as the representation for each word. The result in Table 4 shows that now the character-based model cannot perform as well as the original character-based autoencoder representation does, which again proves that the order of the word form is necessary for learning the grammatical function of a word.
[2, 2, 1]
['To prove that word form can provide informative and explicit cues for grammatical functions, we train another shuffled character-based word representation, which means that the autoencoder inputs shuffled letters and outputs the shuffled letters again.', 'We use the hidden layer of the shuffled autoencoder as the representation for each word.', 'The result in Table 4 shows that now the character-based model cannot perform as well as the original character-based autoencoder representation does, which again proves that the order of the word form is necessary for learning the grammatical function of a word.']
[['Shuf.'], ['Shuf.'], ['Raw', 'Shuf.']]
1
P16-1140table_5
Comparison of morpho-phonological knowledge transfer on different language pairs. The reconstruction accuracy is correlated with the overlapping proportion of grapheme patterns between source language and target language.
2
[['Target Language', 'Bigram type overlap.'], ['Target Language', 'Bigram token overlap.'], ['Target Language', 'Trigram type overlap.'], ['Target Language', 'Trigram token overlap.']]
3
[['Source Language', 'Arabic', 'fa'], ['Source Language', 'Arabic', 'ud'], ['Source Language', 'Finnish', 'en'], ['Source Language', 'Finnish', 'shuf en'], ['Source Language', 'Finnish', 'rand']]
[['0.176', '0.761', '0.891', '0.864', '0.648'], ['0.689', '0.881', '0.999', '0.993', '0.65'], ['0.523', '0.522', '0.665', '0.449', '0.078'], ['0.526', '0.585', '0.978', '0.796', '0.078']]
column
['correlation', 'correlation', 'correlation', 'correlation', 'correlation']
['Bigram token overlap.', 'Trigram token overlap.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Source Language || Arabic || fa</th> <th>Source Language || Arabic || ud</th> <th>Source Language || Finnish || en</th> <th>Source Language || Finnish || shuf en</th> <th>Source Language || Finnish || rand</th> </tr> </thead> <tbody> <tr> <td>Target Language || Bigram type overlap.</td> <td>0.176</td> <td>0.761</td> <td>0.891</td> <td>0.864</td> <td>0.648</td> </tr> <tr> <td>Target Language || Bigram token overlap.</td> <td>0.689</td> <td>0.881</td> <td>0.999</td> <td>0.993</td> <td>0.65</td> </tr> <tr> <td>Target Language || Trigram type overlap.</td> <td>0.523</td> <td>0.522</td> <td>0.665</td> <td>0.449</td> <td>0.078</td> </tr> <tr> <td>Target Language || Trigram token overlap.</td> <td>0.526</td> <td>0.585</td> <td>0.978</td> <td>0.796</td> <td>0.078</td> </tr> </tbody></table>
Table 5
table_5
P16-1140
7
acl2016
To explain the behaviour of the AE, we calculate the correlation between the bigram character frequency in the words of the training language (e.g. Finnish) and the bigram character frequency in the words of the testing language (e.g. English). Table 5 reveals that phonological knowledge can be transferred if two languages share similar bigram and trigram character frequency distributions. For example, Finnish and English are both Indo-European languages.
[2, 1, 1]
['To explain the behaviour of the AE, we calculate the correlation between the bigram character frequency in the words of the training language (e.g. Finnish) and the bigram character frequency in the words of the testing language (e.g. English).', 'Table 5 reveals that phonological knowledge can be transferred if two languages share similar bigram and trigram character frequency distributions.', 'For example, Finnish and English are both Indo-European languages.']
[['Bigram token overlap.', 'Trigram token overlap.'], ['Bigram token overlap.', 'Trigram token overlap.'], ['Finnish', 'en', 'shuf en']]
1
P16-1148table_4
Emotion classification results (one vs. all for each emotion and 6 way for ALL) using our models compared to others.
2
[['#Emotion', '#anger'], ['#Emotion', '#disgust'], ['#Emotion', '#fear'], ['#Emotion', '#joy'], ['#Emotion', '#sadness'], ['#Emotion', '#surprise'], ['#Emotion', 'ALL']]
1
[['Wang (2012)'], ['Roberts (2012)'], ['Qadir (2013)'], ['Mohammad (2014)'], ['This work']]
[['0.72', '0.64', '0.44', '0.28', '0.80'], ['–', '0.67', '–', '0.19', '0.92'], ['0.44', '0.74', '0.54', '0.51', '0.77'], ['0.72', '0.68', '0.59', '0.62', '0.79'], ['0.65', '0.69', '0.46', '0.39', '0.62'], ['0.14', '0.61', '–', '0.45', '0.64'], ['–', '0.67', '0.53', '0.49', '0.78']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Wang (2012)</th> <th>Roberts (2012)</th> <th>Qadir (2013)</th> <th>Mohammad (2014)</th> <th>This work</th> </tr> </thead> <tbody> <tr> <td>#Emotion || #anger</td> <td>0.72</td> <td>0.64</td> <td>0.44</td> <td>0.28</td> <td>0.80</td> </tr> <tr> <td>#Emotion || #disgust</td> <td>–</td> <td>0.67</td> <td>–</td> <td>0.19</td> <td>0.92</td> </tr> <tr> <td>#Emotion || #fear</td> <td>0.44</td> <td>0.74</td> <td>0.54</td> <td>0.51</td> <td>0.77</td> </tr> <tr> <td>#Emotion || #joy</td> <td>0.72</td> <td>0.68</td> <td>0.59</td> <td>0.62</td> <td>0.79</td> </tr> <tr> <td>#Emotion || #sadness</td> <td>0.65</td> <td>0.69</td> <td>0.46</td> <td>0.39</td> <td>0.62</td> </tr> <tr> <td>#Emotion || #surprise</td> <td>0.14</td> <td>0.61</td> <td>–</td> <td>0.45</td> <td>0.64</td> </tr> <tr> <td>#Emotion || ALL</td> <td>–</td> <td>0.67</td> <td>0.53</td> <td>0.49</td> <td>0.78</td> </tr> </tbody></table>
Table 4
table_4
P16-1148
5
acl2016
We demonstrate our emotion model prediction quality using 10-fold c.v. on our hashtag emotion dataset and compare it to other existing datasets in Table 4. Our results significantly outperform the existing approaches and are comparable with the state-of-the-art system for Twitter sentiment classification (Mohammad et al., 2013; Zhu et al., 2014).
[1, 1]
['We demonstrate our emotion model prediction quality using 10-fold c.v. on our hashtag emotion dataset and compare it to other existing datasets in Table 4.', 'Our results significantly outperform the existing approaches and are comparable with the state-of-the-art system for Twitter sentiment classification (Mohammad et al., 2013; Zhu et al., 2014).']
[None, ['This work', 'Wang (2012)', 'Roberts (2012)', 'Qadir (2013)', 'Mohammad (2014)']]
1
P16-1150table_4
Correlation results on UKPConvArgRank.
1
[['Pearson’s r'], ['Spearman’s ρ']]
1
[['SVM'], ['BLSTM']]
[['.351', '.270'], ['.402', '.354']]
row
['Pearson’s r', 'Spearman’s ρ']
['SVM', 'BLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SVM</th> <th>BLSTM</th> </tr> </thead> <tbody> <tr> <td>Pearson’s r</td> <td>.351</td> <td>.270</td> </tr> <tr> <td>Spearman’s ρ</td> <td>.402</td> <td>.354</td> </tr> </tbody></table>
Table 4
table_4
P16-1150
8
acl2016
Without any modifications, we use the same SVM and features as described in Section 4.1. Regarding the BLSTM, we only replace the output layer with a linear activation function and optimize mean absolute error loss. Table 4 shows that SVM outperforms BLSTM.
[2, 2, 1]
['Without any modifications, we use the same SVM and features as described in Section 4.1.', 'Regarding the BLSTM, we only replace the output layer with a linear activation function and optimize mean absolute error loss.', 'Table 4 shows that SVM outperforms BLSTM.']
[['SVM'], ['BLSTM'], ['SVM', 'BLSTM']]
1
P16-1154table_3
Testing performance of LCSTS, where “RNN” is canonical Enc-Dec, and “RNN context” its attentive variant.
2
[['Models', 'RNN (Hu et al. 2015) +C'], ['Models', 'RNN (Hu et al. 2015) +W'], ['Models', 'RNN context (Hu et al. 2015) +C'], ['Models', 'RNN context (Hu et al. 2015) +W'], ['Models', 'COPYNET +C'], ['Models', 'COPYNET +W']]
1
[['R-1'], ['R-2'], ['R-L']]
[['21.5', '8.9', '18.6'], ['17.7', '8.5', '15.8'], ['29.9', '17.4', '27.2'], ['26.8', '16.1', '24.1'], ['34.4', '21.6', '31.3'], ['35', '22.3', '32']]
column
['R-1', 'R-2', 'R-L']
['COPYNET +C', 'COPYNET +W']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Models || RNN (Hu et al. 2015) +C</td> <td>21.5</td> <td>8.9</td> <td>18.6</td> </tr> <tr> <td>Models || RNN (Hu et al. 2015) +W</td> <td>17.7</td> <td>8.5</td> <td>15.8</td> </tr> <tr> <td>Models || RNN context (Hu et al. 2015) +C</td> <td>29.9</td> <td>17.4</td> <td>27.2</td> </tr> <tr> <td>Models || RNN context (Hu et al. 2015) +W</td> <td>26.8</td> <td>16.1</td> <td>24.1</td> </tr> <tr> <td>Models || COPYNET +C</td> <td>34.4</td> <td>21.6</td> <td>31.3</td> </tr> <tr> <td>Models || COPYNET +W</td> <td>35</td> <td>22.3</td> <td>32</td> </tr> </tbody></table>
Table 3
table_3
P16-1154
6
acl2016
It is clear from Table 3 that COPYNET beats the competitor models by a big margin. Hu et al. (2015) reports that the performance of a word-based model is inferior to a character-based one. One possible explanation is that a word-based model, even with a much larger vocabulary (50000 words in Hu et al. (2015)), still has a large proportion of OOVs due to the large number of entity names in the summary data and the mistakes in word segmentation. COPYNET, with its ability to handle the OOV words with the copying mechanism, performs however slightly better with the word-based variant.
[1, 2, 2, 2]
['It is clear from Table 3 that COPYNET beats the competitor models by a big margin.', 'Hu et al. (2015) reports that the performance of a word-based model is inferior to a character-based one.', 'One possible explanation is that a word-based model, even with a much larger vocabulary (50000 words in Hu et al. (2015)), still has a large proportion of OOVs due to the large number of entity names in the summary data and the mistakes in word segmentation.', 'COPYNET, with its ability to handle the OOV words with the copying mechanism, performs however slightly better with the word-based variant.']
[['COPYNET +C', 'COPYNET +W'], None, None, ['COPYNET +C', 'COPYNET +W']]
1
P16-1159table_5
Subjective evaluation of MLE and MRT on Chinese-English translation.
1
[['evaluator 1'], ['evaluator 2']]
1
[['MLE < MRT'], ['MLE = MRT'], ['MLE > MRT']]
[['54%', '24%', '22%'], ['53%', '22%', '25%']]
row
['percentage', 'percentage']
['MLE < MRT', 'MLE = MRT', 'MLE > MRT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MLE &lt; MRT</th> <th>MLE = MRT</th> <th>MLE &gt; MRT</th> </tr> </thead> <tbody> <tr> <td>evaluator 1</td> <td>54%</td> <td>24%</td> <td>22%</td> </tr> <tr> <td>evaluator 2</td> <td>53%</td> <td>22%</td> <td>25%</td> </tr> </tbody></table>
Table 5
table_5
P16-1159
7
acl2016
Table 5 shows the results of subjective evaluation. The two human evaluators made close judgements: around 54% of MLE translations are worse than MRT, 23% are equal, and 23% are better.
[1, 1]
['Table 5 shows the results of subjective evaluation.', 'The two human evaluators made close judgements: around 54% of MLE translations are worse than MRT, 23% are equal, and 23% are better.']
[None, ['MLE < MRT', 'MLE = MRT', 'MLE > MRT']]
1
P16-1159table_7
Comparison with previous work on English-French translation. The BLEU scores are casesensitive. “PosUnk” denotes Luong et al. (2015b)’s technique of handling rare words.
6
[['Existing end-to-end NMT systems', 'Bahdanau et al. (2015)', 'Architecture', 'gated RNN with search', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Jean et al. (2015)', 'Architecture', 'gated RNN with search', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Jean et al. (2015)', 'Architecture', 'gated RNN with search + PosUnk', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Luong et al. (2015b)', 'Architecture', 'LSTM with 4 layers', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Luong et al. (2015b)', 'Architecture', 'LSTM with 4 layers + PosUnk', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Luong et al. (2015b)', 'Architecture', 'LSTM with 6 layers', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Luong et al. (2015b)', 'Architecture', 'LSTM with 6 layers + PosUnk', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Sutskever et al. (2014)', 'Architecture', 'LSTM with 4 layers', 'Training', 'MLE'], ['Our end-to-end NMT systems', 'this work', 'Architecture', 'gated RNN with search', 'Training', 'MLE'], ['Our end-to-end NMT systems', 'this work', 'Architecture', 'gated RNN with search', 'Training', 'MRT'], ['Our end-to-end NMT systems', 'this work', 'Architecture', 'gated RNN with search + PosUnk', 'Training', 'MRT']]
1
[['Vocab'], ['BLEU']]
[['30000', '28.45'], ['30000', '29.97'], ['30000', '33.08'], ['40000', '29.50'], ['40000', '31.80'], ['40000', '30.40'], ['40000', '32.70'], ['80000', '30.59'], ['30000', '29.88'], ['30000', '31.30'], ['30000', '34.23']]
column
['Vocab', 'BLEU']
['this work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocab</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Existing end-to-end NMT systems || Bahdanau et al. (2015) || Architecture || gated RNN with search || Training || MLE</td> <td>30000</td> <td>28.45</td> </tr> <tr> <td>Existing end-to-end NMT systems || Jean et al. (2015) || Architecture || gated RNN with search || Training || MLE</td> <td>30000</td> <td>29.97</td> </tr> <tr> <td>Existing end-to-end NMT systems || Jean et al. (2015) || Architecture || gated RNN with search + PosUnk || Training || MLE</td> <td>30000</td> <td>33.08</td> </tr> <tr> <td>Existing end-to-end NMT systems || Luong et al. (2015b) || Architecture || LSTM with 4 layers || Training || MLE</td> <td>40000</td> <td>29.50</td> </tr> <tr> <td>Existing end-to-end NMT systems || Luong et al. (2015b) || Architecture || LSTM with 4 layers + PosUnk || Training || MLE</td> <td>40000</td> <td>31.80</td> </tr> <tr> <td>Existing end-to-end NMT systems || Luong et al. (2015b) || Architecture || LSTM with 6 layers || Training || MLE</td> <td>40000</td> <td>30.40</td> </tr> <tr> <td>Existing end-to-end NMT systems || Luong et al. (2015b) || Architecture || LSTM with 6 layers + PosUnk || Training || MLE</td> <td>40000</td> <td>32.70</td> </tr> <tr> <td>Existing end-to-end NMT systems || Sutskever et al. (2014) || Architecture || LSTM with 4 layers || Training || MLE</td> <td>80000</td> <td>30.59</td> </tr> <tr> <td>Our end-to-end NMT systems || this work || Architecture || gated RNN with search || Training || MLE</td> <td>30000</td> <td>29.88</td> </tr> <tr> <td>Our end-to-end NMT systems || this work || Architecture || gated RNN with search || Training || MRT</td> <td>30000</td> <td>31.30</td> </tr> <tr> <td>Our end-to-end NMT systems || this work || Architecture || gated RNN with search + PosUnk || Training || MRT</td> <td>30000</td> <td>34.23</td> </tr> </tbody></table>
Table 7
table_7
P16-1159
8
acl2016
Table 7 shows the results on English-French translation. We list existing end-to-end NMT systems that are comparable to our system. All these systems use the same subset of the WMT 2014 training corpus and adopt MLE as the training criterion. They differ in network architectures and vocabulary sizes. Our RNNSEARCH-MLE system achieves a BLEU score comparable to that of Jean et al. (2015). RNNSEARCH-MRT achieves the highest BLEU score in this setting even with a vocabulary size smaller than Luong et al. (2015b) and Sutskever et al. (2014). Note that our approach does not assume specific architectures and can in principle be applied to any NMT systems.
[1, 2, 2, 1, 1, 1, 2]
['Table 7 shows the results on English-French translation.', 'We list existing end-to-end NMT systems that are comparable to our system.', 'All these systems use the same subset of the WMT 2014 training corpus and adopt MLE as the training criterion.', 'They differ in network architectures and vocabulary sizes.', 'Our RNNSEARCH-MLE system achieves a BLEU score comparable to that of Jean et al. (2015).', 'RNNSEARCH-MRT achieves the highest BLEU score in this setting even with a vocabulary size smaller than Luong et al. (2015b) and Sutskever et al. (2014).', 'Note that our approach does not assume specific architectures and can in principle be applied to any NMT systems.']
[None, None, None, ['Vocab'], ['gated RNN with search', 'Training', 'MLE', 'Jean et al. (2015)', 'BLEU'], ['gated RNN with search', 'Training', 'MRT', 'BLEU', 'Vocab', 'Luong et al. (2015b)'], ['this work']]
1
P16-1161table_2
BLEU scores obtained on the WMT14 test set. We report the performance of the baseline, the source-context model and the full model.
2
[['data size', 'baseline'], ['data size', '+source'], ['data size', '+target']]
1
[['small'], ['medium'], ['full']]
[['10.7', '15.2', '16.7'], ['10.7', '16', '17.3'], ['11.2', '16.4', '17.5']]
column
['BLEU', 'BLEU', 'BLEU']
['+target', '+source']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>small</th> <th>medium</th> <th>full</th> </tr> </thead> <tbody> <tr> <td>data size || baseline</td> <td>10.7</td> <td>15.2</td> <td>16.7</td> </tr> <tr> <td>data size || +source</td> <td>10.7</td> <td>16</td> <td>17.3</td> </tr> <tr> <td>data size || +target</td> <td>11.2</td> <td>16.4</td> <td>17.5</td> </tr> </tbody></table>
Table 2
table_2
P16-1161
10
acl2016
Table 2 shows the obtained results. Statistically significant differences (alpha=0.01) are marked in bold. The source-context model does not help in the small data setting but brings a substantial improvement of 0.7-0.8 BLEU points for the medium and full data settings, which is an encouraging result. Target-side context information allows our model to push the translation quality further: even for the small data setting, it brings a substantial improvement of 0.5 BLEU points and the gain remains significant as the data size increases. Even in the full data setting, target-side features improve the score by roughly 0.2 BLEU points. Our results demonstrate that feature-rich models scale to large data size both in terms of technical feasibility and of translation quality improvements. Target side information seems consistently beneficial, adding further 0.2-0.5 BLEU points on top of the source-context model.
[1, 2, 1, 1, 1, 1, 1]
['Table 2 shows the obtained results.', 'Statistically significant differences (alpha=0.01) are marked in bold.', 'The source-context model does not help in the small data setting but brings a substantial improvement of 0.7-0.8 BLEU points for the medium and full data settings, which is an encouraging result.', 'Target-side context information allows our model to push the translation quality further: even for the small data setting, it brings a substantial improvement of 0.5 BLEU points and the gain remains significant as the data size increases.', 'Even in the full data setting, target-side features improve the score by roughly 0.2 BLEU points.', 'Our results demonstrate that feature-rich models scale to large data size both in terms of technical feasibility and of translation quality improvements.', 'Target side information seems consistently beneficial, adding further 0.2-0.5 BLEU points on top of the source-context model.']
[None, None, ['+source', 'small', 'medium', 'full'], ['+target', 'small'], ['+target', 'full'], ['+target', '+source'], ['+target', '+source']]
1
P16-1168table_2
Evaluation Metrics
1
[['monolingual'], ['alternate'], ['transfer']]
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['ROUGE-L'], ['CIDEr-D']]
[['0.715', '0.573', '0.468', '0.379', '0.616', '0.58'], ['0.709', '0.565', '0.46', '0.37', '0.611', '0.568'], ['0.717', '0.574', '0.469', '0.38', '0.619', '0.625']]
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'ROUGE-L', 'CIDEr-D']
['transfer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>ROUGE-L</th> <th>CIDEr-D</th> </tr> </thead> <tbody> <tr> <td>monolingual</td> <td>0.715</td> <td>0.573</td> <td>0.468</td> <td>0.379</td> <td>0.616</td> <td>0.58</td> </tr> <tr> <td>alternate</td> <td>0.709</td> <td>0.565</td> <td>0.46</td> <td>0.37</td> <td>0.611</td> <td>0.568</td> </tr> <tr> <td>transfer</td> <td>0.717</td> <td>0.574</td> <td>0.469</td> <td>0.38</td> <td>0.619</td> <td>0.625</td> </tr> </tbody></table>
Table 2
table_2
P16-1168
8
acl2016
Table 2 shows the evaluation metrics for various settings of cross-lingual transfer learning. All values were calculated for Japanese captions generated for test set images. Our proposed model is labeled “transfer”. As you can see, it outperformed the other two models for every metric. In particular, the CIDEr-D score was about 4% higher than that for the monolingual baseline. The performance of a model trained using the English and Japanese corpora alternately is shown on the line labeled “alternate”. Surprisingly, this model had lower performance than the baseline model.
[1, 2, 2, 1, 1, 2, 1]
['Table 2 shows the evaluation metrics for various settings of cross-lingual transfer learning.', 'All values were calculated for Japanese captions generated for test set images.', 'Our proposed model is labeled “transfer”.', 'As you can see, it outperformed the other two models for every metric.', 'In particular, the CIDEr-D score was about 4% higher than that for the monolingual baseline.', 'The performance of a model trained using the English and Japanese corpora alternately is shown on the line labeled “alternate”.', 'Surprisingly, this model had lower performance than the baseline model.']
[None, None, ['transfer'], ['transfer'], ['transfer', 'CIDEr-D', 'monolingual'], ['alternate'], ['alternate', 'monolingual']]
1
P16-1170table_3
Image captioning results
1
[['Bing'], ['MS COCO']]
1
[['BLEU'], ['METEOR']]
[['0.101', '0.151'], ['0.291', '0.247']]
column
['BLEU', 'METEOR']
['Bing']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Bing</td> <td>0.101</td> <td>0.151</td> </tr> <tr> <td>MS COCO</td> <td>0.291</td> <td>0.247</td> </tr> </tbody></table>
Table 3
table_3
P16-1170
9
acl2016
Table 3 shows the results of testing the state-of-the-art MSR captioning system on the CaptionsBing-5000 dataset as compared to the MS COCO dataset, measured by the standard BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) metrics. The wide gap in the results further confirms that indeed the VQGBing-5000 dataset covers a new class of images. We hope the availability of this new dataset will encourage including more diverse domains for image captioning.
[1, 1, 2]
['Table 3 shows the results of testing the state-of-the-art MSR captioning system on the CaptionsBing-5000 dataset as compared to the MS COCO dataset, measured by the standard BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) metrics.', 'The wide gap in the results further confirms that indeed the VQGBing-5000 dataset covers a new class of images.', 'We hope the availability of this new dataset will encourage including more diverse domains for image captioning.']
[['MS COCO', 'Bing', 'BLEU', 'METEOR'], ['Bing'], None]
1
P16-1173table_5
Parsing performance on learner English.
2
[['Parser', 'Petrov (2010)'], ['Parser', 'Stanford'], ['Parser', 'Charniak-Johnson']]
1
[['R'], ['P'], ['F'], ['CMR']]
[['0.863', '0.865', '-', '0.358'], ['0.812', '0.832', '0.822', '0.398'], ['0.845', '0.865', '0.855', '0.465']]
column
['R', 'P', 'F', 'CMR']
['Stanford', 'Charniak-Johnson']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R</th> <th>P</th> <th>F</th> <th>CMR</th> </tr> </thead> <tbody> <tr> <td>Parser || Petrov (2010)</td> <td>0.863</td> <td>0.865</td> <td>-</td> <td>0.358</td> </tr> <tr> <td>Parser || Stanford</td> <td>0.812</td> <td>0.832</td> <td>0.822</td> <td>0.398</td> </tr> <tr> <td>Parser || Charniak-Johnson</td> <td>0.845</td> <td>0.865</td> <td>0.855</td> <td>0.465</td> </tr> </tbody></table>
Table 5
table_5
P16-1173
8
acl2016
Table 5 shows the results. To our surprise, both parsers perform very well on the learner corpus despite the fact that it contains a number of grammatical errors and also syntactic tags that are not defined in PTB-II. Their performance is comparable to, or even better than, that on the Penn Treebank (reported in Petrov (2010)).
[1, 1, 1]
['Table 5 shows the results.', 'To our surprise, both parsers perform very well on the learner corpus despite the fact that it contains a number of grammatical errors and also syntactic tags that are not defined in PTB-II.', 'Their performance is comparable to, or even better than, that on the Penn Treebank (reported in Petrov (2010)).']
[None, ['Stanford', 'Charniak-Johnson'], ['Stanford', 'Charniak-Johnson', 'Petrov (2010)']]
1
P16-1178table_6
Average performance across all ten folds for the GL model and for different feature sets.
2
[['Aspect', 'Fluency'], ['Aspect', 'Conciseness'], ['Aspect', 'Completeness'], ['Aspect', 'Referencing'], ['Aspect', 'Descriptiveness'], ['Aspect', 'Novelty'], ['Aspect', 'Richness'], ['Aspect', 'Attractiveness'], ['Aspect', 'Formality'], ['Aspect', 'Popularity'], ['Aspect', 'Technicality'], ['Aspect', 'Subjectivity'], ['Aspect', 'Polarity'], ['Aspect', 'Sentimentality']]
1
[['BoW'], ['Shallow'], ['BaselineM']]
[['1.1571', '1.1181', '1.1462'], ['1.2622', '1.1968', '1.2456'], ['0.8408', '0.7945', '0.813'], ['0.7047', '0.6613', '0.7048'], ['0.926', '0.873', '0.9073'], ['0.7994', '0.7607', '0.7797'], ['0.9866', '0.9454', '0.9568'], ['0.7048', '0.6702', '0.6907'], ['0.7025', '0.6691', '0.692'], ['0.8329', '0.7825', '0.825'], ['0.7923', '0.7409', '0.7907'], ['0.875', '0.8283', '0.9094'], ['0.8109', '0.778', '0.8009'], ['0.817', '0.7668', '0.8046']]
column
['RMSE', 'RMSE', 'RMSE']
['BoW', 'Shallow']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BoW</th> <th>Shallow</th> <th>BaselineM</th> </tr> </thead> <tbody> <tr> <td>Aspect || Fluency</td> <td>1.1571</td> <td>1.1181</td> <td>1.1462</td> </tr> <tr> <td>Aspect || Conciseness</td> <td>1.2622</td> <td>1.1968</td> <td>1.2456</td> </tr> <tr> <td>Aspect || Completeness</td> <td>0.8408</td> <td>0.7945</td> <td>0.813</td> </tr> <tr> <td>Aspect || Referencing</td> <td>0.7047</td> <td>0.6613</td> <td>0.7048</td> </tr> <tr> <td>Aspect || Descriptiveness</td> <td>0.926</td> <td>0.873</td> <td>0.9073</td> </tr> <tr> <td>Aspect || Novelty</td> <td>0.7994</td> <td>0.7607</td> <td>0.7797</td> </tr> <tr> <td>Aspect || Richness</td> <td>0.9866</td> <td>0.9454</td> <td>0.9568</td> </tr> <tr> <td>Aspect || Attractiveness</td> <td>0.7048</td> <td>0.6702</td> <td>0.6907</td> </tr> <tr> <td>Aspect || Formality</td> <td>0.7025</td> <td>0.6691</td> <td>0.692</td> </tr> <tr> <td>Aspect || Popularity</td> <td>0.8329</td> <td>0.7825</td> <td>0.825</td> </tr> <tr> <td>Aspect || Technicality</td> <td>0.7923</td> <td>0.7409</td> <td>0.7907</td> </tr> <tr> <td>Aspect || Subjectivity</td> <td>0.875</td> <td>0.8283</td> <td>0.9094</td> </tr> <tr> <td>Aspect || Polarity</td> <td>0.8109</td> <td>0.778</td> <td>0.8009</td> </tr> <tr> <td>Aspect || Sentimentality</td> <td>0.817</td> <td>0.7668</td> <td>0.8046</td> </tr> </tbody></table>
Table 6
table_6
P16-1178
8
acl2016
As a reference for future research with the proposed corpus, we trained GLM regression models to predict each aspect individually. Table 6 presents the RMSE for each aspect, for two different feature sets: a standard BoW and the shallow features described previously, as well as the BaselineM. Despite the simplicity of the features, we can see that the aspects can be inferred from the articles. In particular, the model trained on the BoW features achieves an RMSE that is very close to that of the BaselineM, whereas the model trained on the shallow features outperforms all other models.
[2, 1, 2, 1]
['As a reference for future research with the proposed corpus, we trained GLM regression models to predict each aspect individually.', 'Table 6 presents the RMSE for each aspect, for two different feature sets: a standard BoW and the shallow features described previously, as well as the BaselineM.', 'Despite the simplicity of the features, we can see that the aspects can be inferred from the articles.', 'In particular, the model trained on the BoW features achieves an RMSE that is very close to that of the BaselineM, whereas the model trained on the shallow features outperforms all other models.']
[None, ['BoW', 'Shallow', 'BaselineM'], None, ['BoW', 'BaselineM', 'Shallow']]
1
P16-1181table_3
Sentence boundary detection results (F1) on test sets.
1
[['MARMOT'], ['NOSYNTAX'], ['JOINT']]
1
[['WSJ'], ['Switchboard'], ['WSJ*']]
[['97.64', '71.87', '53.02'], ['98.21', '76.31†', '55.15'], ['98.21', '76.65†', '65.34†‡']]
column
['F1', 'F1', 'F1']
['JOINT', 'NOSYNTAX', 'MARMOT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSJ</th> <th>Switchboard</th> <th>WSJ*</th> </tr> </thead> <tbody> <tr> <td>MARMOT</td> <td>97.64</td> <td>71.87</td> <td>53.02</td> </tr> <tr> <td>NOSYNTAX</td> <td>98.21</td> <td>76.31†</td> <td>55.15</td> </tr> <tr> <td>JOINT</td> <td>98.21</td> <td>76.65†</td> <td>65.34†‡</td> </tr> </tbody></table>
Table 3
table_3
P16-1181
8
acl2016
Table 3 gives the performance of the sentence boundary detectors on test sets. On WSJ all systems are close to 98 and this high number once again affirms that the task of segmenting newspaper-quality text does not leave much space for improvement. Although the parsing models outperform MARMOT, the improvements in F1 are not significant. In contrast, all systems fare considerably worse on WSJ* which confirms that the orthographic clues in newspaper text suffice to segment the sentences properly. Although NOSYNTAX outperforms MARMOT, the difference is not significant. However, when real syntax is used (JOINT) we see a huge improvement in F1, 10 points absolute, which is significantly better than both NOSYNTAX and MARMOT. On Switchboard MARMOT is much lower and both parsing models outperform it significantly. Surprisingly the NOSYNTAX system achieves a very high result beating the baseline significantly by almost 4.5 points. The usage of syntax in the JOINT model raises this gain to 4.8 points.
[1, 1, 2, 2, 2, 1, 1, 1, 1]
['Table 3 gives the performance of the sentence boundary detectors on test sets.', 'On WSJ all systems are close to 98 and this high number once again affirms that the task of segmenting newspaper-quality text does not leave much space for improvement.', 'Although the parsing models outperform MARMOT, the improvements in F1 are not significant.', 'In contrast, all systems fare considerably worse on WSJ* which confirms that the orthographic clues in newspaper text suffice to segment the sentences properly.', 'Although NOSYNTAX outperforms MARMOT, the difference is not significant.', 'However, when real syntax is used (JOINT) we see a huge improvement in F1, 10 points absolute, which is significantly better than both NOSYNTAX and MARMOT.', 'On Switchboard MARMOT is much lower and both parsing models outperform it significantly.', 'Surprisingly the NOSYNTAX system achieves a very high result beating the baseline significantly by almost 4.5 points.', 'The usage of syntax in the JOINT model raises this gain to 4.8 points.']
[None, ['WSJ'], ['MARMOT'], ['WSJ*'], ['NOSYNTAX', 'MARMOT'], ['JOINT', 'NOSYNTAX', 'MARMOT'], ['MARMOT', 'Switchboard'], ['NOSYNTAX', 'Switchboard'], ['JOINT', 'Switchboard']]
1
P16-1188table_2
Results for RST Discourse Treebank (Carlson et al., 2001). Differences between our system and the Tree Knapsack system of Yoshida et al. (2014) are not statistically significant, reflecting the high variance in this small (20 document) test set.
1
[['First k words'], ['Tree Knapsack'], ['Full']]
1
[['ROUGE-1'], ['ROUGE-2']]
[['23.5', '8.3'], ['25.1', '8.7'], ['26.3', '8']]
column
['ROUGE-1', 'ROUGE-2']
['Full']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> </tr> </thead> <tbody> <tr> <td>First k words</td> <td>23.5</td> <td>8.3</td> </tr> <tr> <td>Tree Knapsack</td> <td>25.1</td> <td>8.7</td> </tr> <tr> <td>Full</td> <td>26.3</td> <td>8</td> </tr> </tbody></table>
Table 2
table_2
P16-1188
9
acl2016
Table 2 shows the results on the RST corpus. Our system is roughly comparable to Tree Knapsack here, and we note that none of the differences in the table are statistically significant. We also observed significant variation between multiple runs on this corpus, with scores changing by 1-2 ROUGE points for slightly different system variants.
[1, 1, 1]
['Table 2 shows the results on the RST corpus.', 'Our system is roughly comparable to Tree Knapsack here, and we note that none of the differences in the table are statistically significant.', 'We also observed significant variation between multiple runs on this corpus, with scores changing by 1-2 ROUGE points for slightly different system variants.']
[None, ['Full', 'Tree Knapsack'], ['Full', 'Tree Knapsack', 'First k words']]
1
P16-1191table_6
Weighted F-score performance on supersense prediction for the development set and two test sets provided by Johannsen et al. (2014). Our system performs comparably to state-of-the-art systems. † For the system of Ciaramita et al., the publicly available reimplementation of Heilman was used.
3
[['System/Data:', 'Baseline and upper bound', 'Most frequent sense'], ['System/Data:', 'Baseline and upper bound', 'Inter-annotator agreement'], ['System/Data:', 'SemCor-trained systems', '(Ciaramita and Altun 2006)†'], ['System/Data:', 'SemCor-trained systems', 'Searn (Johannsen et al. 2014)'], ['System/Data:', 'SemCor-trained systems', 'HMM (Johannsen et al. 2014)'], ['System/Data:', 'SemCor-trained systems', 'Ours Semcor'], ['System/Data:', 'Twitter-trained systems', 'Searn (Johannsen et al. 2014)'], ['System/Data:', 'Twitter-trained systems', 'HMM (Johannsen et al. 2014)'], ['System/Data:', 'Twitter-trained systems', 'Ours Twitter']]
1
[['Tw-R-dev'], ['Tw-R-eval'], ['Tw-J-eval']]
[['47.54', '44.98', '38.65'], ['-', '69.15', '61.15'], ['48.96', '45.03', '39.65'], ['56.59', '50.89', '40.50'], ['57.14', '50.98', '41.84'], ['54.47', '50.30', '35.61'], ['67.72', '57.14', '42.42'], ['60.66', '51.40', '41.60'], ['61.12', '57.16', '41.97']]
column
['F-score', 'F-score', 'F-score']
['Ours Semcor', 'Ours Twitter']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Tw-R-dev</th> <th>Tw-R-eval</th> <th>Tw-J-eval</th> </tr> </thead> <tbody> <tr> <td>System/Data: || Baseline and upper bound || Most frequent sense</td> <td>47.54</td> <td>44.98</td> <td>38.65</td> </tr> <tr> <td>System/Data: || Baseline and upper bound || Inter-annotator agreement</td> <td>-</td> <td>69.15</td> <td>61.15</td> </tr> <tr> <td>System/Data: || SemCor-trained systems || (Ciaramita and Altun 2006)†</td> <td>48.96</td> <td>45.03</td> <td>39.65</td> </tr> <tr> <td>System/Data: || SemCor-trained systems || Searn (Johannsen et al. 2014)</td> <td>56.59</td> <td>50.89</td> <td>40.50</td> </tr> <tr> <td>System/Data: || SemCor-trained systems || HMM (Johannsen et al. 2014)</td> <td>57.14</td> <td>50.98</td> <td>41.84</td> </tr> <tr> <td>System/Data: || SemCor-trained systems || Ours Semcor</td> <td>54.47</td> <td>50.30</td> <td>35.61</td> </tr> <tr> <td>System/Data: || Twitter-trained systems || Searn (Johannsen et al. 2014)</td> <td>67.72</td> <td>57.14</td> <td>42.42</td> </tr> <tr> <td>System/Data: || Twitter-trained systems || HMM (Johannsen et al. 2014)</td> <td>60.66</td> <td>51.40</td> <td>41.60</td> </tr> <tr> <td>System/Data: || Twitter-trained systems || Ours Twitter</td> <td>61.12</td> <td>57.16</td> <td>41.97</td> </tr> </tbody></table>
Table 6
table_6
P16-1191
6
acl2016
5.2 Supersense Prediction. We evaluate our system on the same Twitter dataset with provided training and development (Tw-R-dev) set and two test sets: Tw-R-eval, reported by Johannsen et al. as RITTER, and Tw-J-eval, reported by Johannsen et al. as INHOUSE. Our results are shown in table 6 and compared to results reported in previous work by Johannsen et al. (2014), with two additional baselines: The SemCor system of Ciaramita and Altun (2006) and the most frequent sense. Our system achieves comparable performance to the best previously used supervised systems, without using any explicit gazetteers.
[2, 2, 1, 1]
['5.2 Supersense Prediction.', 'We evaluate our system on the same Twitter dataset with provided training and development (Tw-R-dev) set and two test sets: Tw-R-eval, reported by Johannsen et al. as RITTER, and Tw-J-eval, reported by Johannsen et al. as INHOUSE.', 'Our results are shown in table 6 and compared to results reported in previous work by Johannsen et al. (2014), with two additional baselines: The SemCor system of Ciaramita and Altun (2006) and the most frequent sense.', 'Our system achieves comparable performance to the best previously used supervised systems, without using any explicit gazetteers.']
[None, ['Tw-R-dev', 'Tw-R-eval', 'Tw-J-eval'], ['Ours Semcor', 'Ours Twitter', 'HMM (Johannsen et al. 2014)', '(Ciaramita and Altun 2006)†', 'Most frequent sense'], ['Ours Semcor', 'Ours Twitter']]
1
P16-1195table_3
Performance on EVAL for the GEN task. Performance on DEV is indicated in parentheses.
3
[['C#', 'Model', 'IR'], ['C#', 'Model', 'MOSES'], ['C#', 'Model', 'SUM-NN'], ['C#', 'Model', 'CODE-NN'], ['SQL', 'Model', 'IR'], ['SQL', 'Model', 'MOSES'], ['SQL', 'Model', 'SUM-NN'], ['SQL', 'Model', 'CODE-NN']]
1
[['METEOR'], ['BLEU-4']]
[['7.9 (6.1)', '13.7 (12.6)'], ['9.1 (9.7)', '11.6 (11.5)'], ['10.6 (10.3)', '19.3 (18.2)'], ['12.3 (13.4)', '20.5 (20.4)'], ['6.3 (8.0)', '13.5 (13.0)'], ['8.3 (9.7)', '15.4 (15.9)'], ['6.4 (8.7)', '13.3 (14.2)'], ['10.9 (14.0)', '18.4 (17.0)']]
column
['METEOR', 'BLEU-4']
['CODE-NN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>C# || Model || IR</td> <td>7.9 (6.1)</td> <td>13.7 (12.6)</td> </tr> <tr> <td>C# || Model || MOSES</td> <td>9.1 (9.7)</td> <td>11.6 (11.5)</td> </tr> <tr> <td>C# || Model || SUM-NN</td> <td>10.6 (10.3)</td> <td>19.3 (18.2)</td> </tr> <tr> <td>C# || Model || CODE-NN</td> <td>12.3 (13.4)</td> <td>20.5 (20.4)</td> </tr> <tr> <td>SQL || Model || IR</td> <td>6.3 (8.0)</td> <td>13.5 (13.0)</td> </tr> <tr> <td>SQL || Model || MOSES</td> <td>8.3 (9.7)</td> <td>15.4 (15.9)</td> </tr> <tr> <td>SQL || Model || SUM-NN</td> <td>6.4 (8.7)</td> <td>13.3 (14.2)</td> </tr> <tr> <td>SQL || Model || CODE-NN</td> <td>10.9 (14.0)</td> <td>18.4 (17.0)</td> </tr> </tbody></table>
Table 3
table_3
P16-1195
7
acl2016
Table 3 shows automatic evaluation metrics for our model and baselines. CODE-NN outperforms all the other methods in terms of METEOR and BLEU-4 score. We attribute this to its ability to perform better content selection, focusing on the more salient parts of the code by using its attention mechanism jointly with its LSTM memory cells. The neural models have better performance on C# than SQL. This is in part because, unlike SQL, C# code contains informative intermediate variable names that are directly related to the objective of the code. On the other hand, SQL is more challenging in that it only has a handful of keywords and functions, and summarization models need to rely on other structural aspects of the code.
[1, 1, 2, 1, 2, 2]
['Table 3 shows automatic evaluation metrics for our model and baselines.', 'CODE-NN outperforms all the other methods in terms of METEOR and BLEU-4 score.', 'We attribute this to its ability to perform better content selection, focusing on the more salient parts of the code by using its attention mechanism jointly with its LSTM memory cells.', 'The neural models have better performance on C# than SQL.', 'This is in part because, unlike SQL, C# code contains informative intermediate variable names that are directly related to the objective of the code.', 'On the other hand, SQL is more challenging in that it only has a handful of keywords and functions, and summarization models need to rely on other structural aspects of the code.']
[None, ['CODE-NN', 'BLEU-4', 'METEOR'], ['CODE-NN'], ['CODE-NN', 'C#', 'SQL'], ['C#', 'SQL'], ['SQL']]
1
P16-1201table_6
Automatic evaluations of events from FN.
2
[['Training Corpus', 'ACE-ANN-FN'], ['Training Corpus', 'ACE-SF-FN'], ['Training Corpus', 'ACE-RF-FN'], ['Training Corpus', 'ACE-SL-FN'], ['Training Corpus', 'ACE-PSL-FN']]
1
[['Pre'], ['Rec'], ['F1']]
[['77.2', '63.5', '69.7'], ['73.2', '64.1', '68.4'], ['72.6', '63.9', '68.0'], ['77.5', '64.3', '70.3'], ['77.6', '65.2', '70.7']]
column
['Pre', 'Rec', 'F1']
['ACE-PSL-FN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pre</th> <th>Rec</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Training Corpus || ACE-ANN-FN</td> <td>77.2</td> <td>63.5</td> <td>69.7</td> </tr> <tr> <td>Training Corpus || ACE-SF-FN</td> <td>73.2</td> <td>64.1</td> <td>68.4</td> </tr> <tr> <td>Training Corpus || ACE-RF-FN</td> <td>72.6</td> <td>63.9</td> <td>68.0</td> </tr> <tr> <td>Training Corpus || ACE-SL-FN</td> <td>77.5</td> <td>64.3</td> <td>70.3</td> </tr> <tr> <td>Training Corpus || ACE-PSL-FN</td> <td>77.6</td> <td>65.2</td> <td>70.7</td> </tr> </tbody></table>
Table 6
table_6
P16-1201
7
acl2016
Table 6 presents the results where we measure precision, recall and F1. Compared with ACE-ANN-FN, events from SF and RF hurt the performance. As analyzed in previous section, SF and RF yield quite a few false events, which dramatically hurt the accuracy. Moreover, ACE-SL-FN obtains a score of 70.3% in F1 measure, which outperforms ACE-ANN-FN. This result illustrates the effectiveness of our “same LU” hypothesis. Finally and most importantly, consistent with the results of manual evaluations, ACE-PSL-FN performs the best, which further proves the effectiveness of our proposed approach for event detection in FN.
[1, 1, 2, 1, 2, 1]
['Table 6 presents the results where we measure precision, recall and F1.', 'Compared with ACE-ANN-FN, events from SF and RF hurt the performance.', 'As analyzed in previous section, SF and RF yield quite a few false events, which dramatically hurt the accuracy.', 'Moreover, ACE-SL-FN obtains a score of 70.3% in F1 measure, which outperforms ACE-ANN-FN.', 'This result illustrates the effectiveness of our “same LU” hypothesis.', 'Finally and most importantly, consistent with the results of manual evaluations, ACE-PSL-FN performs the best, which further proves the effectiveness of our proposed approach for event detection in FN.']
[['Pre', 'Rec', 'F1'], ['ACE-ANN-FN', 'ACE-SF-FN', 'ACE-RF-FN'], ['ACE-SF-FN', 'ACE-RF-FN'], ['ACE-SL-FN', 'ACE-ANN-FN'], None, ['ACE-PSL-FN']]
1
P16-1206table_2
Model evaluation with standard metric and our new metric. Models vary in the amount of training data and feature types.
3
[['Data', 'PKU', 'Corpus'], ['Data', 'PKU', 'Corpus'], ['Data', 'PKU', 'Corpus'], ['Data', 'PKU', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'SXU', 'Corpus'], ['Data', 'SXU', 'Corpus'], ['Data', 'SXU', 'Corpus'], ['Data', 'SXU', 'Corpus']]
1
[['Size'], ['p'], ['r'], ['f1'], ['pb'], ['rb'], ['fb']]
[['20%', '90.04', '89.9', '89.97', '45.22', '43.37', '44.28'], ['50%', '92.87', '91.58', '92.22', '54.24', '49.12', '51.55'], ['80%', '94.07', '92.21', '93.13', '61.8', '54.74', '58.05'], ['100%', '94.03', '92.91', '93.47', '64.22', '59.16', '61.59'], ['20%', '92.93', '92.58', '92.76', '45.76', '44.13', '44.93'], ['50%', '95.22', '95.18', '95.2', '63', '62.22', '62.6'], ['80%', '95.68', '95.74', '95.71', '67.26', '66.96', '67.11'], ['100%', '96.19', '96.02', '96.11', '70.8', '69.45', '70.12'], ['20%', '87.32', '86.37', '86.84', '42.16', '40.23', '41.17'], ['50%', '89.34', '89.03', '89.19', '50.31', '49.26', '49.78'], ['80%', '91.42', '91.1', '91.26', '60.48', '59.25', '59.86'], ['100%', '92', '91.77', '91.89', '63.72', '62.7', '63.2'], ['20%', '89.7', '89.31', '89.5', '43.53', '42.35', '42.93'], ['50%', '93.04', '92.42', '92.73', '56.21', '54.27', '55.23'], ['80%', '94.45', '93.94', '94.19', '64.55', '62.5', '63.51'], ['100%', '94.89', '94.61', '94.75', '68.1', '66.63', '67.36']]
column
['Size', 'p', 'r', 'f1', 'pb', 'rb', 'fb']
['pb', 'rb', 'fb']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Size</th> <th>p</th> <th>r</th> <th>f1</th> <th>pb</th> <th>rb</th> <th>fb</th> </tr> </thead> <tbody> <tr> <td>Data || PKU || Corpus</td> <td>20%</td> <td>90.04</td> <td>89.9</td> <td>89.97</td> <td>45.22</td> <td>43.37</td> <td>44.28</td> </tr> <tr> <td>Data || PKU || Corpus</td> <td>50%</td> <td>92.87</td> <td>91.58</td> <td>92.22</td> <td>54.24</td> <td>49.12</td> <td>51.55</td> </tr> <tr> <td>Data || PKU || Corpus</td> <td>80%</td> <td>94.07</td> <td>92.21</td> <td>93.13</td> <td>61.8</td> <td>54.74</td> <td>58.05</td> </tr> <tr> <td>Data || PKU || Corpus</td> <td>100%</td> <td>94.03</td> <td>92.91</td> <td>93.47</td> <td>64.22</td> <td>59.16</td> <td>61.59</td> </tr> <tr> <td>Data || MSR || Corpus</td> <td>20%</td> <td>92.93</td> <td>92.58</td> <td>92.76</td> <td>45.76</td> <td>44.13</td> <td>44.93</td> </tr> <tr> <td>Data || MSR || Corpus</td> <td>50%</td> <td>95.22</td> <td>95.18</td> <td>95.2</td> <td>63</td> <td>62.22</td> <td>62.6</td> </tr> <tr> <td>Data || MSR || Corpus</td> <td>80%</td> <td>95.68</td> <td>95.74</td> <td>95.71</td> <td>67.26</td> <td>66.96</td> <td>67.11</td> </tr> <tr> <td>Data || MSR || Corpus</td> <td>100%</td> <td>96.19</td> <td>96.02</td> <td>96.11</td> <td>70.8</td> <td>69.45</td> <td>70.12</td> </tr> <tr> <td>Data || NCC || Corpus</td> <td>20%</td> <td>87.32</td> <td>86.37</td> <td>86.84</td> <td>42.16</td> <td>40.23</td> <td>41.17</td> </tr> <tr> <td>Data || NCC || Corpus</td> <td>50%</td> <td>89.34</td> <td>89.03</td> <td>89.19</td> <td>50.31</td> <td>49.26</td> <td>49.78</td> </tr> <tr> <td>Data || NCC || Corpus</td> <td>80%</td> <td>91.42</td> <td>91.1</td> <td>91.26</td> <td>60.48</td> <td>59.25</td> <td>59.86</td> </tr> <tr> <td>Data || NCC || Corpus</td> <td>100%</td> <td>92</td> <td>91.77</td> <td>91.89</td> <td>63.72</td> <td>62.7</td> <td>63.2</td> </tr> <tr> <td>Data || SXU || Corpus</td> <td>20%</td> <td>89.7</td> <td>89.31</td> <td>89.5</td> <td>43.53</td> <td>42.35</td> <td>42.93</td> </tr> <tr> <td>Data || SXU || Corpus</td> <td>50%</td> <td>93.04</td> <td>92.42</td> <td>92.73</td> <td>56.21</td> <td>54.27</td> <td>55.23</td> </tr> <tr> <td>Data || SXU || Corpus</td> <td>80%</td> <td>94.45</td> <td>93.94</td> <td>94.19</td> <td>64.55</td> <td>62.5</td> <td>63.51</td> </tr> <tr> <td>Data || SXU || Corpus</td> <td>100%</td> <td>94.89</td> <td>94.61</td> <td>94.75</td> <td>68.1</td> <td>66.63</td> <td>67.36</td> </tr> </tbody></table>
Table 2
table_2
P16-1206
8
acl2016
Table 2 shows the different evaluation results with the standard metric and our balanced metric. We can see that the proposed evaluation metric generally gives lower and more distinguishable scores, compared with the standard metric.
[1, 1]
['Table 2 shows the different evaluation results with the standard metric and our balanced metric.', 'We can see that the proposed evaluation metric generally gives lower and more distinguishable scores, compared with the standard metric.']
[None, ['pb', 'rb', 'fb']]
1
P16-1209table_3
Performance of various models using 25 dimensional CE features, A:Disease name recognition, B: Disease classification task
3
[['Task A', 'Model', 'NN+CE'], ['Task A', 'Model', 'Bi-RNN+CE'], ['Task A', 'Model', 'Bi-GRU+CE'], ['Task A', 'Model', 'Bi-LSTM+CE'], ['Task B', 'Model', 'NN+CE'], ['Task B', 'Model', 'Bi-RNN+CE'], ['Task B', 'Model', 'Bi-GRU+CE'], ['Task B', 'Model', 'Bi-LSTM+CE']]
2
[['Validation Set', 'Precision'], ['Validation Set', 'Recall'], ['Validation Set', 'F1 Score'], ['Test Set', 'Precision'], ['Test Set', 'Recall'], ['Test Set', 'F1 Score']]
[['76.98', '75.80', '76.39', '78.51', '72.75', '75.52'], ['71.96', '74.90', '73.40', '74.14', '72.12', '73.11'], ['76.28', '74.14', '75.19', '76.03', '69.81', '72.79'], ['81.52', '72.86', '76.94', '76.98', '75.80', '76.39'], ['67.27', '53.45', '59.57', '67.90', '49.95', '57.56'], ['61.34', '56.32', '58.72', '60.32', '57.28', '58.76'], ['61.94', '59.11', '60.49', '62.56', '56.50', '59.38'], ['61.82', '57.03', '59.33', '64.74', '55.53', '59.78']]
column
['Precision', 'Recall', 'F1 Score', 'Precision', 'Recall', 'F1 Score']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Validation Set || Precision</th> <th>Validation Set || Recall</th> <th>Validation Set || F1 Score</th> <th>Test Set || Precision</th> <th>Test Set || Recall</th> <th>Test Set || F1 Score</th> </tr> </thead> <tbody> <tr> <td>Task A || Model || NN+CE</td> <td>76.98</td> <td>75.80</td> <td>76.39</td> <td>78.51</td> <td>72.75</td> <td>75.52</td> </tr> <tr> <td>Task A || Model || Bi-RNN+CE</td> <td>71.96</td> <td>74.90</td> <td>73.40</td> <td>74.14</td> <td>72.12</td> <td>73.11</td> </tr> <tr> <td>Task A || Model || Bi-GRU+CE</td> <td>76.28</td> <td>74.14</td> <td>75.19</td> <td>76.03</td> <td>69.81</td> <td>72.79</td> </tr> <tr> <td>Task A || Model || Bi-LSTM+CE</td> <td>81.52</td> <td>72.86</td> <td>76.94</td> <td>76.98</td> <td>75.80</td> <td>76.39</td> </tr> <tr> <td>Task B || Model || NN+CE</td> <td>67.27</td> <td>53.45</td> <td>59.57</td> <td>67.90</td> <td>49.95</td> <td>57.56</td> </tr> <tr> <td>Task B || Model || Bi-RNN+CE</td> <td>61.34</td> <td>56.32</td> <td>58.72</td> <td>60.32</td> <td>57.28</td> <td>58.76</td> </tr> <tr> <td>Task B || Model || Bi-GRU+CE</td> <td>61.94</td> <td>59.11</td> <td>60.49</td> <td>62.56</td> <td>56.50</td> <td>59.38</td> </tr> <tr> <td>Task B || Model || Bi-LSTM+CE</td> <td>61.82</td> <td>57.03</td> <td>59.33</td> <td>64.74</td> <td>55.53</td> <td>59.78</td> </tr> </tbody></table>
Table 3
table_3
P16-1209
6
acl2016
Table 3 shows the results obtained by different RNN models with only character-level word embedding features. For task A (Disease name recognition), the Bi-LSTM and NN models gave competitive performance on the test set, while Bi-RNN and Bi-GRU did not perform so well. On the other hand, for task B, there is a 2.08% − 3.8% improvement in performance (F1-score) by the RNN models over the NN model, again on the test set. The Bi-LSTM model obtained an F1-score of 59.78% while the NN model gave 57.56%. As discussed earlier, task B is more difficult than task A, as the disease category is more likely to be influenced by words falling outside the context window considered in window-based methods. This could be the reason for the RNN models performing better than the NN model. This hypothesis will be stronger if we observe a similar pattern in our other experiments.
[1, 1, 1, 1, 2, 2, 2]
['Table 3 shows the results obtained by different RNN models with only character-level word embedding features.', 'For task A (Disease name recognition), the Bi-LSTM and NN models gave competitive performance on the test set, while Bi-RNN and Bi-GRU did not perform so well.', 'On the other hand, for task B, there is a 2.08% − 3.8% improvement in performance (F1-score) by the RNN models over the NN model, again on the test set.', 'The Bi-LSTM model obtained an F1-score of 59.78% while the NN model gave 57.56%.', 'As discussed earlier, task B is more difficult than task A, as the disease category is more likely to be influenced by words falling outside the context window considered in window-based methods.', 'This could be the reason for the RNN models performing better than the NN model.', 'This hypothesis will be stronger if we observe a similar pattern in our other experiments.']
[None, ['Task A', 'Bi-LSTM+CE', 'NN+CE'], ['Task B', 'Bi-RNN+CE', 'NN+CE'], ['Task B', 'Bi-LSTM+CE', 'NN+CE'], ['Task B', 'Task A'], ['Bi-RNN+CE'], None]
1
P16-1218table_3
Comparison with previous state-of-the-art models on Penn-YM, Penn-SD and CTB5.
2
[['Method', '(Zhang and Nivre 2011)'], ['Method', '(Bernd Bohnet 2012)'], ['Method', '(Zhang and McDonald 2014)'], ['Method', '(Dyer et al. 2015)'], ['Method', '(Weiss et al. 2015)'], ['Method', 'Our basic model + segment']]
2
[['Penn-YM', 'UAS'], ['Penn-YM', 'LAS'], ['Penn-SD', 'UAS'], ['Penn-SD', 'LAS'], ['CTB5', 'UAS'], ['CTB5', 'LAS']]
[['92.9', '91.8', '-', '-', '86.0', '84.4'], ['93.39', '92.38', '-', '-', '87.5', '85.9'], ['93.57', '92.48', '93.01', '90.64', '87.96', '86.34'], ['-', '-', '93.1', '90.9', '87.2', '85.7'], ['-', '-', '93.99', '92.05', '-', '-'], ['93.51', '92.45', '94.08', '91.82', '87.55', '86.23']]
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['Our basic model + segment']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Penn-YM || UAS</th> <th>Penn-YM || LAS</th> <th>Penn-SD || UAS</th> <th>Penn-SD || LAS</th> <th>CTB5 || UAS</th> <th>CTB5 || LAS</th> </tr> </thead> <tbody> <tr> <td>Method || (Zhang and Nivre 2011)</td> <td>92.9</td> <td>91.8</td> <td>-</td> <td>-</td> <td>86.0</td> <td>84.4</td> </tr> <tr> <td>Method || (Bernd Bohnet 2012)</td> <td>93.39</td> <td>92.38</td> <td>-</td> <td>-</td> <td>87.5</td> <td>85.9</td> </tr> <tr> <td>Method || (Zhang and McDonald 2014)</td> <td>93.57</td> <td>92.48</td> <td>93.01</td> <td>90.64</td> <td>87.96</td> <td>86.34</td> </tr> <tr> <td>Method || (Dyer et al. 2015)</td> <td>-</td> <td>-</td> <td>93.1</td> <td>90.9</td> <td>87.2</td> <td>85.7</td> </tr> <tr> <td>Method || (Weiss et al. 2015)</td> <td>-</td> <td>-</td> <td>93.99</td> <td>92.05</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Our basic model + segment</td> <td>93.51</td> <td>92.45</td> <td>94.08</td> <td>91.82</td> <td>87.55</td> <td>86.23</td> </tr> </tbody></table>
Table 3
table_3
P16-1218
7
acl2016
Table 3 lists the performances of our model as well as previous state-of-the-art systems on Penn-YM, Penn-SD and CTB5. We compare to the conventional state-of-the-art graph-based model (Zhang and McDonald, 2014), the conventional state-of-the-art transition-based model using beam search (Zhang and Nivre, 2011), a transition-based model combining a graph-based approach (Bernd Bohnet, 2012), a transition-based neural network model using stack LSTM (Dyer et al., 2015) and a transition-based neural network model using beam search (Weiss et al., 2015). Overall, our model achieves competitive accuracy on all three datasets. Although our model is slightly lower in accuracy than the unlimited-order double beam model (Zhang and McDonald, 2014) on Penn-YM and CTB5, our model outperforms their model on Penn-SD. It seems that our model performs better on data sets with larger label sets, given that the number of labels used in the Penn-SD data set is almost four times more than in the Penn-YM and CTB5 data sets.
[1, 2, 1, 1, 1]
['Table 3 lists the performances of our model as well as previous state-of-the-art systems on Penn-YM, Penn-SD and CTB5.', 'We compare to the conventional state-of-the-art graph-based model (Zhang and McDonald, 2014), the conventional state-of-the-art transition-based model using beam search (Zhang and Nivre, 2011), a transition-based model combining a graph-based approach (Bernd Bohnet, 2012), a transition-based neural network model using stack LSTM (Dyer et al., 2015) and a transition-based neural network model using beam search (Weiss et al., 2015).', 'Overall, our model achieves competitive accuracy on all three datasets.', 'Although our model is slightly lower in accuracy than the unlimited-order double beam model (Zhang and McDonald, 2014) on Penn-YM and CTB5, our model outperforms their model on Penn-SD.', 'It seems that our model performs better on data sets with larger label sets, given that the number of labels used in the Penn-SD data set is almost four times more than in the Penn-YM and CTB5 data sets.']
[['Penn-YM', 'Penn-SD', 'CTB5'], ['(Zhang and McDonald 2014)', '(Zhang and Nivre 2011)', '(Bernd Bohnet 2012)', '(Dyer et al. 2015)', '(Weiss et al. 2015)'], ['Our basic model + segment', 'Penn-YM', 'Penn-SD', 'CTB5'], ['Our basic model + segment', '(Zhang and McDonald 2014)', 'Penn-YM', 'CTB5', 'Penn-SD'], ['Our basic model + segment', 'Penn-SD', 'Penn-YM', 'CTB5']]
1
P16-1218table_4
Model performance of different way to learn segment embeddings.
2
[['Method', 'Average'], ['Method', 'LSTM-Minus']]
1
[['Penn-YM'], ['Penn-SD'], ['CTB5']]
[['93.23', '93.83', '87.24'], ['93.51', '94.08', '87.55']]
column
['UAS', 'UAS', 'UAS']
['LSTM-Minus']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Penn-YM</th> <th>Penn-SD</th> <th>CTB5</th> </tr> </thead> <tbody> <tr> <td>Method || Average</td> <td>93.23</td> <td>93.83</td> <td>87.24</td> </tr> <tr> <td>Method || LSTM-Minus</td> <td>93.51</td> <td>94.08</td> <td>87.55</td> </tr> </tbody></table>
Table 4
table_4
P16-1218
7
acl2016
To make the comparison as fair as possible, we let the two models have almost the same number of parameters. Table 4 lists the UAS of the two methods on the test set. As we can see, LSTM-Minus shows better performance because our method further incorporates more sentence-level information into our model.
[2, 1, 1]
['To make the comparison as fair as possible, we let the two models have almost the same number of parameters.', 'Table 4 lists the UAS of the two methods on the test set.', 'As we can see, LSTM-Minus shows better performance because our method further incorporates more sentence-level information into our model.']
[None, ['Method'], ['LSTM-Minus']]
1
P16-1220table_1
Results on the test set.
3
[['Method', 'Previous work', 'Berant et al. (2013)'], ['Method', 'Previous work', 'Yao and Van Durme (2014)'], ['Method', 'Previous work', 'Xu et al. (2014)'], ['Method', 'Previous work', 'Berant and Liang (2014)'], ['Method', 'Previous work', 'Bao et al. (2014)'], ['Method', 'Previous work', 'Bordes et al. (2014)'], ['Method', 'Previous work', 'Dong et al. (2015)'], ['Method', 'Previous work', 'Yao (2015)'], ['Method', 'Previous work', 'Bast and Haussmann (2015)'], ['Method', 'Previous work', 'Berant and Liang (2015)'], ['Method', 'Previous work', 'Reddy et al. (2016)'], ['Method', 'Previous work', 'Yih et al. (2015)'], ['Method', 'This work', 'Structured'], ['Method', 'This work', 'Structured + Joint'], ['Method', 'This work', 'Structured + Unstructured'], ['Method', 'This work', 'Structured + Joint + Unstructured']]
1
[['average F1']]
[['35.7'], ['33.0'], ['39.1'], ['39.9'], ['37.5'], ['39.2'], ['40.8'], ['44.3'], ['49.4'], ['49.7'], ['50.3'], ['52.5'], ['44.1'], ['47.1'], ['47.0'], ['53.3']]
column
['average F1']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>average F1</th> </tr> </thead> <tbody> <tr> <td>Method || Previous work || Berant et al. (2013)</td> <td>35.7</td> </tr> <tr> <td>Method || Previous work || Yao and Van Durme (2014)</td> <td>33.0</td> </tr> <tr> <td>Method || Previous work || Xu et al. (2014)</td> <td>39.1</td> </tr> <tr> <td>Method || Previous work || Berant and Liang (2014)</td> <td>39.9</td> </tr> <tr> <td>Method || Previous work || Bao et al. (2014)</td> <td>37.5</td> </tr> <tr> <td>Method || Previous work || Bordes et al. (2014)</td> <td>39.2</td> </tr> <tr> <td>Method || Previous work || Dong et al. (2015)</td> <td>40.8</td> </tr> <tr> <td>Method || Previous work || Yao (2015)</td> <td>44.3</td> </tr> <tr> <td>Method || Previous work || Bast and Haussmann (2015)</td> <td>49.4</td> </tr> <tr> <td>Method || Previous work || Berant and Liang (2015)</td> <td>49.7</td> </tr> <tr> <td>Method || Previous work || Reddy et al. (2016)</td> <td>50.3</td> </tr> <tr> <td>Method || Previous work || Yih et al. (2015)</td> <td>52.5</td> </tr> <tr> <td>Method || This work || Structured</td> <td>44.1</td> </tr> <tr> <td>Method || This work || Structured + Joint</td> <td>47.1</td> </tr> <tr> <td>Method || This work || Structured + Unstructured</td> <td>47.0</td> </tr> <tr> <td>Method || This work || Structured + Joint + Unstructured</td> <td>53.3</td> </tr> </tbody></table>
Table 1
table_1
P16-1220
6
acl2016
5.3.3 Impact of the Inference on Unstructured Data. As shown in Table 1, when structured inference is augmented with the unstructured inference, we see an improvement of 2.9% (from 44.1% to 47.0%). And when Structured + Joint uses unstructured inference, the performance improves by 6.2% (from 47.1% to 53.3%), achieving a new state-of-the-art result. For the latter, we manually analyzed the cases in which unstructured inference helps.
[2, 1, 1, 2]
['5.3.3 Impact of the Inference on Unstructured Data.', 'As shown in Table 1, when structured inference is augmented with the unstructured inference, we see an improvement of 2.9% (from 44.1% to 47.0%).', 'And when Structured + Joint uses unstructured inference, the performance improves by 6.2% (from 47.1% to 53.3%), achieving a new state-of-the-art result.', 'For the latter, we manually analyzed the cases in which unstructured inference helps.']
[None, ['Structured', 'Structured + Unstructured'], ['Structured + Joint + Unstructured'], None]
1
P16-1222table_2
Overall performance comparison against baselines.
2
[['Algorithm', 'Standard SMT (Koehn et al., 2003)'], ['Algorithm', 'Couplet SMT (Jiang and Zhou, 2008)'], ['Algorithm', 'LSTM-RNN (Sutskever et al., 2014)'], ['Algorithm', 'iPoet (Yan et al., 2013)'], ['Algorithm', 'Poetry SMT (He et al., 2012)'], ['Algorithm', 'RNNPG (Zhang and Lapata, 2014)'], ['Algorithm', 'Neural Couplet Machine (NCM)']]
1
[['Perplexity'], ['BLEU'], ['Human Evaluation (Syntactic)'], ['Human Evaluation (Semantic)'], ['Human Evaluation (Overall)']]
[['128', '21.68', '0.563', '0.248', '0.811'], ['97', '28.71', '0.916', '0.503', '1.419'], ['85', '24.23', '0.648', '0.233', '0.881'], ['143', '13.77', '0.228', '0.435', '0.663'], ['121', '23.11', '0.802', '0.516', '1.318'], ['99', '25.83', '0.853', '0.6', '1.453'], ['68', '32.62', '0.925', '0.631', '1.556']]
column
['Perplexity', 'BLEU', 'Human Evaluation (Syntactic)', 'Human Evaluation (Semantic)', 'Human Evaluation (Overall)']
['Neural Couplet Machine (NCM)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Perplexity</th> <th>BLEU</th> <th>Human Evaluation (Syntactic)</th> <th>Human Evaluation (Semantic)</th> <th>Human Evaluation (Overall)</th> </tr> </thead> <tbody> <tr> <td>Algorithm || Standard SMT (Koehn et al., 2003)</td> <td>128</td> <td>21.68</td> <td>0.563</td> <td>0.248</td> <td>0.811</td> </tr> <tr> <td>Algorithm || Couplet SMT (Jiang and Zhou, 2008)</td> <td>97</td> <td>28.71</td> <td>0.916</td> <td>0.503</td> <td>1.419</td> </tr> <tr> <td>Algorithm || LSTM-RNN (Sutskever et al., 2014)</td> <td>85</td> <td>24.23</td> <td>0.648</td> <td>0.233</td> <td>0.881</td> </tr> <tr> <td>Algorithm || iPoet (Yan et al., 2013)</td> <td>143</td> <td>13.77</td> <td>0.228</td> <td>0.435</td> <td>0.663</td> </tr> <tr> <td>Algorithm || Poetry SMT (He et al., 2012)</td> <td>121</td> <td>23.11</td> <td>0.802</td> <td>0.516</td> <td>1.318</td> </tr> <tr> <td>Algorithm || RNNPG (Zhang and Lapata, 2014)</td> <td>99</td> <td>25.83</td> <td>0.853</td> <td>0.6</td> <td>1.453</td> </tr> <tr> <td>Algorithm || Neural Couplet Machine (NCM)</td> <td>68</td> <td>32.62</td> <td>0.925</td> <td>0.631</td> <td>1.556</td> </tr> </tbody></table>
Table 2
table_2
P16-1222
8
acl2016
5.4 Performance. In Table 2 we show the overall performance of our proposed NCM system compared with strong competing methods as described above. We see that, for perplexity, BLEU and human judgments, our system outperforms other baseline models.
[2, 1, 1]
['5.4 Performance.', 'In Table 2 we show the overall performance of our proposed NCM system compared with strong competing methods as described above.', 'We see that, for perplexity, BLEU and human judgments, our system outperforms other baseline models.']
[None, ['Neural Couplet Machine (NCM)', 'Perplexity', 'BLEU', 'Human Evaluation (Syntactic)', 'Human Evaluation (Semantic)', 'Human Evaluation (Overall)'], ['Neural Couplet Machine (NCM)']]
1
P16-1223table_2
Accuracy of all models on the CNN and Daily Mail datasets. Results marked † are from (Hermann et al., 2015) and results marked ‡ are from (Hill et al., 2016). Classifier and Neural net denote our entity-centric classifier and neural network systems respectively. The numbers marked with ∗ indicate that the results are from ensemble models.
2
[['Model', 'Frame-semantic model†'], ['Model', 'Word distance model†'], ['Model', 'Deep LSTM Reader†'], ['Model', 'Attentive Reader†'], ['Model', 'Impatient Reader†'], ['Model', 'MemNNs (window memory)‡'], ['Model', 'MemNNs (window memory+self-sup.)‡'], ['Model', 'MemNNs (ensemble)‡'], ['Model', 'Ours: Classifier'], ['Model', 'Ours: Neural net']]
2
[['CNN', 'Dev'], ['CNN', 'Test'], ['Daily Mail', 'Dev'], ['Daily Mail', 'Test']]
[['36.3', '40.2', '35.5', '35.5'], ['50.5', '50.9', '56.4', '55.5'], ['55.0', '57.0', '63.3', '62.2'], ['61.6', '63.0', '70.5', '69.0'], ['61.8', '63.8', '69.0', '68.0'], ['58.0', '60.6', 'N/A', 'N/A'], ['63.4', '66.8', 'N/A', 'N/A'], ['66.2∗', '69.4∗', 'N/A', 'N/A'], ['67.1', '67.9', '69.1', '68.3'], ['72.4', '72.4', '76.9', '75.8']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours: Classifier', 'Ours: Neural net']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN || Dev</th> <th>CNN || Test</th> <th>Daily Mail || Dev</th> <th>Daily Mail || Test</th> </tr> </thead> <tbody> <tr> <td>Model || Frame-semantic model†</td> <td>36.3</td> <td>40.2</td> <td>35.5</td> <td>35.5</td> </tr> <tr> <td>Model || Word distance model†</td> <td>50.5</td> <td>50.9</td> <td>56.4</td> <td>55.5</td> </tr> <tr> <td>Model || Deep LSTM Reader†</td> <td>55.0</td> <td>57.0</td> <td>63.3</td> <td>62.2</td> </tr> <tr> <td>Model || Attentive Reader†</td> <td>61.6</td> <td>63.0</td> <td>70.5</td> <td>69.0</td> </tr> <tr> <td>Model || Impatient Reader†</td> <td>61.8</td> <td>63.8</td> <td>69.0</td> <td>68.0</td> </tr> <tr> <td>Model || MemNNs (window memory)‡</td> <td>58.0</td> <td>60.6</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Model || MemNNs (window memory+self-sup.)‡</td> <td>63.4</td> <td>66.8</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Model || MemNNs (ensemble)‡</td> <td>66.2∗</td> <td>69.4∗</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Model || Ours: Classifier</td> <td>67.1</td> <td>67.9</td> <td>69.1</td> <td>68.3</td> </tr> <tr> <td>Model || Ours: Neural net</td> <td>72.4</td> <td>72.4</td> <td>76.9</td> <td>75.8</td> </tr> </tbody></table>
Table 2
table_2
P16-1223
6
acl2016
Table 2 presents our main results. The conventional feature-based classifier obtains 67.9% accuracy on the CNN test set. Not only does this significantly outperform any of the symbolic approaches reported in (Hermann et al., 2015), it also outperforms all the neural network systems from their paper and the best single-system result reported so far from (Hill et al., 2016). This suggests that the task might not be as difficult as suggested, and a simple feature set can cover many of the cases. More dramatically, our single-model neural network surpasses the previous results by a large margin (over 5%), pushing up the state-of-the-art accuracies to 72.4% and 75.8% respectively. Due to resource constraints, we have not had a chance to investigate ensembles of models, which generally can bring further gains, as demonstrated in (Hill et al., 2016) and many other papers.
[1, 1, 1, 2, 1, 2]
['Table 2 presents our main results.', 'The conventional feature-based classifier obtains 67.9% accuracy on the CNN test set.', 'Not only does this significantly outperform any of the symbolic approaches reported in (Hermann et al., 2015), it also outperforms all the neural network systems from their paper and the best single-system result reported so far from (Hill et al., 2016).', 'This suggests that the task might not be as difficult as suggested, and a simple feature set can cover many of the cases.', 'More dramatically, our single-model neural network surpasses the previous results by a large margin (over 5%), pushing up the state-of-the-art accuracies to 72.4% and 75.8% respectively.', 'Due to resource constraints, we have not had a chance to investigate ensembles of models, which generally can bring further gains, as demonstrated in (Hill et al., 2016) and many other papers.']
[None, ['Ours: Classifier', 'CNN', 'Test'], ['Ours: Classifier', 'CNN', 'Test'], None, ['Ours: Neural net', 'CNN', 'Test', 'Daily Mail'], None]
1
P16-1226table_4
Performance scores of our method compared to the path-based baselines and the state-of-the-art distributional methods for hypernymy detection, on both variations of the dataset – with lexical and random split to train / test / validation.
3
[['method', 'Path-based', 'Snow'], ['method', 'Path-based', 'Snow + Gen'], ['method', 'Path-based', 'HypeNET Path-based']]
2
[['random split', 'precision'], ['random split', 'recall'], ['random split', 'F1'], ['lexical split', 'precision'], ['lexical split', 'recall'], ['lexical split', 'F1']]
[['0.843', '0.452', '0.589', '0.760', '0.438', '0.556'], ['0.852', '0.561', '0.676', '0.759', '0.530', '0.624'], ['0.811', '0.716', '0.761', '0.691', '0.632', '0.660']]
column
['precision', 'recall', 'F1', 'precision', 'recall', 'F1']
['HypeNET Path-based']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>random split || precision</th> <th>random split || recall</th> <th>random split || F1</th> <th>lexical split || precision</th> <th>lexical split || recall</th> <th>lexical split || F1</th> </tr> </thead> <tbody> <tr> <td>method || Path-based || Snow</td> <td>0.843</td> <td>0.452</td> <td>0.589</td> <td>0.760</td> <td>0.438</td> <td>0.556</td> </tr> <tr> <td>method || Path-based || Snow + Gen</td> <td>0.852</td> <td>0.561</td> <td>0.676</td> <td>0.759</td> <td>0.530</td> <td>0.624</td> </tr> <tr> <td>method || Path-based || HypeNET Path-based</td> <td>0.811</td> <td>0.716</td> <td>0.761</td> <td>0.691</td> <td>0.632</td> <td>0.660</td> </tr> </tbody></table>
Table 4
table_4
P16-1226
7
acl2016
Table 4 displays performance scores of HypeNET and the baselines. HypeNET Path-based is our path-based recurrent neural network model (Section 3.1). Comparing the path-based methods shows that generalizing paths improves recall while maintaining similar levels of precision, reassessing the behavior found in Nakashole et al. (2012). HypeNET Path-based outperforms both path-based baselines by a significant improvement in recall and with slightly lower precision. The recall boost is due to better path generalization, as demonstrated in Section 7.1.
[1, 2, 1, 1, 2]
['Table 4 displays performance scores of HypeNET and the baselines.', 'HypeNET Path-based is our path-based recurrent neural network model (Section 3.1).', 'Comparing the path-based methods shows that generalizing paths improves recall while maintaining similar levels of precision, reassessing the behavior found in Nakashole et al. (2012).', 'HypeNET Path-based outperforms both path-based baselines by a significant improvement in recall and with slightly lower precision.', 'The recall boost is due to better path generalization, as demonstrated in Section 7.1.']
[None, ['HypeNET Path-based'], ['HypeNET Path-based'], ['HypeNET Path-based'], None]
1
P16-1228table_2
Performance of different rule integration methods on SST2. 1) CNN is the base network; 2) “-but-clause” takes the clause after “but” as input; 3) “-ℓ2-reg” imposes a regularization term γ‖σθ(S) − σθ(Y)‖2 to the CNN objective, with the strength γ selected on dev set; 4) “-project” projects the trained base CNN to the rule-regularized subspace with Eq.(3); 5) “-opt-project” directly optimizes the projected CNN; 6) “-pipeline” distills the pre-trained “-opt-project” to a plain CNN; 7-8) “-Rule-p” and “-Rule-q” are our models with p being the distilled student network and q the teacher network. Note that “-but-clause” and “-ℓ2-reg” are ad-hoc methods applicable specifically to the “but”-rule.
2
[['Model', 'CNN (Kim, 2014)'], ['Model', '-but-clause'], ['Model', '-l2-reg'], ['Model', '-project'], ['Model', '-opt-project'], ['Model', '-pipeline'], ['Model', '-Rule-p'], ['Model', '-Rule-q']]
1
[['Accuracy (%)']]
[['87.2'], ['87.3'], ['87.5'], ['87.9'], ['88.3'], ['87.9'], ['88.8'], ['89.3']]
column
['Accuracy (%)']
['-Rule-p', '-Rule-q']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || CNN (Kim, 2014)</td> <td>87.2</td> </tr> <tr> <td>Model || -but-clause</td> <td>87.3</td> </tr> <tr> <td>Model || -l2-reg</td> <td>87.5</td> </tr> <tr> <td>Model || -project</td> <td>87.9</td> </tr> <tr> <td>Model || -opt-project</td> <td>88.3</td> </tr> <tr> <td>Model || -pipeline</td> <td>87.9</td> </tr> <tr> <td>Model || -Rule-p</td> <td>88.8</td> </tr> <tr> <td>Model || -Rule-q</td> <td>89.3</td> </tr> </tbody></table>
Table 2
table_2
P16-1228
8
acl2016
To further investigate the effectiveness of our framework in integrating structured rule knowledge, we compare with an extensive array of other possible integration approaches. Table 2 lists these methods and their performance on the SST2 task. We see that: 1) Although all methods lead to different degrees of improvement, our framework outperforms all other competitors with a large margin. 2) In particular, compared to the pipelined method in Row 6 which is in analogous to the structure compilation work (Liang et al., 2008), our iterative distillation (section 3.2) provides better performance. Another advantage of our method is that we only train one set of neural parameters, as opposed to two separate sets as in the pipelined approach. 3) The distilled student network “-Rule-p” achieves much superior accuracy compared to the base CNN, as well as “-project” and “-opt-project” which explicitly project CNN to the rule-constrained subspace. This validates that our distillation procedure transfers the structured knowledge into the neural parameters effectively. The inferior accuracy of “-opt-project” can be partially attributed to the poor performance of its neural network part which achieves only 85.1% accuracy and leads to inaccurate evaluation of the “-but-clause” rule in Eq.(5).
[2, 1, 1, 1, 2, 1, 2, 1]
['To further investigate the effectiveness of our framework in integrating structured rule knowledge, we compare with an extensive array of other possible integration approaches.', 'Table 2 lists these methods and their performance on the SST2 task.', 'We see that: 1) Although all methods lead to different degrees of improvement, our framework outperforms all other competitors with a large margin.', '2) In particular, compared to the pipelined method in Row 6 which is in analogous to the structure compilation work (Liang et al., 2008), our iterative distillation (section 3.2) provides better performance.', 'Another advantage of our method is that we only train one set of neural parameters, as opposed to two separate sets as in the pipelined approach.', '3) The distilled student network “-Rule-p” achieves much superior accuracy compared to the base CNN, as well as “-project” and “-opt-project” which explicitly project CNN to the rule-constrained subspace.', 'This validates that our distillation procedure transfers the structured knowledge into the neural parameters effectively.', 'The inferior accuracy of “-opt-project” can be partially attributed to the poor performance of its neural network part which achieves only 85.1% accuracy and leads to inaccurate evaluation of the “-but-clause” rule in Eq.(5).']
[None, None, ['-Rule-p', '-Rule-q'], ['-Rule-p', '-Rule-q', '-pipeline'], ['-Rule-p', '-Rule-q'], ['-Rule-p', '-project', '-opt-project'], ['-Rule-p'], ['-opt-project', '-but-clause']]
1
P16-1230table_2
Statistical evaluation of the prediction of the on-line GP systems with respect to Subj rating.
2
[['Subj', 'Fail'], ['Subj', 'Suc.'], ['Subj', 'Total']]
1
[['Prec.'], ['Recall'], ['F-measure'], ['Number']]
[['1.00', '0.52', '0.68', '204'], ['0.95', '1.00', '0.97', '1892'], ['0.96', '0.95', '0.95', '2096']]
column
['Prec.', 'Recall', 'F-measure', 'Number']
['Subj']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Recall</th> <th>F-measure</th> <th>Number</th> </tr> </thead> <tbody> <tr> <td>Subj || Fail</td> <td>1.00</td> <td>0.52</td> <td>0.68</td> <td>204</td> </tr> <tr> <td>Subj || Suc.</td> <td>0.95</td> <td>1.00</td> <td>0.97</td> <td>1892</td> </tr> <tr> <td>Subj || Total</td> <td>0.96</td> <td>0.95</td> <td>0.95</td> <td>2096</td> </tr> </tbody></table>
Table 1
table_1
P16-1230
9
acl2016
Here we investigate further the accuracy of the model in predicting the subjective success rate. An evaluation of the on-line GP reward model between 1 and 850 training dialogues is presented in Table 2. Since three reward models were learnt each with 850 dialogues, there were a total of 2550 training dialogues. Of these, the models queried the user for feedback a total of 454 times, leaving 2096 dialogues for which learning relied on the reward model’s prediction. The results shown in the table are thus the average over 2096 dialogues. As can be seen, there was a significant imbalance between success and fail labels since the policy was improving along with the training dialogues. This lowered the recall on failed dialogue prediction as the model was biased to data with positive labels. Nevertheless, its precision scores well. On the other hand, the successful dialogues were accurately predicted by the proposed model.
[1, 1, 2, 2, 1, 1, 1, 1, 1]
['Here we investigate further the accuracy of the model in predicting the subjective success rate.', 'An evaluation of the on-line GP reward model between 1 and 850 training dialogues is presented in Table 2.', 'Since three reward models were learnt each with 850 dialogues, there were a total of 2550 training dialogues.', 'Of these, the models queried the user for feedback a total of 454 times, leaving 2096 dialogues for which learning relied on the reward model’s prediction.', 'The results shown in the table are thus the average over 2096 dialogues.', 'As can be seen, there was a significant imbalance between success and fail labels since the policy was improving along with the training dialogues.', 'This lowered the recall on failed dialogue prediction as the model was biased to data with positive labels.', 'Nevertheless, its precision scores well.', 'On the other hand, the successful dialogues were accurately predicted by the proposed model.']
[None, None, None, ['Total', 'Number'], ['Total', 'Number'], ['Fail', 'Suc.', 'Number'], ['Recall', 'Fail', 'Suc.'], ['Prec.', 'Fail', 'Suc.'], ['Suc.', 'F-measure']]
1
P16-1231table_1
Final POS tagging test set results on English WSJ and Treebank Union as well as CoNLL’09. We also show the performance of our pre-trained open source model, “Parsey McParseface.”
2
[['Method', 'Linear CRF'], ['Method', 'Ling et al. (2015)'], ['Method', 'Our Local (B=1)'], ['Method', 'Our Local (B=8)'], ['Method', 'Our Global (B=8)'], ['Method', 'Parsey McParseface']]
2
[['WSJ', 'En'], ['News', 'En-Union'], ['Web', 'En-Union'], ['QTB', 'En-Union'], ['CoNLL ’09', 'Ca'], ['CoNLL 09', 'Ch'], ['CoNLL 09', 'Cz'], ['CoNLL 09', 'En'], ['CoNLL 09', 'Ge'], ['CoNLL 09', 'Ja'], ['CoNLL 09', 'Sp'], ['Avg', '-']]
[['97.17', '97.60', '94.58', '96.04', '98.81', '94.45', '98.90', '97.50', '97.14', '97.90', '98.79', '97.17'], ['97.78', '97.44', '94.03', '96.18', '98.77', '94.38', '99.00', '97.60', '97.84', '97.06', '98.71', '97.16'], ['97.44', '97.66', '94.46', '96.59', '98.91', '94.56', '98.96', '97.36', '97.35', '98.02', '98.88', '97.29'], ['97.45', '97.69', '94.46', '96.64', '98.88', '94.56', '98.96', '97.40', '97.35', '98.02', '98.89', '97.30'], ['97.44', '97.77', '94.80', '96.86', '99.03', '94.72', '99.02', '97.65', '97.52', '98.37', '98.97', '97.47'], ['-', '97.52', '94.24', '96.45', '-', '-', '-', '-', '-', '-', '-', '-']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Our Local (B=1)', 'Our Local (B=8)', 'Our Global (B=8)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSJ || En</th> <th>News || En-Union</th> <th>Web || En-Union</th> <th>QTB || En-Union</th> <th>CoNLL ’09 || Ca</th> <th>CoNLL 09 || Ch</th> <th>CoNLL 09 || Cz</th> <th>CoNLL 09 || En</th> <th>CoNLL 09 || Ge</th> <th>CoNLL 09 || Ja</th> <th>CoNLL 09 || Sp</th> <th>Avg || -</th> </tr> </thead> <tbody> <tr> <td>Method || Linear CRF</td> <td>97.17</td> <td>97.60</td> <td>94.58</td> <td>96.04</td> <td>98.81</td> <td>94.45</td> <td>98.90</td> <td>97.50</td> <td>97.14</td> <td>97.90</td> <td>98.79</td> <td>97.17</td> </tr> <tr> <td>Method || Ling et al. (2015)</td> <td>97.78</td> <td>97.44</td> <td>94.03</td> <td>96.18</td> <td>98.77</td> <td>94.38</td> <td>99.00</td> <td>97.60</td> <td>97.84</td> <td>97.06</td> <td>98.71</td> <td>97.16</td> </tr> <tr> <td>Method || Our Local (B=1)</td> <td>97.44</td> <td>97.66</td> <td>94.46</td> <td>96.59</td> <td>98.91</td> <td>94.56</td> <td>98.96</td> <td>97.36</td> <td>97.35</td> <td>98.02</td> <td>98.88</td> <td>97.29</td> </tr> <tr> <td>Method || Our Local (B=8)</td> <td>97.45</td> <td>97.69</td> <td>94.46</td> <td>96.64</td> <td>98.88</td> <td>94.56</td> <td>98.96</td> <td>97.40</td> <td>97.35</td> <td>98.02</td> <td>98.89</td> <td>97.30</td> </tr> <tr> <td>Method || Our Global (B=8)</td> <td>97.44</td> <td>97.77</td> <td>94.80</td> <td>96.86</td> <td>99.03</td> <td>94.72</td> <td>99.02</td> <td>97.65</td> <td>97.52</td> <td>98.37</td> <td>98.97</td> <td>97.47</td> </tr> <tr> <td>Method || Parsey McParseface</td> <td>-</td> <td>97.52</td> <td>94.24</td> <td>96.45</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 1
table_1
P16-1231
5
acl2016
In Table 1 we compare our model to a linear CRF and to the compositional character-to-word LSTM model of Ling et al. (2015). The CRF is a first-order linear model with exact inference and the same emission features as our model. It additionally also has transition features of the word, cluster and character n-gram up to length 3 on both endpoints of the transition. The results for Ling et al. (2015) were solicited from the authors. Our local model already compares favorably against these methods on average. Using beam search with a locally normalized model does not help, but with global normalization it leads to a 7% reduction in relative error, empirically demonstrating the effect of label bias. The set of character ngrams feature is very important, increasing average accuracy on the CoNLL 09 datasets by about 0.5% absolute. This shows that character-level modeling can also be done with a simple feed-forward network without recurrence.
[1, 2, 2, 2, 1, 1, 1, 2]
['In Table 1 we compare our model to a linear CRF and to the compositional character-to-word LSTM model of Ling et al. (2015).', 'The CRF is a first-order linear model with exact inference and the same emission features as our model.', 'It additionally also has transition features of the word, cluster and character n-gram up to length 3 on both endpoints of the transition.', 'The results for Ling et al. (2015) were solicited from the authors.', 'Our local model already compares favorably against these methods on average.', 'Using beam search with a locally normalized model does not help, but with global normalization it leads to a 7% reduction in relative error, empirically demonstrating the effect of label bias.', 'The set of character ngrams feature is very important, increasing average accuracy on the CoNLL 09 datasets by about 0.5% absolute.', 'This shows that character-level modeling can also be done with a simple feed-forward network without recurrence.']
[['Our Local (B=1)', 'Our Local (B=8)', 'Our Global (B=8)', 'Linear CRF', 'Ling et al. (2015)'], ['Linear CRF'], ['Linear CRF'], ['Ling et al. (2015)'], ['Our Local (B=1)', 'Our Local (B=8)', 'Avg'], ['Our Local (B=8)'], ['CoNLL 09', 'Our Global (B=8)'], ['Our Global (B=8)']]
1
P16-1231table_4
Sentence compression results on News data. Automatic refers to application of the same automatic extraction rules used to generate the News training corpus.
2
[['Method', 'Filippova et al. (2015)'], ['Method', 'Automatic'], ['Method', 'Our Local (B=1)'], ['Method', 'Our Local (B=8)'], ['Method', 'Our Global (B=8)']]
2
[['Generated corpus', 'A'], ['Generated corpus', 'F1'], ['Human eval', 'readability'], ['Human eval', 'informativeness']]
[['35.36', '82.83', '4.66', '4.03'], ['-', '-', '4.31', '3.77'], ['30.51', '78.72', '4.58', '4.03'], ['31.19', '75.69', '-', '-'], ['35.16', '81.41', '4.67', '4.07']]
column
['A', 'F1', 'readability', 'informativeness']
['Our Local (B=1)', 'Our Local (B=8)', 'Our Global (B=8)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Generated corpus || A</th> <th>Generated corpus || F1</th> <th>Human eval || readability</th> <th>Human eval || informativeness</th> </tr> </thead> <tbody> <tr> <td>Method || Filippova et al. (2015)</td> <td>35.36</td> <td>82.83</td> <td>4.66</td> <td>4.03</td> </tr> <tr> <td>Method || Automatic</td> <td>-</td> <td>-</td> <td>4.31</td> <td>3.77</td> </tr> <tr> <td>Method || Our Local (B=1)</td> <td>30.51</td> <td>78.72</td> <td>4.58</td> <td>4.03</td> </tr> <tr> <td>Method || Our Local (B=8)</td> <td>31.19</td> <td>75.69</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Our Global (B=8)</td> <td>35.16</td> <td>81.41</td> <td>4.67</td> <td>4.07</td> </tr> </tbody></table>
Table 4
table_4
P16-1231
7
acl2016
Table 4 shows our sentence compression results. Our globally normalized model again significantly outperforms the local model. Beam search with a locally normalized model suffers from severe label bias issues that we discuss on a concrete example in Section 5. We also compare to the sentence compression system from Filippova et al. (2015), a 3-layer stacked LSTM which uses dependency label information. The LSTM and our global model perform on par on both the automatic evaluation as well as the human ratings, but our model is roughly 100 times faster. All compressions kept approximately 42% of the tokens on average and all the models are significantly better than the automatic extractions (p < 0.05).
[1, 1, 2, 2, 1, 2]
['Table 4 shows our sentence compression results.', 'Our globally normalized model again significantly outperforms the local model.', 'Beam search with a locally normalized model suffers from severe label bias issues that we discuss on a concrete example in Section 5.', 'We also compare to the sentence compression system from Filippova et al. (2015), a 3-layer stacked LSTM which uses dependency label information.', 'The LSTM and our global model perform on par on both the automatic evaluation as well as the human ratings, but our model is roughly 100 times faster.', 'All compressions kept approximately 42% of the tokens on average and all the models are significantly better than the automatic extractions (p < 0.05).']
[None, ['Our Global (B=8)', 'Our Local (B=1)', 'Our Local (B=8)'], ['Our Local (B=1)', 'Our Local (B=8)'], ['Filippova et al. (2015)'], ['Our Global (B=8)', 'Generated corpus', 'Human eval'], None]
1
P16-2002table_1
Performance comparison between different embeddings style.
1
[['alarm'], ['apps'], ['calendar'], ['communication'], ['finance'], ['flights'], ['games'], ['hotel'], ['livemovie'], ['livetv'], ['movies'], ['music'], ['mystuff'], ['note'], ['ondevice'], ['places'], ['reminder'], ['sports'], ['timer'], ['travel'], ['tv'], ['weather'], ['Average']]
1
[['w/o Embed'], ['6B-50d'], ['840B-300d'], ['SENT'], ['SENT+']]
[['97.25', '97.68', '97.5', '97.68', '97.74'], ['89.16', '91.07', '92.52', '94.24', '94.3'], ['91.34', '92.43', '92.32', '92.53', '92.43'], ['99.1', '99.13', '99.08', '99.08', '99.12'], ['90.44', '90.84', '90.72', '90.76', '90.82'], ['94.19', '92.99', '93.99', '94.59', '94.59'], ['90.16', '91.79', '92.09', '93.08', '92.92'], ['93.23', '94.21', '93.97', '94.7', '94.78'], ['90.88', '92.64', '92.8', '93.28', '93.37'], ['83.14', '85.02', '84.67', '85.41', '85.86'], ['93.27', '94.01', '93.97', '94.75', '95.16'], ['87.87', '90.37', '90.9', '91.75', '91.33'], ['94.2', '94.4', '94.51', '94.51', '94.95'], ['97.62', '98.36', '98.36', '98.49', '98.52'], ['97.51', '97.77', '97.6', '97.77', '97.84'], ['97.29', '97.68', '97.68', '98.01', '97.75'], ['98.72', '98.96', '98.94', '98.96', '98.96'], ['76.96', '78.53', '78.38', '78.7', '79.44'], ['91.1', '91.79', '91.33', '92.33', '92.61'], ['81.58', '82.57', '82.43', '83.64', '82.81'], ['91.42', '94.11', '94.91', '95.19', '95.47'], ['97.31', '97.33', '97.4', '97.4', '97.47'], ['91.99', '92.89', '93.00', '93.49', '93.56']]
row
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['SENT+']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>w/o Embed</th> <th>6B-50d</th> <th>840B-300d</th> <th>SENT</th> <th>SENT+</th> </tr> </thead> <tbody> <tr> <td>alarm</td> <td>97.25</td> <td>97.68</td> <td>97.5</td> <td>97.68</td> <td>97.74</td> </tr> <tr> <td>apps</td> <td>89.16</td> <td>91.07</td> <td>92.52</td> <td>94.24</td> <td>94.3</td> </tr> <tr> <td>calendar</td> <td>91.34</td> <td>92.43</td> <td>92.32</td> <td>92.53</td> <td>92.43</td> </tr> <tr> <td>communication</td> <td>99.1</td> <td>99.13</td> <td>99.08</td> <td>99.08</td> <td>99.12</td> </tr> <tr> <td>finance</td> <td>90.44</td> <td>90.84</td> <td>90.72</td> <td>90.76</td> <td>90.82</td> </tr> <tr> <td>flights</td> <td>94.19</td> <td>92.99</td> <td>93.99</td> <td>94.59</td> <td>94.59</td> </tr> <tr> <td>games</td> <td>90.16</td> <td>91.79</td> <td>92.09</td> <td>93.08</td> <td>92.92</td> </tr> <tr> <td>hotel</td> <td>93.23</td> <td>94.21</td> <td>93.97</td> <td>94.7</td> <td>94.78</td> </tr> <tr> <td>livemovie</td> <td>90.88</td> <td>92.64</td> <td>92.8</td> <td>93.28</td> <td>93.37</td> </tr> <tr> <td>livetv</td> <td>83.14</td> <td>85.02</td> <td>84.67</td> <td>85.41</td> <td>85.86</td> </tr> <tr> <td>movies</td> <td>93.27</td> <td>94.01</td> <td>93.97</td> <td>94.75</td> <td>95.16</td> </tr> <tr> <td>music</td> <td>87.87</td> <td>90.37</td> <td>90.9</td> <td>91.75</td> <td>91.33</td> </tr> <tr> <td>mystuff</td> <td>94.2</td> <td>94.4</td> <td>94.51</td> <td>94.51</td> <td>94.95</td> </tr> <tr> <td>note</td> <td>97.62</td> <td>98.36</td> <td>98.36</td> <td>98.49</td> <td>98.52</td> </tr> <tr> <td>ondevice</td> <td>97.51</td> <td>97.77</td> <td>97.6</td> <td>97.77</td> <td>97.84</td> </tr> <tr> <td>places</td> <td>97.29</td> <td>97.68</td> <td>97.68</td> <td>98.01</td> <td>97.75</td> </tr> <tr> <td>reminder</td> <td>98.72</td> <td>98.96</td> <td>98.94</td> <td>98.96</td> <td>98.96</td> </tr> <tr> <td>sports</td> <td>76.96</td> <td>78.53</td> <td>78.38</td> <td>78.7</td> <td>79.44</td> </tr> <tr> <td>timer</td> <td>91.1</td> <td>91.79</td> <td>91.33</td> <td>92.33</td> <td>92.61</td> </tr> <tr> <td>travel</td> <td>81.58</td> <td>82.57</td> <td>82.43</td> <td>83.64</td> <td>82.81</td> </tr> <tr> <td>tv</td> <td>91.42</td> <td>94.11</td> <td>94.91</td> <td>95.19</td> <td>95.47</td> </tr> <tr> <td>weather</td> <td>97.31</td> <td>97.33</td> <td>97.4</td> <td>97.4</td> <td>97.47</td> </tr> <tr> <td>Average</td> <td>91.99</td> <td>92.89</td> <td>93.00</td> <td>93.49</td> <td>93.56</td> </tr> </tbody></table>
Table 1
table_1
P16-2002
4
acl2016
3.6 Results of Intent Classification Task. Table 1 shows the performance of intent classification across domains. For the baseline, SVM without embedding (w/o Embed) achieved 91.99% accuracy, which is already very competitive. However, the models with word embedding trained on 6 billion tokens (6B-50d) and 840 billion tokens (840B-300d) (Pennington et al., 2014) achieved 92.89% and 93.00%, respectively. 50d and 300d denote size of embedding dimension. To use word embeddings as a sentence representation, we simply use averaged word vectors over a sentence, normalized and conjoined with the original representation as in (2). Surprisingly, when we use sentence representation (SENT) induced from the sketching method with our data set, we can boost the performance up to 93.49%, corresponding to a 18.78% decrease in error relative to a SVM without representation. Also, we see that the extended sentence representation (SENT+) can get additional gains.
[2, 1, 1, 1, 2, 2, 1, 1]
['3.6 Results of Intent Classification Task.', 'Table 1 shows the performance of intent classification across domains.', 'For the baseline, SVM without embedding (w/o Embed) achieved 91.99% accuracy, which is already very competitive.', 'However, the models with word embedding trained on 6 billion tokens (6B-50d) and 840 billion tokens (840B-300d) (Pennington et al., 2014) achieved 92.89% and 93.00%, respectively.', '50d and 300d denote size of embedding dimension.', 'To use word embeddings as a sentence representation, we simply use averaged word vectors over a sentence, normalized and conjoined with the original representation as in (2).', 'Surprisingly, when we use sentence representation (SENT) induced from the sketching method with our data set, we can boost the performance up to 93.49%, corresponding to a 18.78% decrease in error relative to a SVM without representation.', 'Also, we see that the extended sentence representation (SENT+) can get additional gains.']
[None, None, ['w/o Embed', 'Average'], ['6B-50d', '840B-300d', 'Average'], None, None, ['SENT', 'w/o Embed', 'Average'], ['SENT+', 'Average']]
1
P16-2002table_2
Performance for selected domains as the number of unlabeled data increases.
1
[['apps'], ['music'], ['tv']]
1
[['0'], ['10%'], ['20%'], ['30%'], ['40%'], ['50%'], ['60%'], ['70%'], ['80%'], ['90%'], ['100%']]
[['89.16', '89.83', '90.04', '90.26', '90.88', '91.9', '92.41', '92.41', '92.95', '93.72', '94.3'], ['87.87', '89.12', '89.61', '90.4', '90.83', '91.26', '91.31', '91.33', '91.38', '91.33', '91.33'], ['91.42', '92.28', '92.83', '93.61', '93.96', '94.67', '94.91', '95.12', '95.34', '95.44', '95.47']]
row
['accuracy', 'accuracy', 'accuracy']
['0', '10%', '20%', '30%', '40%', '50%', '60%', '70%', '80%', '90%', '100%']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>0</th> <th>10%</th> <th>20%</th> <th>30%</th> <th>40%</th> <th>50%</th> <th>60%</th> <th>70%</th> <th>80%</th> <th>90%</th> <th>100%</th> </tr> </thead> <tbody> <tr> <td>apps</td> <td>89.16</td> <td>89.83</td> <td>90.04</td> <td>90.26</td> <td>90.88</td> <td>91.9</td> <td>92.41</td> <td>92.41</td> <td>92.95</td> <td>93.72</td> <td>94.3</td> </tr> <tr> <td>music</td> <td>87.87</td> <td>89.12</td> <td>89.61</td> <td>90.4</td> <td>90.83</td> <td>91.26</td> <td>91.31</td> <td>91.33</td> <td>91.38</td> <td>91.33</td> <td>91.33</td> </tr> <tr> <td>tv</td> <td>91.42</td> <td>92.28</td> <td>92.83</td> <td>93.61</td> <td>93.96</td> <td>94.67</td> <td>94.91</td> <td>95.12</td> <td>95.34</td> <td>95.44</td> <td>95.47</td> </tr> </tbody></table>
Table 2
table_2
P16-2002
5
acl2016
As in Table 2 , we also measured performance of our method (SENT+) as a function of the percentage of unlabeled data we used from total unlabeled sentences. The overall trend is clear: as the number of sentences are added to the data for inducing sentence representation, the test performance improves because of both better coverage and better quality of embedding. We believe that if we consume more data, we can boost up the performance even more.
[1, 1, 1]
['As in Table 2 , we also measured performance of our method (SENT+) as a function of the percentage of unlabeled data we used from total unlabeled sentences.', 'The overall trend is clear: as the number of sentences are added to the data for inducing sentence representation, the test performance improves because of both better coverage and better quality of embedding.', 'We believe that if we consume more data, we can boost up the performance even more.']
[None, ['0', '10%', '20%', '30%', '40%', '50%', '60%', '70%', '80%', '90%', '100%'], ['100%']]
1
P16-2006table_2
Development and test set results for shift-reduce dependency parser on Penn Treebank using only (s1, s0, q0) positional features.
2
[['Parser', 'C & M 2014'], ['Parser', 'Dyer et al. 2015'], ['Parser', 'Weiss et al. 2015'], ['Parser', '+ Percept./Beam'], ['Parser', 'Bi-LSTM'], ['Parser', '2-Layer Bi-LSTM']]
2
[['Dev', 'UAS'], ['Dev', 'LAS'], ['Test', 'UAS'], ['Test', 'LAS']]
[['92.0', '89.7', '91.8', '89.6'], ['93.2', '90.9', '93.1', '90.9'], ['-', '-', '93.19', '91.18'], ['-', '-', '93.99', '92.05'], ['93.31', '91.01', '93.21', '91.16'], ['93.67', '91.48', '93.42', '91.36']]
column
['UAS', 'LAS', 'UAS', 'LAS']
['Bi-LSTM', '2-Layer Bi-LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || UAS</th> <th>Dev || LAS</th> <th>Test || UAS</th> <th>Test || LAS</th> </tr> </thead> <tbody> <tr> <td>Parser || C &amp; M 2014</td> <td>92.0</td> <td>89.7</td> <td>91.8</td> <td>89.6</td> </tr> <tr> <td>Parser || Dyer et al. 2015</td> <td>93.2</td> <td>90.9</td> <td>93.1</td> <td>90.9</td> </tr> <tr> <td>Parser || Weiss et al. 2015</td> <td>-</td> <td>-</td> <td>93.19</td> <td>91.18</td> </tr> <tr> <td>Parser || + Percept./Beam</td> <td>-</td> <td>-</td> <td>93.99</td> <td>92.05</td> </tr> <tr> <td>Parser || Bi-LSTM</td> <td>93.31</td> <td>91.01</td> <td>93.21</td> <td>91.16</td> </tr> <tr> <td>Parser || 2-Layer Bi-LSTM</td> <td>93.67</td> <td>91.48</td> <td>93.42</td> <td>91.36</td> </tr> </tbody></table>
Table 2
table_2
P16-2006
4
acl2016
Table 2 shows results for English Penn Treebank using Stanford dependencies. Despite the minimally designed feature representation, relatively few training iterations, and lack of precomputed embeddings, the parser performed on par with state-of-the-art incremental dependency parsers, and slightly outperformed the state-of-the-art greedy parser.
[1, 1]
['Table 2 shows results for English Penn Treebank using Stanford dependencies.', 'Despite the minimally designed feature representation, relatively few training iterations, and lack of precomputed embeddings, the parser performed on par with state-of-the-art incremental dependency parsers, and slightly outperformed the state-of-the-art greedy parser.']
[None, ['Bi-LSTM', '2-Layer Bi-LSTM']]
1
P16-2011table_3
Results on Chinese event detection.
2
[['Model', 'MaxEnt'], ['Model', 'Rich-C'], ['Model', 'HNN']]
2
[['Trigger Identification', 'P'], ['Trigger Identification', 'R'], ['Trigger Identification', 'F'], ['Trigger Classification', 'P'], ['Trigger Classification', 'R'], ['Trigger Classification', 'F']]
[['50.0', '77.0', '60.6', '47.5', '73.1', '57.6'], ['62.2', '71.9', '66.7', '58.9', '68.1', '63.2'], ['74.2', '63.1', '68.2', '77.1', '53.1', '63.0']]
column
['P', 'R', 'F', 'P', 'R', 'F']
['HNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Identification || P</th> <th>Trigger Identification || R</th> <th>Trigger Identification || F</th> <th>Trigger Classification || P</th> <th>Trigger Classification || R</th> <th>Trigger Classification || F</th> </tr> </thead> <tbody> <tr> <td>Model || MaxEnt</td> <td>50.0</td> <td>77.0</td> <td>60.6</td> <td>47.5</td> <td>73.1</td> <td>57.6</td> </tr> <tr> <td>Model || Rich-C</td> <td>62.2</td> <td>71.9</td> <td>66.7</td> <td>58.9</td> <td>68.1</td> <td>63.2</td> </tr> <tr> <td>Model || HNN</td> <td>74.2</td> <td>63.1</td> <td>68.2</td> <td>77.1</td> <td>53.1</td> <td>63.0</td> </tr> </tbody></table>
Table 3
table_3
P16-2011
5
acl2016
Table 3 shows the comparison results between our model and the state-of-the-art methods (Li et al., 2013; Chen et al., 2012). MaxEnt (Li et al., 2013) is a pipeline model, which employs human-designed lexical and syntactic features. Rich-C is developed by Chen et al. (2012), which also incorporates Chinese-specific features to improve Chinese event detection. We can see that our method outperforms methods based on human designed features for event trigger identification and achieves comparable F-score for event classification.
[1, 2, 2, 1]
['Table 3 shows the comparison results between our model and the state-of-the-art methods (Li et al., 2013; Chen et al., 2012).', 'MaxEnt (Li et al., 2013) is a pipeline model, which employs human-designed lexical and syntactic features.', 'Rich-C is developed by Chen et al. (2012), which also incorporates Chinese-specific features to improve Chinese event detection.', 'We can see that our method outperforms methods based on human-designed features for event trigger identification and achieves a comparable F-score for event classification.']
[None, ['MaxEnt'], ['Rich-C'], ['HNN', 'Trigger Identification', 'F']]
1
P16-2018table_3
Per category performance.
2
[['Category', 'brand'], ['Category', 'model'], ['Category', 'product'], ['Category', 'product family'], ['Category', 'Overall']]
2
[['CRF', 'P (%)'], ['CRF', 'R (%)'], ['CRF', 'F1'], ['SEARN', 'P (%)'], ['SEARN', 'R (%)'], ['SEARN', 'F1'], ['STRUCTPERCEPTRON', 'P (%)'], ['STRUCTPERCEPTRON', 'R (%)'], ['STRUCTPERCEPTRON', 'F1'], ['LSTM-CRF', 'P (%)'], ['LSTM-CRF', 'R (%)'], ['LSTM-CRF', 'F1']]
[['91.79', '87.93', '89.82', '89.3', '89.3', '89.3', '93.99', '91.20', '92.57', '95.15', '92.29', '93.70'], ['86.28', '80.71', '83.40', '80.7', '78.9', '79.8', '85.56', '80.89', '83.16', '87.25', '85.90', '86.57'], ['87.85', '88.16', '88.00', '83.4', '85.0', '84.2', '87.90', '87.92', '87.91', '91.94', '90.98', '91.46'], ['89.27', '81.41', '85.16', '81.4', '79.0', '80.2', '88.12', '82.17', '85.04', '87.98', '87.47', '87.73'], ['88.86', '86.29', '87.55', '84.3', '84.5', '84.4', '89.18', '87.10', '88.13', '91.61', '90.24', '90.92']]
column
['P (%)', 'R (%)', 'F1', 'P (%)', 'R (%)', 'F1', 'P (%)', 'R (%)', 'F1', 'P (%)', 'R (%)', 'F1']
['LSTM-CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CRF || P (%)</th> <th>CRF || R (%)</th> <th>CRF || F1</th> <th>SEARN || P (%)</th> <th>SEARN || R (%)</th> <th>SEARN || F1</th> <th>STRUCTPERCEPTRON || P (%)</th> <th>STRUCTPERCEPTRON || R (%)</th> <th>STRUCTPERCEPTRON || F1</th> <th>LSTM-CRF || P (%)</th> <th>LSTM-CRF || R (%)</th> <th>LSTM-CRF || F1</th> </tr> </thead> <tbody> <tr> <td>Category || brand</td> <td>91.79</td> <td>87.93</td> <td>89.82</td> <td>89.3</td> <td>89.3</td> <td>89.3</td> <td>93.99</td> <td>91.20</td> <td>92.57</td> <td>95.15</td> <td>92.29</td> <td>93.70</td> </tr> <tr> <td>Category || model</td> <td>86.28</td> <td>80.71</td> <td>83.40</td> <td>80.7</td> <td>78.9</td> <td>79.8</td> <td>85.56</td> <td>80.89</td> <td>83.16</td> <td>87.25</td> <td>85.90</td> <td>86.57</td> </tr> <tr> <td>Category || product</td> <td>87.85</td> <td>88.16</td> <td>88.00</td> <td>83.4</td> <td>85.0</td> <td>84.2</td> <td>87.90</td> <td>87.92</td> <td>87.91</td> <td>91.94</td> <td>90.98</td> <td>91.46</td> </tr> <tr> <td>Category || product family</td> <td>89.27</td> <td>81.41</td> <td>85.16</td> <td>81.4</td> <td>79.0</td> <td>80.2</td> <td>88.12</td> <td>82.17</td> <td>85.04</td> <td>87.98</td> <td>87.47</td> <td>87.73</td> </tr> <tr> <td>Category || Overall</td> <td>88.86</td> <td>86.29</td> <td>87.55</td> <td>84.3</td> <td>84.5</td> <td>84.4</td> <td>89.18</td> <td>87.10</td> <td>88.13</td> <td>91.61</td> <td>90.24</td> <td>90.92</td> </tr> </tbody></table>
Table 3
table_3
P16-2018
4
acl2016
Table 3 shows the performance of the algorithms with the manually designed features against the automatically induced ones with LSTM-CRF. We show the performance of each individual product entity category. Compared to all models and settings, LSTM-CRF reaches the best performance of 90.92 F1 score. The most challenging entity types are product family and model, due to their “wild” and irregular nature.
[1, 1, 1, 1]
['Table 3 shows the performance of the algorithms with the manually designed features against the automatically induced ones with LSTM-CRF.', 'We show the performance of each individual product entity category.', 'Compared to all models and settings, LSTM-CRF reaches the best performance of 90.92 F1 score.', 'The most challenging entity types are product family and model, due to their “wild” and irregular nature.']
[['LSTM-CRF'], None, ['LSTM-CRF', 'F1'], ['product family', 'model']]
1
P17-1003table_4
Evaluation of the programs with the highest F1 score in the beam (abest
2
[['Settings', 'No curriculum'], ['Settings', 'Curriculum']]
1
[['Prec.'], ['Rec.'], ['F1'], ['Acc.']]
[['79.1', '91.1', '78.5', '67.2'], ['88.6', '96.1', '89.5', '79.8']]
column
['Prec.', 'Rec.', 'F1', 'Acc.']
['Settings']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> <th>Acc.</th> </tr> </thead> <tbody> <tr> <td>Settings || No curriculum</td> <td>79.1</td> <td>91.1</td> <td>78.5</td> <td>67.2</td> </tr> <tr> <td>Settings || Curriculum</td> <td>88.6</td> <td>96.1</td> <td>89.5</td> <td>79.8</td> </tr> </tbody></table>
Table 4
table_4
P17-1003
8
acl2017
We compare the performance of the best programs found with and without curriculum learning in Table 4. We find that the best programs found with curriculum learning are substantially better than those found without curriculum learning by a large margin on every metric.
[1, 1]
['We compare the performance of the best programs found with and without curriculum learning in Table 4.', 'We find that the best programs found with curriculum learning are substantially better than those found without curriculum learning by a large margin on every metric.']
[['Curriculum', 'No curriculum'], ['Curriculum', 'No curriculum']]
1
P17-1005table_4
GRAPHQUESTIONS results. Numbers for comparison systems are from Su et al. (2016).
2
[['Models', 'SEMPRE (Berant et al. 2013)'], ['Models', 'PARASEMPRE (Berant and Liang 2014)'], ['Models', 'JACANA (Yao and Van Durme 2014)'], ['Models', 'Neural Baseline'], ['Models', 'SCANNER']]
1
[['F1']]
[['10.80'], ['12.79'], ['5.08'], ['16.24'], ['17.02']]
column
['F1']
['SCANNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Models || SEMPRE (Berant et al. 2013)</td> <td>10.80</td> </tr> <tr> <td>Models || PARASEMPRE (Berant and Liang 2014)</td> <td>12.79</td> </tr> <tr> <td>Models || JACANA (Yao and Van Durme 2014)</td> <td>5.08</td> </tr> <tr> <td>Models || Neural Baseline</td> <td>16.24</td> </tr> <tr> <td>Models || SCANNER</td> <td>17.02</td> </tr> </tbody></table>
Table 4
table_4
P17-1005
7
acl2017
Finally, Table 4 presents our results on GRAPHQUESTIONS. We report F1 for SCANNER, the neural baseline model, and three symbolic systems presented in Su et al. (2016). SCANNER achieves a new state of the art on this dataset with a gain of 4.23 F1 points over the best previously reported model.
[1, 1, 1]
['Finally, Table 4 presents our results on GRAPHQUESTIONS.', 'We report F1 for SCANNER, the neural baseline model, and three symbolic systems presented in Su et al. (2016).', 'SCANNER achieves a new state of the art on this dataset with a gain of 4.23 F1 points over the best previously reported model.']
[None, ['F1', 'SCANNER', 'SEMPRE (Berant et al. 2013)', 'PARASEMPRE (Berant and Liang 2014)', 'JACANA (Yao and Van Durme 2014)'], ['SCANNER', 'PARASEMPRE (Berant and Liang 2014)', 'F1']]
1
P17-1005table_6
SPADES results.
2
[['models', 'Unsupervised CCG (Bisk et al. 2016)'], ['models', 'Semi-supervised CCG (Bisk et al. 2016)'], ['models', 'Neural baseline'], ['models', 'Supervised CCG (Bisk et al. 2016)'], ['models', 'Rule-based system (Bisk et al. 2016)'], ['models', 'SCANNER']]
1
[['F1']]
[['24.8'], ['28.4'], ['28.6'], ['30.9'], ['31.4'], ['31.5']]
column
['F1']
['SCANNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>models || Unsupervised CCG (Bisk et al. 2016)</td> <td>24.8</td> </tr> <tr> <td>models || Semi-supervised CCG (Bisk et al. 2016)</td> <td>28.4</td> </tr> <tr> <td>models || Neural baseline</td> <td>28.6</td> </tr> <tr> <td>models || Supervised CCG (Bisk et al. 2016)</td> <td>30.9</td> </tr> <tr> <td>models || Rule-based system (Bisk et al. 2016)</td> <td>31.4</td> </tr> <tr> <td>models || SCANNER</td> <td>31.5</td> </tr> </tbody></table>
Table 6
table_6
P17-1005
7
acl2017
Table 6 reports SCANNER's performance on SPADES. For all Freebase related datasets we use average F1 (Berant et al., 2013) as our evaluation metric. Previous work on this dataset has used a semantic parsing framework similar to ours where natural language is converted to an intermediate syntactic representation and then grounded to Freebase. Specifically, Bisk et al. (2016) evaluate the effectiveness of four different CCG parsers on the semantic parsing task when varying the amount of supervision required. As can be seen, SCANNER outperforms all CCG variants (from unsupervised to fully supervised) without having access to any manually annotated derivations or lexicons. For fair comparison, we also built a neural baseline that encodes an utterance with a recurrent neural network and then predicts a grounded meaning representation directly (Ture and Jojic, 2016; Yih et al., 2016). Again, we observe that SCANNER outperforms this baseline.
[1, 1, 2, 2, 1, 2, 1]
["Table 6 reports SCANNER's performance on SPADES.", 'For all Freebase related datasets we use average F1 (Berant et al., 2013) as our evaluation metric.', 'Previous work on this dataset has used a semantic parsing framework similar to ours where natural language is converted to an intermediate syntactic representation and then grounded to Freebase.', 'Specifically, Bisk et al. (2016) evaluate the effectiveness of four different CCG parsers on the semantic parsing task when varying the amount of supervision required.', 'As can be seen, SCANNER outperforms all CCG variants (from unsupervised to fully supervised) without having access to any manually annotated derivations or lexicons.', 'For fair comparison, we also built a neural baseline that encodes an utterance with a recurrent neural network and then predicts a grounded meaning representation directly (Ture and Jojic, 2016; Yih et al., 2016).', 'Again, we observe that SCANNER outperforms this baseline.']
[None, ['F1'], None, ['Unsupervised CCG (Bisk et al. 2016)', 'Semi-supervised CCG (Bisk et al. 2016)', 'Supervised CCG (Bisk et al. 2016)', 'Rule-based system (Bisk et al. 2016)'], ['SCANNER', 'Unsupervised CCG (Bisk et al. 2016)', 'Semi-supervised CCG (Bisk et al. 2016)', 'Supervised CCG (Bisk et al. 2016)', 'Rule-based system (Bisk et al. 2016)'], ['Neural baseline'], ['SCANNER', 'Neural baseline']]
1
P17-1005table_8
Evaluation of predicates induced by SCANNER against EASYCCG. We report F1(%) across datasets. For SPADES, we also provide a breakdown for various utterance types.
3
[['Dataset', 'SPADES', '-'], ['Dataset', 'linguistic constructions of spades', 'conj (1422)'], ['Dataset', 'linguistic constructions of spades', 'control (132)'], ['Dataset', 'linguistic constructions of spades', 'pp (3489)'], ['Dataset', 'linguistic constructions of spades', 'subord (76)'], ['Dataset', 'WEBQUESTIONS', '-'], ['Dataset', 'GRAPHQUESTIONS', '-']]
1
[['SCANNER'], ['Baseline']]
[['51.2', '45.5'], ['56.1', '66.4'], ['28.3', '40.5'], ['46.2', '23.1'], ['37.9', '52.9'], ['42.1', '25.5'], ['11.9', '15.3']]
column
['F1', 'F1']
['SCANNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SCANNER</th> <th>Baseline</th> </tr> </thead> <tbody> <tr> <td>Dataset || SPADES || -</td> <td>51.2</td> <td>45.5</td> </tr> <tr> <td>Dataset || linguistic constructions of spades || conj (1422)</td> <td>56.1</td> <td>66.4</td> </tr> <tr> <td>Dataset || linguistic constructions of spades || control (132)</td> <td>28.3</td> <td>40.5</td> </tr> <tr> <td>Dataset || linguistic constructions of spades || pp (3489)</td> <td>46.2</td> <td>23.1</td> </tr> <tr> <td>Dataset || linguistic constructions of spades || subord (76)</td> <td>37.9</td> <td>52.9</td> </tr> <tr> <td>Dataset || WEBQUESTIONS || -</td> <td>42.1</td> <td>25.5</td> </tr> <tr> <td>Dataset || GRAPHQUESTIONS || -</td> <td>11.9</td> <td>15.3</td> </tr> </tbody></table>
Table 8
table_8
P17-1005
8
acl2017
As shown in Table 8, on SPADES and WEBQUESTIONS, the predicates learned by our model match the output of EASYCCG more closely than the heuristic baseline. But for GRAPHQUESTIONS, which contains more compositional questions, the mismatch is higher. However, since the key idea of our model is to capture salient meaning for the task at hand rather than strictly obey syntax, we would not expect the predicates induced by our system to entirely agree with those produced by the syntactic parser. To further analyze how the learned predicates differ from syntax-based ones, we grouped utterances in SPADES into four types of linguistic constructions: coordination (conj), control and raising (control), prepositional phrase attachment (pp), and subordinate clauses (subord). Table 8 also shows the breakdown of matching scores per linguistic construction, with the number of utterances in each type.
[1, 1, 2, 1, 1]
['As shown in Table 8, on SPADES and WEBQUESTIONS, the predicates learned by our model match the output of EASYCCG more closely than the heuristic baseline.', 'But for GRAPHQUESTIONS, which contains more compositional questions, the mismatch is higher.', 'However, since the key idea of our model is to capture salient meaning for the task at hand rather than strictly obey syntax, we would not expect the predicates induced by our system to entirely agree with those produced by the syntactic parser.', 'To further analyze how the learned predicates differ from syntax-based ones, we grouped utterances in SPADES into four types of linguistic constructions: coordination (conj), control and raising (control), prepositional phrase attachment (pp), and subordinate clauses (subord).', 'Table 8 also shows the breakdown of matching scores per linguistic construction, with the number of utterances in each type.']
[['SPADES', 'WEBQUESTIONS', 'SCANNER', 'Baseline'], ['GRAPHQUESTIONS', 'SCANNER', 'Baseline'], ['SCANNER'], ['linguistic constructions of spades', 'conj (1422)', 'control (132)', 'pp (3489)', 'subord (76)'], ['linguistic constructions of spades']]
1
P17-1009table_2
Results of all three tasks on the KBP 2016 evaluation sets. The KBP2016 results are those achieved by the best-performing coreference resolver in the official KBP 2016 evaluation. ∆ is the performance difference between the JOINT model and the corresponding INDEP. model. All results are expressed in terms of F-score.
2
[['English', 'KBP2016'], ['English', 'INDEP.'], ['English', 'JOINT'], ['English', 'delta over INDEP.']]
1
[['MUC'], ['B3'], ['CEAFe'], ['BLANC'], ['AVG-F'], ['Trigger'], ['Anaphoricity']]
[['26.37', '37.49', '34.21', '22.25', '30.08', '46.99', '-'], ['22.71', '40.72', '39', '22.71', '31.28', '48.82', '27.35'], ['27.41', '40.9', '39', '25', '33.08', '49.3', '31.94'], ['4.7', '0.18', '0', '2.29', '1.8', '0.48', '4.59']]
column
['MUC', 'B3', 'CEAFe', 'BLANC', 'AVG-F', 'Trigger', 'Anaphoricity']
['JOINT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC</th> <th>B3</th> <th>CEAFe</th> <th>BLANC</th> <th>AVG-F</th> <th>Trigger</th> <th>Anaphoricity</th> </tr> </thead> <tbody> <tr> <td>English || KBP2016</td> <td>26.37</td> <td>37.49</td> <td>34.21</td> <td>22.25</td> <td>30.08</td> <td>46.99</td> <td>-</td> </tr> <tr> <td>English || INDEP.</td> <td>22.71</td> <td>40.72</td> <td>39</td> <td>22.71</td> <td>31.28</td> <td>48.82</td> <td>27.35</td> </tr> <tr> <td>English || JOINT</td> <td>27.41</td> <td>40.9</td> <td>39</td> <td>25</td> <td>33.08</td> <td>49.3</td> <td>31.94</td> </tr> <tr> <td>English || delta over INDEP.</td> <td>4.7</td> <td>0.18</td> <td>0</td> <td>2.29</td> <td>1.8</td> <td>0.48</td> <td>4.59</td> </tr> </tbody></table>
Table 2
table_2
P17-1009
7
acl2017
Results are shown in Table 2, where performance on all three tasks (event coreference, trigger detection, and anaphoricity determination) is expressed in terms of F-score. Table 2 shows the results on the English evaluation set. Specifically, row 1 shows the performance of the best event coreference system participating in KBP 2016 (Lu and Ng, 2016). This system adopts a pipeline architecture. It first uses an ensemble of one-nearest-neighbor classifiers for trigger detection. Using the extracted triggers, it then applies a pipeline of three sieves, each of which is a one-nearest-neighbor classifier, for event coreference. As we can see, this system achieves an AVG-F of 30.08 for event coreference and an F-score of 46.99 for trigger detection. Row 2 shows the performance of the independent models, each of which is trained independently of the other models. Specifically, each independent model is trained using only the unary factors associated with it. As we can see, the independent models outperform the top KBP 2016 system by 1.2 points in AVG-F for event coreference and 1.83 points for trigger detection. Results of our joint model are shown in row 3. The absolute performance differences between the joint model and the independent models are shown in row 4. As we can see, the joint model outperforms the independent models for all three tasks: by 1.80 points for event coreference, 0.48 points for trigger detection and 4.59 points for anaphoricity determination. Most encouragingly, the joint model outperforms the top KBP 2016 system for both event coreference and trigger detection. For event coreference, it outperforms the top KBP system w.r.t. all scoring metrics, yielding an improvement of 3 points in AVG-F. For trigger detection, it outperforms the top KBP system by 2.31 points.
[1, 1, 1, 2, 2, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1]
['Results are shown in Table 2, where performance on all three tasks (event coreference, trigger detection, and anaphoricity determination) is expressed in terms of F-score.', 'Table 2 shows the results on the English evaluation set.', 'Specifically, row 1 shows the performance of the best event coreference system participating in KBP 2016 (Lu and Ng, 2016).', 'This system adopts a pipeline architecture.', 'It first uses an ensemble of one-nearest-neighbor classifiers for trigger detection.', 'Using the extracted triggers, it then applies a pipeline of three sieves, each of which is a one-nearest-neighbor classifier, for event coreference.', 'As we can see, this system achieves an AVG-F of 30.08 for event coreference and an F-score of 46.99 for trigger detection.', 'Row 2 shows the performance of the independent models, each of which is trained independently of the other models.', 'Specifically, each independent model is trained using only the unary factors associated with it.', 'As we can see, the independent models outperform the top KBP 2016 system by 1.2 points in AVG-F for event coreference and 1.83 points for trigger detection.', 'Results of our joint model are shown in row 3.', 'The absolute performance differences between the joint model and the independent models are shown in row 4.', 'As we can see, the joint model outperforms the independent models for all three tasks: by 1.80 points for event coreference, 0.48 points for trigger detection and 4.59 points for anaphoricity determination.', 'Most encouragingly, the joint model outperforms the top KBP 2016 system for both event coreference and trigger detection.', 'For event coreference, it outperforms the top KBP system w.r.t. all scoring metrics, yielding an improvement of 3 points in AVG-F.', 'For trigger detection, it outperforms the top KBP system by 2.31 points.']
[['AVG-F', 'Trigger', 'Anaphoricity'], ['English'], ['KBP2016'], ['KBP2016'], ['KBP2016'], ['KBP2016'], ['KBP2016', 'AVG-F', 'Trigger'], ['INDEP.'], ['INDEP.'], ['INDEP.', 'KBP2016', 'AVG-F', 'Trigger'], ['JOINT'], ['delta over INDEP.'], ['delta over INDEP.', 'Trigger', 'Anaphoricity'], ['JOINT', 'KBP2016', 'AVG-F', 'Trigger'], ['JOINT', 'KBP2016', 'AVG-F'], ['JOINT', 'KBP2016', 'Trigger']]
1
P17-1009table_3
Results of model ablations on the KBP 2016 evaluation sets. Each row of ablation results is obtained by either adding one type of interaction factor to the INDEP. model or deleting one type of interaction factor from the JOINT model. For each column, the results are expressed in terms of changes to the INDEP. model’s F-score shown in row 1.
1
[['INDEP.'], ['INDEP.+CorefTrigger'], ['INDEP.+CorefAnaph'], ['INDEP.+TriggerAnaph'], ['JOINT-CorefTrigger'], ['JOINT-CorefAnaph'], ['JOINT-TriggerAnaph'], ['JOINT']]
2
[['English', 'Coref'], ['English', 'Trigger'], ['English', 'Anaph'], ['Chinese', 'Coref'], ['Chinese', 'Trigger'], ['Chinese', 'Anaph']]
[['31.28', '48.82', '27.35', '25.84', '39.82', '19.31'], ['0.39', '0.42', '-0.05', '0.95', '0.56', '-0.37'], ['0.4', '-0.08', '3.45', '0.37', '0.04', '-0.11'], ['0.11', '0.38', '1.35', '0.14', '0.52', '0.02'], ['0.56', '-0.06', '4.41', '0.19', '0.35', '3.34'], ['0.63', '0.66', '1.46', '1.5', '0.88', '0.28'], ['1.89', '0.5', '4.01', '1.65', '0.5', '1.79'], ['1.8', '0.48', '4.59', '1.95', '0.71', '4.02']]
row
['F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score']
['JOINT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>English || Coref</th> <th>English || Trigger</th> <th>English || Anaph</th> <th>Chinese || Coref</th> <th>Chinese || Trigger</th> <th>Chinese || Anaph</th> </tr> </thead> <tbody> <tr> <td>INDEP.</td> <td>31.28</td> <td>48.82</td> <td>27.35</td> <td>25.84</td> <td>39.82</td> <td>19.31</td> </tr> <tr> <td>INDEP.+CorefTrigger</td> <td>0.39</td> <td>0.42</td> <td>-0.05</td> <td>0.95</td> <td>0.56</td> <td>-0.37</td> </tr> <tr> <td>INDEP.+CorefAnaph</td> <td>0.4</td> <td>-0.08</td> <td>3.45</td> <td>0.37</td> <td>0.04</td> <td>-0.11</td> </tr> <tr> <td>INDEP.+TriggerAnaph</td> <td>0.11</td> <td>0.38</td> <td>1.35</td> <td>0.14</td> <td>0.52</td> <td>0.02</td> </tr> <tr> <td>JOINT-CorefTrigger</td> <td>0.56</td> <td>-0.06</td> <td>4.41</td> <td>0.19</td> <td>0.35</td> <td>3.34</td> </tr> <tr> <td>JOINT-CorefAnaph</td> <td>0.63</td> <td>0.66</td> <td>1.46</td> <td>1.5</td> <td>0.88</td> <td>0.28</td> </tr> <tr> <td>JOINT-TriggerAnaph</td> <td>1.89</td> <td>0.5</td> <td>4.01</td> <td>1.65</td> <td>0.5</td> <td>1.79</td> </tr> <tr> <td>JOINT</td> <td>1.8</td> <td>0.48</td> <td>4.59</td> <td>1.95</td> <td>0.71</td> <td>4.02</td> </tr> </tbody></table>
Table 3
table_3
P17-1009
8
acl2017
Table 3 shows the results on the English and Chinese datasets when we add each type of joint factor to the independent model and remove each type from the full joint model. The results of each task are expressed in terms of changes to the corresponding independent model’s F-score. Among the three types of factors, Coref-Trigger interactions contribute the most to coreference performance, regardless of whether they are applied in isolation or in combination with the other two types of factors to the independent coreference model. In addition, they are the most effective type of factor for improving trigger detection. When applied in combination, they also improve anaphoricity determination, although less effectively than the other two types of factors. When applied in isolation to the independent models, Coref-Anaphoricity interactions improve coreference resolution but have a mixed impact on anaphoricity determination. When applied in combination with other types of factors, they improve both tasks, particularly anaphoricity determination. Their impact on trigger detection, however, is generally negative. When applied in isolation to the independent models, Trigger-Anaphoricity interactions improve both trigger detection and anaphoricity determination. When applied in combination with other types of factors, they still improve anaphoricity determination (particularly on Chinese), but have a mixed effect on trigger detection. Among the three types of factors, they have the least impact on coreference resolution.
[1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 3 shows the results on the English and Chinese datasets when we add each type of joint factor to the independent model and remove each type from the full joint model.', 'The results of each task are expressed in terms of changes to the corresponding independent model’s F-score.', 'Among the three types of factors, Coref-Trigger interactions contribute the most to coreference performance, regardless of whether they are applied in isolation or in combination with the other two types of factors to the independent coreference model.', 'In addition, they are the most effective type of factor for improving trigger detection.', 'When applied in combination, they also improve anaphoricity determination, although less effectively than the other two types of factors.', 'When applied in isolation to the independent models, Coref-Anaphoricity interactions improve coreference resolution but have a mixed impact on anaphoricity determination.', 'When applied in combination with other types of factors, they improve both tasks, particularly anaphoricity determination.', 'Their impact on trigger detection, however, is generally negative.', 'When applied in isolation to the independent models, Trigger-Anaphoricity interactions improve both trigger detection and anaphoricity determination.', 'When applied in combination with other types of factors, they still improve anaphoricity determination (particularly on Chinese), but have a mixed effect on trigger detection.', 'Among the three types of factors, they have the least impact on coreference resolution.']
[['English', 'Chinese'], None, ['INDEP.+CorefTrigger', 'JOINT'], ['INDEP.+CorefTrigger', 'Trigger'], ['JOINT', 'Anaph', 'JOINT-CorefTrigger'], ['INDEP.+CorefAnaph', 'Coref', 'Anaph'], ['JOINT', 'Coref', 'Anaph'], ['INDEP.+CorefAnaph', 'Trigger'], ['INDEP.+TriggerAnaph', 'Trigger', 'Anaph'], ['JOINT', 'Anaph', 'Chinese', 'Trigger'], ['INDEP.+TriggerAnaph', 'Coref']]
1
P17-1011table_6
Evaluation results of AES on three datasets. Basic: the basic feature sets; mode: discourse mode features.
1
[['SVR-Basic'], ['SVR-Basic+mode'], ['BLRR-Basic'], ['BLRR-Basic+mode']]
2
[['QWK Score', 'Prompt 1'], ['QWK Score', 'Prompt 2'], ['QWK Score', 'Prompt 3']]
[['0.554', '0.468', '0.457'], ['0.6', '0.501', '0.481'], ['0.683', '0.557', '0.513'], ['0.696', '0.565', '0.527']]
column
['QWK Score', 'QWK Score', 'QWK Score']
['SVR-Basic', 'SVR-Basic+mode', 'BLRR-Basic', 'BLRR-Basic+mode']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>QWK Score || Prompt 1</th> <th>QWK Score || Prompt 2</th> <th>QWK Score || Prompt 3</th> </tr> </thead> <tbody> <tr> <td>SVR-Basic</td> <td>0.554</td> <td>0.468</td> <td>0.457</td> </tr> <tr> <td>SVR-Basic+mode</td> <td>0.6</td> <td>0.501</td> <td>0.481</td> </tr> <tr> <td>BLRR-Basic</td> <td>0.683</td> <td>0.557</td> <td>0.513</td> </tr> <tr> <td>BLRR-Basic+mode</td> <td>0.696</td> <td>0.565</td> <td>0.527</td> </tr> </tbody></table>
Table 6
table_6
P17-1011
8
acl2017
Table 6 shows the evaluation results of AES on three datasets. We can see that the BLRR algorithm performs better than the SVR algorithm. No matter which algorithm is adopted, adding discourse mode features makes positive contributions to AES compared with using basic feature sets. The trends are consistent over all three datasets.
[1, 1, 1, 1]
['Table 6 shows the evaluation results of AES on three datasets.', 'We can see that the BLRR algorithm performs better than the SVR algorithm.', 'No matter which algorithm is adopted, adding discourse mode features makes positive contributions to AES compared with using basic feature sets.', 'The trends are consistent over all three datasets.']
[None, ['BLRR-Basic', 'SVR-Basic'], ['SVR-Basic', 'SVR-Basic+mode', 'BLRR-Basic', 'BLRR-Basic+mode'], ['Prompt 1', 'Prompt 2', 'Prompt 3']]
1
P17-1012table_1
Accuracy of encoders with position features (wrd+pos) and without (wrd) in terms of BLEU and perplexity (PPL) on IWSLT’14 German to English translation; results include unknown word replacement. Deep Convolutional 6/3 is the only multi-layer configuration, more layers for the LSTMs did not improve accuracy on this dataset.
2
[['System/Encoder', 'Phrase-based'], ['System/Encoder', 'LSTM'], ['System/Encoder', 'BiLSTM'], ['System/Encoder', 'Pooling'], ['System/Encoder', 'Convolutional'], ['System/Encoder', 'Deep Convolutional 6/3']]
2
[['BLEU', 'wrd+pos'], ['BLEU', 'wrd'], ['PPL', 'wrd+pos']]
[['-', '28.4', '-'], ['27.4', '27.3', '10.8'], ['29.7', '29.8', '9.9'], ['26.1', '19.7', '11'], ['29.9', '20.1', '9.1'], ['30.4', '25.2', '8.9']]
column
['BLEU', 'BLEU', 'PPL']
['wrd+pos']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || wrd+pos</th> <th>BLEU || wrd</th> <th>PPL || wrd+pos</th> </tr> </thead> <tbody> <tr> <td>System/Encoder || Phrase-based</td> <td>-</td> <td>28.4</td> <td>-</td> </tr> <tr> <td>System/Encoder || LSTM</td> <td>27.4</td> <td>27.3</td> <td>10.8</td> </tr> <tr> <td>System/Encoder || BiLSTM</td> <td>29.7</td> <td>29.8</td> <td>9.9</td> </tr> <tr> <td>System/Encoder || Pooling</td> <td>26.1</td> <td>19.7</td> <td>11</td> </tr> <tr> <td>System/Encoder || Convolutional</td> <td>29.9</td> <td>20.1</td> <td>9.1</td> </tr> <tr> <td>System/Encoder || Deep Convolutional 6/3</td> <td>30.4</td> <td>25.2</td> <td>8.9</td> </tr> </tbody></table>
Table 1
table_1
P17-1012
5
acl2017
Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) and a bi-directional LSTM encoder (BiLSTM). Next, we increase the depth of the convolutional encoder. We choose a good setting by independently varying the number of layers in CNN-a and CNN-c between 1 and 10 and obtain the best validation set perplexity with six layers for CNN-a and three layers for CNN-c. This configuration outperforms BiLSTM by 0.7 BLEU (Deep Convolutional 6/3). We investigate depth in the convolutional encoder more in §5.3. Among recurrent encoders, the BiLSTM is 2.3 BLEU better than the uni-directional version. The simple pooling encoder, which does not contain any parameters, is only 1.3 BLEU lower than a unidirectional LSTM encoder and 3.6 BLEU lower than BiLSTM. The results without position embeddings (words) show that position information is crucial for convolutional encoders. This holds in particular for shallow models (Pooling and Convolutional), whereas deeper models are less affected. Recurrent encoders do not benefit from explicit position information because this information can be naturally extracted through the sequential computation.
[1, 2, 2, 1, 2, 1, 1, 1, 2, 2]
['Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) and a bi-directional LSTM encoder (BiLSTM).', 'Next, we increase the depth of the convolutional encoder.', 'We choose a good setting by independently varying the number of layers in CNN-a and CNN-c between 1 and 10 and obtain the best validation set perplexity with six layers for CNN-a and three layers for CNN-c.', 'This configuration outperforms BiLSTM by 0.7 BLEU (Deep Convolutional 6/3).', 'We investigate depth in the convolutional encoder more in §5.3.', 'Among recurrent encoders, the BiLSTM is 2.3 BLEU better than the uni-directional version.', 'The simple pooling encoder, which does not contain any parameters, is only 1.3 BLEU lower than a unidirectional LSTM encoder and 3.6 BLEU lower than BiLSTM.', 'The results without position embeddings (words) show that position information is crucial for convolutional encoders.', 'This holds in particular for shallow models (Pooling and Convolutional), whereas deeper models are less affected.', 'Recurrent encoders do not benefit from explicit position information because this information can be naturally extracted through the sequential computation.']
[['wrd+pos', 'Convolutional', 'LSTM', 'BiLSTM'], None, None, ['wrd+pos', 'Deep Convolutional 6/3', 'BiLSTM', 'BLEU'], ['Convolutional'], ['wrd+pos', 'Convolutional', 'BiLSTM', 'BLEU'], ['wrd+pos', 'Pooling', 'LSTM', 'BiLSTM', 'BLEU'], ['wrd', 'Convolutional', 'wrd+pos'], ['Pooling', 'Convolutional'], ['wrd+pos']]
1
P17-1012table_2
Accuracy on three WMT tasks, including results published in previous work. For deep convolutional encoders, we include the number of layers in CNN-a and CNN-c, respectively.
3
[['(Sennrich et al. 2016a)', 'Encoder', 'BiGRU'], ['Single-layer decoder', 'Encoder', 'BiLSTM'], ['Single-layer decoder', 'Encoder', 'Convolutional'], ['Single-layer decoder', 'Encoder', 'Deep Convolutional 8/4']]
1
[['Vocabulary Size'], ['BLEU']]
[['90K', '28.1'], ['80K', '27.5'], ['80K', '27.1'], ['80K', '27.8']]
column
['Vocabulary Size', 'BLEU']
['Deep Convolutional 8/4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>(Sennrich et al. 2016a) || Encoder || BiGRU</td> <td>90K</td> <td>28.1</td> </tr> <tr> <td>Single-layer decoder || Encoder || BiLSTM</td> <td>80K</td> <td>27.5</td> </tr> <tr> <td>Single-layer decoder || Encoder || Convolutional</td> <td>80K</td> <td>27.1</td> </tr> <tr> <td>Single-layer decoder || Encoder || Deep Convolutional 8/4</td> <td>80K</td> <td>27.8</td> </tr> </tbody></table>
Table 2
table_2
P17-1012
6
acl2017
The results (Table 2) show that a deep convolutional encoder can perform competitively with the state of the art on this dataset (Sennrich et al., 2016a). Our bi-directional LSTM encoder baseline is 0.6 BLEU lower than the state of the art but uses only 512 hidden units compared to 1024. A single-layer convolutional encoder with embedding size 256 performs at 27.1 BLEU. Increasing the number of convolutional layers to 8 in CNN-a and 4 in CNN-c achieves 27.8 BLEU, which outperforms our baseline and is competitive with the state of the art.
[1, 1, 1, 1]
['The results (Table 2) show that a deep convolutional encoder can perform competitively with the state of the art on this dataset (Sennrich et al., 2016a).', 'Our bi-directional LSTM encoder baseline is 0.6 BLEU lower than the state of the art but uses only 512 hidden units compared to 1024.', 'A single-layer convolutional encoder with embedding size 256 performs at 27.1 BLEU.', 'Increasing the number of convolutional layers to 8 in CNN-a and 4 in CNN-c achieves 27.8 BLEU, which outperforms our baseline and is competitive with the state of the art.']
[['Deep Convolutional 8/4', '(Sennrich et al. 2016a)'], ['Encoder', 'BiLSTM', '(Sennrich et al. 2016a)', 'BLEU'], ['Single-layer decoder', 'Encoder', 'Convolutional', 'BLEU'], ['Deep Convolutional 8/4', 'BLEU', '(Sennrich et al. 2016a)', 'Convolutional', 'BiLSTM']]
1
P17-1014table_3
BLEU results for AMR Generation. *Model has been trained on a previous release of the corpus (LDC2014T12).
2
[['Model', 'GIGA-20M'], ['Model', 'GIGA-2M'], ['Model', 'GIGA-200k'], ['Model', 'AMR-ONLY'], ['Model', 'PBMT* (Pourdamghani et al. 2016)'], ['Model', 'TSP (Song et al. 2016)'], ['Model', 'TREETOSTR (Flanigan et al. 2016)']]
1
[['Dev'], ['Test']]
[['33.1', '33.8'], ['31.8', '32.3'], ['27.2', '27.4'], ['21.7', '22'], ['27.2', '26.9'], ['21.1', '22.4'], ['23', '23']]
column
['BLEU', 'BLEU']
['GIGA-20M', 'GIGA-2M', 'GIGA-200k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || GIGA-20M</td> <td>33.1</td> <td>33.8</td> </tr> <tr> <td>Model || GIGA-2M</td> <td>31.8</td> <td>32.3</td> </tr> <tr> <td>Model || GIGA-200k</td> <td>27.2</td> <td>27.4</td> </tr> <tr> <td>Model || AMR-ONLY</td> <td>21.7</td> <td>22</td> </tr> <tr> <td>Model || PBMT* (Pourdamghani et al. 2016)</td> <td>27.2</td> <td>26.9</td> </tr> <tr> <td>Model || TSP (Song et al. 2016)</td> <td>21.1</td> <td>22.4</td> </tr> <tr> <td>Model || TREETOSTR (Flanigan et al. 2016)</td> <td>23</td> <td>23</td> </tr> </tbody></table>
Table 3
table_3
P17-1014
6
acl2017
Table 3 summarizes our AMR generation results on the development and test set. We outperform all previous state-of-the-art systems after the first round of self-training and improve further with the subsequent rounds. Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86 by over 9 BLEU points. Overall, our model incorporates less data than previous approaches, as all reported methods train language models on the whole Gigaword corpus. We leave scaling our models to all of Gigaword for future work.
[1, 1, 1, 2, 2]
['Table 3 summarizes our AMR generation results on the development and test set.', 'We outperform all previous state-of-the-art systems after the first round of self-training and improve further with the subsequent rounds.', 'Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86 by over 9 BLEU points.', 'Overall, our model incorporates less data than previous approaches, as all reported methods train language models on the whole Gigaword corpus.', 'We leave scaling our models to all of Gigaword for future work.']
[None, ['GIGA-20M', 'GIGA-2M', 'GIGA-200k'], ['GIGA-20M', 'TSP (Song et al. 2016)', 'TREETOSTR (Flanigan et al. 2016)'], ['GIGA-20M'], None]
1
P17-1051table_5
Performance of MORSE against Morfessor on the non-canonical version of SD17
1
[['Morfessor'], ['MORSE'], ['MORSE-CV']]
1
[['P'], ['R'], ['F1']]
[['65.95', '51.13', '57.60'], ['75.35', '83.60', '79.26'], ['84.6', '78.36', '81.29']]
column
['P', 'R', 'F1']
['MORSE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Morfessor</td> <td>65.95</td> <td>51.13</td> <td>57.60</td> </tr> <tr> <td>MORSE</td> <td>75.35</td> <td>83.60</td> <td>79.26</td> </tr> <tr> <td>MORSE-CV</td> <td>84.6</td> <td>78.36</td> <td>81.29</td> </tr> </tbody></table>
Table 5
table_5
P17-1051
7
acl2017
Based on the results in Table 5, we make the following observations. Comparing MORSE-CV to MORSE reflects the fundamental difference between SD17 and MC datasets. Comparing MORSE-CV to Morfessor, we observe a significant jump in performance (an increase of 24%).
[1, 1, 1]
['Based on the results in Table 5, we make the following observations.', 'Comparing MORSE-CV to MORSE reflects the fundamental difference between SD17 and MC datasets.', 'Comparing MORSE-CV to Morfessor, we observe a significant jump in performance (an increase of 24%).']
[None, ['MORSE-CV', 'MORSE'], ['MORSE-CV', 'Morfessor']]
1