table_id_paper (string, len 15) | caption (string, len 14-1.88k) | row_header_level (int32, 1-9) | row_headers (large_string, len 15-1.75k) | column_header_level (int32, 1-6) | column_headers (large_string, len 7-1.01k) | contents (large_string, len 18-2.36k) | metrics_loc (string, 2 classes) | metrics_type (large_string, len 5-532) | target_entity (large_string, len 2-330) | table_html_clean (large_string, len 274-7.88k) | table_name (string, 9 classes) | table_id (string, 9 classes) | paper_id (string, len 8) | page_no (int32, 1-13) | dir (string, 8 classes) | description (large_string, len 103-3.8k) | class_sentence (string, len 3-120) | sentences (large_string, len 110-3.92k) | header_mention (string, len 12-1.8k) | valid (int32, 0-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
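The rows below are pipe-separated records following the columns listed above. A minimal, hypothetical parser sketch (the `COLUMNS` list is taken from the header; it assumes the bare ` | ` delimiter never occurs inside a field, which holds for these rows because table cells use `||` instead):

```python
# Split one viewer row into named fields, assuming " | " only
# appears as the field delimiter (cells inside fields use "||").
COLUMNS = [
    "table_id_paper", "caption", "row_header_level", "row_headers",
    "column_header_level", "column_headers", "contents", "metrics_loc",
    "metrics_type", "target_entity", "table_html_clean", "table_name",
    "table_id", "paper_id", "page_no", "dir", "description",
    "class_sentence", "sentences", "header_mention", "valid",
]

def parse_row(line: str) -> dict:
    """Map one pipe-separated row onto the column names above."""
    fields = [f.strip() for f in line.rstrip(" |").split(" | ")]
    # zip() stops at the shorter sequence, so partial rows still parse.
    return dict(zip(COLUMNS, fields))

row = parse_row("P17-1052table_2 | Error rates (%) on larger datasets | 2")
print(row["table_id_paper"])  # P17-1052table_2
```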
P17-1052table_2 | Error rates (%) on larger datasets in comparison with previous models. The previous results are roughly sorted in the order of error rates (best to worst). The best results and the second best are shown in bold and italic, respectively. ‘tv’ stands for tv-embeddings. ‘w2v’ stands for word2vec. ‘(w2v)’ in row 7 indicates that the best results among those with and without word2vec pretraining are shown. Note that ‘best’ in rows 4&6–8 indicates that we are giving an ‘unfair’ advantage to these models by choosing the best test error rate among a number of variations presented in the respective papers. [JZ16]: Johnson and Zhang (2016), [YYDHSH16]: Yang et al. (2016), [CSBL16]: Conneau et al. (2016), [ZZL15]: Zhang et al. (2015) | 2 | [['Models', 'DPCNN + unsupervised embed.'], ['Models', 'ShallowCNN + unsup. embed. [JZ16]'], ['Models', 'Hierarchical attention net [YYDHSH16]'], ['Models', '[CSBL16] char-level CNN: best'], ['Models', 'fastText bigrams (Joulin et al., 2016)'], ['Models', '[ZZL15] char-level CNN: best'], ['Models', '[ZZL15] word-level CNN: best'], ['Models', '[ZZL15] linear model: best']] | 1 | [['Yelp.p'], ['Yelp.f'], ['Yahoo'], ['Ama.f'], ['Ama.p']] | [['2.64', '30.58', '23.9', '34.81', '3.32'], ['2.9', '32.39', '24.85', '36.24', '3.79'], ['-', '-', '24.2', '36.4', '-'], ['4.28', '35.28', '26.57', '37', '4.28'], ['4.3', '36.1', '27.7', '39.8', '5.4'], ['4.88', '37.95', '28.8', '40.43', '4.93'], ['4.6', '39.58', '28.84', '42.39', '5.51'], ['4.36', '40.14', '28.96', '44.74', '7.98']] | column | ['Error rates', 'Error rates', 'Error rates', 'Error rates', 'Error rates'] | ['DPCNN + unsupervised embed.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yelp.p</th> <th>Yelp.f</th> <th>Yahoo</th> <th>Ama.f</th> <th>Ama.p</th> </tr> </thead> <tbody> <tr> <td>Models || DPCNN + unsupervised embed.</td> <td>2.64</td> <td>30.58</td> <td>23.9</td> <td>34.81</td> <td>3.32</td> </tr> <tr> <td>Models || 
ShallowCNN + unsup. embed. [JZ16]</td> <td>2.9</td> <td>32.39</td> <td>24.85</td> <td>36.24</td> <td>3.79</td> </tr> <tr> <td>Models || Hierarchical attention net [YYDHSH16]</td> <td>-</td> <td>-</td> <td>24.2</td> <td>36.4</td> <td>-</td> </tr> <tr> <td>Models || [CSBL16] char-level CNN: best</td> <td>4.28</td> <td>35.28</td> <td>26.57</td> <td>37</td> <td>4.28</td> </tr> <tr> <td>Models || fastText bigrams (Joulin et al., 2016)</td> <td>4.3</td> <td>36.1</td> <td>27.7</td> <td>39.8</td> <td>5.4</td> </tr> <tr> <td>Models || [ZZL15] char-level CNN: best</td> <td>4.88</td> <td>37.95</td> <td>28.8</td> <td>40.43</td> <td>4.93</td> </tr> <tr> <td>Models || [ZZL15] word-level CNN: best</td> <td>4.6</td> <td>39.58</td> <td>28.84</td> <td>42.39</td> <td>5.51</td> </tr> <tr> <td>Models || [ZZL15] linear model: best</td> <td>4.36</td> <td>40.14</td> <td>28.96</td> <td>44.74</td> <td>7.98</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1052 | 6 | acl2017 | We first report the error rates of our full model (DPCNN with 15 weight layers plus unsupervised embeddings) on the larger five datasets (Table 2). To put it into perspective, we also show the previous results in the literature. The previous results are roughly sorted in the order of error rates from best to worst. On all the five datasets, DPCNN outperforms all of the previous results, which validates the effectiveness of our approach. | [1, 1, 2, 1] | ['We first report the error rates of our full model (DPCNN with 15 weight layers plus unsupervised embeddings) on the larger five datasets (Table 2).', 'To put it into perspective, we also show the previous results in the literature.', 'The previous results are roughly sorted in the order of error rates from best to worst.', 'On all the five datasets, DPCNN outperforms all of the previous results, which validates the effectiveness of our approach.'] | [['DPCNN + unsupervised embed.'], None, None, ['DPCNN + unsupervised embed.']] | 1 |
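The caption in the P17-1052 row above marks the best and second-best error rates in bold and italic. As a sanity check, a short sketch (assuming lower is better and treating `-` as a missing entry) recomputes both from the row's `contents` matrix:

```python
# Recompute, per dataset column, the best (lowest) and second-best
# error rates for the P17-1052 table above; '-' marks a missing cell.
columns = ['Yelp.p', 'Yelp.f', 'Yahoo', 'Ama.f', 'Ama.p']
contents = [
    ['2.64', '30.58', '23.9', '34.81', '3.32'],
    ['2.9', '32.39', '24.85', '36.24', '3.79'],
    ['-', '-', '24.2', '36.4', '-'],
    ['4.28', '35.28', '26.57', '37', '4.28'],
    ['4.3', '36.1', '27.7', '39.8', '5.4'],
    ['4.88', '37.95', '28.8', '40.43', '4.93'],
    ['4.6', '39.58', '28.84', '42.39', '5.51'],
    ['4.36', '40.14', '28.96', '44.74', '7.98'],
]

def best_two(column):
    """Return (best, second_best) error rates, skipping '-' cells."""
    vals = sorted(float(v) for v in column if v != '-')
    return vals[0], vals[1]

for name, col in zip(columns, zip(*contents)):
    print(name, best_two(col))  # e.g. Yelp.p (2.64, 2.9)
```

This reproduces the caption's claim that DPCNN (row 1) is best on all five columns.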
P17-1053table_2 | Accuracy on the SimpleQuestions and WebQSP relation detection tasks (test sets). The top shows performance of baselines. On the bottom we give the results of our proposed model together with the ablation tests. | 4 | [['Model', 'AMPCNN (Yin et al. 2016)', 'Relation Input Views', 'words'], ['Model', 'BiCNN (Yih et al. 2015)', 'Relation Input Views', 'char-3-gram'], ['Model', 'BiLSTM w/ words', 'Relation Input Views', 'words'], ['Model', 'BiLSTM w/ relation names', 'Relation Input Views', 'rel_names'], ['Model', 'Hier-Res-BiLSTM (HR-BiLSTM)', 'Relation Input Views', 'words + rel_names']] | 2 | [['Accuracy', 'SimpleQuestions'], ['Accuracy', 'WebQSP']] | [['91.3', '-'], ['90.0', '77.74'], ['91.2', '79.32'], ['88.9', '78.96'], ['93.3', '82.53']] | column | ['Accuracy', 'Accuracy'] | ['Hier-Res-BiLSTM (HR-BiLSTM)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || SimpleQuestions</th> <th>Accuracy || WebQSP</th> </tr> </thead> <tbody> <tr> <td>Model || AMPCNN (Yin et al. 2016) || Relation Input Views || words</td> <td>91.3</td> <td>-</td> </tr> <tr> <td>Model || BiCNN (Yih et al. 2015) || Relation Input Views || char-3-gram</td> <td>90.0</td> <td>77.74</td> </tr> <tr> <td>Model || BiLSTM w/ words || Relation Input Views || words</td> <td>91.2</td> <td>79.32</td> </tr> <tr> <td>Model || BiLSTM w/ relation names || Relation Input Views || rel_names</td> <td>88.9</td> <td>78.96</td> </tr> <tr> <td>Model || Hier-Res-BiLSTM (HR-BiLSTM) || Relation Input Views || words + rel_names</td> <td>93.3</td> <td>82.53</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1053 | 8 | acl2017 | Table 2 shows the results on two relation detection tasks. The AMPCNN result is from (Yin et al., 2016), which yielded state-of-the-art scores by outperforming several attention-based methods. 
We re-implemented the BiCNN model from (Yih et al., 2015), where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p < 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions. | [1, 2, 2, 2, 1, 1, 1, 1] | ['Table 2 shows the results on two relation detection tasks.', 'The AMPCNN result is from (Yin et al., 2016), which yielded state-of-the-art scores by outperforming several attention-based methods.', 'We re-implemented the BiCNN model from (Yih et al., 2015), where both questions and relations are represented with the word hash trick on character tri-grams.', 'The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions.', 'Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p < 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively).', 'Note that using only relation names instead of words results in a weaker baseline BiLSTM model.', 'The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%).', 'However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions.'] | [None, ['AMPCNN (Yin et al. 
2016)'], None, ['BiLSTM w/ words', 'WebQSP'], ['Hier-Res-BiLSTM (HR-BiLSTM)', 'BiLSTM w/ words', 'SimpleQuestions', 'WebQSP'], ['BiLSTM w/ relation names'], ['BiLSTM w/ relation names', 'SimpleQuestions'], ['BiLSTM w/ relation names', 'WebQSP', 'SimpleQuestions']] | 1 |
P17-1085table_1 | Performance on ACE05 test dataset. The dashed (“–”) performance numbers were missing in the original paper (Miwa and Bansal, 2016). | 2 | [['Method', 'Li and Ji (2014)'], ['Method', 'SPTree'], ['Method', 'SPTree1'], ['Method', 'Our Model']] | 2 | [['Entity', 'P'], ['Entity', 'R'], ['Entity', 'F1'], ['Relation', 'P'], ['Relation', 'R'], ['Relation', 'F1'], ['Entity+Relation', 'P'], ['Entity+Relation', 'R'], ['Entity+Relation', 'F1']] | [['0.852', '0.769', '0.808', '0.689', '0.419', '0.521', '0.654', '0.398', '0.495'], ['0.829', '0.839', '0.834', '-', '-', '-', '0.572', '0.54', '0.556'], ['0.823', '0.839', '0.831', '0.605', '0.553', '0.578', '0.578', '0.529', '0.553'], ['0.84', '0.813', '0.826', '0.579', '0.54', '0.559', '0.555', '0.518', '0.536']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Entity || P</th> <th>Entity || R</th> <th>Entity || F1</th> <th>Relation || P</th> <th>Relation || R</th> <th>Relation || F1</th> <th>Entity+Relation || P</th> <th>Entity+Relation || R</th> <th>Entity+Relation || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Li and Ji (2014)</td> <td>0.852</td> <td>0.769</td> <td>0.808</td> <td>0.689</td> <td>0.419</td> <td>0.521</td> <td>0.654</td> <td>0.398</td> <td>0.495</td> </tr> <tr> <td>Method || SPTree</td> <td>0.829</td> <td>0.839</td> <td>0.834</td> <td>-</td> <td>-</td> <td>-</td> <td>0.572</td> <td>0.54</td> <td>0.556</td> </tr> <tr> <td>Method || SPTree1</td> <td>0.823</td> <td>0.839</td> <td>0.831</td> <td>0.605</td> <td>0.553</td> <td>0.578</td> <td>0.578</td> <td>0.529</td> <td>0.553</td> </tr> <tr> <td>Method || Our Model</td> <td>0.84</td> <td>0.813</td> <td>0.826</td> <td>0.579</td> <td>0.54</td> <td>0.559</td> <td>0.555</td> <td>0.518</td> <td>0.536</td> </tr> </tbody></table> | Table 1 | table_1 | P17-1085 | 7 | acl2017 | Table 1 compares the performance of our system with respect to the baselines on ACE05 dataset. We find that our joint model significantly outperforms the joint structured perceptron model (Li and Ji, 2014) on both entities and relations, despite the unavailability of features such as dependency trees, POS tags, etc. However, if we compare our model to the SPTree models, then we find that their model has better recall on both entities and relations. | [1, 1, 1] | ['Table 1 compares the performance of our system with respect to the baselines on ACE05 dataset.', 'We find that our joint model significantly outperforms the joint structured perceptron model (Li and Ji, 2014) on both entities and relations, despite the unavailability of features such as dependency trees, POS tags, etc.', 'However, if we compare our model to the SPTree models, then we find that their model has better recall on both entities and relations.'] | [None, ['Our Model', 'Li and Ji (2014)'], ['Our Model', 'SPTree', 'SPTree1', 'R', 'Entity', 'Relation', 'Entity+Relation']] | 1 |
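The P17-1085 row above reports P/R/F1 triples. F1 is the harmonic mean of precision and recall, so each triple can be cross-checked directly; a quick verification against two of the reported entity scores:

```python
# Sanity-check P/R/F1 triples from the P17-1085 row above.
def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * p * r / (p + r)

# "Our Model" entity scores: P=0.84, R=0.813 -> reported F1 = 0.826
print(round(f1(0.84, 0.813), 3))   # 0.826
# Li and Ji (2014) entity scores: P=0.852, R=0.769 -> reported F1 = 0.808
print(round(f1(0.852, 0.769), 3))  # 0.808
```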
P17-1171table_5 | Feature ablation analysis of the paragraph representations of our Document Reader. Results are reported on the SQuAD development set. | 2 | [['Features', 'Full'], ['Features', 'No ftoken'], ['Features', 'No fexact match'], ['Features', 'No faligned'], ['Features', 'No faligned and fexact match']] | 1 | [['F1']] | [['78.8'], ['78.0'], ['77.3'], ['77.3'], ['59.4']] | column | ['F1'] | ['Full'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Features || Full</td> <td>78.8</td> </tr> <tr> <td>Features || No ftoken</td> <td>78.0</td> </tr> <tr> <td>Features || No fexact match</td> <td>77.3</td> </tr> <tr> <td>Features || No faligned</td> <td>77.3</td> </tr> <tr> <td>Features || No faligned and fexact match</td> <td>59.4</td> </tr> </tbody></table> | Table 5 | table_5 | P17-1171 | 8 | acl2017 | We conducted an ablation analysis on the feature vector of paragraph tokens. As shown in Table 5 all the features contribute to the performance of our final system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%. More interestingly, if we remove both faligned and fexact match, the performance drops dramatically. | [2, 1, 1, 1] | ['We conducted an ablation analysis on the feature vector of paragraph tokens.', 'As shown in Table 5 all the features contribute to the performance of our final system.', 'Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%.', 'More interestingly, if we remove both faligned and fexact match, the performance drops dramatically.'] | [None, ['Full'], ['No faligned'], ['No faligned and fexact match']] | 1 |
P17-1171table_6 | Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Finetune (DS): Document Reader models trained on SQuAD and fine-tuned on each DS training set independently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant supervision (DS) training sets jointly. YodaQA results are extracted from https://github.com/brmson/ yodaqa/wiki/Benchmarks and use additional resources such as Freebase and DBpedia, see Section 2. | 2 | [['Dataset', 'SQuAD (All Wikipedia)'], ['Dataset', 'CuratedTREC'], ['Dataset', 'WebQuestions'], ['Dataset', 'WikiMovies']] | 2 | [['YodaQA', '-'], ['DrQA', 'SQuAD'], ['DrQA', '+Fine-tune (DS)'], ['DrQA', '+Multitask (DS)']] | [['n/a', '27.1', '28.4', '29.8'], ['31.3', '19.7', '25.7', '25.4'], ['39.8', '11.8', '19.5', '20.7'], ['n/a', '24.5', '34.3', '36.5']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['+Multitask (DS)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YodaQA || -</th> <th>DrQA || SQuAD</th> <th>DrQA || +Fine-tune (DS)</th> <th>DrQA || +Multitask (DS)</th> </tr> </thead> <tbody> <tr> <td>Dataset || SQuAD (All Wikipedia)</td> <td>n/a</td> <td>27.1</td> <td>28.4</td> <td>29.8</td> </tr> <tr> <td>Dataset || CuratedTREC</td> <td>31.3</td> <td>19.7</td> <td>25.7</td> <td>25.4</td> </tr> <tr> <td>Dataset || WebQuestions</td> <td>39.8</td> <td>11.8</td> <td>19.5</td> <td>20.7</td> </tr> <tr> <td>Dataset || WikiMovies</td> <td>n/a</td> <td>24.5</td> <td>34.3</td> <td>36.5</td> </tr> </tbody></table> | Table 6 | table_6 | P17-1171 | 9 | acl2017 | Table 6 presents the results. Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets. We are interested in a single, full system that can answer any question using Wikipedia. 
The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision. However performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from SQuAD to Multitask (DS) however is likely not from task transfer as fine-tuning on each dataset alone using DS also gives improvements, showing that it is the introduction of extra data in the same domain that helps. Nevertheless, the best single model that we can find is our overall goal, and that is the Multitask (DS) system. We compare to an unconstrained QA system using redundant resources (not just Wikipedia), YodaQA (Baudis, 2015), giving results which were previously reported on CuratedTREC and WebQuestions. Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on CuratedTREC (31.3 vs. 25.4). The gap is slightly bigger on WebQuestions, likely because this dataset was created from the specific structure of Freebase which YodaQA uses directly.
| [1, 1, 2, 2, 1, 2, 1, 1, 1, 1] | ['Table 6 presents the results.', 'Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets.', 'We are interested in a single, full system that can answer any question using Wikipedia.', 'The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision.', 'However performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring.', 'The majority of the improvement from SQuAD to Multitask (DS) however is likely not from task transfer as fine-tuning on each dataset alone using DS also gives improvements, showing that it is the introduction of extra data in the same domain that helps.', 'Nevertheless, the best single model that we can find is our overall goal, and that is the Multitask (DS) system.', 'We compare to an unconstrained QA system using redundant resources (not just Wikipedia), YodaQA (Baudis, 2015), giving results which were previously reported on CuratedTREC and WebQuestions.', 'Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on CuratedTREC (31.3 vs. 25.4).', 'The gap is slightly bigger on WebQuestions, likely because this dataset was created from the specific structure of Freebase which YodaQA uses directly.'] | [None, ['DrQA', 'Dataset'], ['SQuAD'], ['SQuAD'], ['SQuAD'], ['+Multitask (DS)'], ['+Multitask (DS)'], ['YodaQA', 'CuratedTREC', 'WebQuestions'], ['YodaQA', 'CuratedTREC', '+Multitask (DS)'], ['WebQuestions', 'YodaQA']] | 1 |
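The caption of the P17-1171 row above notes that scores are top-1 exact match "using SQuAD eval script". The key step is answer normalization before string comparison; this is a common reimplementation of that normalization (lowercase, strip punctuation, drop articles, collapse whitespace), not the dataset's own code:

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, remove punctuation and articles, and collapse
    whitespace, mirroring SQuAD-style answer normalization."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list) -> bool:
    """True if the normalized prediction matches any normalized gold answer."""
    return normalize_answer(prediction) in {normalize_answer(g) for g in gold_answers}

print(exact_match("The Eiffel Tower!", ["eiffel tower"]))  # True
```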
P17-1176table_3 | Comparison with previous work on Spanish-French and German-French translation tasks from the Europarl corpus. English is treated as the pivot language. The likelihood method uses 100K parallel source-target sentences, which are not available for other methods. | 3 | [['Cheng et al. (2016a)', 'Method', 'pivot'], ['Cheng et al. (2016a)', 'Method', 'hard'], ['Cheng et al. (2016a)', 'Method', 'soft'], ['Cheng et al. (2016a)', 'Method', 'likelihood'], ['Ours', 'Method', 'Ours sent-beam'], ['Ours', 'Method', 'word-sampling']] | 1 | [['Es→ Fr'], ['De→ Fr']] | [['29.79', '23.70'], ['29.93', '23.88'], ['30.57', '23.79'], ['32.59', '25.93'], ['31.64', '24.39'], ['33.86', '27.03']] | column | ['BLEU', 'BLEU'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Es→ Fr</th> <th>De→ Fr</th> </tr> </thead> <tbody> <tr> <td>Cheng et al. (2016a) || Method || pivot</td> <td>29.79</td> <td>23.70</td> </tr> <tr> <td>Cheng et al. (2016a) || Method || hard</td> <td>29.93</td> <td>23.88</td> </tr> <tr> <td>Cheng et al. (2016a) || Method || soft</td> <td>30.57</td> <td>23.79</td> </tr> <tr> <td>Cheng et al. (2016a) || Method || likelihood</td> <td>32.59</td> <td>25.93</td> </tr> <tr> <td>Ours || Method || Ours sent-beam</td> <td>31.64</td> <td>24.39</td> </tr> <tr> <td>Ours || Method || word-sampling</td> <td>33.86</td> <td>27.03</td> </tr> </tbody></table> | Table 3 | table_3 | P17-1176 | 6 | acl2017 | Table 3 gives BLEU scores on the Europarl corpus of our best performing sentence-level method (sent-beam) and word-level method (word-sampling) compared with pivot-based methods (Cheng et al., 2016a). We use the same data preprocessing as (Cheng et al., 2016a). We find that both the sent-beam and word-sampling methods outperform the pivot-based approaches in a zero-resource scenario across language pairs. 
Our word-sampling method improves over the best performing zero-resource soft method on Spanish-French translation by +3.29 BLEU points and German-French translation by +3.24 BLEU points. In addition, the word-sampling method surprisingly obtains improvement over the likelihood method, which leverages a source-target parallel corpus. | [1, 2, 1, 1, 1] | ['Table 3 gives BLEU scores on the Europarl corpus of our best performing sentence-level method (sent-beam) and word-level method (word-sampling) compared with pivot-based methods (Cheng et al., 2016a).', 'We use the same data preprocessing as (Cheng et al., 2016a).', 'We find that both the sent-beam and word-sampling methods outperform the pivot-based approaches in a zero-resource scenario across language pairs.', 'Our word-sampling method improves over the best performing zero-resource soft method on Spanish-French translation by +3.29 BLEU points and German-French translation by +3.24 BLEU points.', 'In addition, the word-sampling method surprisingly obtains improvement over the likelihood method, which leverages a source-target parallel corpus.'] | [['Ours sent-beam', 'word-sampling', 'Cheng et al. (2016a)'], ['Cheng et al. (2016a)'], ['Ours sent-beam', 'word-sampling', 'pivot'], ['word-sampling', 'soft', 'Es→ Fr'], ['word-sampling', 'likelihood']] | 1 |
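The description in the P17-1176 row above claims gains of +3.29 and +3.24 BLEU over the zero-resource "soft" baseline. A quick check of that arithmetic against the row's contents:

```python
# Verify the claimed gains of word-sampling over the "soft" baseline
# in the P17-1176 row above (values are BLEU scores from the table).
soft = {"Es-Fr": 30.57, "De-Fr": 23.79}
word_sampling = {"Es-Fr": 33.86, "De-Fr": 27.03}

gains = {pair: round(word_sampling[pair] - soft[pair], 2) for pair in soft}
print(gains)  # {'Es-Fr': 3.29, 'De-Fr': 3.24}
```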
P17-1185table_2 | Comparison of the methods in terms of the semantic similarity task. Each entry represents the Spearman’s correlation between predicted similarities and the manually assessed ones. | 4 | [['Dim. d', 'd = 100', 'Algorithm', 'SGD-SGNS'], ['Dim. d', 'd = 100', 'Algorithm', 'SVD-SPPMI'], ['Dim. d', 'd = 100', 'Algorithm', 'RO-SGNS'], ['Dim. d', 'd = 200', 'Algorithm', 'SGD-SGNS'], ['Dim. d', 'd = 200', 'Algorithm', 'SVD-SPPMI'], ['Dim. d', 'd = 200', 'Algorithm', 'RO-SGNS'], ['Dim. d', 'd = 500', 'Algorithm', 'SGD-SGNS'], ['Dim. d', 'd = 500', 'Algorithm', 'SVD-SPPMI'], ['Dim. d', 'd = 500', 'Algorithm', 'RO-SGNS']] | 1 | [['ws-sim'], ['ws-rel'], ['ws-full'], ['simlex'], ['men']] | [['0.719', '0.57', '0.662', '0.288', '0.645'], ['0.722', '0.585', '0.669', '0.317', '0.686'], ['0.729', '0.597', '0.677', '0.322', '0.683'], ['0.733', '0.584', '0.677', '0.317', '0.664'], ['0.747', '0.625', '0.694', '0.347', '0.71'], ['0.757', '0.647', '0.708', '0.353', '0.701'], ['0.738', '0.6', '0.688', '0.35', '0.712'], ['0.765', '0.639', '0.707', '0.38', '0.737'], ['0.767', '0.654', '0.715', '0.383', '0.732']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['RO-SGNS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ws-sim</th> <th>ws-rel</th> <th>ws-full</th> <th>simlex</th> <th>men</th> </tr> </thead> <tbody> <tr> <td>Dim. d || d = 100 || Algorithm || SGD-SGNS</td> <td>0.719</td> <td>0.57</td> <td>0.662</td> <td>0.288</td> <td>0.645</td> </tr> <tr> <td>Dim. d || d = 100 || Algorithm || SVD-SPPMI</td> <td>0.722</td> <td>0.585</td> <td>0.669</td> <td>0.317</td> <td>0.686</td> </tr> <tr> <td>Dim. d || d = 100 || Algorithm || RO-SGNS</td> <td>0.729</td> <td>0.597</td> <td>0.677</td> <td>0.322</td> <td>0.683</td> </tr> <tr> <td>Dim. d || d = 200 || Algorithm || SGD-SGNS</td> <td>0.733</td> <td>0.584</td> <td>0.677</td> <td>0.317</td> <td>0.664</td> </tr> <tr> <td>Dim. 
d || d = 200 || Algorithm || SVD-SPPMI</td> <td>0.747</td> <td>0.625</td> <td>0.694</td> <td>0.347</td> <td>0.71</td> </tr> <tr> <td>Dim. d || d = 200 || Algorithm || RO-SGNS</td> <td>0.757</td> <td>0.647</td> <td>0.708</td> <td>0.353</td> <td>0.701</td> </tr> <tr> <td>Dim. d || d = 500 || Algorithm || SGD-SGNS</td> <td>0.738</td> <td>0.6</td> <td>0.688</td> <td>0.35</td> <td>0.712</td> </tr> <tr> <td>Dim. d || d = 500 || Algorithm || SVD-SPPMI</td> <td>0.765</td> <td>0.639</td> <td>0.707</td> <td>0.38</td> <td>0.737</td> </tr> <tr> <td>Dim. d || d = 500 || Algorithm || RO-SGNS</td> <td>0.767</td> <td>0.654</td> <td>0.715</td> <td>0.383</td> <td>0.732</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1185 | 7 | acl2017 | However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). Table 2 presents the comparison of the methods in terms of it. We see that our method outperforms the competitors on all datasets except for men dataset where it obtains slightly worse results. Moreover, it is important that the higher dimension entails higher performance gain of our method in comparison to the competitors. | [2, 1, 1, 1] | ['However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2).', 'Table 2 presents the comparison of the methods in terms of it.', 'We see that our method outperforms the competitors on all datasets except for men dataset where it obtains slightly worse results.', 'Moreover, it is important that the higher dimension entails higher performance gain of our method in comparison to the competitors.'] | [None, None, ['RO-SGNS', 'ws-sim', 'ws-rel', 'ws-full', 'simlex', 'men'], ['RO-SGNS', 'Dim. d']] | 1 |
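The metric in the P17-1185 row above is Spearman's correlation between predicted and human similarity scores, i.e. the Pearson correlation of the two rank vectors. A from-scratch sketch (no tie handling; a production implementation such as `scipy.stats.spearmanr` averages tied ranks):

```python
# Minimal Spearman correlation: Pearson correlation of rank vectors.
def ranks(xs):
    """Ranks 1..n by ascending value (ties not averaged in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# A perfectly monotone relationship gives correlation 1.0.
print(spearman([0.1, 0.4, 0.9], [10, 20, 30]))  # 1.0
```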
P17-1187table_1 | Evaluation results of word similarity computation. | 2 | [['Model', 'CBOW'], ['Model', 'GloVe'], ['Model', 'Skip-gram'], ['Model', 'SSA'], ['Model', 'SAC'], ['Model', 'MST'], ['Model', 'SAT']] | 1 | [['Wordsim-240'], ['Wordsim-297']] | [['57.7', '61.1'], ['59.8', '58.7'], ['58.5', '63.3'], ['58.9', '64'], ['59', '63.1'], ['59.2', '62.8'], ['63.2', '65.6']] | column | ['correlation', 'correlation'] | ['SSA', 'SAC', 'SAT', 'MST'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Wordsim-240</th> <th>Wordsim-297</th> </tr> </thead> <tbody> <tr> <td>Model || CBOW</td> <td>57.7</td> <td>61.1</td> </tr> <tr> <td>Model || GloVe</td> <td>59.8</td> <td>58.7</td> </tr> <tr> <td>Model || Skip-gram</td> <td>58.5</td> <td>63.3</td> </tr> <tr> <td>Model || SSA</td> <td>58.9</td> <td>64</td> </tr> <tr> <td>Model || SAC</td> <td>59</td> <td>63.1</td> </tr> <tr> <td>Model || MST</td> <td>59.2</td> <td>62.8</td> </tr> <tr> <td>Model || SAT</td> <td>63.2</td> <td>65.6</td> </tr> </tbody></table> | Table 1 | table_1 | P17-1187 | 6 | acl2017 | Table 1 shows the results of these models for word similarity computation. From the results we can observe that: (1) Our SAT model outperforms other models, including all baselines, on both two test sets. This indicates that, by utilizing sememe annotation properly, our model can better capture the semantic relations of words, and learn more accurate word embeddings. (2) The SSA model represents a word with the average of its sememe embeddings. In general, SSA model performs slightly better than baselines, which tentatively proves that sememe information is helpful. The reason is that words which share common sememe embeddings will benefit from each other. 
Especially, those words with lower frequency, which cannot be learned sufficiently using conventional WRL models, in contrast, can obtain better word embeddings from SSA simply because their sememe embeddings can be trained sufficiently through other words. (3) The SAT model performs much better than SSA and SAC. This indicates that SAT can obtain more precise sense distribution of a word. The reason has been mentioned above that, different from SAC using only one target word as attention for WSD, SAT adopts richer contextual information as attention for WSD. (4) SAT works better than MST, and we can conclude that a soft disambiguation over senses prevents inevitable errors when selecting only one most-probable sense. The result makes sense because, for many words, their various senses are not always entirely different from each other, but share some common elements. In some contexts, a single sense may not convey the exact meaning of this word. | [1, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 2, 2] | ['Table 1 shows the results of these models for word similarity computation.', 'From the results we can observe that: (1) Our SAT model outperforms other models, including all baselines, on both two test sets.', 'This indicates that, by utilizing sememe annotation properly, our model can better capture the semantic relations of words, and learn more accurate word embeddings.', '(2) The SSA model represents a word with the average of its sememe embeddings.', 'In general, SSA model performs slightly better than baselines, which tentatively proves that sememe information is helpful.', 'The reason is that words which share common sememe embeddings will benefit from each other.', 'Especially, those words with lower frequency, which cannot be learned sufficiently using conventional WRL models, in contrast, can obtain better word embeddings from SSA simply because their sememe embeddings can be trained sufficiently through other words.', '(3) The SAT model performs much better than SSA 
and SAC.', 'This indicates that SAT can obtain more precise sense distribution of a word.', 'The reason has been mentioned above that, different from SAC using only one target word as attention for WSD, SAT adopts richer contextual information as attention for WSD.', '(4) SAT works better than MST, and we can conclude that a soft disambiguation over senses prevents inevitable errors when selecting only one most-probable sense.', 'The result makes sense because, for many words, their various senses are not always entirely different from each other, but share some common elements.', 'In some contexts, a single sense may not convey the exact meaning of this word.'] | [None, ['SAT', 'Wordsim-240', 'Wordsim-297'], None, ['SSA'], ['SSA'], None, ['SSA'], ['SAT', 'SSA', 'SAC'], ['SAT'], ['SAC'], ['SAT', 'MST'], None, None] | 1 |
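The following row (P17-1187, Table 2) reports both top-1 accuracy and mean rank for word analogy. Assuming "mean rank" is the average rank of the gold answer and "accuracy" the fraction of queries where it ranks first, both reduce to one pass over per-query gold ranks:

```python
# Top-1 accuracy and mean rank from the gold answer's rank per
# analogy query (rank 1 means the gold word was the top prediction).
def analogy_metrics(gold_ranks):
    accuracy = sum(r == 1 for r in gold_ranks) / len(gold_ranks)
    mean_rank = sum(gold_ranks) / len(gold_ranks)
    return accuracy, mean_rank

print(analogy_metrics([1, 1, 3, 1, 14]))  # (0.6, 4.0)
```

This also explains the row's observation that a model can have decent accuracy but a poor mean rank: a few very large ranks inflate the mean without changing the top-1 count.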
P17-1187table_2 | Evaluation results of word analogy inference. | 2 | [['Model', 'CBOW'], ['Model', 'GloVe'], ['Model', 'Skip-gram'], ['Model', 'SSA'], ['Model', 'SAC'], ['Model', 'MST'], ['Model', 'SAT']] | 2 | [['Accuracy', 'Capital'], ['Accuracy', 'City'], ['Accuracy', 'Relationship'], ['Accuracy', 'All'], ['Mean Rank', 'Capital'], ['Mean Rank', 'City'], ['Mean Rank', 'Relationship'], ['Mean Rank', 'All']] | [['49.8', '85.7', '86', '64.2', '36.98', '1.23', '62.64', '37.62'], ['57.3', '74.3', '81.6', '65.8', '19.09', '1.71', '3.58', '12.63'], ['66.8', '93.7', '76.8', '73.4', '137.19', '1.07', '2.95', '83.51'], ['62.3', '93.7', '81.6', '71.9', '45.74', '1.06', '3.33', '28.52'], ['61.6', '95.4', '77.9', '70.8', '19.08', '1.02', '2.18', '12.18'], ['65.7', '95.4', '82.7', '74.5', '50.29', '1.05', '2.48', '31.05'], ['83.2', '98.9', '82.4', '85.3', '14.42', '1.01', '2.63', '9.48']] | column | ['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Mean Rank', 'Mean Rank', 'Mean Rank', 'Mean Rank'] | ['SSA', 'SAC', 'SAT', 'MST'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || Capital</th> <th>Accuracy || City</th> <th>Accuracy || Relationship</th> <th>Accuracy || All</th> <th>Mean Rank || Capital</th> <th>Mean Rank || City</th> <th>Mean Rank || Relationship</th> <th>Mean Rank || All</th> </tr> </thead> <tbody> <tr> <td>Model || CBOW</td> <td>49.8</td> <td>85.7</td> <td>86</td> <td>64.2</td> <td>36.98</td> <td>1.23</td> <td>62.64</td> <td>37.62</td> </tr> <tr> <td>Model || GloVe</td> <td>57.3</td> <td>74.3</td> <td>81.6</td> <td>65.8</td> <td>19.09</td> <td>1.71</td> <td>3.58</td> <td>12.63</td> </tr> <tr> <td>Model || Skip-gram</td> <td>66.8</td> <td>93.7</td> <td>76.8</td> <td>73.4</td> <td>137.19</td> <td>1.07</td> <td>2.95</td> <td>83.51</td> </tr> <tr> <td>Model || SSA</td> <td>62.3</td> <td>93.7</td> <td>81.6</td> <td>71.9</td> <td>45.74</td> <td>1.06</td> <td>3.33</td> <td>28.52</td> </tr> <tr> <td>Model || 
SAC</td> <td>61.6</td> <td>95.4</td> <td>77.9</td> <td>70.8</td> <td>19.08</td> <td>1.02</td> <td>2.18</td> <td>12.18</td> </tr> <tr> <td>Model || MST</td> <td>65.7</td> <td>95.4</td> <td>82.7</td> <td>74.5</td> <td>50.29</td> <td>1.05</td> <td>2.48</td> <td>31.05</td> </tr> <tr> <td>Model || SAT</td> <td>83.2</td> <td>98.9</td> <td>82.4</td> <td>85.3</td> <td>14.42</td> <td>1.01</td> <td>2.63</td> <td>9.48</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1187 | 7 | acl2017 | Table 2 shows the evaluation results of these models for word analogy inference. From the table, we can observe that: (1) The SAT model performs best among all models, and the superiority is more significant than that on word similarity computation. This indicates that SAT will enhance the modeling of implicit relations between word embeddings in the semantic space. The reason is that sememes annotated to word senses have encoded these word relations. For example, capital and Cuba are two sememes of the word “Havana”, which provide explicit semantic relations between the words “Cuba” and “Havana”. (2) The SAT model does well on both classes of Capital and City, because some words in these classes have low frequencies, while their sememes occur so many times that sememe embeddings can be learned sufficiently. With these sememe embeddings, these low-frequent words can be learned more efficiently by SAT. (3) It seems that CBOW works better than SAT on Relationship class. Whereas for the mean rank, CBOW gets the worst results, which indicates the performance of CBOW is unstable. On the contrary, although the accuracy of SAT is a bit lower than that of CBOW, SAT seldom gives an outrageous prediction. In most wrong cases, SAT predicts the word “grandfather” instead of “grandmother”, which is not completely nonsense, because in HowNet the words “grandmother”, “grandfather”, “grandma” and some other similar words share four common sememes while only one sememe of them are different. 
These similar sememes make the attention process less discriminative with each other. But for the wrong cases of CBOW, we find that many mistakes are about words with low frequencies, such as “stepdaughter” which occurs merely for 358 times. Considering sememes may relieve this problem. | [1, 1, 2, 2, 2, 1, 2, 1, 1, 1, 2, 2, 2, 2] | ['Table 2 shows the evaluation results of these models for word analogy inference.', 'From the table, we can observe that: (1) The SAT model performs best among all models, and the superiority is more significant than that on word similarity computation.', 'This indicates that SAT will enhance the modeling of implicit relations between word embeddings in the semantic space.', 'The reason is that sememes annotated to word senses have encoded these word relations.', 'For example, capital and Cuba are two sememes of the word “Havana”, which provide explicit semantic relations between the words “Cuba” and “Havana”.', '(2) The SAT model does well on both classes of Capital and City, because some words in these classes have low frequencies, while their sememes occur so many times that sememe embeddings can be learned sufficiently.', 'With these sememe embeddings, these low-frequent words can be learned more efficiently by SAT.', '(3) It seems that CBOW works better than SAT on Relationship class.', 'Whereas for the mean rank, CBOW gets the worst results, which indicates the performance of CBOW is unstable.', 'On the contrary, although the accuracy of SAT is a bit lower than that of CBOW, SAT seldom gives an outrageous prediction.', 'In most wrong cases, SAT predicts the word “grandfather” instead of “grandmother”, which is not completely nonsense, because in HowNet the words “grandmother”, “grandfather”, “grandma” and some other similar words share four common sememes while only one sememe of them are different.', 'These similar sememes make the attention process less discriminative with each other.', 'But for the wrong cases of CBOW, we find that many mistakes are about words with low frequencies, such as “stepdaughter” which occurs merely for 358 times.', 'Considering sememes may relieve this problem.'] | [None, ['SAT'], ['SAT'], None, None, ['SAT', 'Capital', 'City'], ['SAT'], ['CBOW', 'SAT', 'Relationship'], ['CBOW', 'Mean Rank'], ['SAT', 'CBOW'], ['SAT'], None, ['CBOW'], None] | 1 |
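The P17-1187 table_2 description claims SAT is best among all models; a quick best-in-column check (values copied from the record's "All" accuracy and mean-rank columns, not part of the dataset itself; higher accuracy is better, lower rank is better):

```python
# Best-in-column check for P17-1187 table_2 ("All" accuracy and mean rank).
overall_acc = {"CBOW": 64.2, "GloVe": 65.8, "Skip-gram": 73.4,
               "SSA": 71.9, "SAC": 70.8, "MST": 74.5, "SAT": 85.3}
overall_rank = {"CBOW": 37.62, "GloVe": 12.63, "Skip-gram": 83.51,
                "SSA": 28.52, "SAC": 12.18, "MST": 31.05, "SAT": 9.48}
best_acc = max(overall_acc, key=overall_acc.get)      # higher is better
best_rank = min(overall_rank, key=overall_rank.get)   # lower is better
assert best_acc == best_rank == "SAT"
```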
P17-1189table_2 | Results of Chinese SRL tested on CPB and CSB with automatic PoS tagging, using standard LSTM RNN model (Experiment 1). | 2 | [['Corpus', 'CSB'], ['Corpus', 'CPB']] | 1 | [['Pr. (%)'], ['Rec. (%)'], ['F1 (%)']] | [['75.8', '73.45', '74.61'], ['76.75', '73.03', '74.84']] | column | ['Pr. (%)', 'Rec. (%)', 'F1 (%)'] | ['CSB', 'CPB'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pr. (%)</th> <th>Rec. (%)</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Corpus || CSB</td> <td>75.8</td> <td>73.45</td> <td>74.61</td> </tr> <tr> <td>Corpus || CPB</td> <td>76.75</td> <td>73.03</td> <td>74.84</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1189 | 8 | acl2017 | 5.2 Results . Performance on Chinese SemBank. Table 2 gives the results of Experiment 1. We see that precision on CPB with automatic PoS tagging is about 0.9 percentage point higher than that on CSB, while recall is about 0.4 percentage point lower, and the gap between F1 scores on CPB and CSB is not significant, which is only about 0.3 percentage point, although the size of CSB is smaller. | [2, 2, 1, 1] | ['5.2 Results .', 'Performance on Chinese SemBank.', 'Table 2 gives the results of Experiment 1.', 'We see that precision on CPB with automatic PoS tagging is about 0.9 percentage point higher than that on CSB, while recall is about 0.4 percentage point lower, and the gap between F1 scores on CPB and CSB is not significant, which is only about 0.3 percentage point, although the size of CSB is smaller.'] | [None, None, None, ['Pr. (%)', 'CPB', 'CSB', 'Rec. (%)', 'F1 (%)']] | 1 |
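The gaps quoted in the P17-1189 table_2 description ("about 0.9", "about 0.4", "about 0.3" percentage points) can be recomputed directly from the two row contents above (variable names are mine):

```python
# CPB vs. CSB gaps for P17-1189 table_2 (precision, recall, F1).
csb = {"pr": 75.80, "rec": 73.45, "f1": 74.61}
cpb = {"pr": 76.75, "rec": 73.03, "f1": 74.84}
pr_gap = round(cpb["pr"] - csb["pr"], 2)     # CPB precision higher
rec_gap = round(csb["rec"] - cpb["rec"], 2)  # CPB recall lower
f1_gap = round(cpb["f1"] - csb["f1"], 2)     # small F1 difference
assert (pr_gap, rec_gap, f1_gap) == (0.95, 0.42, 0.23)
```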
P17-1189table_3 | Result comparison on CPB dataset. Compared to learning with single corpus using bi-LSTM model (77.09%), learning with CSB can improve the performance by at list 0.59%. Also the best score (79.67%) was achieved by the PNN GRA model. | 3 | [['Method', '-', 'Xue (2008) ME'], ['Method', '-', 'Collobert and Weston (2008) MTL'], ['Method', '-', 'Ding and Chang (2009) CRF'], ['Method', '-', 'Yang et al. (2014) Multi-Predicate'], ['Method', '-', 'Wang et al. (2015) bi-LSTM'], ['Method', '-', 'Sha et al. (2016) bi-LSTM+QOM'], ['Method', 'With external language resources', 'Wang et al. (2015) +Gigaword embedding'], ['Method', 'With external language resources', 'Wang et al. (2015) +NetBank embedding'], ['Method', 'With external language resources', 'Guo et al. (2016) +Relataion Classification'], ['Method', 'With CSB corpus', 'bi-LSTM+CSB embedding'], ['Method', 'With CSB corpus', 'Two-column finetuning'], ['Method', 'With CSB corpus', 'Two-column progressive (ours)'], ['Method', 'With CSB corpus', 'Two-column Progressive+GRA (ours)']] | 1 | [['F1 (%)']] | [['71.9'], ['74.05'], ['72.64'], ['75.31'], ['77.09 (+0.00)'], ['77.69'], ['77.21'], ['77.59'], ['75.46'], ['77.68 (+0.59)'], ['78.42 (+1.33)'], ['79.30 (+2.21)'], ['79.67 (+2.58)']] | column | ['F1 (%)'] | ['Two-column progressive (ours)', 'Two-column Progressive+GRA (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Method || - || Xue (2008) ME</td> <td>71.9</td> </tr> <tr> <td>Method || - || Collobert and Weston (2008) MTL</td> <td>74.05</td> </tr> <tr> <td>Method || - || Ding and Chang (2009) CRF</td> <td>72.64</td> </tr> <tr> <td>Method || - || Yang et al. (2014) Multi-Predicate</td> <td>75.31</td> </tr> <tr> <td>Method || - || Wang et al. (2015) bi-LSTM</td> <td>77.09 (+0.00)</td> </tr> <tr> <td>Method || - || Sha et al. 
(2016) bi-LSTM+QOM</td> <td>77.69</td> </tr> <tr> <td>Method || With external language resources || Wang et al. (2015) +Gigaword embedding</td> <td>77.21</td> </tr> <tr> <td>Method || With external language resources || Wang et al. (2015) +NetBank embedding</td> <td>77.59</td> </tr> <tr> <td>Method || With external language resources || Guo et al. (2016) +Relataion Classification</td> <td>75.46</td> </tr> <tr> <td>Method || With CSB corpus || bi-LSTM+CSB embedding</td> <td>77.68 (+0.59)</td> </tr> <tr> <td>Method || With CSB corpus || Two-column finetuning</td> <td>78.42 (+1.33)</td> </tr> <tr> <td>Method || With CSB corpus || Two-column progressive (ours)</td> <td>79.30 (+2.21)</td> </tr> <tr> <td>Method || With CSB corpus || Two-column Progressive+GRA (ours)</td> <td>79.67 (+2.58)</td> </tr> </tbody></table> | Table 3 | table_3 | P17-1189 | 8 | acl2017 | Table 3 summarizes the SRL performance of previous benchmark methods and our experiments described above. Collobert and Weston only conducted their experiments on English corpus, but we notice that their approach has been implemented and tested on CPB by Wang et al. (2015), so we also put their result here for comparison. We can make several observations from these results. Our approach significantly outperforms Sha et al. (2016) by a large margin (Wilcoxon Signed Rank Test, p < 0.05), even without using GRA. This result can prove the ability of our model to capture underlying similarities between heterogeneous SRL resources. The results of methods using external language resources are also presented in Table 3. Not surprisingly, we see that the overall best F1 score, 79.67%, is achieved by the progressive nets with the GRAs. Without GRA, the F1 drops 0.37% percentage point to 79.30, confirming that gated recurrent adapter structure is more suitable for our task because it can remember what has been transferred in previous time steps. 
| [1, 2, 2, 1, 2, 1, 2, 1] | ['Table 3 summarizes the SRL performance of previous benchmark methods and our experiments described above.', 'Collobert and Weston only conducted their experiments on English corpus, but we notice that their approach has been implemented and tested on CPB by Wang et al. (2015), so we also put their result here for comparison.', 'We can make several observations from these results.', 'Our approach significantly outperforms Sha et al. (2016) by a large margin (Wilcoxon Signed Rank Test, p < 0.05), even without using GRA.', 'This result can prove the ability of our model to capture underlying similarities between heterogeneous SRL resources.', 'The results of methods using external language resources are also presented in Table 3.', 'Not surprisingly, we see that the overall best F1 score, 79.67%, is achieved by the progressive nets with the GRAs.', 'Without GRA, the F1 drops 0.37% percentage point to 79.30, confirming that gated recurrent adapter structure is more suitable for our task because it can remember what has been transferred in previous time steps.'] | [None, ['Collobert and Weston (2008) MTL', 'Wang et al. (2015) bi-LSTM'], None, ['Two-column Progressive+GRA (ours)', 'Sha et al. (2016) bi-LSTM+QOM'], None, None, ['F1 (%)', 'Two-column Progressive+GRA (ours)'], ['Two-column progressive (ours)', 'F1 (%)']] | 1 |
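In the P17-1189 table_3 contents above, the bracketed deltas such as "79.67 (+2.58)" are gains over the 77.09 bi-LSTM row marked "(+0.00)"; a small consistency check with the values copied from the record:

```python
# Delta check for P17-1189 table_3: "(+x.xx)" = F1 minus the 77.09 baseline.
baseline = 77.09
with_csb = {"bi-LSTM+CSB embedding": (77.68, 0.59),
            "Two-column finetuning": (78.42, 1.33),
            "Two-column progressive (ours)": (79.30, 2.21),
            "Two-column Progressive+GRA (ours)": (79.67, 2.58)}
for name, (f1, delta) in with_csb.items():
    assert round(f1 - baseline, 2) == delta, name
# The description's "drops 0.37 percentage point to 79.30 without GRA":
assert round(79.67 - 79.30, 2) == 0.37
```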
P17-1190table_2 | Results on SemEval textual similarity datasets (Pearson’s r × 100) when experimenting with different regularization techniques. | 4 | [['Model', 'AVG', 'Regularization', 'none'], ['Model', 'AVG', 'Regularization', 'dropout'], ['Model', 'AVG', 'Regularization', 'word dropout'], ['Model', 'LSTM', 'Regularization', 'none'], ['Model', 'LSTM', 'Regularization', 'L2'], ['Model', 'LSTM', 'Regularization', 'dropout'], ['Model', 'LSTM', 'Regularization', 'word dropout'], ['Model', 'LSTM', 'Regularization', 'scrambling'], ['Model', 'LSTM', 'Regularization', 'dropout + scrambling'], ['Model', 'LSTMAVG', 'Regularization', 'none'], ['Model', 'LSTMAVG', 'Regularization', 'dropout + scrambling'], ['Model', 'BiLSTMAVG', 'Regularization', 'dropout + scrambling']] | 1 | [['Oracle'], ['2016 STS']] | [['68.5', '68.4'], ['68.4', '68.3'], ['68.3', '68.3'], ['60.6', '59.3'], ['60.3', '56.5'], ['58.1', '55.3'], ['66.2', '65.3'], ['66.3', '65.1'], ['68.4', '68.4'], ['67.7', '67.5'], ['69.2', '68.6'], ['69.4', '68.7']] | column | ['correlation', 'correlation'] | ['dropout + scrambling'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Oracle</th> <th>2016 STS</th> </tr> </thead> <tbody> <tr> <td>Model || AVG || Regularization || none</td> <td>68.5</td> <td>68.4</td> </tr> <tr> <td>Model || AVG || Regularization || dropout</td> <td>68.4</td> <td>68.3</td> </tr> <tr> <td>Model || AVG || Regularization || word dropout</td> <td>68.3</td> <td>68.3</td> </tr> <tr> <td>Model || LSTM || Regularization || none</td> <td>60.6</td> <td>59.3</td> </tr> <tr> <td>Model || LSTM || Regularization || L2</td> <td>60.3</td> <td>56.5</td> </tr> <tr> <td>Model || LSTM || Regularization || dropout</td> <td>58.1</td> <td>55.3</td> </tr> <tr> <td>Model || LSTM || Regularization || word dropout</td> <td>66.2</td> <td>65.3</td> </tr> <tr> <td>Model || LSTM || Regularization || scrambling</td> <td>66.3</td> <td>65.1</td> </tr> <tr> <td>Model || LSTM || 
Regularization || dropout + scrambling</td> <td>68.4</td> <td>68.4</td> </tr> <tr> <td>Model || LSTMAVG || Regularization || none</td> <td>67.7</td> <td>67.5</td> </tr> <tr> <td>Model || LSTMAVG || Regularization || dropout + scrambling</td> <td>69.2</td> <td>68.6</td> </tr> <tr> <td>Model || BiLSTMAVG || Regularization || dropout + scrambling</td> <td>69.4</td> <td>68.7</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1190 | 6 | acl2017 | The results are shown in Table 2. They show that dropping entire word embeddings and scrambling input sequences is very effective in improving the result of the LSTM, while neither type of dropout improves AVG. Moreover, averaging the hidden states of the LSTM is the most effective modification to the LSTM in improving performance. All of these modifications can be combined to significantly improve the LSTM, finally allowing it to overtake AVG. | [1, 1, 1, 1] | ['The results are shown in Table 2.', 'They show that dropping entire word embeddings and scrambling input sequences is very effective in improving the result of the LSTM, while neither type of dropout improves AVG.', 'Moreover, averaging the hidden states of the LSTM is the most effective modification to the LSTM in improving performance.', 'All of these modifications can be combined to significantly improve the LSTM, finally allowing it to overtake AVG.'] | [None, ['Model', 'dropout', 'word dropout', 'scrambling', 'AVG', 'none'], ['LSTMAVG'], ['LSTMAVG', 'dropout + scrambling']] | 1 |
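The closing claim of the P17-1190 table_2 description, that the regularized LSTM variants "finally overtake AVG", checks out against the rows above (AVG with no regularization vs. the averaged-state LSTMs with dropout + scrambling; tuple layout is my own):

```python
# P17-1190 table_2: averaged-state LSTMs with dropout + scrambling vs. AVG.
avg_none = (68.5, 68.4)      # (oracle, 2016 STS), AVG with no regularization
lstm_avg = (69.2, 68.6)      # LSTMAVG, dropout + scrambling
bilstm_avg = (69.4, 68.7)    # BiLSTMAVG, dropout + scrambling
overtakes = (all(m > a for m, a in zip(lstm_avg, avg_none))
             and all(m > a for m, a in zip(bilstm_avg, avg_none)))
assert overtakes
```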
P17-1190table_3 | Results on SemEval textual similarity datasets (Pearson’s r × 100) for the GRAN architectures. The first row, marked as (no reg.) is the GRAN without any regularization. The other rows show the result of the various GRAN models using dropout and scrambling. | 2 | [['Model', 'GRAN (no reg.)'], ['Model', 'GRAN'], ['Model', 'GRAN-2'], ['Model', 'GRAN-3'], ['Model', 'GRAN-4'], ['Model', 'GRAN-5'], ['Model', 'BiGRAN']] | 1 | [['Oracle'], ['STS 2016']] | [['68', '68'], ['69.5', '68.9'], ['68.8', '68.1'], ['69', '67.2'], ['68.6', '68.1'], ['66.1', '64.8'], ['69.7', '68.4']] | column | ['correlation', 'correlation'] | ['GRAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Oracle</th> <th>STS 2016</th> </tr> </thead> <tbody> <tr> <td>Model || GRAN (no reg.)</td> <td>68</td> <td>68</td> </tr> <tr> <td>Model || GRAN</td> <td>69.5</td> <td>68.9</td> </tr> <tr> <td>Model || GRAN-2</td> <td>68.8</td> <td>68.1</td> </tr> <tr> <td>Model || GRAN-3</td> <td>69</td> <td>67.2</td> </tr> <tr> <td>Model || GRAN-4</td> <td>68.6</td> <td>68.1</td> </tr> <tr> <td>Model || GRAN-5</td> <td>66.1</td> <td>64.8</td> </tr> <tr> <td>Model || BiGRAN</td> <td>69.7</td> <td>68.4</td> </tr> </tbody></table> | Table 3 | table_3 | P17-1190 | 6 | acl2017 | In Table 3, we compare the various GRAN architectures. We find that the GRAN provides a small improvement over the best LSTM configuration, possibly because of its similarity to AVG. It also outperforms the other GRAN models, despite being the simplest. | [1, 1, 1] | ['In Table 3, we compare the various GRAN architectures.', 'We find that the GRAN provides a small improvement over the best LSTM configuration, possibly because of its similarity to AVG.', 'It also outperforms the other GRAN models, despite being the simplest.'] | [None, ['GRAN'], ['GRAN', 'GRAN (no reg.)']] | 1 |
P17-1190table_6 | Results from supervised training on the STS and SICK datasets (Pearson’s r × 100) for the GRAN architectures. The last column is the average result on the two datasets. | 2 | [['Model', 'GRAN'], ['Model', 'GRAN-2'], ['Model', 'GRAN-3'], ['Model', 'GRAN-4'], ['Model', 'GRAN-5']] | 1 | [['STS'], ['SICK'], ['Avg.']] | [['81.6', '85.3', '83.5'], ['77.4', '85.1', '81.3'], ['81.3', '85.4', '83.4'], ['80.1', '85.5', '82.8'], ['70.9', '83', '77']] | column | ['r', 'r', 'r'] | ['GRAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>STS</th> <th>SICK</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Model || GRAN</td> <td>81.6</td> <td>85.3</td> <td>83.5</td> </tr> <tr> <td>Model || GRAN-2</td> <td>77.4</td> <td>85.1</td> <td>81.3</td> </tr> <tr> <td>Model || GRAN-3</td> <td>81.3</td> <td>85.4</td> <td>83.4</td> </tr> <tr> <td>Model || GRAN-4</td> <td>80.1</td> <td>85.5</td> <td>82.8</td> </tr> <tr> <td>Model || GRAN-5</td> <td>70.9</td> <td>83</td> <td>77</td> </tr> </tbody></table> | Table 6 | table_6 | P17-1190 | 7 | acl2017 | In Table 6 we compare the various GRAN architectures under the same settings as the previous experiment. We find that the GRAN still has the best overall performance. | [1, 1] | ['In Table 6 we compare the various GRAN architectures under the same settings as the previous experiment.', 'We find that the GRAN still has the best overall performance.'] | [['GRAN'], ['GRAN']] | 1 |
P17-1191table_1 | Results on Belinkov et al. (2014)’s PPA test set. HPCD (full) is from the original paper, and it uses syntactic SkipGram. GloVe-retro is GloVe vectors retrofitted (Faruqui et al., 2015) to WordNet 3.1, and GloVe-extended refers to the synset embeddings obtained by running AutoExtend (Rothe and Sch¨utze, 2015) on GloVe. | 4 | [['System', 'HPCD (full)', 'Initialization', 'Syntactic-SG'], ['System', 'LSTM-PP', 'Initialization', 'GloVe'], ['System', 'LSTM-PP', 'Initialization', 'GloVe-retro'], ['System', 'OntoLSTM-PP', 'Initialization', 'GloVe-extended']] | 2 | [['Test', 'Acc.']] | [['88.7'], ['84.3'], ['84.8'], ['89.7']] | column | ['Acc.'] | ['OntoLSTM-PP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test || Acc.</th> </tr> </thead> <tbody> <tr> <td>System || HPCD (full) || Initialization || Syntactic-SG</td> <td>88.7</td> </tr> <tr> <td>System || LSTM-PP || Initialization || GloVe</td> <td>84.3</td> </tr> <tr> <td>System || LSTM-PP || Initialization || GloVe-retro</td> <td>84.8</td> </tr> <tr> <td>System || OntoLSTM-PP || Initialization || GloVe-extended</td> <td>89.7</td> </tr> </tbody></table> | Table 1 | table_1 | P17-1191 | 6 | acl2017 | Table 1 shows that our proposed token level embedding scheme OntoLSTM-PP outperforms the better variant of our baseline LSTM-PP (with GloVe-retro intialization) by an absolute accuracy difference of 4.9%, or a relative error reduction of 32%. OntoLSTM-PP also outperforms HPCD (full), the previous best result on this dataset. | [1, 1] | ['Table 1 shows that our proposed token level embedding scheme OntoLSTM-PP outperforms the better variant of our baseline LSTM-PP (with GloVe-retro intialization) by an absolute accuracy difference of 4.9%, or a relative error reduction of 32%.', 'OntoLSTM-PP also outperforms HPCD (full), the previous best result on this dataset.'] | [['OntoLSTM-PP', 'Initialization', 'GloVe-retro', 'Acc.'], ['OntoLSTM-PP', 'HPCD (full)']] | 1 |
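The "absolute accuracy difference of 4.9%, or a relative error reduction of 32%" arithmetic in the P17-1191 table_1 description can be reproduced from the two accuracy cells, assuming relative error reduction means the share of the baseline's error (100 − accuracy) that is removed:

```python
# P17-1191 table_1: OntoLSTM-PP (89.7) vs. LSTM-PP with GloVe-retro (84.8).
baseline, onto = 84.8, 89.7
absolute_gain = round(onto - baseline, 1)                 # accuracy points
rel_err_reduction = round(100 * (onto - baseline) / (100 - baseline))
assert absolute_gain == 4.9
assert rel_err_reduction == 32    # fraction of remaining errors removed
```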
P17-1191table_2 | Results from RBG dependency parser with features coming from various PP attachment predictors and oracle attachments. | 2 | [['System', 'RBG'], ['System', 'RBG + HPCD (full)'], ['System', 'RBG + LSTM-PP'], ['System', 'RBG + OntoLSTM-PP'], ['System', 'RBG + Oracle PP']] | 1 | [['Full UAS'], ['PPA Acc.']] | [['94.17', '88.51'], ['94.19', '89.59'], ['94.14', '86.35'], ['94.3', '90.11'], ['94.6', '98.97']] | column | ['Full UAS', 'PPA Acc.'] | ['RBG + OntoLSTM-PP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Full UAS</th> <th>PPA Acc.</th> </tr> </thead> <tbody> <tr> <td>System || RBG</td> <td>94.17</td> <td>88.51</td> </tr> <tr> <td>System || RBG + HPCD (full)</td> <td>94.19</td> <td>89.59</td> </tr> <tr> <td>System || RBG + LSTM-PP</td> <td>94.14</td> <td>86.35</td> </tr> <tr> <td>System || RBG + OntoLSTM-PP</td> <td>94.3</td> <td>90.11</td> </tr> <tr> <td>System || RBG + Oracle PP</td> <td>94.6</td> <td>98.97</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1191 | 6 | acl2017 | Table 2 shows the effect of using the PP attachment predictions as features within a dependency parser. We note there is a relatively small difference in unlabeled attachment accuracy for all dependencies (not only PP attachments), even when gold PP attachments are used as additional features to the parser. However, when gold PP attachment are used, we note a large potential improvement of 10.46 points in PP attachment accuracies (between the PPA accuracy for RBG and RBG + Oracle PP), which confirms that adding PP predictions as features is an effective approach. Our proposed model RBG + OntoLSTM-PP recovers 15% of this potential improvement, while RBG + HPCD (full) recovers 10%, which illustrates that PP attachment remains a difficult problem with plenty of room for improvements even when using a dedicated model to predict PP attachments and using its predictions in a dependency parser. 
We also note that, although we use the same predictions of the HPCD (full) model in Belinkov et al. (2014), we report different results than Belinkov et al. (2014). For example, the unlabeled attachment score (UAS) of the baselines RBG and RBG + HPCD (full) are 94.17 and 94.19, respectively, in Table 2, compared to 93.96 and 94.05, respectively, in Belinkov et al. (2014). This is due to the use of different versions of the RBG parser. | [1, 2, 2, 1, 2, 1, 2] | ['Table 2 shows the effect of using the PP attachment predictions as features within a dependency parser.', 'We note there is a relatively small difference in unlabeled attachment accuracy for all dependencies (not only PP attachments), even when gold PP attachments are used as additional features to the parser.', 'However, when gold PP attachment are used, we note a large potential improvement of 10.46 points in PP attachment accuracies (between the PPA accuracy for RBG and RBG + Oracle PP), which confirms that adding PP predictions as features is an effective approach.', 'Our proposed model RBG + OntoLSTM-PP recovers 15% of this potential improvement, while RBG + HPCD (full) recovers 10%, which illustrates that PP attachment remains a difficult problem with plenty of room for improvements even when using a dedicated model to predict PP attachments and using its predictions in a dependency parser.', 'We also note that, although we use the same predictions of the HPCD (full) model in Belinkov et al. (2014), we report different results than Belinkov et al. (2014).', 'For example, the unlabeled attachment score (UAS) of the baselines RBG and RBG + HPCD (full) are 94.17 and 94.19, respectively, in Table 2, compared to 93.96 and 94.05, respectively, in Belinkov et al. (2014).', 'This is due to the use of different versions of the RBG parser.'] | [None, None, ['PPA Acc.', 'RBG', 'RBG + Oracle PP'], ['RBG + OntoLSTM-PP', 'RBG + HPCD (full)'], ['RBG + HPCD (full)'], ['Full UAS', 'RBG', 'RBG + HPCD (full)'], ['RBG']] | 1 |
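The "recovers 15% / recovers 10% of this potential improvement" figures in the P17-1191 table_2 description follow from the PPA accuracy column above (the 10.46-point headroom between RBG and RBG + Oracle PP):

```python
# P17-1191 table_2: share of the RBG -> Oracle PPA headroom each model recovers.
rbg, oracle = 88.51, 98.97
headroom = oracle - rbg                                # 10.46 points, as stated
onto_share = round(100 * (90.11 - rbg) / headroom)     # RBG + OntoLSTM-PP
hpcd_share = round(100 * (89.59 - rbg) / headroom)     # RBG + HPCD (full)
assert round(headroom, 2) == 10.46
assert (onto_share, hpcd_share) == (15, 10)
```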
P17-1194table_2 | Performance of alternative sequence labeling architectures on NER and chunking datasets, measured using CoNLL standard entity-level F1 score. | 1 | [['Baseline'], ['+ dropout'], ['+ LMcost']] | 2 | [['CoNLL-00', 'DEV'], ['CoNLL-01', 'TEST'], ['CoNLL-03', 'DEV'], ['CoNLL-04', 'TEST'], ['CHEMDNER', 'DEV'], ['CHEMDNER', 'TEST'], ['JNLPBA', 'DEV'], ['JNLPBA', 'TEST']] | [['92.92', '92.67', '90.85', '85.63', '83.63', '84.51', '77.13', '72.79'], ['93.4', '93.15', '91.14', '86', '84.78', '85.67', '77.61', '73.16'], ['94.22', '93.88', '91.48', '86.26', '85.45', '86.27', '78.51', '73.83']] | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['+ dropout', '+ LMcost'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoNLL-00 || DEV</th> <th>CoNLL-01 || TEST</th> <th>CoNLL-03 || DEV</th> <th>CoNLL-04 || TEST</th> <th>CHEMDNER || DEV</th> <th>CHEMDNER || TEST</th> <th>JNLPBA || DEV</th> <th>JNLPBA || TEST</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>92.92</td> <td>92.67</td> <td>90.85</td> <td>85.63</td> <td>83.63</td> <td>84.51</td> <td>77.13</td> <td>72.79</td> </tr> <tr> <td>+ dropout</td> <td>93.4</td> <td>93.15</td> <td>91.14</td> <td>86</td> <td>84.78</td> <td>85.67</td> <td>77.61</td> <td>73.16</td> </tr> <tr> <td>+ LMcost</td> <td>94.22</td> <td>93.88</td> <td>91.48</td> <td>86.26</td> <td>85.45</td> <td>86.27</td> <td>78.51</td> <td>73.83</td> </tr> </tbody></table> | Table 2 | table_2 | P17-1194 | 6 | acl2017 | Table 2 contains results for evaluating the different architectures on NER and chunking. On these tasks, the application of dropout provides a consistent improvement -applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective consistently further improves performance on all benchmarks. 
While these results are comparable to the respective state-of-the-art results on most datasets, we did not fine-tune hyperparameters for any specific task, instead providing a controlled analysis of the language modeling objective in different settings. For JNLPBA, the system achieves 73.83% compared to 72.55% by Zhou and Su (2004) and 72.70% by Rei et al. (2016). On CoNLL-03, Lample et al. (2016) achieve a considerably higher result of 90.94%, possibly due to their use of specialised word embeddings and a custom version of LSTM. However, our system does outperform a similar architecture by Huang et al. (2015), achieving 86.26% compared to 84.26% F1 score on the CoNLL-03 dataset. | [1, 2, 2, 1, 1, 1, 1] | ['Table 2 contains results for evaluating the different architectures on NER and chunking.', 'On these tasks, the application of dropout provides a consistent improvement -applying some variance onto the input embeddings results in more robust models for NER and chunking.', 'The addition of the language modeling objective consistently further improves performance on all benchmarks.', 'While these results are comparable to the respective state-of-the-art results on most datasets, we did not fine-tune hyperparameters for any specific task, instead providing a controlled analysis of the language modeling objective in different settings.', 'For JNLPBA, the system achieves 73.83% compared to 72.55% by Zhou and Su (2004) and 72.70% by Rei et al. (2016).', 'On CoNLL-03, Lample et al. (2016) achieve a considerably higher result of 90.94%, possibly due to their use of specialised word embeddings and a custom version of LSTM.', 'However, our system does outperform a similar architecture by Huang et al. (2015), achieving 86.26% compared to 84.26% F1 score on the CoNLL-03 dataset.'] | [None, None, None, None, ['JNLPBA'], ['CoNLL-03'], ['+ LMcost', 'CoNLL-03']] | 1 |
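The P17-1194 table_2 description claims dropout helps consistently and the language-modeling objective (+LMcost) improves further on all benchmarks; the three rows above bear this out column by column (row order copied from the record):

```python
# P17-1194 table_2: +LMcost > +dropout > baseline on all eight dev/test columns.
baseline = [92.92, 92.67, 90.85, 85.63, 83.63, 84.51, 77.13, 72.79]
dropout  = [93.40, 93.15, 91.14, 86.00, 84.78, 85.67, 77.61, 73.16]
lmcost   = [94.22, 93.88, 91.48, 86.26, 85.45, 86.27, 78.51, 73.83]
consistent = all(l > d > b for b, d, l in zip(baseline, dropout, lmcost))
assert consistent
```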
P17-1195table_6 | Result of end-to-end problem solving | 2 | [['Dataset', 'DEV'], ['Dataset', 'TEST']] | 1 | [['Correct'], ['Timeout'], ['Wrong'], ['No RCF'], ['Parse Failure']] | [['27.60%', '10.90%', '12.10%', '12.10%', '37.40%'], ['11.40%', '1.80%', '11.40%', '6.80%', '68.60%']] | column | ['Correct', 'Timeout', 'Wrong', 'No RCF', 'Parse Failure'] | ['DEV', 'TEST'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Correct</th> <th>Timeout</th> <th>Wrong</th> <th>No RCF</th> <th>Parse Failure</th> </tr> </thead> <tbody> <tr> <td>Dataset || DEV</td> <td>27.60%</td> <td>10.90%</td> <td>12.10%</td> <td>12.10%</td> <td>37.40%</td> </tr> <tr> <td>Dataset || TEST</td> <td>11.40%</td> <td>1.80%</td> <td>11.40%</td> <td>6.80%</td> <td>68.60%</td> </tr> </tbody></table> | Table 6 | table_6 | P17-1195 | 8 | acl2017 | Table 6 presents the result of end-to-end problem solving on the UNIV data. It shows the failure in the semantic parsing is a major bottleneck in the current system. Since a problem in UNIV includes more than three sentences on average, parsing a whole problem is quite a high bar for a semantic parser. It is however necessary to solve it by the nature of the task. Once a problem-level logical form was produced, the system yielded a correct solution for 27.60% of such problems in DEV and 11.40% in TEST. | [1, 2, 2, 2, 1] | ['Table 6 presents the result of end-to-end problem solving on the UNIV data.', 'It shows the failure in the semantic parsing is a major bottleneck in the current system.', 'Since a problem in UNIV includes more than three sentences on average, parsing a whole problem is quite a high bar for a semantic parser.', 'It is however necessary to solve it by the nature of the task.', 'Once a problem-level logical form was produced, the system yielded a correct solution for 27.60% of such problems in DEV and 11.40% in TEST.'] | [None, None, None, None, ['Correct', 'DEV', 'TEST']] | 1 |
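The five outcome categories in P17-1195 table_6 partition all problems, so each row should sum to roughly 100% (the DEV row drifts slightly because each cell is rounded independently):

```python
# Row-sum sanity check for the P17-1195 table_6 outcome percentages.
dev = [27.60, 10.90, 12.10, 12.10, 37.40]
test = [11.40, 1.80, 11.40, 6.80, 68.60]
dev_total, test_total = sum(dev), sum(test)
assert abs(dev_total - 100.0) < 0.2    # 100.1 after rounding drift
assert abs(test_total - 100.0) < 0.2   # exactly 100.0
```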
P17-2001table_2 | The detailed comparison of E-E and E-T against relation types to Mirza and Tonelli (2016) (Micro-average Overall F1-score) on test data. | 2 | [['LINK type', 'AFTER'], ['LINK type', 'BEFORE'], ['LINK type', 'SIMULTA.'], ['LINK type', 'INCLUDES'], ['LINK type', 'IS INCLUD.'], ['LINK type', 'VAGUE'], ['LINK type', 'Overall']] | 2 | [['Our', 'E-D'], ['Mirza', 'E-D'], ['Our', 'E-E'], ['Mirza', 'E-E']] | [['0.582', '0.466', '0.44', '0.43'], ['0.634', '0.671', '0.46', '0.471'], ['-', '-', '-', '-'], ['0.056', '0.25', '0.025', '0.049'], ['0.595', '0.6', '0.17', '0.25'], ['0.526', '0.502', '0.624', '0.613'], ['0.546', '0.534', '0.529', '0.519']] | column | ['F1-score', 'F1-score', 'F1-score', 'F1-score'] | ['Our'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Our || E-D</th> <th>Mirza || E-D</th> <th>Our || E-E</th> <th>Mirza || E-E</th> </tr> </thead> <tbody> <tr> <td>LINK type || AFTER</td> <td>0.582</td> <td>0.466</td> <td>0.44</td> <td>0.43</td> </tr> <tr> <td>LINK type || BEFORE</td> <td>0.634</td> <td>0.671</td> <td>0.46</td> <td>0.471</td> </tr> <tr> <td>LINK type || SIMULTA.</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>LINK type || INCLUDES</td> <td>0.056</td> <td>0.25</td> <td>0.025</td> <td>0.049</td> </tr> <tr> <td>LINK type || IS INCLUD.</td> <td>0.595</td> <td>0.6</td> <td>0.17</td> <td>0.25</td> </tr> <tr> <td>LINK type || VAGUE</td> <td>0.526</td> <td>0.502</td> <td>0.624</td> <td>0.613</td> </tr> <tr> <td>LINK type || Overall</td> <td>0.546</td> <td>0.534</td> <td>0.529</td> <td>0.519</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2001 | 4 | acl2017 | Table 2 shows the detailed comparison to their work. Our system achieves higher performance on ‘AFTER’, ‘VAGUE’, while lower on ‘BEFORE’, ‘INCLUDES’ (5% of all data) and ‘IS INCLUDED’ (4% of all data). It is likely that their rich traditional features help the classifiers to capture more minority-class links. 
On the whole, our system reaches better ‘Overall’ on both E-E and E-D. As their E-T classifier does not include word embeddings, the E-T results are not listed. | [1, 1, 2, 1, 2] | ['Table 2 shows the detailed comparison to their work.', 'Our system achieves higher performance on ‘AFTER’, ‘VAGUE’, while lower on ‘BEFORE’, ‘INCLUDES’ (5% of all data) and ‘IS INCLUDED’ (4% of all data).', 'It is likely that their rich traditional features help the classifiers to capture more minority-class links.', 'On the whole, our system reaches better ‘Overall’ on both E-E and E-D.', 'As their E-T classifier does not include word embeddings, the E-T results are not listed.'] | [None, ['Our', 'AFTER', 'VAGUE', 'BEFORE', 'INCLUDES', 'IS INCLUD.'], None, ['Our', 'Overall', 'E-E', 'E-D'], None] | 1 |
P17-2007table_1 | Single-source parsing results in terms of average accuracy % over 3 runs. Best results are in bold. | 2 | [['GEO', 'en'], ['GEO', 'de'], ['GEO', 'el'], ['GEO', 'th'], ['GEO', 'avg.'], ['ATIS', 'en'], ['ATIS', 'id'], ['ATIS', 'zh'], ['ATIS', 'avg.']] | 2 | [['SINGLE', '-'], ['MULTI', 'separate'], ['MULTI', 'shared']] | [['84.4', '85', '85.48'], ['70.24', '71.19', '72.86'], ['74.4', '75.12', '75.6'], ['72.86', '72.26', '73.33'], ['75.48', '75.89', '76.82'], ['81.85', '81.4', '81.77'], ['74.85', '74.03', '75.45'], ['73.66', '75.89', '73.96'], ['76.79', '77.11', '77.06']] | row | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['MULTI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SINGLE || -</th> <th>MULTI || separate</th> <th>MULTI || shared</th> </tr> </thead> <tbody> <tr> <td>GEO || en</td> <td>84.4</td> <td>85</td> <td>85.48</td> </tr> <tr> <td>GEO || de</td> <td>70.24</td> <td>71.19</td> <td>72.86</td> </tr> <tr> <td>GEO || el</td> <td>74.4</td> <td>75.12</td> <td>75.6</td> </tr> <tr> <td>GEO || th</td> <td>72.86</td> <td>72.26</td> <td>73.33</td> </tr> <tr> <td>GEO || avg.</td> <td>75.48</td> <td>75.89</td> <td>76.82</td> </tr> <tr> <td>ATIS || en</td> <td>81.85</td> <td>81.4</td> <td>81.77</td> </tr> <tr> <td>ATIS || id</td> <td>74.85</td> <td>74.03</td> <td>75.45</td> </tr> <tr> <td>ATIS || zh</td> <td>73.66</td> <td>75.89</td> <td>73.96</td> </tr> <tr> <td>ATIS || avg.</td> <td>76.79</td> <td>77.11</td> <td>77.06</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2007 | 4 | acl2017 | Table 1 compares the performance of the monolingual sequence-to-tree model (Dong and Lapata, 2016), SINGLE, and our multilingual model, MULTI, with separate and shared output parameters under the single-source setting as described in Section 3.1. 
On average, both variants of the multilingual model outperform the monolingual model by up to 1.34% average accuracy on GEO. Parameter sharing is shown to be helpful, in particular for GEO. We observe that the average performance increase on ATIS mainly comes from Chinese and Indonesian. We also learn that although including English is often helpful for the other languages, it may affect its individual performance. | [1, 1, 1, 1, 1] | ['Table 1 compares the performance of the monolingual sequence-to-tree model (Dong and Lapata, 2016), SINGLE, and our multilingual model, MULTI, with separate and shared output parameters under the single-source setting as described in Section 3.1.', 'On average, both variants of the multilingual model outperform the monolingual model by up to 1.34% average accuracy on GEO.', 'Parameter sharing is shown to be helpful, in particular for GEO.', 'We observe that the average performance increase on ATIS mainly comes from Chinese and Indonesian.', 'We also learn that although including English is often helpful for the other languages, it may affect its individual performance.'] | [['SINGLE', 'MULTI'], ['MULTI', 'SINGLE', 'GEO'], ['GEO'], ['ATIS', 'zh', 'id'], ['en']] | 1 |
P17-2007table_6 | Single-source parsing results showing the accuracy of the 3 runs. Best results are in bold. | 2 | [['GEO', 'en'], ['GEO', 'de'], ['GEO', 'el'], ['GEO', 'th'], ['ATIS', 'en'], ['ATIS', 'id'], ['ATIS', 'zh']] | 3 | [['SINGLE', '-', '1'], ['SINGLE', '-', '2'], ['SINGLE', '-', '3'], ['MULTI', 'separate', '1'], ['MULTI', 'separate', '2'], ['MULTI', 'separate', '3'], ['MULTI', 'shared', '1'], ['MULTI', 'shared', '2'], ['MULTI', 'shared', '3']] | [['87.14', '83.57', '82.50', '85.71', '83.93', '85.36', '85.36', '83.93', '87.14'], ['70.00', '70.36', '70.36', '71.79', '71.79', '70.00', '73.57', '73.93', '71.07'], ['76.43', '72.50', '74.29', '77.14', '72.14', '76.07', '76.43', '74.64', '75.71'], ['72.50', '73.57', '72.50', '72.14', '72.14', '72.50', '72.50', '71.07', '76.43'], ['84.60', '79.24', '81.70', '82.14', '81.03', '81.03', '82.59', '80.36', '82.37'], ['75.67', '74.55', '74.33', '75.67', '72.54', '73.88', '76.56', '75.45', '74.33'], ['74.33', '73.66', '72.99', '74.11', '76.12', '77.46', '75.67', '72.54', '73.66']] | row | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['SINGLE', 'MULTI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SINGLE || - || 1</th> <th>SINGLE || - || 2</th> <th>SINGLE || - || 3</th> <th>MULTI || separate || 1</th> <th>MULTI || separate || 2</th> <th>MULTI || separate || 3</th> <th>MULTI || shared || 1</th> <th>MULTI || shared || 2</th> <th>MULTI || shared || 3</th> </tr> </thead> <tbody> <tr> <td>GEO || en</td> <td>87.14</td> <td>83.57</td> <td>82.50</td> <td>85.71</td> <td>83.93</td> <td>85.36</td> <td>85.36</td> <td>83.93</td> <td>87.14</td> </tr> <tr> <td>GEO || de</td> <td>70.00</td> <td>70.36</td> <td>70.36</td> <td>71.79</td> <td>71.79</td> <td>70.00</td> <td>73.57</td> <td>73.93</td> <td>71.07</td> </tr> <tr> <td>GEO || el</td> <td>76.43</td> <td>72.50</td> <td>74.29</td> <td>77.14</td> <td>72.14</td> <td>76.07</td> <td>76.43</td> 
<td>74.64</td> <td>75.71</td> </tr> <tr> <td>GEO || th</td> <td>72.50</td> <td>73.57</td> <td>72.50</td> <td>72.14</td> <td>72.14</td> <td>72.50</td> <td>72.50</td> <td>71.07</td> <td>76.43</td> </tr> <tr> <td>ATIS || en</td> <td>84.60</td> <td>79.24</td> <td>81.70</td> <td>82.14</td> <td>81.03</td> <td>81.03</td> <td>82.59</td> <td>80.36</td> <td>82.37</td> </tr> <tr> <td>ATIS || id</td> <td>75.67</td> <td>74.55</td> <td>74.33</td> <td>75.67</td> <td>72.54</td> <td>73.88</td> <td>76.56</td> <td>75.45</td> <td>74.33</td> </tr> <tr> <td>ATIS || zh</td> <td>74.33</td> <td>73.66</td> <td>72.99</td> <td>74.11</td> <td>76.12</td> <td>77.46</td> <td>75.67</td> <td>72.54</td> <td>73.66</td> </tr> </tbody></table> | Table 6 | table_6 | P17-2007 | 7 | acl2017 | In Table 6, we report the accuracy of the 3 runs for each model and dataset. In both settings, we observe that the best accuracy on both datasets is often achieved by MULTI. This is the same conclusion that we reached when averaging the results over all runs. | [1, 1, 1] | ['In Table 6, we report the accuracy of the 3 runs for each model and dataset.', 'In both settings, we observe that the best accuracy on both datasets is often achieved by MULTI.', 'This is the same conclusion that we reached when averaging the results over all runs.'] | [None, ['MULTI'], None] | 1 |
P17-2010table_1 | Results on RTE performance without (INIT) and with prior compound splitting. *: significant difference of the performance in comparison to INIT | 2 | [['System', 'INIT'], ['System', 'manual splitting*'], ['System', 'ZvdP2016'], ['System', 'FF2010*'], ['System', 'WH2012']] | 2 | [['-', 'Acc'], ['Entailment', 'P'], ['Entailment', 'R'], ['Entailment', 'F1'], ['Non-entailment', 'P'], ['Non-entailment', 'R'], ['Non-entailment', 'F1']] | [['64.13', '62.50', '74.57', '68.00', '66.67', '53.20', '59.18'], ['67.88', '65.08', '80.20', '71.85', '72.64', '54.99', '62.59'], ['66.63', '64.55', '77.02', '70.23', '69.87', '55.75', '62.02'], ['67.38', '65.48', '76.53', '70.58', '70.19', '57.80', '63.39'], ['66.00', '63.73', '77.75', '70.04', '69.77', '53.71', '60.69']] | column | ['Acc', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['manual splitting*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || Acc</th> <th>Entailment || P</th> <th>Entailment || R</th> <th>Entailment || F1</th> <th>Non-entailment || P</th> <th>Non-entailment || R</th> <th>Non-entailment || F1</th> </tr> </thead> <tbody> <tr> <td>System || INIT</td> <td>64.13</td> <td>62.50</td> <td>74.57</td> <td>68.00</td> <td>66.67</td> <td>53.20</td> <td>59.18</td> </tr> <tr> <td>System || manual splitting*</td> <td>67.88</td> <td>65.08</td> <td>80.20</td> <td>71.85</td> <td>72.64</td> <td>54.99</td> <td>62.59</td> </tr> <tr> <td>System || ZvdP2016</td> <td>66.63</td> <td>64.55</td> <td>77.02</td> <td>70.23</td> <td>69.87</td> <td>55.75</td> <td>62.02</td> </tr> <tr> <td>System || FF2010*</td> <td>67.38</td> <td>65.48</td> <td>76.53</td> <td>70.58</td> <td>70.19</td> <td>57.80</td> <td>63.39</td> </tr> <tr> <td>System || WH2012</td> <td>66.00</td> <td>63.73</td> <td>77.75</td> <td>70.04</td> <td>69.77</td> <td>53.71</td> <td>60.69</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2010 | 3 | acl2017 | Table 1 shows accuracy, precision, recall and F1-score for the
entailment and non-entailment class on the RTE-3 dataset. As reflected in the results, reducing the opacity of compounds via the application of a compound splitter improves the subsequent RTE performance. This holds for all compound splitters that we used in our experiments. It is also noticeable that the different compound splitters yield different results in the downstream task, with FF2010 being the most beneficial and significantly outperforming the initial RTE setup without prior compound splitting (INIT) by up to four percentage points in accuracy and F1-score. As expected, manual splitting performs best overall. The performance difference with FF2010 is however not statistically significant. This is not surprising because FF2010 reaches an accuracy of around 90% in intrinsic evaluations (Ziering and van der Plas, 2016) and the small underperformance is leveled out by the small size of the test set. Moreover, manual inspections revealed that FF2010 has a higher recall than manual splitting in the non-entailment class due to its undersplitting which results in less lexical overlap between T and H, pointing to the non-entailment class. 
| [1, 2, 2, 1, 1, 1, 2, 2] | ['Table 1 shows accuracy, precision, recall and F1-score for the entailment and non-entailment class on the RTE-3 dataset.', 'As reflected in the results, reducing the opacity of compounds via the application of a compound splitter improves the subsequent RTE performance.', 'This holds for all compound splitters that we used in our experiments.', 'It is also noticeable that the different compound splitters yield different results in the downstream task, with FF2010 being the most beneficial and significantly outperforming the initial RTE setup without prior compound splitting (INIT) by up to four percentage points in accuracy and F1-score.', 'As expected, manual splitting performs best overall.', 'The performance difference with FF2010 is however not statistically significant.', 'This is not surprising because FF2010 reaches an accuracy of around 90% in intrinsic evaluations (Ziering and van der Plas, 2016) and the small underperformance is leveled out by the small size of the test set.', 'Moreover, manual inspections revealed that FF2010 has a higher recall than manual splitting in the non-entailment class due to its undersplitting which results in less lexical overlap between T and H, pointing to the non-entailment class.'] | [['Acc', 'P', 'R', 'F1', 'Entailment', 'Non-entailment'], None, None, ['FF2010*', 'INIT', 'Acc', 'F1'], ['manual splitting*'], ['FF2010*'], None, ['FF2010*']] | 1 |
P17-2021table_2 | BLEU results for the low-resource experiments (News Commentary v8) | 3 | [['DE-EN', 'system', 'bpe2bpe'], ['DE-EN', 'system', 'bpe2tree'], ['DE-EN', 'system', 'bpe2bpe ens.'], ['DE-EN', 'system', 'bpe2tree ens.'], ['RU-EN', 'system', 'bpe2bpe'], ['RU-EN', 'system', 'bpe2tree'], ['RU-EN', 'system', 'bpe2bpe ens.'], ['RU-EN', 'system', 'bpe2tree ens.'], ['CS-EN', 'system', 'bpe2bpe'], ['CS-EN', 'system', 'bpe2tree'], ['CS-EN', 'system', 'bpe2bpe ens.'], ['CS-EN', 'system', 'bpe2tree ens.']] | 1 | [['newstest2015'], ['newstest2016']] | [['13.81', '14.16'], ['14.55', '16.13'], ['14.42', '15.07'], ['15.69', '17.21'], ['12.58', '11.37'], ['12.92', '11.94'], ['13.36', '11.91'], ['13.66', '12.89'], ['10.85', '11.23'], ['11.54', '11.65'], ['11.46', '11.77'], ['12.43', '12.68']] | column | ['BLEU', 'BLEU'] | ['bpe2tree'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>newstest2015</th> <th>newstest2016</th> </tr> </thead> <tbody> <tr> <td>DE-EN || system || bpe2bpe</td> <td>13.81</td> <td>14.16</td> </tr> <tr> <td>DE-EN || system || bpe2tree</td> <td>14.55</td> <td>16.13</td> </tr> <tr> <td>DE-EN || system || bpe2bpe ens.</td> <td>14.42</td> <td>15.07</td> </tr> <tr> <td>DE-EN || system || bpe2tree ens.</td> <td>15.69</td> <td>17.21</td> </tr> <tr> <td>RU-EN || system || bpe2bpe</td> <td>12.58</td> <td>11.37</td> </tr> <tr> <td>RU-EN || system || bpe2tree</td> <td>12.92</td> <td>11.94</td> </tr> <tr> <td>RU-EN || system || bpe2bpe ens.</td> <td>13.36</td> <td>11.91</td> </tr> <tr> <td>RU-EN || system || bpe2tree ens.</td> <td>13.66</td> <td>12.89</td> </tr> <tr> <td>CS-EN || system || bpe2bpe</td> <td>10.85</td> <td>11.23</td> </tr> <tr> <td>CS-EN || system || bpe2tree</td> <td>11.54</td> <td>11.65</td> </tr> <tr> <td>CS-EN || system || bpe2bpe ens.</td> <td>11.46</td> <td>11.77</td> </tr> <tr> <td>CS-EN || system || bpe2tree ens.</td> <td>12.43</td> <td>12.68</td> </tr> </tbody></table> | Table 2 | table_2 | 
P17-2021 | 3 | acl2017 | Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline. We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters. In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment. | [1, 2, 2] | ['Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.', 'We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.', 'In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.'] | [['bpe2tree', 'bpe2bpe'], None, None] | 1 |
P17-2022table_4 | Test Set Results | 2 | [['Classifier', 'SVM'], ['Classifier', 'NRC'], ['Classifier', 'Stanford'], ['Classifier', 'AutoSlog (ASlog)'], ['Classifier', 'Retrained Stanford']] | 2 | [['Pos', 'F1'], ['Neg', 'F1'], ['-', 'Macro F']] | [['0.66', '0.60', '0.64'], ['0.58', '0.69', '0.64'], ['0.54', '0.73', '0.67'], ['0.11', '0.68', '0.53'], ['0.53', '0.73', '0.67']] | column | ['F1', 'F1', 'Macro F'] | ['Retrained Stanford'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pos || F1</th> <th>Neg || F1</th> <th>- || Macro F</th> </tr> </thead> <tbody> <tr> <td>Classifier || SVM</td> <td>0.66</td> <td>0.60</td> <td>0.64</td> </tr> <tr> <td>Classifier || NRC</td> <td>0.58</td> <td>0.69</td> <td>0.64</td> </tr> <tr> <td>Classifier || Stanford</td> <td>0.54</td> <td>0.73</td> <td>0.67</td> </tr> <tr> <td>Classifier || AutoSlog (ASlog)</td> <td>0.11</td> <td>0.68</td> <td>0.53</td> </tr> <tr> <td>Classifier || Retrained Stanford</td> <td>0.53</td> <td>0.73</td> <td>0.67</td> </tr> </tbody></table> | Table 4 | table_4 | P17-2022 | 4 | acl2017 | We present our experimental results and analyze the results in terms of the lexico-functional linguistic patterns we learn. Rows 1-3 of Table 4 show the results for the three baselines, in terms of F-score for each class and the macro F. Stanford outperforms both NRC and SVM, but misses many cases of positive sentiment. Row 4 of Table 4 shows the results for the AutoSlog classifier. Although AutoSlog itself does not perform highly, the patterns that it learns represent a different type of knowledge than what is contained in many sentiment analysis tools. We therefore hypothesized that a cascading classifier, which supplements one of the baseline sentiment classifiers with the lexicofunctional patterns that AutoSlog learns might yield higher performance. Row 5 of Table 4 shows the results for RETRAINED STANFORD. 
The F-scores for RETRAINED STANFORD are almost identical to the standard Stanford classifier. This may be because our data is a small percentage of the entire number of phrases used in training Stanford. Although RETRAINED STANFORD prioritizes our phrases, it would not make sense to remove the original training data. | [1, 1, 1, 1, 1, 2, 1, 1, 2, 2] | ['We present our experimental results and analyze the results in terms of the lexico-functional linguistic patterns we learn.', 'Rows 1-3 of Table 4 show the results for the three baselines, in terms of F-score for each class and the macro F.', 'Stanford outperforms both NRC and SVM, but misses many cases of positive sentiment.', 'Row 4 of Table 4 shows the results for the AutoSlog classifier.', 'Although AutoSlog itself does not perform highly, the patterns that it learns represent a different type of knowledge than what is contained in many sentiment analysis tools.', 'We therefore hypothesized that a cascading classifier, which supplements one of the baseline sentiment classifiers with the lexicofunctional patterns that AutoSlog learns might yield higher performance.', 'Row 5 of Table 4 shows the results for RETRAINED STANFORD.', 'The F-scores for RETRAINED STANFORD are almost identical to the standard Stanford classifier.', 'This may be because our data is a small percentage of the entire number of phrases used in training Stanford.', 'Although RETRAINED STANFORD prioritizes our phrases, it would not make sense to remove the original training data.'] | [None, ['SVM', 'NRC', 'Stanford', 'F1', 'Macro F'], ['SVM', 'NRC', 'Stanford', 'Pos'], ['AutoSlog (ASlog)'], ['AutoSlog (ASlog)'], ['AutoSlog (ASlog)'], ['Retrained Stanford'], ['Retrained Stanford', 'Stanford'], ['Stanford'], ['Retrained Stanford']] | 1 |
P17-2034table_3 | Mean accuracy and standard deviation results. We report accuracy for the train, development, and both test sets. Three systems use the structured representation. Two systems (and Image Only) use the raw image. | 2 | [['-', 'Majority'], ['-', 'Text only'], ['-', 'Image Only'], ['Structured representation', 'MaxEnt'], ['Structured representation', 'MLP'], ['Structured representation', 'Image features+RNN'], ['Raw image', 'CNN+RNN'], ['Raw image', 'NMN']] | 1 | [['Train'], ['Dev'], ['Test-P'], ['Test-U']] | [['56.37', '55.31', '56.16', '55.43'], ['58.36±0.6', '56.61±0.5', '57.18±0.6', '56.21±0.4'], ['56.79±1.3', '55.35±0.1', '56.05±0.3', '55.33±0.3'], ['99.99', '68.04', '67.68', '67.82'], ['96.15±1.3', '67.50±0.5', '66.28±0.4', '65.32±0.4'], ['59.71±1.0', '57.72±1.4', '57.62±1.3', '56.29±0.9'], ['58.85±0.2', '56.59±0.3', '58.01±0.3', '56.30±0.6'], ['98.37±0.6', '63.06±0.1', '66.12±0.4', '61.99±0.8']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Structured representation', 'Raw image'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train</th> <th>Dev</th> <th>Test-P</th> <th>Test-U</th> </tr> </thead> <tbody> <tr> <td>- || Majority</td> <td>56.37</td> <td>55.31</td> <td>56.16</td> <td>55.43</td> </tr> <tr> <td>- || Text only</td> <td>58.36±0.6</td> <td>56.61±0.5</td> <td>57.18±0.6</td> <td>56.21±0.4</td> </tr> <tr> <td>- || Image Only</td> <td>56.79±1.3</td> <td>55.35±0.1</td> <td>56.05±0.3</td> <td>55.33±0.3</td> </tr> <tr> <td>Structured representation || MaxEnt</td> <td>99.99</td> <td>68.04</td> <td>67.68</td> <td>67.82</td> </tr> <tr> <td>Structured representation || MLP</td> <td>96.15±1.3</td> <td>67.50±0.5</td> <td>66.28±0.4</td> <td>65.32±0.4</td> </tr> <tr> <td>Structured representation || Image features+RNN</td> <td>59.71±1.0</td> <td>57.72±1.4</td> <td>57.62±1.3</td> <td>56.29±0.9</td> </tr> <tr> <td>Raw image || CNN+RNN</td> <td>58.85±0.2</td> <td>56.59±0.3</td> 
<td>58.01±0.3</td> <td>56.30±0.6</td> </tr> <tr> <td>Raw image || NMN</td> <td>98.37±0.6</td> <td>63.06±0.1</td> <td>66.12±0.4</td> <td>61.99±0.8</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2034 | 5 | acl2017 | We run each experiment ten times and report mean accuracy as well as standard deviation for randomly initialized models. Table 3 shows our results. NMN is the best performing model using images. For models using the structured representation, the MaxEnt model provides the best performance. | [2, 1, 1, 1] | ['We run each experiment ten times and report mean accuracy as well as standard deviation for randomly initialized models.', 'Table 3 shows our results.', 'NMN is the best performing model using images.', 'For models using the structured representation, the MaxEnt model provides the best performance.'] | [None, None, ['Raw image', 'NMN'], ['Structured representation', 'MaxEnt']] | 1 |
P17-2043table_2 | Readability evaluation by human subjects | 3 | [['Method', 'Ext.', 'n=1'], ['Method', 'Ext.', 'n=2'], ['Method', 'Comp.', 'n=1'], ['Method', 'Comp.', 'n=2']] | 1 | [['Score']] | [['4.55'], ['4.58'], ['3.88'], ['4.07']] | column | ['Score'] | ['Ext.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>Method || Ext. || n=1</td> <td>4.55</td> </tr> <tr> <td>Method || Ext. || n=2</td> <td>4.58</td> </tr> <tr> <td>Method || Comp. || n=1</td> <td>3.88</td> </tr> <tr> <td>Method || Comp. || n=2</td> <td>4.07</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2043 | 4 | acl2017 | We conducted human evaluation to compare readability of extractive oracle summaries to that of compressive oracle summaries. We presented the oracle summaries to five human subjects and asked them to rate the summaries using an integer scale from 1 (very poor) to 5 (very good). Table 2 shows the results. Extractive oracle summaries achieved near perfect scores. Although the scores of compressive oracle summaries are inferior to those of extractive oracle summaries, they achieved good enough score, around 4. The results support that our trimming approach based on chunk is effective. | [2, 2, 1, 1, 1, 2] | ['We conducted human evaluation to compare readability of extractive oracle summaries to that of compressive oracle summaries.', 'We presented the oracle summaries to five human subjects and asked them to rate the summaries using an integer scale from 1 (very poor) to 5 (very good).', 'Table 2 shows the results.', 'Extractive oracle summaries achieved near perfect scores.', 'Although the scores of compressive oracle summaries are inferior to those of extractive oracle summaries, they achieved good enough score, around 4.', 'The results support that our trimming approach based on chunk is effective.'] | [['Score'], ['Score'], None, ['Ext.'], ['Comp.'], None] | 1 |
P17-2045table_3 | Results on the atomic and full datasets. | 2 | [['Dataset', 'Theano'], ['Dataset', 'keras'], ['Dataset', 'youtube-dl'], ['Dataset', 'node'], ['Dataset', 'angular'], ['Dataset', 'react'], ['Dataset', 'opencv'], ['Dataset', 'CNTK'], ['Dataset', 'bitcoin'], ['Dataset', 'CoreNLP'], ['Dataset', 'elasticsearch'], ['Dataset', 'guava']] | 3 | [['our model', 'atomic', 'Val. acc'], ['our model', 'atomic', 'BLEU'], ['Moses', 'atomic', 'BLEU']] | [['36.81%', '9.5', '7.1'], ['45.76%', '13.7', '7.8'], ['50.84%', '16.4'], ['52.46%', '7.8', '7.7'], ['44.39%', '13.9', '11.7'], ['49.44%', '11.4', '10.7'], ['50.77%', '11.2', '9.0'], ['48.88%', '17.9', '11.8'], ['50.04%', '17.9', '13.0'], ['63.20%', '28.5', '10.1'], ['36.53%', '11.8', '5.2'], ['65.52%', '29.8', '19.5']] | column | ['Val. acc', 'BLEU', 'BLEU'] | ['our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>our model || atomic || Val. acc</th> <th>our model || atomic || BLEU</th> <th>Moses || atomic || BLEU</th> </tr> </thead> <tbody> <tr> <td>Dataset || Theano</td> <td>36.81%</td> <td>9.5</td> <td>7.1</td> </tr> <tr> <td>Dataset || keras</td> <td>45.76%</td> <td>13.7</td> <td>7.8</td> </tr> <tr> <td>Dataset || youtube-dl</td> <td>50.84%</td> <td>16.4</td> <td>None</td> </tr> <tr> <td>Dataset || node</td> <td>52.46%</td> <td>7.8</td> <td>7.7</td> </tr> <tr> <td>Dataset || angular</td> <td>44.39%</td> <td>13.9</td> <td>11.7</td> </tr> <tr> <td>Dataset || react</td> <td>49.44%</td> <td>11.4</td> <td>10.7</td> </tr> <tr> <td>Dataset || opencv</td> <td>50.77%</td> <td>11.2</td> <td>9.0</td> </tr> <tr> <td>Dataset || CNTK</td> <td>48.88%</td> <td>17.9</td> <td>11.8</td> </tr> <tr> <td>Dataset || bitcoin</td> <td>50.04%</td> <td>17.9</td> <td>13.0</td> </tr> <tr> <td>Dataset || CoreNLP</td> <td>63.20%</td> <td>28.5</td> <td>10.1</td> </tr> <tr> <td>Dataset || elasticsearch</td> <td>36.53%</td> <td>11.8</td> <td>5.2</td> </tr> <tr> <td>Dataset || guava</td> 
<td>65.52%</td> <td>29.8</td> <td>19.5</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2045 | 4 | acl2017 | As Table 3 shows, our model trained on atomic data outperforms the baseline in all but one project with an average gain of 5 BLEU points. In particular, we observe bigger gains for java projects such as CoreNLP and guava. We hypothesize this is because program differences in Java tend to be longer than the rest. While this impacts on training time, at the same time it allows the model to work with a larger vocabulary space. | [1, 1, 2, 2] | ['As Table 3 shows, our model trained on atomic data outperforms the baseline in all but one project with an average gain of 5 BLEU points.', 'In particular, we observe bigger gains for java projects such as CoreNLP and guava.', 'We hypothesize this is because program differences in Java tend to be longer than the rest.', 'While this impacts on training time, at the same time it allows the model to work with a larger vocabulary space.'] | [['our model', 'Moses', 'atomic', 'BLEU'], ['CoreNLP', 'guava'], None, None] | 1 |
P17-2046table_3 | Final results comparing translated language features (TRANS) to benchmark lexical generalisation features (LEX). BASE+LEX is our implementation of the core Hong et al. classifier. TAC KBP 2015 #1 corresponds to reported results for Hong et al. including semi-supervised learning. TAC KBP 2015 shared task has 38 runs submitted from 14 teams. | 2 | [['System', 'BASE'], ['System', 'BASE+LEX'], ['System', 'BASE+TRANS'], ['System', 'BASE+LEX+TRANS']] | 1 | [['P'], ['R'], ['F']] | [['60.4', '24.1', '34.4'], ['66.8', '42.6', '52.0'], ['59.6', '45.8', '51.8'], ['67.9', '46.2', '55.0']] | column | ['P', 'R', 'F'] | ['BASE+LEX+TRANS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>System || BASE</td> <td>60.4</td> <td>24.1</td> <td>34.4</td> </tr> <tr> <td>System || BASE+LEX</td> <td>66.8</td> <td>42.6</td> <td>52.0</td> </tr> <tr> <td>System || BASE+TRANS</td> <td>59.6</td> <td>45.8</td> <td>51.8</td> </tr> <tr> <td>System || BASE+LEX+TRANS</td> <td>67.9</td> <td>46.2</td> <td>55.0</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2046 | 5 | acl2017 | Table 3 contains final results on the held-out evaluation data. The final translated language feature set (TRANS) comprises word, character and Cangjie features from Traditional Chinese, Simplified Chinese, Japanese and Korean. TRANS features provide a large F1 improvement of 17.4 over the baseline (BASE), similar to the benchmark lexical generalisation features (LEX). They differ in precision-recall tradeoff, with higher recall but lower precision from TRANS. LEX and TRANS are complementary, giving F1 of 55.0. This is 20.6 points higher than the baseline features alone, and improves both the precision of LEX and the recall of TRANS.
| [1, 2, 1, 1, 1] | ['Table 3 contains final results on the held-out evaluation data.', 'The final translated language feature set (TRANS) comprises word, character and Cangjie features from Traditional Chinese, Simplified Chinese, Japanese and Korean.', 'TRANS features provide a large F1 improvement of 17.4 over the baseline (BASE), similar to the benchmark lexical generalisation features (LEX).', 'They differ in precision-recall tradeoff, with higher recall but lower precision from TRANS. LEX and TRANS are complementary, giving F1 of 55.0.', 'This is 20.6 points higher than the baseline features alone, and improves both the precision of LEX and the recall of TRANS.'] | [None, None, ['BASE+TRANS', 'BASE+LEX', 'F'], ['P', 'R', 'BASE+TRANS', 'BASE+LEX+TRANS', 'F'], ['BASE', 'BASE+LEX+TRANS', 'F', 'P', 'BASE+LEX', 'R', 'BASE+TRANS']] | 1 |
P17-2047table_5 | Precision, Recall and F1 of different methods on Yahoo! Answers factoid QA dataset. The Oracle assumes candidate answers are ranked perfectly and its performance is limited by the initial retrieval step. | 2 | [['Method', 'Aqqu'], ['Method', 'Text2KB'], ['Method', 'AskMSR (entities)'], ['Method', 'MemN2N'], ['Method', 'KV MemN2N'], ['Method', 'EviNets (text)'], ['Method', 'EviNets (text+kb)'], ['Method', 'Oracle']] | 1 | [['P'], ['R'], ['F1']] | [['0.116', '0.117', '0.116'], ['0.170', '0.170', '0.170'], ['0.175', '0.319', '0.226'], ['0.072', '0.131', '0.092'], ['0.126', '0.228', '0.162'], ['0.210', '0.383', '0.271'], ['0.226', '0.409', '0.291'], ['0.622', '1.0', '0.767']] | column | ['P', 'R', 'F'] | ['EviNets (text)', 'EviNets (text+kb)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Aqqu</td> <td>0.116</td> <td>0.117</td> <td>0.116</td> </tr> <tr> <td>Method || Text2KB</td> <td>0.170</td> <td>0.170</td> <td>0.170</td> </tr> <tr> <td>Method || AskMSR (entities)</td> <td>0.175</td> <td>0.319</td> <td>0.226</td> </tr> <tr> <td>Method || MemN2N</td> <td>0.072</td> <td>0.131</td> <td>0.092</td> </tr> <tr> <td>Method || KV MemN2N</td> <td>0.126</td> <td>0.228</td> <td>0.162</td> </tr> <tr> <td>Method || EviNets (text)</td> <td>0.210</td> <td>0.383</td> <td>0.271</td> </tr> <tr> <td>Method || EviNets (text+kb)</td> <td>0.226</td> <td>0.409</td> <td>0.291</td> </tr> <tr> <td>Method || Oracle</td> <td>0.622</td> <td>1.0</td> <td>0.767</td> </tr> </tbody></table> | Table 5 | table_5 | P17-2047 | 5 | acl2017 | Table 5 summarizes the results of EviNets and some baseline methods on the created Yahoo! Answers dataset. As we can see, knowledge base data is not enough to answer most of these questions, and a state-of-the-art KBQA system Aqqu gets only 0.116 precision. 
Adding textual data helps significantly, and Text2KB improves the precision to 0.17, which roughly matches the results of the AskMSR system, that ranks candidate entities by their popularity in the retrieved documents. Using text along with KB evidence gave higher performance metrics, boosting F1 from 0.271 to 0.291. EviNets significantly improves over the baseline approaches, beating AskMSR by 28% and KV MemN2N by almost 80% in F1 score. | [1, 1, 1, 1, 1] | ['Table 5 summarizes the results of EviNets and some baseline methods on the created Yahoo! Answers dataset.', 'As we can see, knowledge base data is not enough to answer most of these questions, and a state-of-the-art KBQA system Aqqu gets only 0.116 precision.', 'Adding textual data helps significantly, and Text2KB improves the precision to 0.17, which roughly matches the results of the AskMSR system, that ranks candidate entities by their popularity in the retrieved documents.', 'Using text along with KB evidence gave higher performance metrics, boosting F1 from 0.271 to 0.291.', 'EviNets significantly improves over the baseline approaches, beating AskMSR by 28% and KV MemN2N by almost 80% in F1 score.'] | [None, ['Aqqu', 'P'], ['Text2KB', 'P'], ['EviNets (text)', 'EviNets (text+kb)', 'F1'], ['EviNets (text+kb)', 'AskMSR (entities)', 'MemN2N', 'F1']] | 1 |
P17-2052table_3 | Results on our corpus. All quantities are macro-averaged. | 4 | [['Level', 'Entity', 'Features', 'Unstructured'], ['Level', 'Entity', 'Features', '+ Pairs'], ['Level', 'Entity', 'Features', '+ Graph'], ['Level', 'Sentence', 'Features', 'Unstructured'], ['Level', 'Sentence', 'Features', '+ Pairs'], ['Level', 'Sentence', 'Features', '+ Graph']] | 1 | [['P'], ['R'], ['F1']] | [['50.0', '67.2', '52.9'], ['53.3', '64.1', '54.3'], ['53.9', '63.9', '54.5'], ['42.6', '58.9', '44.4'], ['46.5', '54.1', '45.6'], ['47.0', '53.6', '45.6']] | column | ['P', 'R', 'F'] | ['+ Pairs', '+ Graph'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Level || Entity || Features || Unstructured</td> <td>50.0</td> <td>67.2</td> <td>52.9</td> </tr> <tr> <td>Level || Entity || Features || + Pairs</td> <td>53.3</td> <td>64.1</td> <td>54.3</td> </tr> <tr> <td>Level || Entity || Features || + Graph</td> <td>53.9</td> <td>63.9</td> <td>54.5</td> </tr> <tr> <td>Level || Sentence || Features || Unstructured</td> <td>42.6</td> <td>58.9</td> <td>44.4</td> </tr> <tr> <td>Level || Sentence || Features || + Pairs</td> <td>46.5</td> <td>54.1</td> <td>45.6</td> </tr> <tr> <td>Level || Sentence || Features || + Graph</td> <td>47.0</td> <td>53.6</td> <td>45.6</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2052 | 4 | acl2017 | Table 3 summarizes our results. Starting with the baseline, we incrementally add the type pair, graph-based, and set size features discussed in 2.1. Adding type pair features results in an appreciable performance gain, while the graph features bring little benefit—potentially because pairwise correlations suffice to summarize the set structure when the number of types is moderately low. 
| [1, 1, 1] | ['Table 3 summarizes our results.', 'Starting with the baseline, we incrementally add the type pair, graph-based, and set size features discussed in 2.1.', 'Adding type pair features results in an appreciable performance gain, while the graph features bring little benefit—potentially because pairwise correlations suffice to summarize the set structure when the number of types is moderately low.'] | [None, ['Unstructured', '+ Pairs', '+ Graph'], ['+ Pairs', '+ Graph']] | 1 |
P17-2055table_2 | Number of Wikidata entities as subjects (#s) of each predicate (p), and evaluation results on manually annotated randomly selected subjects that have at least an object. | 2 | [['p', 'has part (creative work series)'], ['p', 'contains admin. terr. entity'], ['p', 'spouse'], ['p', 'child'], ['p', 'child (manual ground truth)']] | 2 | [['-', '#s'], ['baseline', 'P'], ['vanilla', 'P'], ['vanilla', 'R'], ['vanilla', 'F1'], ['only-nummod', 'P'], ['only-nummod', 'R'], ['only-nummod', 'F1']] | [['261', '0.050', '0.333', '0.316', '0.324', '0.353', '0.316', '0.333'], ['18000', '0.034', '0.390', '0.188', '0.254', '0.548', '0.200', '0.293'], ['45917', '0', '0.014', '0.011', '0.013', '0.028', '0.017', '0.021'], ['35057', '0.112', '0.151', '0.129', '0.139', '0.320', '0.219', '0.260'], ['6408', '-', '0.374', '0.309', '0.338', '0.452', '0.315', '0.371']] | column | ['#s', 'P', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['only-nummod'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || #s</th> <th>baseline || P</th> <th>vanilla || P</th> <th>vanilla || R</th> <th>vanilla || F1</th> <th>only-nummod || P</th> <th>only-nummod || R</th> <th>only-nummod || F1</th> </tr> </thead> <tbody> <tr> <td>p || has part (creative work series)</td> <td>261</td> <td>0.050</td> <td>0.333</td> <td>0.316</td> <td>0.324</td> <td>0.353</td> <td>0.316</td> <td>0.333</td> </tr> <tr> <td>p || contains admin. terr. 
entity</td> <td>18000</td> <td>0.034</td> <td>0.390</td> <td>0.188</td> <td>0.254</td> <td>0.548</td> <td>0.200</td> <td>0.293</td> </tr> <tr> <td>p || spouse</td> <td>45917</td> <td>0</td> <td>0.014</td> <td>0.011</td> <td>0.013</td> <td>0.028</td> <td>0.017</td> <td>0.021</td> </tr> <tr> <td>p || child</td> <td>35057</td> <td>0.112</td> <td>0.151</td> <td>0.129</td> <td>0.139</td> <td>0.320</td> <td>0.219</td> <td>0.260</td> </tr> <tr> <td>p || child (manual ground truth)</td> <td>6408</td> <td>-</td> <td>0.374</td> <td>0.309</td> <td>0.338</td> <td>0.452</td> <td>0.315</td> <td>0.371</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2055 | 4 | acl2017 | Table 2 shows the performance of our CRF-based method in finding the correct relation cardinality, evaluated on manually annotated 20 (has part), 100 (admin. terr. entity) and 200 (child and spouse) randomly selected subjects that have at least one object. The random-number baseline achieves a precision of 5% (has part), 3.5% (admin. territ. entity), 0% (spouse) and 11.2% (child). Compared to that, especially using only-nummod, our method gives encouraging results for has part, admin. territ. entity and child, with 30-50% precision and around 30% F1-score. For spouse, the performance is significantly lower, reasons are discussed below. Furthermore, we can observe that using manual ground truth as training data for the child relation can boost performance considerably. Still, the performance is significantly below the state-of-the-art in fact extraction, where child triples can be extracted from Wikipedia text with 96% precision (Palomares et al., 2016). As shown by the last row of Table 2, higher quality of training data can considerably boost the performance of cardinality extraction. | [1, 1, 1, 1, 1, 2, 1] | ['Table 2 shows the performance of our CRF-based method in finding the correct relation cardinality, evaluated on manually annotated 20 (has part), 100 (admin. terr.
entity) and 200 (child and spouse) randomly selected subjects that have at least one object.', 'The random-number baseline achieves a precision of 5% (has part), 3.5% (admin. territ. entity), 0% (spouse) and 11.2% (child).', 'Compared to that, especially using only-nummod, our method gives encouraging results for has part, admin. territ. entity and child, with 30-50% precision and around 30% F1-score.', 'For spouse, the performance is significantly lower, reasons are discussed below.', 'Furthermore, we can observe that using manual ground truth as training data for the child relation can boost performance considerably.', 'Still, the performance is significantly below the state-of-the-art in fact extraction, where child triples can be extracted from Wikipedia text with 96% precision (Palomares et al., 2016).', 'As shown by the last row of Table 2, higher quality of training data can considerably boost the performance of cardinality extraction.'] | [None, ['baseline', 'P', 'has part (creative work series)', 'contains admin. terr. entity', 'spouse', 'child'], ['only-nummod', 'P', 'F1'], ['spouse'], ['child (manual ground truth)'], ['child (manual ground truth)'], ['child (manual ground truth)']] | 1 |
P17-2059table_1 | Test-set accuracies obtained; results except the AGT are drawn from (Lei et al., 2015). | 1 | [['AGT'], ['high-order CNN'], ['tree-LSTM'], ['DRNN'], ['PVEC'], ['DCNN'], ['DAN'], ['CNN-MC'], ['CNN'], ['RNTN'], ['NBoW'], ['RNN'], ['SVM']] | 1 | [['Accuracy']] | [['50.5'], ['51.2'], ['51.0'], ['49.8'], ['48.7'], ['48.5'], ['48.2'], ['47.4'], ['47.2'], ['45.7'], ['44.5'], ['43.2'], ['38.3']] | column | ['Accuracy'] | ['AGT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>AGT</td> <td>50.5</td> </tr> <tr> <td>high-order CNN</td> <td>51.2</td> </tr> <tr> <td>tree-LSTM</td> <td>51.0</td> </tr> <tr> <td>DRNN</td> <td>49.8</td> </tr> <tr> <td>PVEC</td> <td>48.7</td> </tr> <tr> <td>DCNN</td> <td>48.5</td> </tr> <tr> <td>DAN</td> <td>48.2</td> </tr> <tr> <td>CNN-MC</td> <td>47.4</td> </tr> <tr> <td>CNN</td> <td>47.2</td> </tr> <tr> <td>RNTN</td> <td>45.7</td> </tr> <tr> <td>NBoW</td> <td>44.5</td> </tr> <tr> <td>RNN</td> <td>43.2</td> </tr> <tr> <td>SVM</td> <td>38.3</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2059 | 3 | acl2017 | Table 1 presents the test-set accuracies obtained by different strategies. Results in Table 1 indicate that the AGT method achieved very competitive accuracy (with 50.5%), when compared to the state-of-the-art results obtained by the tree-LSTM (51.0%) (Tai et al., 2015, Zhu et al., 2015) and high-order CNN approaches (51.2%) (Lei et al., 2015). | [1, 1] | ['Table 1 presents the test-set accuracies obtained by different strategies.', 'Results in Table 1 indicate that the AGT method achieved very competitive accuracy (with 50.5%), when compared to the state-of-the-art results obtained by the tree-LSTM (51.0%) (Tai et al., 2015, Zhu et al., 2015) and high-order CNN approaches (51.2%) (Lei et al., 2015).'] | [None, ['AGT', 'tree-LSTM', 'high-order CNN']] | 1 |
P17-2060table_1 | Translation results (BLEU score) for different machine translation and system combination methods. Jane is a open source machine translation system combination toolkit that uses confusion network decoding. Best and important results per category are highlighted. | 2 | [['System', 'PBMT'], ['System', 'HPMT'], ['System', 'NMT'], ['System', 'Jane (Freitag et al. 2014)'], ['System', 'Multi'], ['System', 'Multi+Source'], ['System', 'Multi+Ensemble'], ['System', 'Multi+Source+Ensemble']] | 1 | [['MT03'], ['MT04'], ['MT05'], ['MT06'], ['Average']] | [['37.47', '41.20', '36.41', '36.03', '37.78'], ['38.05', '41.47', '36.86', '36.04', '38.10'], ['37.91', '38.95', '36.02', '36.65', '37.38'], ['39.83', '42.75', '38.63', '39.10', '40.08'], ['40.64', '44.81', '38.80', '38.26', '40.63'], ['42.16', '45.51', '40.28', '39.03', '41.75'], ['41.67', '45.95', '40.37', '39.02', '41.75'], ['43.55', '47.09', '42.02', '41.10', '43.44']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Multi', 'Multi+Source', 'Multi+Ensemble', 'Multi+Source+Ensemble'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>System || PBMT</td> <td>37.47</td> <td>41.20</td> <td>36.41</td> <td>36.03</td> <td>37.78</td> </tr> <tr> <td>System || HPMT</td> <td>38.05</td> <td>41.47</td> <td>36.86</td> <td>36.04</td> <td>38.10</td> </tr> <tr> <td>System || NMT</td> <td>37.91</td> <td>38.95</td> <td>36.02</td> <td>36.65</td> <td>37.38</td> </tr> <tr> <td>System || Jane (Freitag et al. 
2014)</td> <td>39.83</td> <td>42.75</td> <td>38.63</td> <td>39.10</td> <td>40.08</td> </tr> <tr> <td>System || Multi</td> <td>40.64</td> <td>44.81</td> <td>38.80</td> <td>38.26</td> <td>40.63</td> </tr> <tr> <td>System || Multi+Source</td> <td>42.16</td> <td>45.51</td> <td>40.28</td> <td>39.03</td> <td>41.75</td> </tr> <tr> <td>System || Multi+Ensemble</td> <td>41.67</td> <td>45.95</td> <td>40.37</td> <td>39.02</td> <td>41.75</td> </tr> <tr> <td>System || Multi+Source+Ensemble</td> <td>43.55</td> <td>47.09</td> <td>42.02</td> <td>41.10</td> <td>43.44</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2060 | 3 | acl2017 | We compare our neural combination system with the best individual engines, and the state-of-the-art traditional combination system Jane (Freitag et al., 2014). Table 1 shows the BLEU of different models on development data and test data. The BLEU score of the multi-source neural combination model is 2.53 higher than the best single model HPMT. The source language input gives a further improvement of +1.12 BLEU points. As shown in Table 1, Jane outperforms the best single MT system by 1.92 BLEU points. However, our neural combination system with source language gets an improvement of 1.67 BLEU points over Jane. Furthermore, when augmenting our neural combination system with ensemble decoding 2, it leads to another significant boost of +1.69 BLEU points. 
| [1, 1, 1, 1, 1, 1, 1] | ['We compare our neural combination system with the best individual engines, and the state-of-the-art traditional combination system Jane (Freitag et al., 2014).', 'Table 1 shows the BLEU of different models on development data and test data.', 'The BLEU score of the multi-source neural combination model is 2.53 higher than the best single model HPMT.', 'The source language input gives a further improvement of +1.12 BLEU points.', 'As shown in Table 1, Jane outperforms the best single MT system by 1.92 BLEU points.', 'However, our neural combination system with source language gets an improvement of 1.67 BLEU points over Jane.', 'Furthermore, when augmenting our neural combination system with ensemble decoding 2, it leads to another significant boost of +1.69 BLEU points.'] | [['System'], None, ['Multi', 'HPMT', 'Average'], ['Multi+Source'], ['Jane (Freitag et al. 2014)'], ['Jane (Freitag et al. 2014)', 'Multi+Ensemble'], ['Multi+Source+Ensemble']] | 1 |
P17-2066table_2 | Experimental results of Japanese caption generation. The numbers in boldface indicate the best score for each evaluation measure. | 1 | [['En-generator → MT'], ['Ja-generator']] | 1 | [['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['ROUGE_L'], ['CIDEr']] | [['0.565', '0.330', '0.204', '0.127', '0.449', '0.324'], ['0.763', '0.614', '0.492', '0.385', '0.553', '0.883']] | column | ['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'ROUGE_L', 'CIDEr'] | ['Ja-generator'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>ROUGE_L</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>En-generator → MT</td> <td>0.565</td> <td>0.330</td> <td>0.204</td> <td>0.127</td> <td>0.449</td> <td>0.324</td> </tr> <tr> <td>Ja-generator</td> <td>0.763</td> <td>0.614</td> <td>0.492</td> <td>0.385</td> <td>0.553</td> <td>0.883</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2066 | 5 | acl2017 | Table 2 summarizes the experimental results. The results show that Ja-generator, that is, the approach in which Japanese captions were used as training data, outperformed En-generator → MT, which was trained without Japanese captions. | [1, 1] | ['Table 2 summarizes the experimental results.', 'The results show that Ja-generator, that is, the approach in which Japanese captions were used as training data, outperformed En-generator → MT, which was trained without Japanese captions.'] | [None, ['Ja-generator', 'En-generator → MT']] | 1 |
P17-2069table_3 | Comparison of DI algorithms. ‡ denotes statistical significance at p < 0.01 in comparison to the method without DI, * denotes statistical significance at p < 0.01 in comparison to standard DI and † denotes statistical significance at p < 0.05 in comparison to standard DI. | 2 | [['APT configuration', 'None'], ['APT configuration', 'Standard DI'], ['APT configuration', 'Offset Inference']] | 2 | [['ML10', 'AN'], ['ML10', 'NN'], ['ML10', 'VO'], ['ML10', 'Avg'], ['ML08', 'VO']] | [['0.35', '0.50', '0.39', '0.41', '0.22'], ['0.48', '0.51', '0.43', '0.47', '0.29'], ['0.49', '0.52', '0.44', '0.48', '0.31']] | column | ['correlation', 'correlation', 'correlation', 'correlation', 'correlation'] | ['Offset Inference'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ML10 || AN</th> <th>ML10 || NN</th> <th>ML10 || VO</th> <th>ML10 || Avg</th> <th>ML08 || VO</th> </tr> </thead> <tbody> <tr> <td>APT configuration || None</td> <td>0.35</td> <td>0.50</td> <td>0.39</td> <td>0.41</td> <td>0.22</td> </tr> <tr> <td>APT configuration || Standard DI</td> <td>0.48</td> <td>0.51</td> <td>0.43</td> <td>0.47</td> <td>0.29</td> </tr> <tr> <td>APT configuration || Offset Inference</td> <td>0.49</td> <td>0.52</td> <td>0.44</td> <td>0.48</td> <td>0.31</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2069 | 5 | acl2017 | Table 3 shows that both forms of distributional inference significantly outperform a baseline without DI. On average, offset inference outperforms the method of Kober et al. (2016) by a statistically significant margin on both datasets. | [1, 1] | ['Table 3 shows that both forms of distributional inference significantly outperform a baseline without DI.', 'On average, offset inference outperforms the method of Kober et al. (2016) by a statistically significant margin on both datasets.'] | [None, ['Offset Inference']] | 1 |
P17-2070table_2 | Spearman’s rank correlation performance for the Word Similarity task on SCWS. | 2 | [['Model', 'SGE + C (Mikolov et al. 2013a)'], ['Model', 'MSSG (Neelakantan et al. 2014)'], ['Model', 'HTLE'], ['Model', 'HTLE add'], ['Model', 'STLE']] | 2 | [['Dimension', '100'], ['Dimension', '300'], ['Dimension', '600']] | [['0.59', '0.59', '0.62'], ['0.60', '0.61', '0.64'], ['0.63', '0.56', '0.55'], ['0.61', '0.61', '0.58'], ['0.59', '0.58', '0.55']] | column | ['correlation', 'correlation', 'correlation'] | ['HTLE', 'HTLE add'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dimension || 100</th> <th>Dimension || 300</th> <th>Dimension || 600</th> </tr> </thead> <tbody> <tr> <td>Model || SGE + C (Mikolov et al. 2013a)</td> <td>0.59</td> <td>0.59</td> <td>0.62</td> </tr> <tr> <td>Model || MSSG (Neelakantan et al. 2014)</td> <td>0.60</td> <td>0.61</td> <td>0.64</td> </tr> <tr> <td>Model || HTLE</td> <td>0.63</td> <td>0.56</td> <td>0.55</td> </tr> <tr> <td>Model || HTLE add</td> <td>0.61</td> <td>0.61</td> <td>0.58</td> </tr> <tr> <td>Model || STLE</td> <td>0.59</td> <td>0.58</td> <td>0.55</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2070 | 5 | acl2017 | Table 2 provides the Spearman’s correlation scores for different models against the human ranking. We see that with dimensions 100 and 300, two of our models obtain improvements over the baseline. The MSSG model of Neelakantan et al. (2014) performs only slightly better than our HTLE model while requiring considerably more parameters (600 vs. 100 embedding size). | [1, 1, 1] | ['Table 2 provides the Spearman’s correlation scores for different models against the human ranking.', 'We see that with dimensions 100 and 300, two of our models obtain improvements over the baseline.', 'The MSSG model of Neelakantan et al. (2014) performs only slightly better than our HTLE model while requiring considerably more parameters (600 vs.
100 embedding size).'] | [None, ['100', '300', 'HTLE', 'HTLE add'], ['MSSG (Neelakantan et al. 2014)', 'HTLE', '600', '100']] | 1 |
P17-2080table_1 | Metric-based Evaluation. SCENE1-A is set to generate generic responses, so it makes no sense to measure it with embedding-based metrics | 2 | [['Model', 'LM'], ['Model', 'HRED'], ['Model', 'SPHRED'], ['Model', 'VHRED'], ['Model', 'SCENE1-A'], ['Model', 'SCENE1-B'], ['Model', 'SCENE2-A'], ['Model', 'SCENE2-B']] | 1 | [['Average'], ['Greedy'], ['Extrema'], ['Accuracy']] | [['0.360', '0.348', '0.310', '-'], ['0.429', '0.466', '0.383', '-'], ['0.468', '0.478', '0.434', '-'], ['0.403', '0.432', '0.374', '-'], ['-', '-', '-', '90.9%'], ['0.426', '0.432', '0.396', '86.9%'], ['0.465', '0.440', '0.428', '99.8%'], ['0.463', '0.437', '0.420', '99.2%']] | column | ['Average', 'Greedy', 'Extrema', 'Accuracy'] | ['SCENE1-A', 'SCENE1-B', 'SCENE2-A', 'SCENE2-B'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Average</th> <th>Greedy</th> <th>Extrema</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || LM</td> <td>0.360</td> <td>0.348</td> <td>0.310</td> <td>-</td> </tr> <tr> <td>Model || HRED</td> <td>0.429</td> <td>0.466</td> <td>0.383</td> <td>-</td> </tr> <tr> <td>Model || SPHRED</td> <td>0.468</td> <td>0.478</td> <td>0.434</td> <td>-</td> </tr> <tr> <td>Model || VHRED</td> <td>0.403</td> <td>0.432</td> <td>0.374</td> <td>-</td> </tr> <tr> <td>Model || SCENE1-A</td> <td>-</td> <td>-</td> <td>-</td> <td>90.9%</td> </tr> <tr> <td>Model || SCENE1-B</td> <td>0.426</td> <td>0.432</td> <td>0.396</td> <td>86.9%</td> </tr> <tr> <td>Model || SCENE2-A</td> <td>0.465</td> <td>0.440</td> <td>0.428</td> <td>99.8%</td> </tr> <tr> <td>Model || SCENE2-B</td> <td>0.463</td> <td>0.437</td> <td>0.420</td> <td>99.2%</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2080 | 4 | acl2017 | As can be seen from Table 1, SPHRED outperforms both HRED and LM over all the three embedding-based metrics. 
This implies separating the single-line context RNN into two independent parts can actually lead to a better context representation. It is worth mentioning the size of context RNN hidden states in SPHRED is only half of that in HRED, but it still behaves better with fewer parameters. Hence it is reasonable to apply this context information to our framework. The last 4 rows in Table 1 display the results of our framework applied in two scenarios mentioned in Section 2.3 and 2.4. SCENE1-A and SCENE1-B correspond to Scenario 1 with the label fixed as 1 and 0. 90.9% of generated responses in SCENE1-A are generic and 86.9% in SCENE1-B are non-generic according to the manually-built rule, which verified the proper effect of the label. SCENE2-A and SCENE2-B correspond to rule 1 and 2 in Scenario 2. Both successfully predict the sentiment with very minor mismatches (0.2% and 0.8%). The high accuracy further demonstrated SPHRED’s capability of maintaining individual context information. We also experimented by substituting the encoder with a normal HRED, the resulting model cannot predict the correct sentiment at all because the context information is highly mingled for both speakers. The embedding based scores of our framework are still comparable with SPHRED and even better than VHRED. Imposing an external label didn’t bring any significant quality decline. 
| [1, 2, 2, 2, 1, 2, 2, 1, 1, 2, 1, 2] | ['As can be seen from Table 1, SPHRED outperforms both HRED and LM over all the three embedding-based metrics.', 'This implies separating the single-line context RNN into two independent parts can actually lead to a better context representation.', 'It is worth mentioning the size of context RNN hidden states in SPHRED is only half of that in HRED, but it still behaves better with fewer parameters.', 'Hence it is reasonable to apply this context information to our framework.', 'The last 4 rows in Table 1 display the results of our framework applied in two scenarios mentioned in Section 2.3 and 2.4.', 'SCENE1-A and SCENE1-B correspond to Scenario 1 with the label fixed as 1 and 0. 90.9% of generated responses in SCENE1-A are generic and 86.9% in SCENE1-B are non-generic according to the manually-built rule, which verified the proper effect of the label.', 'SCENE2-A and SCENE2-B correspond to rule 1 and 2 in Scenario 2.', 'Both successfully predict the sentiment with very minor mismatches (0.2% and 0.8%).', 'The high accuracy further demonstrated SPHRED’s capability of maintaining individual context information.', 'We also experimented by substituting the encoder with a normal HRED, the resulting model cannot predict the correct sentiment at all because the context information is highly mingled for both speakers.', 'The embedding based scores of our framework are still comparable with SPHRED and even better than VHRED.', 'Imposing an external label didn’t bring any significant quality decline.'] | [['SPHRED', 'HRED', 'LM'], None, ['SPHRED', 'HRED'], None, None, ['SCENE1-A', 'SCENE1-B'], ['SCENE2-A', 'SCENE2-B'], ['SCENE1-A', 'SCENE1-B'], ['SPHRED'], ['HRED'], ['SCENE1-A', 'SCENE1-B', 'SCENE2-A', 'SCENE2-B', 'SPHRED', 'HRED'], None] | 1 |
P17-2081table_4 | Results on SICK after finetuning. The first row is only trained on SICK. * indicates ensemble method. | 2 | [['Pretrained dataset / Previous work', '-'], ['Pretrained dataset / Previous work', 'SQuAD-T'], ['Pretrained dataset / Previous work', 'SQuAD'], ['Pretrained dataset / Previous work', 'SQuAD*'], ['Pretrained dataset / Previous work', 'SNLI'], ['Pretrained dataset / Previous work', 'SQuAD-T + SNLI'], ['Pretrained dataset / Previous work', 'SQuAD + SNLI'], ['Pretrained dataset / Previous work', 'SQuAD + SNLI*'], ['Pretrained dataset / Previous work', 'Yin et al. (2016)'], ['Pretrained dataset / Previous work', 'Lai and Hockenmaier (2014)'], ['Pretrained dataset / Previous work', 'Zhao et al. (2014)'], ['Pretrained dataset / Previous work', 'Jimenez et al. (2014)'], ['Pretrained dataset / Previous work', 'Mou et al. (2016)'], ['Pretrained dataset / Previous work', 'Mou et al. (2016) (pretrained on SNLI)']] | 1 | [['Accuracy']] | [['77.96'], ['81.49'], ['82.86'], ['84.38'], ['83.20'], ['85.00'], ['86.63'], ['88.22'], ['86.2'], ['84.57'], ['83.64'], ['83.05'], ['70.9'], ['77.6']] | column | ['Accuracy'] | ['SQuAD + SNLI*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Pretrained dataset / Previous work || -</td> <td>77.96</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD-T</td> <td>81.49</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD</td> <td>82.86</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD*</td> <td>84.38</td> </tr> <tr> <td>Pretrained dataset / Previous work || SNLI</td> <td>83.20</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD-T + SNLI</td> <td>85.00</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD + SNLI</td> <td>86.63</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD + SNLI*</td> <td>88.22</td> </tr> <tr> <td>Pretrained dataset / Previous work || Yin et 
al. (2016)</td> <td>86.2</td> </tr> <tr> <td>Pretrained dataset / Previous work || Lai and Hockenmaier (2014)</td> <td>84.57</td> </tr> <tr> <td>Pretrained dataset / Previous work || Zhao et al. (2014)</td> <td>83.64</td> </tr> <tr> <td>Pretrained dataset / Previous work || Jimenez et al. (2014)</td> <td>83.05</td> </tr> <tr> <td>Pretrained dataset / Previous work || Mou et al. (2016)</td> <td>70.9</td> </tr> <tr> <td>Pretrained dataset / Previous work || Mou et al. (2016) (pretrained on SNLI)</td> <td>77.6</td> </tr> </tbody></table> | Table 4 | table_4 | P17-2081 | 5 | acl2017 | Table 4 shows the transfer learning results of BiDAF-T on SICK dataset (Marelli et al., 2014), with various pretraining routines. Note that SNLI (Bowman et al., 2015) is a similar task to SICK and is significantly larger (150K/10K/10K train/dev/test examples). Here we highlight three observations. (a) BiDAF-T pretrained on SQuAD outperforms that without any pretraining by 6% and that pretrained on SQuAD-T by 2%, which demonstrates that the transfer learning from large span-based QA gives a clear improvement. (b) Pretraining on SQuAD+SNLI outperforms pretraining on SNLI only. Given that SNLI is larger than SQuAD, the difference in their performance is a strong indicator that we are benefiting from not only the scale of SQuAD, but also the fine-grained supervision that it provides. (c) We outperform the previous state of the art by 2% with the ensemble of SQuAD+SNLI pretraining routine. It is worth noting that Mou et al. (2016) also shows improvement on SICK by pretraining on SNLI. 
| [1, 2, 2, 1, 1, 2, 1, 2] | ['Table 4 shows the transfer learning results of BiDAF-T on SICK dataset (Marelli et al., 2014), with various pretraining routines.', 'Note that SNLI (Bowman et al., 2015) is a similar task to SICK and is significantly larger (150K/10K/10K train/dev/test examples).', 'Here we highlight three observations.', '(a) BiDAF-T pretrained on SQuAD outperforms that without any pretraining by 6% and that pretrained on SQuAD-T by 2%, which demonstrates that the transfer learning from large span-based QA gives a clear improvement.', '(b) Pretraining on SQuAD+SNLI outperforms pretraining on SNLI only.', 'Given that SNLI is larger than SQuAD, the difference in their performance is a strong indicator that we are benefiting from not only the scale of SQuAD, but also the fine-grained supervision that it provides.', '(c) We outperform the previous state of the art by 2% with the ensemble of SQuAD+SNLI pretraining routine.', 'It is worth noting that Mou et al. (2016) also shows improvement on SICK by pretraining on SNLI.'] | [None, None, None, ['SQuAD', 'SQuAD-T'], ['SQuAD + SNLI', 'SNLI'], ['SNLI', 'SQuAD'], ['SQuAD + SNLI*'], None] | 1 |
P17-2083table_1 | Comparison of our model variants on the MapTask corpus. | 1 | [['no attn.'], ['traditional'], ['gated attn.']] | 1 | [['without HMM'], ['gate bias HMM'], ['gate all HMM']] | [['60.97%', '64.60%', '63.55%'], ['61.72%', '64.73%', '65.19%'], ['62.21%', '65.94%', '65.94%']] | column | ['Accuracy', 'Accuracy', 'Accuracy'] | ['gated attn.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>without HMM</th> <th>gate bias HMM</th> <th>gate all HMM</th> </tr> </thead> <tbody> <tr> <td>no attn.</td> <td>60.97%</td> <td>64.60%</td> <td>63.55%</td> </tr> <tr> <td>traditional</td> <td>61.72%</td> <td>64.73%</td> <td>65.19%</td> </tr> <tr> <td>gated attn.</td> <td>62.21%</td> <td>65.94%</td> <td>65.94%</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2083 | 4 | acl2017 | Table 1 shows the classification accuracy of the nine variants of our model on the MapTask corpus. Table 1 shows that adding the attention mechanism is beneficial, as the traditional attention models always outperform their non-attention counterparts. The gated attention configurations, in turn, outperform those with the traditional attention mechanism by 0.49%-1.21%. As seen in Table 1, the performance gain from the HMM connection is larger than the gain from the attention mechanism. Without the attention mechanism, the HMM connection brings an increase of 3.63% with the gated bias HMM configuration and 2.58% with the fully gated HMM configuration. With the use of traditional attention, the improvement is 3.01% for the bias HMM configuration and 3.47% for the gated HMM configuration. Finally with the gated attention in place, the two HMM configurations improve the accuracy by 3.73%. 
| [1, 1, 1, 1, 1, 1, 1] | ['Table 1 shows the classification accuracy of the nine variants of our model on the MapTask corpus.', 'Table 1 shows that adding the attention mechanism is beneficial, as the traditional attention models always outperform their non-attention counterparts.', 'The gated attention configurations, in turn, outperform those with the traditional attention mechanism by 0.49%-1.21%.', 'As seen in Table 1, the performance gain from the HMM connection is larger than the gain from the attention mechanism.', 'Without the attention mechanism, the HMM connection brings an increase of 3.63% with the gated bias HMM configuration and 2.58% with the fully gated HMM configuration.', 'With the use of traditional attention, the improvement is 3.01% for the bias HMM configuration and 3.47% for the gated HMM configuration.', 'Finally with the gated attention in place, the two HMM configurations improve the accuracy by 3.73%.'] | [None, ['gated attn.', 'traditional', 'no attn.'], ['gated attn.', 'traditional'], ['gate bias HMM', 'gate all HMM', 'traditional', 'gated attn.'], ['no attn.', 'gate bias HMM', 'gate all HMM'], ['traditional', 'gate bias HMM', 'gate all HMM'], ['gated attn.', 'gate bias HMM', 'gate all HMM']] | 1 |
P17-2085table_4 | Overall performance (%). R, P, and F represent recall, precision, and F1 score, respectively. | 2 | [['Category', 'President'], ['Category', 'Company'], ['Category', 'University'], ['Category', 'State'], ['Category', 'Character'], ['Category', 'Brand'], ['Category', 'Restaurant'], ['Category', 'Overall']] | 2 | [['Complete', 'R'], ['Complete', 'P'], ['Complete', 'F'], ['Balanced Subset', 'R'], ['Balanced Subset', 'P'], ['Balanced Subset', 'F']] | [['94.6', '89.9', '92.2', '87.2', '80.4', '83.7'], ['86.6', '95.8', '91.0', '90.8', '85.2', '87.9'], ['96.7', '96.4', '96.5', '96.9', '92.0', '94.4'], ['96.2', '92.1', '94.1', '95.0', '58.6', '72.5'], ['92.5', '61.3', '73.7', '92.8', '52.2', '66.8'], ['89.6', '90.2', '89.9', '86.7', '83.2', '84.9'], ['87.0', '81.4', '84.1', '86.9', '88.1', '87.5'], ['95.2', '93.4', '94.3', '93.1', '81.6', '87.0']] | column | ['R', 'P', 'F', 'R', 'P', 'F'] | ['Balanced Subset'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Complete || R</th> <th>Complete || P</th> <th>Complete || F</th> <th>Balanced Subset || R</th> <th>Balanced Subset || P</th> <th>Balanced Subset || F</th> </tr> </thead> <tbody> <tr> <td>Category || President</td> <td>94.6</td> <td>89.9</td> <td>92.2</td> <td>87.2</td> <td>80.4</td> <td>83.7</td> </tr> <tr> <td>Category || Company</td> <td>86.6</td> <td>95.8</td> <td>91.0</td> <td>90.8</td> <td>85.2</td> <td>87.9</td> </tr> <tr> <td>Category || University</td> <td>96.7</td> <td>96.4</td> <td>96.5</td> <td>96.9</td> <td>92.0</td> <td>94.4</td> </tr> <tr> <td>Category || State</td> <td>96.2</td> <td>92.1</td> <td>94.1</td> <td>95.0</td> <td>58.6</td> <td>72.5</td> </tr> <tr> <td>Category || Character</td> <td>92.5</td> <td>61.3</td> <td>73.7</td> <td>92.8</td> <td>52.2</td> <td>66.8</td> </tr> <tr> <td>Category || Brand</td> <td>89.6</td> <td>90.2</td> <td>89.9</td> <td>86.7</td> <td>83.2</td> <td>84.9</td> </tr> <tr> <td>Category || Restaurant</td> 
<td>87.0</td> <td>81.4</td> <td>84.1</td> <td>86.9</td> <td>88.1</td> <td>87.5</td> </tr> <tr> <td>Category || Overall</td> <td>95.2</td> <td>93.4</td> <td>94.3</td> <td>93.1</td> <td>81.6</td> <td>87.0</td> </tr> </tbody></table> | Table 4 | table_4 | P17-2085 | 4 | acl2017 | As Table 4 demonstrates, our method shows promising results (87.0 F1 score) on the balanced data set. Nevertheless, we notice the low linking precisions for entities in the Character and State lists, which are caused by different reasons. For the Character list, mentions do not suffice to select high-quality seeds, whereas for the State list, features of referential and non-referential mentions are usually similar. | [1, 2, 2] | ['As Table 4 demonstrates, our method shows promising results (87.0 F1 score) on the balanced data set.', 'Nevertheless, we notice the low linking precisions for entities in the Character and State lists, which are caused by different reasons.', 'For the Character list, mentions do not suffice to select high-quality seeds, whereas for the State list, features of referential and non-referential mentions are usually similar.'] | [['Balanced Subset', 'Overall', 'F'], None, None] | 1 |
P17-2095table_1 | Results of comparing several segmentation strategies. | 2 | [['# SEG', 'UNSEG'], ['# SEG', 'MORPH'], ['# SEG', 'cCNN'], ['# SEG', 'CHAR'], ['# SEG', 'BPE']] | 2 | [['Arabic-to-English', 'tst11'], ['Arabic-to-English', 'tst12'], ['Arabic-to-English', 'tst13'], ['Arabic-to-English', 'tst14'], ['Arabic-to-English', 'AVG.'], ['English-to-Arabic', 'tst11'], ['English-to-Arabic', 'tst12'], ['English-to-Arabic', 'tst13'], ['English-to-Arabic', 'tst14'], ['English-to-Arabic', 'AVG.']] | [['25.7', '28.2', '27.3', '23.9', '26.3', '15.8', '17.1', '18.1', '15.5', '16.6'], ['29.2', '33', '32.9', '28.3', '30.9', '16.5', '18.8', '20.4', '17.2', '18.2'], ['29', '32', '32.5', '27.8', '30.2', '14.3', '12.8', '13.6', '12.6', '13.3'], ['28.8', '31.8', '32.5', '27.8', '30.2', '15.3', '17.1', '18', '15.3', '16.4'], ['29.7', '32.5', '33.6', '28.4', '31.1', '17.5', '18', '20', '16.6', '18']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['# SEG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Arabic-to-English || tst11</th> <th>Arabic-to-English || tst12</th> <th>Arabic-to-English || tst13</th> <th>Arabic-to-English || tst14</th> <th>Arabic-to-English || AVG.</th> <th>English-to-Arabic || tst11</th> <th>English-to-Arabic || tst12</th> <th>English-to-Arabic || tst13</th> <th>English-to-Arabic || tst14</th> <th>English-to-Arabic || AVG.</th> </tr> </thead> <tbody> <tr> <td># SEG || UNSEG</td> <td>25.7</td> <td>28.2</td> <td>27.3</td> <td>23.9</td> <td>26.3</td> <td>15.8</td> <td>17.1</td> <td>18.1</td> <td>15.5</td> <td>16.6</td> </tr> <tr> <td># SEG || MORPH</td> <td>29.2</td> <td>33</td> <td>32.9</td> <td>28.3</td> <td>30.9</td> <td>16.5</td> <td>18.8</td> <td>20.4</td> <td>17.2</td> <td>18.2</td> </tr> <tr> <td># SEG || cCNN</td> <td>29</td> <td>32</td> <td>32.5</td> <td>27.8</td> <td>30.2</td> <td>14.3</td> <td>12.8</td> <td>13.6</td> <td>12.6</td> <td>13.3</td> </tr> <tr> 
<td># SEG || CHAR</td> <td>28.8</td> <td>31.8</td> <td>32.5</td> <td>27.8</td> <td>30.2</td> <td>15.3</td> <td>17.1</td> <td>18</td> <td>15.3</td> <td>16.4</td> </tr> <tr> <td># SEG || BPE</td> <td>29.7</td> <td>32.5</td> <td>33.6</td> <td>28.4</td> <td>31.1</td> <td>17.5</td> <td>18</td> <td>20</td> <td>16.6</td> <td>18</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2095 | 3 | acl2017 | Table 1 presents MT results using various segmentation strategies. Compared to the UNSEG system, the MORPH system improved translation quality by 4.6 and 1.6 BLEU points in Arabic-to-English and English-to-Arabic systems, respectively. The results also improved by up to 3 BLEU points for cCNN and CHAR systems in the Arabic-to-English direction. However, the performance is lower by at least 0.6 BLEU points compared to the MORPH system. In the English-to-Arabic direction, where cCNN and CHAR are applied on the target side, the performance dropped significantly. In the case of CHAR, mapping one source word to many target characters makes it harder for NMT to learn a good model. This is in line with our finding on using a lower value of OP for BPE segmentation (see paragraph Analyzing the effect of OP). Surprisingly, the cCNN system results were inferior to the UNSEG system for English-to-Arabic. A possible explanation is that the decoder’s predictions are still done at word level even when using the cCNN model (which encodes the target input during training but not the output). In practice, this can lead to generating unknown words. Indeed, in the Ar-toEn case cCNN significantly reduces the unknown words in the test sets, while in the English-to-Arabic case the number of unknown words remains roughly the same between UNSEG and cCNN. The BPE system outperformed all other systems in the Ar-to-En direction and is lower than MORPH by only 0.2 BLEU points in the opposite direction. 
This shows that machine translation involving the Arabic language can achieve competitive results with data-driven segmentation. | [1, 1, 1, 1, 1, 2, 2, 1, 2, 2, 1, 1, 2] | ['Table 1 presents MT results using various segmentation strategies.', 'Compared to the UNSEG system, the MORPH system improved translation quality by 4.6 and 1.6 BLEU points in Arabic-to-English and English-to-Arabic systems, respectively.', 'The results also improved by up to 3 BLEU points for cCNN and CHAR systems in the Arabic-to-English direction.', 'However, the performance is lower by at least 0.6 BLEU points compared to the MORPH system.', 'In the English-to-Arabic direction, where cCNN and CHAR are applied on the target side, the performance dropped significantly.', 'In the case of CHAR, mapping one source word to many target characters makes it harder for NMT to learn a good model.', 'This is in line with our finding on using a lower value of OP for BPE segmentation (see paragraph Analyzing the effect of OP).', 'Surprisingly, the cCNN system results were inferior to the UNSEG system for English-to-Arabic.', 'A possible explanation is that the decoder’s predictions are still done at word level even when using the cCNN model (which encodes the target input during training but not the output).', 'In practice, this can lead to generating unknown words.', 'Indeed, in the Ar-toEn case cCNN significantly reduces the unknown words in the test sets, while in the English-to-Arabic case the number of unknown words remains roughly the same between UNSEG and cCNN.', 'The BPE system outperformed all other systems in the Ar-to-En direction and is lower than MORPH by only 0.2 BLEU points in the opposite direction.', 'This shows that machine translation involving the Arabic language can achieve competitive results with data-driven segmentation.'] | [None, ['UNSEG', 'MORPH', 'Arabic-to-English'], ['cCNN', 'CHAR', 'Arabic-to-English'], ['MORPH', 'Arabic-to-English'], ['English-to-Arabic', 'cCNN', 
'CHAR'], ['CHAR'], ['BPE'], ['cCNN', 'UNSEG', 'English-to-Arabic'], ['cCNN'], ['cCNN'], ['Arabic-to-English', 'cCNN', 'English-to-Arabic', 'UNSEG'], ['BPE', 'Arabic-to-English', 'MORPH'], None] | 1 |
P17-2096table_4 | Comparison with previous models. Results with * are from (Cai and Zhao, 2016).4 | 2 | [['Models', '(Zhao and Kit 2008c)'], ['Models', '(Chen et al. 2015a)'], ['Models', '(Chen et al. 2015b)'], ['Models', '(Ma and Hinrichs 2015)'], ['Models', '(Zhang et al. 2016)'], ['Models', '(Liu et al. 2016)'], ['Models', '(Cai and Zhao 2016)'], ['Models', 'Our results']] | 2 | [['PKU', 'F1 + pre-train'], ['PKU', 'F1'], ['PKU', 'Training (hours)'], ['PKU', 'Test (sec.)'], ['MSR', 'F1 + pre-train'], ['MSR', 'F1'], ['MSR', 'Training (hours)'], ['MSR', 'Test (sec.)']] | [['-', '95.4', '-', '-', '-', '97.6', '-', '-'], ['94.5', '94.4', '50', '105', '95.4', '95.1', '100', '120'], ['94.8', '94.3', '58', '105', '95.6', '95.0', '117', '120'], ['-', '95.1', '1.5', '24', '-', '96.6', '3', '28'], ['95.1', '-', '6', '110', '97.0', '-', '13', '125'], ['93.91', '-', '-', '-', '95.21', '-', '-', '-'], ['95.5', '95.2', '48', '95', '96.5', '96.4', '96', '105'], ['95.8', '95.4', '3', '25', '97.1', '97.0', '6', '30']] | column | ['F1 + pre-train', 'F1', 'Training (hours)', 'Test (sec.)', 'F1 + pre-train', 'F1', 'Training (hours)', 'Test (sec.)'] | ['Our results'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PKU || F1 + pre-train</th> <th>PKU || F1</th> <th>PKU || Training (hours)</th> <th>PKU || Test (sec.)</th> <th>MSR || F1 + pre-train</th> <th>MSR || F1</th> <th>MSR || Training (hours)</th> <th>MSR || Test (sec.)</th> </tr> </thead> <tbody> <tr> <td>Models || (Zhao and Kit 2008c)</td> <td>-</td> <td>95.4</td> <td>-</td> <td>-</td> <td>-</td> <td>97.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Models || (Chen et al. 2015a)</td> <td>94.5</td> <td>94.4</td> <td>50</td> <td>105</td> <td>95.4</td> <td>95.1</td> <td>100</td> <td>120</td> </tr> <tr> <td>Models || (Chen et al. 
2015b)</td> <td>94.8</td> <td>94.3</td> <td>58</td> <td>105</td> <td>95.6</td> <td>95.0</td> <td>117</td> <td>120</td> </tr> <tr> <td>Models || (Ma and Hinrichs 2015)</td> <td>-</td> <td>95.1</td> <td>1.5</td> <td>24</td> <td>-</td> <td>96.6</td> <td>3</td> <td>28</td> </tr> <tr> <td>Models || (Zhang et al. 2016)</td> <td>95.1</td> <td>-</td> <td>6</td> <td>110</td> <td>97.0</td> <td>-</td> <td>13</td> <td>125</td> </tr> <tr> <td>Models || (Liu et al. 2016)</td> <td>93.91</td> <td>-</td> <td>-</td> <td>-</td> <td>95.21</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Models || (Cai and Zhao 2016)</td> <td>95.5</td> <td>95.2</td> <td>48</td> <td>95</td> <td>96.5</td> <td>96.4</td> <td>96</td> <td>105</td> </tr> <tr> <td>Models || Our results</td> <td>95.8</td> <td>95.4</td> <td>3</td> <td>25</td> <td>97.1</td> <td>97.0</td> <td>6</td> <td>30</td> </tr> </tbody></table> | Table 4 | table_4 | P17-2096 | 5 | acl2017 | Table 4 compares our final results (greedy search is adopted by setting k=1) to prior neural models. Pre-training character embeddings on large scale unlabeled corpus (not limited to the training corpus) has been shown helpful for extra performance improvement. The results with or without pre trained character embeddings are listed separately for following the strict closed test setting of SIGHAN Bakeoff in which no linguistic resource other than training corpus is allowed. We also show the state of the art results in (Zhao and Kit, 2008b) of traditional methods. The comparison shows our neural word segmenter outperforms all state of the art neural systems with much less computational cost. 
| [1, 1, 2, 1, 1] | ['Table 4 compares our final results (greedy search is adopted by setting k=1) to prior neural models.', 'Pre-training character embeddings on large scale unlabeled corpus (not limited to the training corpus) has been shown helpful for extra performance improvement.', 'The results with or without pre trained character embeddings are listed separately for following the strict closed test setting of SIGHAN Bakeoff in which no linguistic resource other than training corpus is allowed.', 'We also show the state of the art results in (Zhao and Kit, 2008b) of traditional methods.', 'The comparison shows our neural word segmenter outperforms all state of the art neural systems with much less computational cost.'] | [None, ['F1 + pre-train'], ['F1 + pre-train'], ['(Zhao and Kit 2008c)'], ['Our results']] | 1 |
P17-2097table_3 | Final results. ∗ = estimate from 100; see Section 6.1. ‡ = from Mostafazadeh et al. (2016). | 1 | [['DSSM‡'], ['UW (Schwartz et al. 2017b)'], ['UW (ending only)'], ['trigram LM (estimated from stories)'], ['trigram LM (estimated from endings)'], ['Our model (HIER ENCPLOTEND ATT)'], ['Our model (ending only)'], ['Human‡ (story + ending)'], ['Human (ending only)']] | 1 | [['val'], ['test']] | [['60.4', '58.5'], ['-', '75.2'], ['-', '72.4'], ['52.4', '53.6'], ['53.8', '54.6'], ['-', '74.7'], ['-', '72.5'], ['100', '100'], ['78', '-']] | column | ['accuracy', 'accuracy'] | ['Our model (HIER ENCPLOTEND ATT)', 'Our model (ending only)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>val</th> <th>test</th> </tr> </thead> <tbody> <tr> <td>DSSM‡</td> <td>60.4</td> <td>58.5</td> </tr> <tr> <td>UW (Schwartz et al. 2017b)</td> <td>-</td> <td>75.2</td> </tr> <tr> <td>UW (ending only)</td> <td>-</td> <td>72.4</td> </tr> <tr> <td>trigram LM (estimated from stories)</td> <td>52.4</td> <td>53.6</td> </tr> <tr> <td>trigram LM (estimated from endings)</td> <td>53.8</td> <td>54.6</td> </tr> <tr> <td>Our model (HIER ENCPLOTEND ATT)</td> <td>-</td> <td>74.7</td> </tr> <tr> <td>Our model (ending only)</td> <td>-</td> <td>72.5</td> </tr> <tr> <td>Human‡ (story + ending)</td> <td>100</td> <td>100</td> </tr> <tr> <td>Human (ending only)</td> <td>78</td> <td>-</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2097 | 4 | acl2017 | Table 3 shows final results. We report the best result from Mostafazadeh et al. (2016), the best result from the concurrently held LSDSem shared task (Schwartz et al., 2017b), and our final system configuration (with decisions tuned via cross validation as shown in Tables 1-2, then using the model with the best held-out fold accuracy). Our model achieves 74.7%, which is close to the state of the art result of 75.2%. 
We also report the results of stripping away the plots and running our system on just the endings (“ending only”). We use the FLAT BiLSTM model on the ending followed by the feed forward scoring function, using the same loss as above for training. We again use 5 fold cross validation on the validation set and choose the model with the highest held out fold accuracy. We achieve 72.5%, matching the similar ending-only result of Schwartz et al. (2017b). | [1, 2, 1, 1, 2, 2, 1] | ['Table 3 shows final results.', 'We report the best result from Mostafazadeh et al. (2016), the best result from the concurrently held LSDSem shared task (Schwartz et al., 2017b), and our final system configuration (with decisions tuned via cross validation as shown in Tables 1-2, then using the model with the best held-out fold accuracy).', 'Our model achieves 74.7%, which is close to the state of the art result of 75.2%.', 'We also report the results of stripping away the plots and running our system on just the endings (“ending only”).', 'We use the FLAT BiLSTM model on the ending followed by the feed forward scoring function, using the same loss as above for training.', 'We again use 5 fold cross validation on the validation set and choose the model with the highest held out fold accuracy.', 'We achieve 72.5%, matching the similar ending-only result of Schwartz et al. (2017b).'] | [None, ['UW (Schwartz et al. 2017b)', 'Our model (HIER ENCPLOTEND ATT)', 'Our model (ending only)'], ['Our model (HIER ENCPLOTEND ATT)', 'UW (Schwartz et al. 2017b)'], ['Our model (ending only)'], None, None, ['Our model (ending only)', 'UW (Schwartz et al. 2017b)']] | 1 |
P17-2100table_1 | Results of our model and baseline systems. Our models achieve substantial improvement of all ROUGE scores over baseline systems. (W: Word level; C: Character level). | 2 | [['Model', 'RNN (W) (Hu et al. 2015)'], ['Model', 'RNN (C) (Hu et al. 2015)'], ['Model', 'RNN context (W) (Hu et al. 2015)'], ['Model', 'RNN context (C) (Hu et al. 2015)'], ['Model', 'RNN context + SRB (C)'], ['Model', '+Attention (C)']] | 1 | [['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']] | [['17.7', '8.5', '15.8'], ['21.5', '8.9', '18.6'], ['26.8', '16.1', '24.1'], ['29.9', '17.4', '27.2'], ['32.1', '18.9', '29.2'], ['33.3', '20.0', '30.1']] | column | ['ROUGE-1', 'ROUGE-2', 'ROUGE-L'] | ['RNN context + SRB (C)', '+Attention (C)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Model || RNN (W) (Hu et al. 2015)</td> <td>17.7</td> <td>8.5</td> <td>15.8</td> </tr> <tr> <td>Model || RNN (C) (Hu et al. 2015)</td> <td>21.5</td> <td>8.9</td> <td>18.6</td> </tr> <tr> <td>Model || RNN context (W) (Hu et al. 2015)</td> <td>26.8</td> <td>16.1</td> <td>24.1</td> </tr> <tr> <td>Model || RNN context (C) (Hu et al. 2015)</td> <td>29.9</td> <td>17.4</td> <td>27.2</td> </tr> <tr> <td>Model || RNN context + SRB (C)</td> <td>32.1</td> <td>18.9</td> <td>29.2</td> </tr> <tr> <td>Model || +Attention (C)</td> <td>33.3</td> <td>20.0</td> <td>30.1</td> </tr> </tbody></table> | Table 1 | table_1 | P17-2100 | 4 | acl2017 | We compare our model with the above baseline systems, including RNN and RNN context. We refer to our proposed Semantic Relevance Based neural model as SRB. Besides, SRB with a gated attention encoder is denoted as +Attention. Table 1 shows the results of our models and baseline systems. We can see SRB outperforms both RNN and RNN context in the F-score of ROUGE-1, ROUGE-2 and ROUGE-L. This indicates that SRB generates more key words and phrases.
With a gated attention encoder, SRB achieves a better performance with 33.3 F-score of ROUGE-1, 20.0 ROUGE-2 and 30.1 ROUGE-L. | [2, 2, 2, 1, 1, 2, 1] | ['We compare our model with the above baseline systems, including RNN and RNN context.', 'We refer to our proposed Semantic Relevance Based neural model as SRB.', 'Besides, SRB with a gated attention encoder is denoted as +Attention.', 'Table 1 shows the results of our models and baseline systems.', 'We can see SRB outperforms both RNN and RNN context in the F-score of ROUGE-1, ROUGE-2 and ROUGE-L.', 'This indicates that SRB generates more key words and phrases.', 'With a gated attention encoder, SRB achieves a better performance with 33.3 F-score of ROUGE-1, 20.0 ROUGE-2 and 30.1 ROUGE-L.'] | [['Model'], ['RNN context + SRB (C)'], ['+Attention (C)'], None, ['RNN context + SRB (C)', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['RNN context + SRB (C)'], ['+Attention (C)', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L']] | 1
P17-2100table_2 | Results of our model and state-of-the-art systems. COPYNET incorporates a copying mechanism to solve the out-of-vocabulary problem, so it has higher ROUGE scores. Our model does not incorporate this mechanism currently. In future work, we will implement this technique to further improve the performance. (Word: Word level; Char: Character level; R-1: F-score of ROUGE-1; R-2: F-score of ROUGE-2; R-L: F-score of ROUGE-L) | 4 | [['Model', 'RNN context (Hu et al. 2015)', 'level', 'Word'], ['Model', 'RNN context (Hu et al. 2015)', 'level', 'Char'], ['Model', 'COPYNET (Gu et al. 2016)', 'level', 'Word'], ['Model', 'COPYNET (Gu et al. 2016)', 'level', 'Char'], ['Model', 'this work', 'level', 'Char']] | 1 | [['R-1'], ['R-2'], ['R-L']] | [['26.8', '16.1', '24.1'], ['29.9', '17.4', '27.2'], ['35.0', '22.3', '32.0'], ['34.4', '21.6', '31.3'], ['33.3', '20.0', '30.1']] | column | ['R-1', 'R-2', 'R-L'] | ['this work'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Model || RNN context (Hu et al. 2015) || level || Word</td> <td>26.8</td> <td>16.1</td> <td>24.1</td> </tr> <tr> <td>Model || RNN context (Hu et al. 2015) || level || Char</td> <td>29.9</td> <td>17.4</td> <td>27.2</td> </tr> <tr> <td>Model || COPYNET (Gu et al. 2016) || level || Word</td> <td>35.0</td> <td>22.3</td> <td>32.0</td> </tr> <tr> <td>Model || COPYNET (Gu et al. 2016) || level || Char</td> <td>34.4</td> <td>21.6</td> <td>31.3</td> </tr> <tr> <td>Model || this work || level || Char</td> <td>33.3</td> <td>20.0</td> <td>30.1</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2100 | 4 | acl2017 | Table 2 summarizes the results of our model and state of the art systems. COPYNET has the highest scores, because it incorporates a copying mechanism to deal with the out-of-vocabulary word problem. In this paper, we do not implement this mechanism in our model.
In future work, we will try to incorporate a copying mechanism into our model to solve the out-of-vocabulary problem. | [1, 1, 2, 2] | ['Table 2 summarizes the results of our model and state of the art systems.', 'COPYNET has the highest scores, because it incorporates a copying mechanism to deal with the out-of-vocabulary word problem.', 'In this paper, we do not implement this mechanism in our model.', 'In future work, we will try to incorporate a copying mechanism into our model to solve the out-of-vocabulary problem.'] | [None, ['COPYNET (Gu et al. 2016)'], ['this work'], ['this work']] | 1
P17-2102table_2 | Classification results: predicting suspicion and verified posts reported as A – accuracy, AP – average precision, ROC – the area under the receiver operator characteristics curve, and inferring types of suspicious news reported using F1 micro and F1 macro scores. | 3 | [['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', 'Tweets'], ['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', ' + network'], ['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', ' + cues'], ['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', 'ALL'], ['Features', 'BASELINE 2: LOGISTIC REGRESSION (TFIDF)', 'Tweets'], ['Features', 'BASELINE 2: LOGISTIC REGRESSION (TFIDF)', ' + network'], ['Features', 'BASELINE 2: LOGISTIC REGRESSION (TFIDF)', ' + cues'], ['Features', 'BASELINE 2: LOGISTIC REGRESSION (TFIDF)', 'ALL'], ['Features', 'RECURRENT NEURAL NETWORK', 'Tweets'], ['Features', 'RECURRENT NEURAL NETWORK', ' + network'], ['Features', 'RECURRENT NEURAL NETWORK', ' + cues'], ['Features', 'RECURRENT NEURAL NETWORK', ' + syntax'], ['Features', 'RECURRENT NEURAL NETWORK', 'ALL'], ['Features', 'CONVOLUTIONAL NEURAL NETWORK', 'Tweets'], ['Features', 'CONVOLUTIONAL NEURAL NETWORK', ' + network'], ['Features', 'CONVOLUTIONAL NEURAL NETWORK', ' + cues'], ['Features', 'CONVOLUTIONAL NEURAL NETWORK', 'ALL']] | 2 | [['BINARY', 'A'], ['BINARY', 'ROC'], ['BINARY', 'AP'], ['MULTI-CLASS', 'F1'], ['MULTI-CLASS', 'F1 macro']] | [['0.65', '0.70', '0.68', '0.82', '0.40'], ['0.72', '0.80', '0.82', '0.88', '0.57'], ['0.69', '0.74', '0.73', '0.83', '0.46'], ['0.75', '0.84', '0.84', '0.88', '0.59'], ['0.72', '0.81', '0.81', '0.84', '0.48'], ['0.78', '0.87', '0.88', '0.88', '0.59'], ['0.75', '0.85', '0.85', '0.86', '0.49'], ['0.79', '0.88', '0.89', '0.89', '0.59'], ['0.78', '0.87', '0.88', '0.90', '0.63'], ['0.83', '0.91', '0.92', '0.92', '0.71'], ['0.93', '0.98', '0.99', '0.90', '0.63'], ['0.93', '0.96', '0.96', '0.90', '0.64'], ['0.95', '0.99', '0.99', '0.91', '0.66'], ['0.76', 
'0.85', '0.87', '0.91', '0.63'], ['0.81', '0.9', '0.91', '0.92', '0.70'], ['0.93', '0.98', '0.98', '0.90', '0.61'], ['0.95', '0.98', '0.99', '0.91', '0.64']] | column | ['A', 'ROC', 'AP', 'F1', 'F1 macro'] | ['RECURRENT NEURAL NETWORK', 'CONVOLUTIONAL NEURAL NETWORK'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BINARY || A</th> <th>BINARY || ROC</th> <th>BINARY || AP</th> <th>MULTI-CLASS || F1</th> <th>MULTI-CLASS || F1 macro</th> </tr> </thead> <tbody> <tr> <td>Features || BASELINE 1: LOGISTIC REGRESSION (DOC2VEC) || Tweets</td> <td>0.65</td> <td>0.70</td> <td>0.68</td> <td>0.82</td> <td>0.40</td> </tr> <tr> <td>Features || BASELINE 1: LOGISTIC REGRESSION (DOC2VEC) || + network</td> <td>0.72</td> <td>0.80</td> <td>0.82</td> <td>0.88</td> <td>0.57</td> </tr> <tr> <td>Features || BASELINE 1: LOGISTIC REGRESSION (DOC2VEC) || + cues</td> <td>0.69</td> <td>0.74</td> <td>0.73</td> <td>0.83</td> <td>0.46</td> </tr> <tr> <td>Features || BASELINE 1: LOGISTIC REGRESSION (DOC2VEC) || ALL</td> <td>0.75</td> <td>0.84</td> <td>0.84</td> <td>0.88</td> <td>0.59</td> </tr> <tr> <td>Features || BASELINE 2: LOGISTIC REGRESSION (TFIDF) || Tweets</td> <td>0.72</td> <td>0.81</td> <td>0.81</td> <td>0.84</td> <td>0.48</td> </tr> <tr> <td>Features || BASELINE 2: LOGISTIC REGRESSION (TFIDF) || + network</td> <td>0.78</td> <td>0.87</td> <td>0.88</td> <td>0.88</td> <td>0.59</td> </tr> <tr> <td>Features || BASELINE 2: LOGISTIC REGRESSION (TFIDF) || + cues</td> <td>0.75</td> <td>0.85</td> <td>0.85</td> <td>0.86</td> <td>0.49</td> </tr> <tr> <td>Features || BASELINE 2: LOGISTIC REGRESSION (TFIDF) || ALL</td> <td>0.79</td> <td>0.88</td> <td>0.89</td> <td>0.89</td> <td>0.59</td> </tr> <tr> <td>Features || RECURRENT NEURAL NETWORK || Tweets</td> <td>0.78</td> <td>0.87</td> <td>0.88</td> <td>0.90</td> <td>0.63</td> </tr> <tr> <td>Features || RECURRENT NEURAL NETWORK || + network</td> <td>0.83</td> <td>0.91</td> <td>0.92</td> <td>0.92</td> 
<td>0.71</td> </tr> <tr> <td>Features || RECURRENT NEURAL NETWORK || + cues</td> <td>0.93</td> <td>0.98</td> <td>0.99</td> <td>0.90</td> <td>0.63</td> </tr> <tr> <td>Features || RECURRENT NEURAL NETWORK || + syntax</td> <td>0.93</td> <td>0.96</td> <td>0.96</td> <td>0.90</td> <td>0.64</td> </tr> <tr> <td>Features || RECURRENT NEURAL NETWORK || ALL</td> <td>0.95</td> <td>0.99</td> <td>0.99</td> <td>0.91</td> <td>0.66</td> </tr> <tr> <td>Features || CONVOLUTIONAL NEURAL NETWORK || Tweets</td> <td>0.76</td> <td>0.85</td> <td>0.87</td> <td>0.91</td> <td>0.63</td> </tr> <tr> <td>Features || CONVOLUTIONAL NEURAL NETWORK || + network</td> <td>0.81</td> <td>0.9</td> <td>0.91</td> <td>0.92</td> <td>0.70</td> </tr> <tr> <td>Features || CONVOLUTIONAL NEURAL NETWORK || + cues</td> <td>0.93</td> <td>0.98</td> <td>0.98</td> <td>0.90</td> <td>0.61</td> </tr> <tr> <td>Features || CONVOLUTIONAL NEURAL NETWORK || ALL</td> <td>0.95</td> <td>0.98</td> <td>0.99</td> <td>0.91</td> <td>0.64</td> </tr> </tbody></table> | Table 2 | table_2 | P17-2102 | 4 | acl2017 | Table 2 presents classification results for Task 1 (binary) suspicious vs. verified news posts and Task 2 (multi-class) four types of suspicious tweets e.g., propaganda, hoaxes, satire and clickbait. We report performance for different model and feature combinations. We find that our neural network models (both CNNs and RNNs) significantly outperform logistic regression baselines learned from all feature combinations. The accuracy improvement for the binary task is 0.2 and F1 macro boost for the multi-class task is 0.07. We also observe that all models learned from network and tweet text signals outperform models trained exclusively on tweets. We report 0.05 accuracy improvement for Task 1, and 0.02 F1 boost for Task 2. Adding linguistic cues to basic tweet representations significantly improves results across all models. 
Finally, by combining basic content with network and linguistic features via late fusion, our neural network models achieve best results in binary experiments. Interestingly, models perform best in the multiclass case when trained on tweet embeddings and fused network features alone. Syntax and grammar features have been predictive of deception in the product review domain (Feng et al., 2012, Perez Rosas and Mihalcea, 2015). However, unlike earlier work we find that fusing these features into our models significantly decreases performance by 0.02 accuracy for the binary task and 0.02 F1 for multi-class. This may be explained by the domain differences between reviews and tweets which are shorter, more noisy and difficult to parse. | [1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2] | ['Table 2 presents classification results for Task 1 (binary) suspicious vs. verified news posts and Task 2 (multi-class) four types of suspicious tweets e.g., propaganda, hoaxes, satire and clickbait.', 'We report performance for different model and feature combinations.', 'We find that our neural network models (both CNNs and RNNs) significantly outperform logistic regression baselines learned from all feature combinations.', 'The accuracy improvement for the binary task is 0.2 and F1 macro boost for the multi-class task is 0.07.', 'We also observe that all models learned from network and tweet text signals outperform models trained exclusively on tweets.', 'We report 0.05 accuracy improvement for Task 1, and 0.02 F1 boost for Task 2.', 'Adding linguistic cues to basic tweet representations significantly improves results across all models.', 'Finally, by combining basic content with network and linguistic features via late fusion, our neural network models achieve best results in binary experiments.', 'Interestingly, models perform best in the multiclass case when trained on tweet embeddings and fused network features alone.', 'Syntax and grammar features have been predictive of deception in the 
product review domain (Feng et al., 2012, Perez Rosas and Mihalcea, 2015).', 'However, unlike earlier work we find that fusing these features into our models significantly decreases performance by 0.02 accuracy for the binary task and 0.02 F1 for multi-class.', 'This may be explained by the domain differences between reviews and tweets which are shorter, more noisy and difficult to parse.'] | [None, None, ['RECURRENT NEURAL NETWORK', 'CONVOLUTIONAL NEURAL NETWORK', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', 'BASELINE 2: LOGISTIC REGRESSION (TFIDF)'], ['RECURRENT NEURAL NETWORK', 'CONVOLUTIONAL NEURAL NETWORK', 'BINARY', 'F1 macro', 'MULTI-CLASS'], None, ['Tweets'], [' + cues'], ['ALL', 'BINARY'], [' + network', 'MULTI-CLASS'], [' + syntax'], [' + syntax', 'BINARY', 'F1', 'MULTI-CLASS'], [' + syntax']] | 1 |
P17-2103table_3 | Performance of Classifiers | 2 | [['Performance', 'Precision'], ['Performance', 'Recall'], ['Performance', 'F1']] | 1 | [['CF Parser'], ['Rules Only'], ['SVM']] | [['0.7131', '0.5864', '0.2381'], ['0.8365', '0.9134', '0.9135'], ['0.7699', '0.7143', '0.3777']] | row | ['Precision', 'Recall', 'F1'] | ['CF Parser'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CF Parser</th> <th>Rules Only</th> <th>SVM</th> </tr> </thead> <tbody> <tr> <td>Performance || Precision</td> <td>0.7131</td> <td>0.5864</td> <td>0.2381</td> </tr> <tr> <td>Performance || Recall</td> <td>0.8365</td> <td>0.9134</td> <td>0.9135</td> </tr> <tr> <td>Performance || F1</td> <td>0.7699</td> <td>0.7143</td> <td>0.3777</td> </tr> </tbody></table> | Table 3 | table_3 | P17-2103 | 4 | acl2017 | Thus, in order to make the classifier robust to the imbalanced dataset, we designed a rule based model with counter-factual forms, which resulted in significantly higher F1 than statistical model. Moreover the rule based model captures positive samples of all possible forms which might not exist in the training set. A combined approach gives the best result. As Table 3 shows our whole pipeline (‘CF Parser’ in Table 3) obtained the best overall performance with the combination of both approaches. | [2, 2, 1, 1] | ['Thus, in order to make the classifier robust to the imbalanced dataset, we designed a rule based model with counter-factual forms, which resulted in significantly higher F1 than statistical model.', 'Moreover the rule based model captures positive samples of all possible forms which might not exist in the training set.', 'A combined approach gives the best result.', 'As Table 3 shows our whole pipeline (‘CF Parser’ in Table 3) obtained the best overall performance with the combination of both approaches.'] | [['Rules Only'], ['Rules Only'], ['CF Parser'], ['CF Parser']] | 1 |
P18-1001table_4 | Word similarity evaluation on foreign languages. | 2 | [['FR', 'WS353'], ['DE', 'GUR350'], ['DE', 'GUR65'], ['IT', 'WS353'], ['IT', 'SL-999']] | 1 | [['FASTTEXT'], ['w2g'], ['w2gm'], ['pft-g'], ['pft-gm']] | [['38.2', '16.73', '20.09', '41', '41.3'], ['70', '65.01', '69.26', '77.6', '78.2'], ['81', '74.94', '76.89', '81.8', '85.2'], ['57.1', '56.02', '61.09', '60.2', '62.5'], ['29.3', '29.44', '34.91', '29.3', '33.7']] | column | ['similarity', 'similarity', 'similarity', 'similarity', 'similarity'] | ['FASTTEXT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FASTTEXT</th> <th>w2g</th> <th>w2gm</th> <th>pft-g</th> <th>pft-gm</th> </tr> </thead> <tbody> <tr> <td>FR || WS353</td> <td>38.2</td> <td>16.73</td> <td>20.09</td> <td>41</td> <td>41.3</td> </tr> <tr> <td>DE || GUR350</td> <td>70</td> <td>65.01</td> <td>69.26</td> <td>77.6</td> <td>78.2</td> </tr> <tr> <td>DE || GUR65</td> <td>81</td> <td>74.94</td> <td>76.89</td> <td>81.8</td> <td>85.2</td> </tr> <tr> <td>IT || WS353</td> <td>57.1</td> <td>56.02</td> <td>61.09</td> <td>60.2</td> <td>62.5</td> </tr> <tr> <td>IT || SL-999</td> <td>29.3</td> <td>29.44</td> <td>34.91</td> <td>29.3</td> <td>33.7</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1001 | 8 | acl2018 | Table 4 shows the Spearman’s correlation results of our models. We outperform FASTTEXT on many word similarity benchmarks. Our results are also significantly better than the dictionary-based models, W2G and W2GM. We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents. We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense separation. For example, piano in Italian can mean “floor” or “slow”.
These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean “slowly” whereas the other component is close to piani(floors), istrutturazione (renovation) or infrastruttre (infrastructure). | [1, 1, 1, 1, 2, 2, 2] | ['Table 4 shows the Spearman’s correlation results of our models.', 'We outperform FASTTEXT on many word similarity benchmarks.', 'Our results are also significantly better than the dictionary-based models, W2G and W2GM.', 'We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.', 'We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense separation.', 'For example, piano in Italian can mean “floor” or “slow”.', 'These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean “slowly” whereas the other component is close to piani(floors), istrutturazione (renovation) or infrastruttre (infrastructure).'] | [None, ['FASTTEXT'], ['w2g', 'w2gm'], ['w2g', 'w2gm'], None, None, None] | 1
P18-1002table_1 | Comparison with baselines and nonce2vec (Herbelot and Baroni, 2017) on few-shot embedding tasks. Performance on the chimeras task is measured using the Spearman correlation with human ratings. Note that the additive baseline requires removing stop-words in order to improve with more data. | 2 | [['Method', 'word2vec'], ['Method', 'additive'], ['Method', 'additive, no stop words'], ['Method', 'nonce2vec'], ['Method', 'a la carte']] | 2 | [['Nonce (Herbelot and Baroni, 2017)', 'Mean Recip. Rank'], ['Nonce (Herbelot and Baroni, 2017)', 'Med. Rank'], ['Chimera (Lazaridou et al., 2017)', 'Spearman correlation 2 Sent.'], ['Chimera (Lazaridou et al., 2017)', 'Spearman correlation 4 Sent.'], ['Chimera (Lazaridou et al., 2017)', 'Spearman correlation 6 Sent.']] | [['0.00007', '111012', '0.1459', '0.2457', '0.2498'], ['0.00945', '3381', '0.3627', '0.3701', '0.3595'], ['0.03686', '861', '0.3376', '0.3624', '0.408'], ['0.04907', '623', '0.332', '0.3668', '0.389'], ['0.07058', '165.5', '0.3634', '0.3844', '0.3941']] | column | ['Mean Recip. Rank', 'Med.Rank', 'Spearman correlation 2 Sent.', 'Spearman correlation 4 Sent.', 'Spearman correlation 6 Sent.'] | ['nonce2vec'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Nonce (Herbelot and Baroni, 2017) || Mean Recip. Rank</th> <th>Nonce (Herbelot and Baroni, 2017) || Med. 
Rank</th> <th>Chimera (Lazaridou et al., 2017) || Spearman correlation 2 Sent.</th> <th>Chimera (Lazaridou et al., 2017) || Spearman correlation 4 Sent.</th> <th>Chimera (Lazaridou et al., 2017) || Spearman correlation 6 Sent.</th> </tr> </thead> <tbody> <tr> <td>Method || word2vec</td> <td>0.00007</td> <td>111012</td> <td>0.1459</td> <td>0.2457</td> <td>0.2498</td> </tr> <tr> <td>Method || additive</td> <td>0.00945</td> <td>3381</td> <td>0.3627</td> <td>0.3701</td> <td>0.3595</td> </tr> <tr> <td>Method || additive, no stop words</td> <td>0.03686</td> <td>861</td> <td>0.3376</td> <td>0.3624</td> <td>0.408</td> </tr> <tr> <td>Method || nonce2vec</td> <td>0.04907</td> <td>623</td> <td>0.332</td> <td>0.3668</td> <td>0.389</td> </tr> <tr> <td>Method || a la carte</td> <td>0.07058</td> <td>165.5</td> <td>0.3634</td> <td>0.3844</td> <td>0.3941</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1002 | 6 | acl2018 | We use the same approach as in the nonce task, except that the chimera embedding is the result of summing over multiple sentences. From Table 1 we see that, while our method is consistently better than both the additive baseline and nonce2vec, removing stop-words from the additive baseline leads to stronger performance for more sentences. Since the à la carte algorithm explicitly trains the transform to match the true word embedding rather than human similarity measures, it is perhaps not surprising that our approach is much more dominant on the definitional nonce task.
| [2, 1, 1] | ['We use the same approach as in the nonce task, except that the chimera embedding is the result of summing over multiple sentences.', 'From Table 1 we see that, while our method is consistently better than both the additive baseline and nonce2vec, removing stop-words from the additive baseline leads to stronger performance for more sentences.', 'Since the à la carte algorithm explicitly trains the transform to match the true word embedding rather than human similarity measures, it is perhaps not surprising that our approach is much more dominant on the definitional nonce task.'] | [None, ['nonce2vec'], ['a la carte']] | 1 |
P18-1002table_4 | Performance of document embeddings built using à la carte n-gram vectors and recent unsupervised word-level approaches on classification tasks, with the character LSTM of (Radford et al., 2017) shown for comparison. Top three results are bolded and the best word-level performance is underlined. | 2 | [['Representation', 'BonG'], ['Representation', 'BonG'], ['Representation', 'BonG'], ['Representation', 'a la carte'], ['Representation', 'a la carte'], ['Representation', 'a la carte'], ['Representation', 'Sent2Vec'], ['Representation', 'DisC'], ['Representation', 'skip-thoughts'], ['Representation', 'SDAE'], ['Representation', 'CNN-LSTM'], ['Representation', 'MC-QT'], ['Representation', 'byte mLSTM']] | 1 | [['MR'], ['CR'], ['SUBJ'], ['MPQA'], ['TREC'], ['SST (±1)'], ['SST'], ['IMDB']] | [['77.1', '77', '91', '85.1', '86.8', '80.7', '36.8', '88.3'], ['77.8', '78.1', '91.8', '85.8', '90', '80.9', '39', '90'], ['77.8', '78.3', '91.4', '85.6', '89.8', '80.1', '42.3', '89.8'], ['79.8', '81.3', '92.6', '87.4', '85.6', '84.1', '46.7', '89'], ['81.3', '83.7', '93.5', '87.6', '89', '85.8', '47.8', '90.3'], ['81.8', '84.3', '93.8', '87.6', '89', '86.7', '48.1', '90.9'], ['76.3', '79.1', '91.2', '87.2', '85.8', '80.2', '31', '85.5'], ['80.1', '81.5', '92.6', '87.9', '90', '85.5', '46.7', '89.6'], ['80.3', '83.8', '94.2', '88.9', '93', '85.1', '45.8', '-'], ['74.6', '78', '90.8', '86.9', '78.4', '-', '-', '-'], ['77.8', '82', '93.6', '89.4', '92.6', '-', '-', '-'], ['82.4', '86', '94.8', '90.2', '92.4', '87.6', '-', '-'], ['86.8', '90.6', '94.7', '88.8', '90.4', '91.7', '54.6', '92.2']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['a la carte'] |
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>CR</th> <th>SUBJ</th> <th>MPQA</th> <th>TREC</th> <th>SST (±1)</th> <th>SST</th> <th>IMDB</th> </tr> </thead> <tbody> <tr> <td>Representation || BonG</td> <td>77.1</td> <td>77</td> <td>91</td> <td>85.1</td> <td>86.8</td> <td>80.7</td> <td>36.8</td> <td>88.3</td> </tr> <tr> <td>Representation || BonG</td> <td>77.8</td> <td>78.1</td> <td>91.8</td> <td>85.8</td> <td>90</td> <td>80.9</td> <td>39</td> <td>90</td> </tr> <tr> <td>Representation || BonG</td> <td>77.8</td> <td>78.3</td> <td>91.4</td> <td>85.6</td> <td>89.8</td> <td>80.1</td> <td>42.3</td> <td>89.8</td> </tr> <tr> <td>Representation || a la carte</td> <td>79.8</td> <td>81.3</td> <td>92.6</td> <td>87.4</td> <td>85.6</td> <td>84.1</td> <td>46.7</td> <td>89</td> </tr> <tr> <td>Representation || a la carte</td> <td>81.3</td> <td>83.7</td> <td>93.5</td> <td>87.6</td> <td>89</td> <td>85.8</td> <td>47.8</td> <td>90.3</td> </tr> <tr> <td>Representation || a la carte</td> <td>81.8</td> <td>84.3</td> <td>93.8</td> <td>87.6</td> <td>89</td> <td>86.7</td> <td>48.1</td> <td>90.9</td> </tr> <tr> <td>Representation || Sent2Vec</td> <td>76.3</td> <td>79.1</td> <td>91.2</td> <td>87.2</td> <td>85.8</td> <td>80.2</td> <td>31</td> <td>85.5</td> </tr> <tr> <td>Representation || DisC</td> <td>80.1</td> <td>81.5</td> <td>92.6</td> <td>87.9</td> <td>90</td> <td>85.5</td> <td>46.7</td> <td>89.6</td> </tr> <tr> <td>Representation || skip-thoughts</td> <td>80.3</td> <td>83.8</td> <td>94.2</td> <td>88.9</td> <td>93</td> <td>85.1</td> <td>45.8</td> <td>-</td> </tr> <tr> <td>Representation || SDAE</td> <td>74.6</td> <td>78</td> <td>90.8</td> <td>86.9</td> <td>78.4</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Representation || CNN-LSTM</td> <td>77.8</td> <td>82</td> <td>93.6</td> <td>89.4</td> <td>92.6</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Representation || MC-QT</td> <td>82.4</td> <td>86</td> <td>94.8</td> <td>90.2</td> <td>92.4</td> <td>87.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Representation || byte mLSTM</td> <td>86.8</td> <td>90.6</td> <td>94.7</td> <td>88.8</td> <td>90.4</td> <td>91.7</td> <td>54.6</td> <td>92.2</td> </tr> </tbody></table> |
Table 4 | table_4 | P18-1002 | 9 | acl2018 | In Table 4 we display the result of running cross-validated, ℓ2-regularized logistic regression on documents from MR movie reviews (Pang and Lee, 2005), CR customer reviews (Hu and Liu, 2004), SUBJ subjectivity dataset (Pang and Lee, 2004), MPQA opinion polarity subtask (Wiebe et al., 2005), TREC question classification (Li and Roth, 2002), SST sentiment classification (binary and fine-grained) (Socher et al., 2013), and IMDB movie reviews (Maas et al., 2011). The first four are evaluated using tenfold cross-validation, while the others have train-test splits. Despite the simplicity of our embeddings (a concatenation over sums of a la carte n-gram vectors), we find that our results are very competitive with many recent unsupervised methods, achieving the best word-level results on two of the tested datasets. | [1, 2, 1] | ['In Table 4 we display the result of running cross-validated, ℓ2-regularized logistic regression on documents from MR movie reviews (Pang and Lee, 2005), CR customer reviews (Hu and Liu, 2004), SUBJ subjectivity dataset (Pang and Lee, 2004), MPQA opinion polarity subtask (Wiebe et al., 2005), TREC question classification (Li and Roth, 2002), SST sentiment classification (binary and fine-grained) (Socher et al., 2013), and IMDB movie reviews (Maas et al., 2011).', 'The first four are evaluated using tenfold cross-validation, while the others have train-test splits.', 'Despite the simplicity of our embeddings (a concatenation over sums of a la carte n-gram vectors), we find that our results are very competitive with many recent unsupervised methods, achieving the best word-level results on two of the tested datasets.'] | [['MR', 'CR', 'SUBJ', 'MPQA', 'TREC', 'SST', 'IMDB'], None, ['a la carte']] | 1 |
P18-1003table_1 | Results for the relation induction task. | 1 | [['Acc'], ['Pre'], ['Rec'], ['F1']] | 2 | [['Google Analogy', 'Diff'], ['Google Analogy', ' Conc'], ['Google Analogy', 'Avg'], ['Google Analogy', 'R1ik'], ['Google Analogy', 'R2ik'], ['Google Analogy', 'R3ik'], ['Google Analogy', 'R4ik']] | [['90', '89', '89.9', '90', '92.3', '90.9', '90.4'], ['81.6', '78.7', '80.8', '79.9', '87.1', '83.2', '81.1'], ['82.6', '83.9', '83.9', '86', '84.8', '84.8', '85.5'], ['82.1', '81.2', '82.3', '82.8', '85.9', '84', '83.3']] | column | ['Diff', 'Conc', 'Avg', 'R1ik', 'R2ik', 'R3ik', 'R4ik'] | ['R2ik'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Google Analogy || Diff</th> <th>Google Analogy || Conc</th> <th>Google Analogy || Avg</th> <th>Google Analogy || R1ik</th> <th>Google Analogy || R2ik</th> <th>Google Analogy || R3ik</th> <th>Google Analogy || R4ik</th> </tr> </thead> <tbody> <tr> <td>Acc</td> <td>90</td> <td>89</td> <td>89.9</td> <td>90</td> <td>92.3</td> <td>90.9</td> <td>90.4</td> </tr> <tr> <td>Pre</td> <td>81.6</td> <td>78.7</td> <td>80.8</td> <td>79.9</td> <td>87.1</td> <td>83.2</td> <td>81.1</td> </tr> <tr> <td>Rec</td> <td>82.6</td> <td>83.9</td> <td>83.9</td> <td>86</td> <td>84.8</td> <td>84.8</td> <td>85.5</td> </tr> <tr> <td>F1</td> <td>82.1</td> <td>81.2</td> <td>82.3</td> <td>82.8</td> <td>85.9</td> <td>84</td> <td>83.3</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1003 | 6 | acl2018 | The results are summarized in Table 1 in terms of accuracy and (macro-averaged) precision, recall and F1 score. As can be observed, our model outperforms the baselines, with the R2ik variant outperforming the others. | [1, 1] | ['The results are summarized in Table 1 in terms of accuracy and (macro-averaged) precision, recall and F1 score.', 'As can be observed, our model outperforms the baselines, with the R2ik variant outperforming the others.'] | [None, ['R2ik']] | 1 |
P18-1004table_2 | Performance (ρ) on SL and SV for ER-CNT models trained with different constraints. | 2 | [['Constraints (ER-CNT model)', 'Synonyms only'], ['Constraints (ER-CNT model)', 'Antonyms only'], ['Constraints (ER-CNT model)', 'Synonyms + Antonyms']] | 1 | [['SL'], ['SV']] | [['0.465', '0.339'], ['0.451', '0.317'], ['0.582', '0.439']] | column | ['SL', 'SV'] | ['Constraints (ER-CNT model)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SL</th> <th>SV</th> </tr> </thead> <tbody> <tr> <td>Constraints (ER-CNT model) || Synonyms only</td> <td>0.465</td> <td>0.339</td> </tr> <tr> <td>Constraints (ER-CNT model) || Antonyms only</td> <td>0.451</td> <td>0.317</td> </tr> <tr> <td>Constraints (ER-CNT model) || Synonyms + Antonyms</td> <td>0.582</td> <td>0.439</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1004 | 7 | acl2018 | In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex999 (SL) and SimVerb-3500 (SV). We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively. Clearly, we obtain the best specialization when combining synonyms and antonyms. Note, however, that using only synonyms or only antonyms also improves
over the original distributional space. | [1, 1, 1, 2] | ['In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex999 (SL) and SimVerb-3500 (SV).', 'We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.', 'Clearly, we obtain the best specialization when combining synonyms and antonyms.', 'Note, however, that using only synonyms or only antonyms also improves over the original distributional space.'] | [['Constraints (ER-CNT model)', 'SL', 'SV'], ['Synonyms only', 'Antonyms only', 'Synonyms + Antonyms'], ['Synonyms + Antonyms'], None] | 1 |
P18-1005table_2 | The translation performance on English-German, English-French and Chinese-to-English test sets. The results of (Lample et al., 2017) are copied directly from their paper. We do not present the results of (Artetxe et al., 2017b) since we use different training sets. | 1 | [['Supervised'], ['Word-by-word'], ['Lample et al. (2017)'], ['The proposed approach']] | 1 | [['en-de'], ['de-en'], ['en-fr'], ['fr-en'], ['zh-en']] | [['24.07', '26.99', '30.5', '30.21', '40.02'], ['5.85', '9.34', '3.6', '6.8', '5.09'], ['9.64', '13.33', '15.05', '14.31', '-'], ['10.86', '14.62', '16.97', '15.58', '14.52']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['The proposed approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-de</th> <th>de-en</th> <th>en-fr</th> <th>fr-en</th> <th>zh-en</th> </tr> </thead> <tbody> <tr> <td>Supervised</td> <td>24.07</td> <td>26.99</td> <td>30.5</td> <td>30.21</td> <td>40.02</td> </tr> <tr> <td>Word-by-word</td> <td>5.85</td> <td>9.34</td> <td>3.6</td> <td>6.8</td> <td>5.09</td> </tr> <tr> <td>Lample et al. (2017)</td> <td>9.64</td> <td>13.33</td> <td>15.05</td> <td>14.31</td> <td>-</td> </tr> <tr> <td>The proposed approach</td> <td>10.86</td> <td>14.62</td> <td>16.97</td> <td>15.58</td> <td>14.52</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1005 | 7 | acl2018 | Table 2 shows the BLEU scores on English-German, English-French and English-to-Chinese test sets. As it can be seen, the proposed approach obtains significant improvements than the word-by-word baseline system, with at least +5.01 BLEU points in English-to-German translation and up to +13.37 BLEU points in English-to-French translation. This shows that the proposed model only trained with monolingual data effectively learns to use the context information and
the internal structure of each language. Compared to the work of (Lample et al., 2017), our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task. We believe that the unsupervised NMT is very promising. However, there is still a large room for improvement compared to the supervised upper bound. The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction. | [1, 1, 1, 1, 1, 1, 1] | ['Table 2 shows the BLEU scores on English-German, English-French and English-to-Chinese test sets.', 'As it can be seen, the proposed approach obtains significant improvements than the word-by-word baseline system, with at least +5.01 BLEU points in English-to-German translation and up to +13.37 BLEU points in English-to-French translation.', 'This shows that the proposed model only trained with monolingual data effectively learns to use the context information and the internal structure of each language.', 'Compared to the work of (Lample et al., 2017), our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.', 'We believe that the unsupervised NMT is very promising.', 'However, there is still a large room for improvement compared to the supervised upper bound.', 'The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.'] | [['en-de', 'en-fr', 'zh-en'], ['The proposed approach', 'en-de', 'en-fr'], ['The proposed approach'], ['The proposed approach', 'Lample et al. (2017)', 'en-fr'], ['The proposed approach'], ['The proposed approach', 'Supervised'], ['The proposed approach', 'Supervised']] | 1 |
P18-1007table_5 | Comparison of different segmentation algorithms (WMT14 en→de) | 2 | [['Model', 'Word'], ['Model', 'Character (512 nodes)'], ['Model', 'Mixed Word/Character'], ['Model', 'BPE'], ['Model', 'Unigram w/o SR (l = 1)'], ['Model', 'Unigram w/ SR (l = 64 alpha = 0.1)']] | 1 | [['BLEU']] | [['23.12'], ['22.62'], ['24.17'], ['24.53'], ['24.5'], ['25.04']] | column | ['BLEU'] | ['Unigram w/ SR (l = 64 alpha = 0.1)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Word</td> <td>23.12</td> </tr> <tr> <td>Model || Character (512 nodes)</td> <td>22.62</td> </tr> <tr> <td>Model || Mixed Word/Character</td> <td>24.17</td> </tr> <tr> <td>Model || BPE</td> <td>24.53</td> </tr> <tr> <td>Model || Unigram w/o SR (l = 1)</td> <td>24.5</td> </tr> <tr> <td>Model || Unigram w/ SR (l = 64 alpha = 0.1)</td> <td>25.04</td> </tr> </tbody></table> | Table 5 | table_5 | P18-1007 | 8 | acl2018 | Table 5 shows the comparison on different segmentation algorithms: word, character, mixed word/character (Wu et al., 2016), BPE (Sennrich et al., 2016) and our unigram model with or without subword regularization. The BLEU scores of word, character and mixed word/character models are cited from (Wu et al., 2016). As German is a morphologically rich language and needs a huge vocabulary for word models, subword-based algorithms perform a gain of more than 1 BLEU point than word model. Among subword-based algorithms, the unigram language model with subword regularization achieved the best BLEU score (25.04), which demonstrates the effectiveness of multiple subword segmentations.
| [1, 1, 1, 1] | ['Table 5 shows the comparison on different segmentation algorithms: word, character, mixed word/character (Wu et al., 2016), BPE (Sennrich et al., 2016) and our unigram model with or without subword regularization.', 'The BLEU scores of word, character and mixed word/character models are cited from (Wu et al., 2016).', 'As German is a morphologically rich language and needs a huge vocabulary for word models, subword-based algorithms perform a gain of more than 1 BLEU point than word model.', 'Among subword-based algorithms, the unigram language model with subword regularization achieved the best BLEU score (25.04), which demonstrates the effectiveness of multiple subword segmentations.'] | [['Word', 'Character (512 nodes)', 'Mixed Word/Character', 'BPE', 'Unigram w/o SR (l = 1)', 'Unigram w/ SR (l = 64 alpha = 0.1)'], ['Word', 'Character (512 nodes)', 'Mixed Word/Character'], ['Unigram w/o SR (l = 1)', 'Unigram w/ SR (l = 64 alpha = 0.1)'], ['Unigram w/ SR (l = 64 alpha = 0.1)']] | 1 |
P18-1008table_2 | Results on WMT14 En→De. Note that Transformer models are trained using 16 GPUs, while ConvS2S and RNMT+ are trained using 32 GPUs. | 2 | [['Model', 'GNMT'], ['Model', 'ConvS2S'], ['Model', 'Trans. Base'], ['Model', 'Trans. Big'], ['Model', 'RNMT+']] | 1 | [['Test BLEU'], ['Epochs'], ['Training\nTime']] | [['24.67', '-', '-'], ['25.01 ±0.17', '38', '20h'], ['27.26 ± 0.15', '38', '17h'], ['27.94 ± 0.18', '26.9', '48h'], ['28.49 ± 0.05', '24.6', '40h']] | column | ['Test BLEU', 'Epochs', 'Training Time'] | ['RNMT+'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test BLEU</th> <th>Epochs</th> <th>Training Time</th> </tr> </thead> <tbody> <tr> <td>Model || GNMT</td> <td>24.67</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || ConvS2S</td> <td>25.01 ±0.17</td> <td>38</td> <td>20h</td> </tr> <tr> <td>Model || Trans. Base</td> <td>27.26 ± 0.15</td> <td>38</td> <td>17h</td> </tr> <tr> <td>Model || Trans. Big</td> <td>27.94 ± 0.18</td> <td>26.9</td> <td>48h</td> </tr> <tr> <td>Model || RNMT+</td> <td>28.49 ± 0.05</td> <td>24.6</td> <td>40h</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1008 | 6 | acl2018 | Table 2 shows our results on the WMT’14 En→De task. The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model improves by over 3 BLEU points. RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49. In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.
| [1, 1, 1, 1] | ['Table 2 shows our results on the WMT’14 En→De task.', 'The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model improves by over 3 BLEU points.', 'RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.', 'In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.'] | [None, ['Trans. Base', 'GNMT', 'ConvS2S', 'Trans. Big'], ['RNMT+', 'Trans. Big'], ['RNMT+', 'Trans. Big']] | 1 |
P18-1008table_4 | Ablation results of RNMT+ and the Transformer Big model on WMT’14 En → Fr. We report average BLEU scores on the test set. An asterisk ’*’ indicates an unstable training run (training halts due to non-finite elements). | 2 | [['Model', 'Baseline'], ['Model', '-Label Smoothing'], ['Model', '-Multi-head Attention'], ['Model', '-Layer Norm.'], ['Model', '-Sync. Training']] | 1 | [['RNMT+'], ['Trans. Big']] | [['41', '40.73'], ['40.33', '40.49'], ['40.44', '39.83'], ['*', '*'], ['39.68', '*']] | column | ['BLEU', 'BLEU'] | ['RNMT+', 'Trans. Big'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RNMT+</th> <th>Trans. Big</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>41</td> <td>40.73</td> </tr> <tr> <td>Model || -Label Smoothing</td> <td>40.33</td> <td>40.49</td> </tr> <tr> <td>Model || -Multi-head Attention</td> <td>40.44</td> <td>39.83</td> </tr> <tr> <td>Model || -Layer Norm.</td> <td>*</td> <td>*</td> </tr> <tr> <td>Model || -Sync. Training</td> <td>39.68</td> <td>*</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1008 | 7 | acl2018 | From Table 4 we draw the following conclusions about the four techniques: We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models. • Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models. • Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used. Removing layer normalization results in unstable training runs for both models. Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.
To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters. • Synchronous training Removing synchronous training has different effects on RNMT+ and Transformer. For RNMT+, it results in a significant quality drop, while for the Transformer Big model, it causes the model to become unstable. We also notice that synchronous training is only successful when coupled with a tailored learning rate schedule that has a warmup stage at the beginning (cf. Eq. 1 for RNMT+ and Eq. 2 for Transformer). For RNMT+, removing this warmup stage during synchronous training causes the model to become unstable. | [1, 1, 1, 2, 1, 2, 1, 2, 2] | ['From Table 4 we draw the following conclusions about the four techniques:', 'We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.', '• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.', '• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used. Removing layer normalization results in unstable training runs for both models.', 'Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.', 'To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.', '• Synchronous training Removing synchronous training has different effects on RNMT+ and Transformer.
For RNMT+, it results in a significant quality drop, while for the Transformer Big model, it causes the model to become unstable.', 'We also notice that synchronous training is only successful when coupled with a tailored learning rate schedule that has a warmup stage at the beginning (cf. Eq. 1 for RNMT+ and Eq. 2 for Transformer).', ' For RNMT+, removing this warmup stage during synchronous training causes the model to become unstable.'] | [['-Label Smoothing', '-Multi-head Attention', '-Layer Norm.', '-Sync. Training'], ['-Label Smoothing', 'RNMT+', 'Trans. Big'], ['-Multi-head Attention', 'RNMT+', 'Trans. Big'], ['-Layer Norm.'], ['-Layer Norm.', 'RNMT+', 'Trans. Big'], ['-Layer Norm.'], ['-Sync. Training', 'RNMT+', 'Trans. Big'], ['-Sync. Training'], ['-Sync. Training', 'RNMT+']] | 1 |
P18-1009table_3 | Performance of our model and AttentiveNER (Shimaoka et al., 2017) on the new entity typing benchmark, using same training data. We show results for both development and test sets. | 2 | [['Model', 'AttentiveNER'], ['Model', 'Our Model']] | 2 | [['Dev', 'MRR'], ['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'MRR'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['0.221', '53.7', '15', '23.5', '0.223', '54.2', '15.2', '23.7'], ['0.229', '48.1', '23.2', '31.3', '0.234', '47.1', '24.2', '32']] | column | ['MRR', 'P', 'R', 'F1', 'MRR', 'P', 'R', 'F1'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || MRR</th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || MRR</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Model || AttentiveNER</td> <td>0.221</td> <td>53.7</td> <td>15</td> <td>23.5</td> <td>0.223</td> <td>54.2</td> <td>15.2</td> <td>23.7</td> </tr> <tr> <td>Model || Our Model</td> <td>0.229</td> <td>48.1</td> <td>23.2</td> <td>31.3</td> <td>0.234</td> <td>47.1</td> <td>24.2</td> <td>32</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1009 | 6 | acl2018 | Results Table 3 shows the performance of our model and our reimplementation of AttentiveNER. Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision. The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones. 
| [1, 1, 1] | ['Results Table 3 shows the performance of our model and our reimplementation of AttentiveNER.', 'Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.', 'The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.'] | [['Our Model', 'AttentiveNER'], ['Our Model'], ['MRR', 'Our Model', 'AttentiveNER']] | 1 |
P18-1009table_4 | Results on the development set for different type granularity and for different supervision data with our model. In each row, we remove a single source of supervision. Entity linking (EL) includes supervision from both KB and Wikipedia definitions. The numbers in the first row are example counts for each type granularity. | 2 | [['Train Data', 'All'], ['Train Data', '-Crowd'], ['Train Data', '-Head'], ['Train Data', '-EL']] | 2 | [['Total', 'MRR'], ['Total', 'P'], ['Total', 'R'], ['Total', 'F1'], ['General', 'P'], ['General', 'R'], ['General', 'F1'], ['Fine', 'P'], ['Fine', 'R'], ['Fine', 'F1'], ['Ultra-Fine', 'P'], ['Ultra-Fine', 'R'], ['Ultra-Fine', 'F1']] | [['0.229', '48.1', '23.2', '31.3', '60.3', '61.6', '61', '40.4', '38.4', '39.4', '42.8', '8.8', '14.6'], ['0.173', '40.1', '14.8', '21.6', '53.7', '45.6', '49.3', '20.8', '18.5', '19.6', '54.4', '4.6', '8.4'], ['0.22', '50.3', '19.6', '28.2', '58.8', '62.8', '60.7', '44.4', '29.8', '35.6', '46.2', '4.7', '8.5'], ['0.225', '48.4', '22.3', '30.6', '62.2', '60.1', '61.2', '40.3', '26.1', '31.7', '41.4', '9.9', '16']] | column | ['MRR', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Ultra-Fine'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Total || MRR</th> <th>Total || P</th> <th>Total || R</th> <th>Total || F1</th> <th>General || P</th> <th>General || R</th> <th>General || F1</th> <th>Fine || P</th> <th>Fine || R</th> <th>Fine || F1</th> <th>Ultra-Fine || P</th> <th>Ultra-Fine || R</th> <th>Ultra-Fine || F1</th> </tr> </thead> <tbody> <tr> <td>Train Data || All</td> <td>0.229</td> <td>48.1</td> <td>23.2</td> <td>31.3</td> <td>60.3</td> <td>61.6</td> <td>61</td> <td>40.4</td> <td>38.4</td> <td>39.4</td> <td>42.8</td> <td>8.8</td> <td>14.6</td> </tr> <tr> <td>Train Data || -Crowd</td> <td>0.173</td> <td>40.1</td> <td>14.8</td> <td>21.6</td> <td>53.7</td> <td>45.6</td> <td>49.3</td> <td>20.8</td> <td>18.5</td> <td>19.6</td> 
<td>54.4</td> <td>4.6</td> <td>8.4</td> </tr> <tr> <td>Train Data || -Head</td> <td>0.22</td> <td>50.3</td> <td>19.6</td> <td>28.2</td> <td>58.8</td> <td>62.8</td> <td>60.7</td> <td>44.4</td> <td>29.8</td> <td>35.6</td> <td>46.2</td> <td>4.7</td> <td>8.5</td> </tr> <tr> <td>Train Data || -EL</td> <td>0.225</td> <td>48.4</td> <td>22.3</td> <td>30.6</td> <td>62.2</td> <td>60.1</td> <td>61.2</td> <td>40.3</td> <td>26.1</td> <td>31.7</td> <td>41.4</td> <td>9.9</td> <td>16</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1009 | 6 | acl2018 | Table 4 shows the performance breakdown for different type granularity and different supervision. Overall, as seen in previous work on finegrained NER literature (Gillick et al., 2014; Ren et al., 2016a), finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types. All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact. Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction. The low general type performance is partially because of nominal/pronoun mentions (e.g. gith), and because of the large type inventory (sometimes glocationh and gplaceh are annotated interchangeably). 
| [1, 1, 1, 1, 1] | ['Table 4 shows the performance breakdown for different type granularity and different supervision.', 'Overall, as seen in previous work on fine-grained NER literature (Gillick et al., 2014; Ren et al., 2016a), finer labels were more challenging to predict than coarse-grained labels, and this issue is exacerbated when dealing with ultra-fine types.', 'All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.', 'Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.', 'The low general type performance is partially because of nominal/pronoun mentions (e.g. “it”), and because of the large type inventory (sometimes “location” and “place” are annotated interchangeably).'] | [None, ['General', 'Fine', 'Ultra-Fine'], ['All', '-Crowd'], ['-Head', 'Ultra-Fine', '-EL', 'Fine'], ['General']] | 1
P18-1009table_6 | Results on the OntoNotes fine-grained entity typing test set. The first two models (AttentiveNER++ and AFET) use only KB-based supervision. LNR uses a filtered version of the KB-based training set. Our model uses all our distant supervision sources. | 1 | [['AttentiveNER++'], ['AFET (Ren et al., 2016a)'], ['LNR (Ren et al., 2016b)'], ['Ours (ONTO+WIKI+HEAD)']] | 1 | [['Acc.'], ['Ma-F1'], ['Mi-F1']] | [['51.7', '70.9', '64.9'], ['55.1', '71.1', '64.7'], ['57.2', '71.5', '66.1'], ['59.5', '76.8', '71.8']] | column | ['Acc.', 'Ma-F1', 'Mi-F1'] | ['Ours (ONTO+WIKI+HEAD)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>Ma-F1</th> <th>Mi-F1</th> </tr> </thead> <tbody> <tr> <td>AttentiveNER++</td> <td>51.7</td> <td>70.9</td> <td>64.9</td> </tr> <tr> <td>AFET (Ren et al., 2016a)</td> <td>55.1</td> <td>71.1</td> <td>64.7</td> </tr> <tr> <td>LNR (Ren et al., 2016b)</td> <td>57.2</td> <td>71.5</td> <td>66.1</td> </tr> <tr> <td>Ours (ONTO+WIKI+HEAD)</td> <td>59.5</td> <td>76.8</td> <td>71.8</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1009 | 8 | acl2018 | Results. Table 6 shows the overall performance on the test set. Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the-art result. | [1, 1] | ['Results. Table 6 shows the overall performance on the test set.', 'Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the-art result.'] | [None, ['Ours (ONTO+WIKI+HEAD)']] | 1
P18-1010table_6 | MAP of entity-level typing in Wikipedia data using TypeNet. The second column shows results using 5% of the total data. The last column shows results using the full set of 344,246 entities. | 2 | [['Model', 'CNN'], ['Model', 'CNN + hierarchy'], ['Model', 'CNN + transitive'], ['Model', 'CNN + hierarchy + transitive'], ['Model', 'CNN+Complex'], ['Model', 'CNN+Complex + hierarchy'], ['Model', 'CNN+Complex + transitive'], ['Model', 'CNN+Complex + hierarchy + transitive']] | 1 | [['Low Data'], ['Full Data']] | [['51.72', '68.15'], ['54.82', '75.56'], ['57.68', '77.21'], ['58.74', '78.59'], ['50.51', '69.83'], ['55.3', '72.86'], ['53.71', '72.18'], ['58.81', '77.21']] | column | ['MAP', 'MAP'] | ['CNN', 'CNN+Complex'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Low Data</th> <th>Full Data</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>51.72</td> <td>68.15</td> </tr> <tr> <td>Model || CNN + hierarchy</td> <td>54.82</td> <td>75.56</td> </tr> <tr> <td>Model || CNN + transitive</td> <td>57.68</td> <td>77.21</td> </tr> <tr> <td>Model || CNN + hierarchy + transitive</td> <td>58.74</td> <td>78.59</td> </tr> <tr> <td>Model || CNN+Complex</td> <td>50.51</td> <td>69.83</td> </tr> <tr> <td>Model || CNN+Complex + hierarchy</td> <td>55.3</td> <td>72.86</td> </tr> <tr> <td>Model || CNN+Complex + transitive</td> <td>53.71</td> <td>72.18</td> </tr> <tr> <td>Model || CNN+Complex + hierarchy + transitive</td> <td>58.81</td> <td>77.21</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1010 | 7 | acl2018 | Table 6 shows the results for entity level typing on our Wikipedia TypeNet dataset. We see that both the basic CNN and the CNN+Complex models perform similarly with the CNN+Complex model doing slightly better on the full data regime. We also see that both models get an improvement when adding an explicit hierarchy loss, even before adding in the transitive closure. 
The transitive closure itself gives an additional increase in performance to both models. In both of these cases, the basic CNN model improves by a greater amount than CNN+Complex. | [1, 1, 1, 1, 1] | ['Table 6 shows the results for entity level typing on our Wikipedia TypeNet dataset.', 'We see that both the basic CNN and the CNN+Complex models perform similarly with the CNN+Complex model doing slightly better on the full data regime.', 'We also see that both models get an improvement when adding an explicit hierarchy loss, even before adding in the transitive closure.', 'The transitive closure itself gives an additional increase in performance to both models.', 'In both of these cases, the basic CNN model improves by a greater amount than CNN+Complex.'] | [None, ['CNN', 'CNN+Complex', 'Full Data'], ['CNN + hierarchy', 'CNN+Complex + hierarchy'], ['CNN + transitive', 'CNN+Complex + transitive'], ['CNN', 'CNN+Complex']] | 1 |
P18-1018table_3 | Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (“Exact” means no coarsening). “Labels” refers to the number of distinct labels that annotators could have provided at that level of coarsening. Excludes tokens where at least one annotator assigned a nonsemantic label. involved in developing the guidelines and learning the scheme solely from reading the manual. Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers. Results. In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators. APPROXIMATOR, COTHEME, COST, INSTEADOF, INTERVAL, RATEUNIT, and SPECIES were not used by any annotator. To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators. 
Despite varying exposure to the scheme, there is no obvious relationship between annotators’ backgrounds and their agreement rates. | 1 | [['Exact'], ['Depth-3'], ['Depth-2'], ['Depth-1']] | 1 | [['Role'], ['Function']] | [['74.40%', '81.30%'], ['75.00%', '81.80%'], ['79.90%', '87.40%'], ['92.60%', '93.90%']] | column | ['agreement', 'agreement'] | ['Role', 'Function'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Labels</th> <th>Role</th> <th>Function</th> </tr> </thead> <tbody> <tr> <td>Exact</td> <td>47</td> <td>74.40%</td> <td>81.30%</td> </tr> <tr> <td>Depth-3</td> <td>43</td> <td>75.00%</td> <td>81.80%</td> </tr> <tr> <td>Depth-2</td> <td>26</td> <td>79.90%</td> <td>87.40%</td> </tr> <tr> <td>Depth-1</td> <td>3</td> <td>92.60%</td> <td>93.90%</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1018 | 6 | acl2018 | Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators. Average agreement is 74.4% on the scene role and 81.3% on the function (row 1). Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter. This is expected considering the definition of construal: the function of an adposition is more lexical and less context-dependent, whereas the role depends on the context (the scene) and can be highly idiomatic (§3.3). The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement). Results show that most confusions are local with respect to the hierarchy. 
| [1, 1, 1, 2, 2, 1] | ['Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.', 'Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).', 'Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.', 'This is expected considering the definition of construal: the function of an adposition is more lexical and less context-dependent, whereas the role depends on the context (the scene) and can be highly idiomatic (§3.3).', 'The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).', 'Results show that most confusions are local with respect to the hierarchy.'] | [None, ['Exact', 'Role', 'Function'], ['Role', 'Function'], None, ['Depth-3', 'Depth-2', 'Depth-1'], ['Depth-3', 'Depth-2', 'Depth-1']] | 1
P18-1022table_5 | Performance of predicting veracity. | 3 | [['Features', 'Generic classifier', 'Style'], ['Features', 'Generic classifier', 'Topic'], ['Features', 'Orientation-specific classifier', 'Style'], ['Features', 'Orientation-specific classifier', 'Topic'], ['Features', '-', 'All-fake'], ['Features', '-', 'All-real']] | 2 | [['Accuracy', 'all'], ['Precision', 'fake'], ['Precision', 'real'], ['Recall', 'fake'], ['Recall', 'real'], ['F1', 'fake'], ['F1', 'real']] | [['0.55', '0.42', '0.62', '0.41', '0.64', '0.41', '0.63'], ['0.52', '0.41', '0.62', '0.48', '0.55', '0.44', '0.58'], ['0.55', '0.43', '0.64', '0.49', '0.59', '0.46', '0.61'], ['0.58', '0.46', '0.65', '0.45', '0.66', '0.46', '0.66'], ['0.39', '0.39', '-', '1', '0', '0.56', '-'], ['0.61', '-', '0.61', '0', '1', '-', '0.76']] | column | ['Accuracy', 'Precision', 'Precision', 'Recall', 'Recall', 'F1', 'F1'] | ['Orientation-specific classifier'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || all</th> <th>Precision || fake</th> <th>Precision || real</th> <th>Recall || fake</th> <th>Recall || real</th> <th>F1 || fake</th> <th>F1 || real</th> </tr> </thead> <tbody> <tr> <td>Features || Generic classifier || Style</td> <td>0.55</td> <td>0.42</td> <td>0.62</td> <td>0.41</td> <td>0.64</td> <td>0.41</td> <td>0.63</td> </tr> <tr> <td>Features || Generic classifier || Topic</td> <td>0.52</td> <td>0.41</td> <td>0.62</td> <td>0.48</td> <td>0.55</td> <td>0.44</td> <td>0.58</td> </tr> <tr> <td>Features || Orientation-specific classifier || Style</td> <td>0.55</td> <td>0.43</td> <td>0.64</td> <td>0.49</td> <td>0.59</td> <td>0.46</td> <td>0.61</td> </tr> <tr> <td>Features || Orientation-specific classifier || Topic</td> <td>0.58</td> <td>0.46</td> <td>0.65</td> <td>0.45</td> <td>0.66</td> <td>0.46</td> <td>0.66</td> </tr> <tr> <td>Features || - || All-fake</td> <td>0.39</td> <td>0.39</td> <td>-</td> <td>1</td> <td>0</td> <td>0.56</td> <td>-</td> </tr> <tr> 
<td>Features || - || All-real</td> <td>0.61</td> <td>-</td> <td>0.61</td> <td>0</td> <td>1</td> <td>-</td> <td>0.76</td> </tr> </tbody></table> | Table 5 | table_5 | P18-1022 | 8 | acl2018 | Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation. Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall. While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F Measure. We conclude that style-based fake news classification simply does not work in general. | [1, 1, 1, 2] | ['Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.', 'Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.', 'While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F Measure.', 'We conclude that style-based fake news classification simply does not work in general.'] | [['Generic classifier', 'Orientation-specific classifier'], ['Generic classifier', 'Orientation-specific classifier'], ['Orientation-specific classifier'], ['Style', 'fake']] | 1 |
P18-1026table_1 | Results for AMR generation on the test set. All score differences between our models and the corresponding baselines are significantly different (p<0.05). “(-s)” means input without scope marking. KIYCZ17, PKH16, SPZWG17 and FDSC16 are respectively the results reported in Konstas et al. (2017), Pourdamghani et al. (2016), Song et al. (2017) and Flanigan et al. (2016). | 2 | [['Single models', 's2s'], ['Single models', 's2s (-s)'], ['Single models', 'g2s'], ['Ensembles', 's2s'], ['Ensembles', 's2s (-s)'], ['Ensembles', 'g2s'], ['Previous work (early AMR treebank versions)', 'KIYCZ17'], ['Previous work (as above + unlabelled data)', 'KIYCZ17'], ['Previous work (as above + unlabelled data)', 'PKH16'], ['Previous work (as above + unlabelled data)', 'SPZWG17'], ['Previous work (as above + unlabelled data)', 'FDSC16']] | 1 | [['BLEU'], ['CHRF++'], ['#params']] | [['21.7', '49.1', '28.4M'], ['18.4', '46.3', ' 28.4M'], ['23.3', '50.4', '28.3M'], ['26.6', '52.5', '142M'], ['22', '48.9', '142M'], ['27.5', '53.5', '141M'], ['22', '-', '-'], ['33.8', '-', '-'], ['26.9', '-', '-'], ['25.6', '-', '-'], ['22', '-', '-']] | column | ['BLEU', 'BLEU', 'BLEU'] | ['g2s'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>CHRF++</th> <th>#params</th> </tr> </thead> <tbody> <tr> <td>Single models || s2s</td> <td>21.7</td> <td>49.1</td> <td>28.4M</td> </tr> <tr> <td>Single models || s2s (-s)</td> <td>18.4</td> <td>46.3</td> <td>28.4M</td> </tr> <tr> <td>Single models || g2s</td> <td>23.3</td> <td>50.4</td> <td>28.3M</td> </tr> <tr> <td>Ensembles || s2s</td> <td>26.6</td> <td>52.5</td> <td>142M</td> </tr> <tr> <td>Ensembles || s2s (-s)</td> <td>22</td> <td>48.9</td> <td>142M</td> </tr> <tr> <td>Ensembles || g2s</td> <td>27.5</td> <td>53.5</td> <td>141M</td> </tr> <tr> <td>Previous work (early AMR treebank versions) || KIYCZ17</td> <td>22</td> <td>-</td> <td>-</td> </tr> <tr> <td>Previous work (as above + unlabelled 
data) || KIYCZ17</td> <td>33.8</td> <td>-</td> <td>-</td> </tr> <tr> <td>Previous work (as above + unlabelled data) || PKH16</td> <td>26.9</td> <td>-</td> <td>-</td> </tr> <tr> <td>Previous work (as above + unlabelled data) || SPZWG17</td> <td>25.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Previous work (as above + unlabelled data) || FDSC16</td> <td>22</td> <td>-</td> <td>-</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1026 | 5 | acl2018 | Table 1 shows the results on the test set. For the s2s models, we also report results without the scope marking procedure of Konstas et al. (2017). Our approach significantly outperforms the s2s baselines both with individual models and ensembles, while using a comparable number of parameters. In particular, we obtain these results without relying on scoping heuristics. Table 1 also shows BLEU scores reported in previous work. These results are not strictly comparable because they used different training set versions and/or employ additional unlabelled corpora; nonetheless some insights can be made. In particular, our g2s ensemble performs better than many previous models that combine a smaller training set with a large unlabelled corpus. It is also most informative to compare our s2s model with Konstas et al. (2017), since this baseline is very similar to theirs. We expected our single model baseline to outperform theirs since we use a larger training set but we obtained similar performance. We speculate that better results could be obtained by more careful tuning, but nevertheless we believe such tuning would also benefit our proposed g2s architecture. 
| [1, 2, 1, 1, 1, 2, 2, 1, 1, 2] | ['Table 1 shows the results on the test set.', 'For the s2s models, we also report results without the scope marking procedure of Konstas et al. (2017).', 'Our approach significantly outperforms the s2s baselines both with individual models and ensembles, while using a comparable number of parameters.', 'In particular, we obtain these results without relying on scoping heuristics.', 'Table 1 also shows BLEU scores reported in previous work.', 'These results are not strictly comparable because they used different training set versions and/or employ additional unlabelled corpora; nonetheless some insights can be made.', 'In particular, our g2s ensemble performs better than many previous models that combine a smaller training set with a large unlabelled corpus.', 'It is also most informative to compare our s2s model with Konstas et al. (2017), since this baseline is very similar to theirs.', 'We expected our single model baseline to outperform theirs since we use a larger training set but we obtained similar performance.', 'We speculate that better results could be obtained by more careful tuning, but nevertheless we believe such tuning would also benefit our proposed g2s architecture.'] | [None, ['s2s'], ['g2s', 's2s'], ['g2s', 's2s'], ['BLEU'], ['BLEU'], ['g2s'], ['g2s', 's2s'], ['g2s', 's2s'], ['g2s']] | 1
P18-1028table_1 | Test classification accuracy (and the number of parameters used). The bottom part shows our ablation results: SoPa: our full model. SoPams1: running with max-sum semiring (rather than max-product), with the identity function as our encoder E (see Equation 3). sl: self-loops, ✏: ✏ transitions. The final row is equivalent to a one-layer CNN. | 2 | [['Mode', 'Hard'], ['Mode', 'DAN'], ['Mode', 'BiLSTM'], ['Mode', 'CNN'], ['Mode', 'SoPa']] | 1 | [['ROC'], ['SST'], ['Amazon']] | [['62.2 (4K)', '75.5 (6K)', '88.5 (67K)'], ['64.3 (91K)', '83.1 (91K)', '85.4 (91K)'], ['65.2 (844K)', '84.8 (1.5M)', '90.8 (844K)'], ['64.3 (155K)', '82.2 (62K)', '90.2 (305K)'], ['66.5 (255K)', '85.6 (255K)', '90.5 (256K)']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['SoPa'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROC</th> <th>SST</th> <th>Amazon</th> </tr> </thead> <tbody> <tr> <td>Mode || Hard</td> <td>62.2 (4K)</td> <td>75.5 (6K)</td> <td>88.5 (67K)</td> </tr> <tr> <td>Mode || DAN</td> <td>64.3 (91K)</td> <td>83.1 (91K)</td> <td>85.4 (91K)</td> </tr> <tr> <td>Mode || BiLSTM</td> <td>65.2 (844K)</td> <td>84.8 (1.5M)</td> <td>90.8 (844K)</td> </tr> <tr> <td>Mode || CNN</td> <td>64.3 (155K)</td> <td>82.2 (62K)</td> <td>90.2 (305K)</td> </tr> <tr> <td>Mode || SoPa</td> <td>66.5 (255K)</td> <td>85.6 (255K)</td> <td>90.5 (256K)</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1028 | 7 | acl2018 | Table 1 shows our main experimental results. In two of the cases (SST and ROC), SoPa outperforms all models. On Amazon, SoPa performs within 0.3 points of CNN and BiLSTM, and outperforms the other two baselines. The table also shows the number of parameters used by each model for each task. Given enough data, models with more parameters should be expected to perform better. However, SoPa performs better or roughly the same as a BiLSTM, which has 3 to 6 times as many parameters. 
| [1, 1, 1, 1, 2, 1] | ['Table 1 shows our main experimental results.', 'In two of the cases (SST and ROC), SoPa outperforms all models.', 'On Amazon, SoPa performs within 0.3 points of CNN and BiLSTM, and outperforms the other two baselines.', 'The table also shows the number of parameters used by each model for each task.', 'Given enough data, models with more parameters should be expected to perform better.', 'However, SoPa performs better or roughly the same as a BiLSTM, which has 3 to 6 times as many parameters.'] | [None, ['SoPa', 'SST', 'ROC'], ['SoPa', 'CNN', 'BiLSTM', 'Amazon'], ['Hard', 'DAN', 'BiLSTM', 'CNN', 'SoPa'], None, ['SoPa', 'BiLSTM']] | 1 |
P18-1030table_2 | Movie review DEV results of S-LSTM | 2 | [['Model', '+0 dummy node'], ['Model', '+1 dummy node'], ['Model', '+2 dummy node'], ['Model', 'Hidden size 100'], ['Model', 'Hidden size 200'], ['Model', 'Hidden size 300'], ['Model', 'Hidden size 600'], ['Model', 'Hidden size 900'], ['Model', 'Without s /s'], ['Model', 'With s /s']] | 1 | [['Time (s)'], ['Acc'], ['# Param']] | [['56', '81.76', '7216K'], ['65', '82.64', '8768K'], ['76', '82.24', '10321K'], ['42', '81.75', '4891K'], ['54', '82.04', '6002K'], ['65', '82.64', '8768K'], ['175', '81.84', '17648K'], ['235', '81.66', '33942K'], ['63', '82.36', '8768K'], ['65', '82.64', '8768K']] | column | ['Time(s)', 'Acc', '#Param'] | ['Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Time (s)</th> <th>Acc</th> <th># Param</th> </tr> </thead> <tbody> <tr> <td>Model || +0 dummy node</td> <td>56</td> <td>81.76</td> <td>7216K</td> </tr> <tr> <td>Model || +1 dummy node</td> <td>65</td> <td>82.64</td> <td>8768K</td> </tr> <tr> <td>Model || +2 dummy node</td> <td>76</td> <td>82.24</td> <td>10321K</td> </tr> <tr> <td>Model || Hidden size 100</td> <td>42</td> <td>81.75</td> <td>4891K</td> </tr> <tr> <td>Model || Hidden size 200</td> <td>54</td> <td>82.04</td> <td>6002K</td> </tr> <tr> <td>Model || Hidden size 300</td> <td>65</td> <td>82.64</td> <td>8768K</td> </tr> <tr> <td>Model || Hidden size 600</td> <td>175</td> <td>81.84</td> <td>17648K</td> </tr> <tr> <td>Model || Hidden size 900</td> <td>235</td> <td>81.66</td> <td>33942K</td> </tr> <tr> <td>Model || Without s /s</td> <td>63</td> <td>82.36</td> <td>8768K</td> </tr> <tr> <td>Model || With s /s</td> <td>65</td> <td>82.64</td> <td>8768K</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1030 | 5 | acl2018 | Hyperparameters: Table 2 shows the development results of various S-LSTM settings, where Time refers to training time per epoch. 
Without the sentence-level node, the accuracy of S-LSTM drops to 81.76%, demonstrating the necessity of global information exchange. Adding one additional sentence-level node as described in Section 3.2 does not lead to accuracy improvements, although the number of parameters and decoding time increase accordingly. As a result, we use only 1 sentence-level node for the remaining experiments. The accuracies of S-LSTM increase as the hidden layer size for each node increases from 100 to 300, but do not further increase when the size increases beyond 300. We fix the hidden size to 300 accordingly. Without using ⟨s⟩ and ⟨/s⟩, the performance of S-LSTM drops from 82.64% to 82.36%, showing the effectiveness of having these additional nodes. Hyperparameters for BiLSTM models are also set according to the development data, which we omit here. State transition. In Table 2, the number of recurrent state transition steps of S-LSTM is decided according to the best development performance. | [1, 1, 1, 2, 1, 2, 1, 0, 2] | ['Hyperparameters: Table 2 shows the development results of various S-LSTM settings, where Time refers to training time per epoch.', 'Without the sentence-level node, the accuracy of S-LSTM drops to 81.76%, demonstrating the necessity of global information exchange.', 'Adding one additional sentence-level node as described in Section 3.2 does not lead to accuracy improvements, although the number of parameters and decoding time increase accordingly.', 'As a result, we use only 1 sentence-level node for the remaining experiments.', 'The accuracies of S-LSTM increase as the hidden layer size for each node increases from 100 to 300, but do not further increase when the size increases beyond 300.', 'We fix the hidden size to 300 accordingly.', 'Without using ⟨s⟩ and ⟨/s⟩, the performance of S-LSTM drops from 82.64% to 82.36%, showing the effectiveness of having these additional nodes.', 'Hyperparameters for BiLSTM models are also set according to the
development data, which we omit here.', 'State transition. In Table 2, the number of recurrent state transition steps of S-LSTM is decided according to the best development performance.'] | [['Time (s)'], ['+0 dummy node'], ['+1 dummy node', 'Time (s)', '# Param'], ['+1 dummy node'], ['Hidden size 100', 'Hidden size 200', 'Hidden size 300'], ['Hidden size 300'], ['Without s /s', 'With s /s'], None, None] | 1 |
P18-1030table_3 | Movie review development results | 2 | [['Model', 'LSTM'], ['Model', 'BiLSTM'], ['Model', '2 stacked BiLSTM'], ['Model', '3 stacked BiLSTM'], ['Model', '4 stacked BiLSTM'], ['Model', 'S-LSTM'], ['Model', 'CNN'], ['Model', '2 stacked CNN'], ['Model', '3 stacked CNN'], ['Model', '4 stacked CNN'], ['Model', 'Transformer (N=6)'], ['Model', 'Transformer (N=8)'], ['Model', 'Transformer (N=10)']] | 1 | [['Time (s)'], ['Acc'], ['# Param']] | [['67', '80.72', '5,977K'], ['106', '81.73', '7,059K'], ['207', '81.97', '9,221K'], ['310', '81.53', '11,383K'], ['411', '81.37', '13,546K'], ['65', '82.64*', '8,768K'], ['34', '80.35', '5,637K'], ['40', '80.97', '5,717K'], ['47', '81.46', '5,808K'], ['51', '81.39', '5,855K'], ['138', '81.03', '7,234K'], ['174', '81.86', '7,615K'], ['214', '81.63', '8,004K']] | column | ['Time(s)', 'Acc', '#Param'] | ['S-LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Time (s)</th> <th>Acc</th> <th># Param</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM</td> <td>67</td> <td>80.72</td> <td>5,977K</td> </tr> <tr> <td>Model || BiLSTM</td> <td>106</td> <td>81.73</td> <td>7,059K</td> </tr> <tr> <td>Model || 2 stacked BiLSTM</td> <td>207</td> <td>81.97</td> <td>9,221K</td> </tr> <tr> <td>Model || 3 stacked BiLSTM</td> <td>310</td> <td>81.53</td> <td>11,383K</td> </tr> <tr> <td>Model || 4 stacked BiLSTM</td> <td>411</td> <td>81.37</td> <td>13,546K</td> </tr> <tr> <td>Model || S-LSTM</td> <td>65</td> <td>82.64*</td> <td>8,768K</td> </tr> <tr> <td>Model || CNN</td> <td>34</td> <td>80.35</td> <td>5,637K</td> </tr> <tr> <td>Model || 2 stacked CNN</td> <td>40</td> <td>80.97</td> <td>5,717K</td> </tr> <tr> <td>Model || 3 stacked CNN</td> <td>47</td> <td>81.46</td> <td>5,808K</td> </tr> <tr> <td>Model || 4 stacked CNN</td> <td>51</td> <td>81.39</td> <td>5,855K</td> </tr> <tr> <td>Model || Transformer (N=6)</td> <td>138</td> <td>81.03</td> <td>7,234K</td> </tr> <tr> <td>Model || Transformer 
(N=8)</td> <td>174</td> <td>81.86</td> <td>7,615K</td> </tr> <tr> <td>Model || Transformer (N=10)</td> <td>214</td> <td>81.63</td> <td>8,004K</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1030 | 6 | acl2018 | As shown in Table 3, BiLSTM gives significantly better accuracies compared to uni-directional LSTM, with the training time per epoch growing from 67 seconds to 106 seconds. Stacking 2 layers of BiLSTM gives further improvements to development results, with a larger time of 207 seconds. 3 layers of stacked BiLSTM does not further improve the results. In contrast, S-LSTM gives a development result of 82.64%, which is significantly better compared to 2-layer stacked BiLSTM, with a smaller number of model parameters and a shorter time of 65 seconds. We additionally make comparisons with stacked CNNs and hierarchical attention (Vaswani et al., 2017), shown in Table 3 (the CNN and Transformer rows), where N indicates the number of attention layers. CNN is the most efficient among all models compared, with the smallest model size. On the other hand, a 3-layer stacked CNN gives an accuracy of 81.46%, which is also the lowest compared with BiLSTM, hierarchical attention and S-LSTM. The best performance of hierarchical attention is between single-layer and two-layer BiLSTMs in terms of both accuracy and efficiency. S-LSTM gives significantly better accuracies compared with both CNN and hierarchical attention. 
| [1, 1, 1, 1, 2, 1, 1, 1, 1] | ['As shown in Table 3, BiLSTM gives significantly better accuracies compared to uni-directional LSTM, with the training time per epoch growing from 67 seconds to 106 seconds.', 'Stacking 2 layers of BiLSTM gives further improvements to development results, with a larger time of 207 seconds.', '3 layers of stacked BiLSTM does not further improve the results.', 'In contrast, S-LSTM gives a development result of 82.64%, which is significantly better compared to 2-layer stacked BiLSTM, with a smaller number of model parameters and a shorter time of 65 seconds.', 'We additionally make comparisons with stacked CNNs and hierarchical attention (Vaswani et al., 2017), shown in Table 3 (the CNN and Transformer rows), where N indicates the number of attention layers.', 'CNN is the most efficient among all models compared, with the smallest model size.', 'On the other hand, a 3-layer stacked CNN gives an accuracy of 81.46%, which is also the lowest compared with BiLSTM, hierarchical attention and S-LSTM.', 'The best performance of hierarchical attention is between single-layer and two-layer BiLSTMs in terms of both accuracy and efficiency.', 'S-LSTM gives significantly better accuracies compared with both CNN and hierarchical attention.'] | [['BiLSTM', 'LSTM', 'Time (s)'], ['2 stacked BiLSTM', 'Time (s)'], ['3 stacked BiLSTM', 'Time (s)'], ['S-LSTM', '2 stacked BiLSTM', '# Param', 'Acc', 'Time (s)'], ['CNN', '2 stacked CNN', '3 stacked CNN', '4 stacked CNN', 'Transformer (N=6)', 'Transformer (N=8)', 'Transformer (N=10)'], ['CNN'], ['3 stacked CNN', 'Acc', 'BiLSTM', 'Transformer (N=6)', 'Transformer (N=8)', 'Transformer (N=10)', 'S-LSTM'], ['Transformer (N=6)', 'Transformer (N=8)', 'Transformer (N=10)', 'Acc', 'Time (s)'], ['S-LSTM', 'Acc', 'CNN', 'Transformer (N=6)', 'Transformer (N=8)', 'Transformer (N=10)']] | 1
P18-1034table_1 | Performances of different approaches on the WikiSQL dataset. Two evaluation metrics are logical form accuracy (Acclf ) and execution accuracy (Accex). Our model is abbreviated as (STAMP). | 2 | [['Methods', 'Attentional Seq2Seq'], ['Methods', 'Aug.PntNet (Zhong et al. 2017)'], ['Methods', 'Aug.PntNet (re-implemented by us)'], ['Methods', 'Seq2SQL (no RL) (Zhong et al. 2017)'], ['Methods', 'Seq2SQL (Zhong et al. 2017)'], ['Methods', 'SQLNet (Xu et al. 2017)'], ['Methods', 'Guo and Gao (2018)'], ['Methods', 'STAMP (w/o cell)'], ['Methods', 'STAMP (w/o column-cell relation)'], ['Methods', 'STAMP'], ['Methods', 'STAMP+RL']] | 2 | [['Dev', 'Acclf'], ['Dev', 'Accex'], ['Test', 'Acclf'], ['Test', 'Accex']] | [['23.3%', '37.0%', '23.4%', '35.9%'], ['44.1%', '53.8%', '43.3%', '53.3%'], ['51.5%', '58.9%', '52.1%', '59.2%'], ['48.2%', '58.1%', '47.4%', '57.1%'], ['49.5%', '60.8%', '48.3%', '59.4%'], ['-', '69.8%', '-', '68.0%'], ['-', '71.1%', '-', '69.0%'], ['58.6%', '67.8%', '58.0%', '67.4%'], ['59.3%', '71.8%', '58.4%', '70.6%'], ['61.5%', '74.8%', '60.7%', '74.4%'], ['61.7%', '75.1%', '61.0%', '74.6%']] | column | ['Acclf', 'Accex', 'Acclf', 'Accex'] | ['STAMP (w/o cell)', 'STAMP (w/o column-cell relation)', 'STAMP', 'STAMP+RL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || Acclf</th> <th>Dev || Accex</th> <th>Test || Acclf</th> <th>Test || Accex</th> </tr> </thead> <tbody> <tr> <td>Methods || Attentional Seq2Seq</td> <td>23.3%</td> <td>37.0%</td> <td>23.4%</td> <td>35.9%</td> </tr> <tr> <td>Methods || Aug.PntNet (Zhong et al. 2017)</td> <td>44.1%</td> <td>53.8%</td> <td>43.3%</td> <td>53.3%</td> </tr> <tr> <td>Methods || Aug.PntNet (re-implemented by us)</td> <td>51.5%</td> <td>58.9%</td> <td>52.1%</td> <td>59.2%</td> </tr> <tr> <td>Methods || Seq2SQL (no RL) (Zhong et al. 2017)</td> <td>48.2%</td> <td>58.1%</td> <td>47.4%</td> <td>57.1%</td> </tr> <tr> <td>Methods || Seq2SQL (Zhong et al. 
2017)</td> <td>49.5%</td> <td>60.8%</td> <td>48.3%</td> <td>59.4%</td> </tr> <tr> <td>Methods || SQLNet (Xu et al. 2017)</td> <td>-</td> <td>69.8%</td> <td>-</td> <td>68.0%</td> </tr> <tr> <td>Methods || Guo and Gao (2018)</td> <td>-</td> <td>71.1%</td> <td>-</td> <td>69.0%</td> </tr> <tr> <td>Methods || STAMP (w/o cell)</td> <td>58.6%</td> <td>67.8%</td> <td>58.0%</td> <td>67.4%</td> </tr> <tr> <td>Methods || STAMP (w/o column-cell relation)</td> <td>59.3%</td> <td>71.8%</td> <td>58.4%</td> <td>70.6%</td> </tr> <tr> <td>Methods || STAMP</td> <td>61.5%</td> <td>74.8%</td> <td>60.7%</td> <td>74.4%</td> </tr> <tr> <td>Methods || STAMP+RL</td> <td>61.7%</td> <td>75.1%</td> <td>61.0%</td> <td>74.6%</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1034 | 6 | acl2018 | Our model is abbreviated as (STAMP), which is short for Syntax- and Table-Aware seMantic Parser. The STAMP model in Table 1 stands for the model we describe in §4.2 plus §4.3. STAMP+RL is the model that is fine-tuned with the reinforcement learning strategy as described in §4.4. We implement a simplified version of our approach (w/o cell), in which WHERE values come from the question. Thus, this setting differs from Aug.PntNet in the generation of WHERE column. We also study the influence of the relation-cell relation (w/o column-cell relation) through removing the enhanced column vector, which is calculated by weighted averaging cell vectors. From Table 1, we can see that STAMP performs better than existing systems on WikiSQL. Incorporating RL strategy does not significantly improve the performance. Our simplified model, STAMP (w/o cell), achieves better accuracy than Aug.PntNet, which further reveals the effects of the column channel. Results also demonstrate the effects of incorporating the column-cell relation, removing which leads to about 4% performance drop in terms of Accex. 
| [2, 2, 2, 2, 2, 2, 1, 1, 1, 1] | ['Our model is abbreviated as (STAMP), which is short for Syntax- and Table-Aware seMantic Parser.', 'The STAMP model in Table 1 stands for the model we describe in §4.2 plus §4.3.', 'STAMP+RL is the model that is fine-tuned with the reinforcement learning strategy as described in §4.4.', 'We implement a simplified version of our approach (w/o cell), in which WHERE values come from the question.', 'Thus, this setting differs from Aug.PntNet in the generation of WHERE column.', 'We also study the influence of the relation-cell relation (w/o column-cell relation) through removing the enhanced column vector, which is calculated by weighted averaging cell vectors.', 'From Table 1, we can see that STAMP performs better than existing systems on WikiSQL.', 'Incorporating RL strategy does not significantly improve the performance.', 'Our simplified model, STAMP (w/o cell), achieves better accuracy than Aug.PntNet, which further reveals the effects of the column channel.', 'Results also demonstrate the effects of incorporating the column-cell relation, removing which leads to about 4% performance drop in terms of Accex.'] | [['STAMP'], ['STAMP'], ['STAMP+RL'], ['STAMP (w/o cell)'], ['Aug.PntNet (Zhong et al. 2017)'], ['STAMP (w/o column-cell relation)'], ['STAMP (w/o cell)', 'STAMP (w/o column-cell relation)', 'STAMP', 'STAMP+RL'], ['STAMP+RL'], ['STAMP (w/o cell)', 'Aug.PntNet (Zhong et al. 2017)'], ['STAMP (w/o column-cell relation)', 'Accex']] | 1 |
P18-1039table_6 | Performances on two datasets. “LF” means that the model generates latent intermediate forms instead of equation systems. “AttReg” means attention regularization. “Iter” means iterative labeling. “n/a” means that the model does not run on the dataset. | 2 | [['Models', 'Wang et al. (2017)'], ['Models', 'Seq2Seq Equ'], ['Models', 'Seq2Seq LF'], ['Models', 'Seq2Seq LF+AttReg'], ['Models', 'Seq2Seq LF+AttReg+Iter'], ['Models', 'Shi et al. (2015)'], ['Models', 'Huang et al. (2017)']] | 2 | [['NumWord', '(Linear)'], ['NumWord', '(ALL)'], ['Dolphin18K', '(Linear)']] | [['19.70%', '14.60%', '10.20%'], ['26.80%', '20.10%', '13.10%'], ['50.80%', '45.20%', '13.90%'], ['56.70%', '54.00%', '15.10%'], ['61.60%', '57.10%', '16.80%'], ['63.60%', '60.20%', 'n/a'], ['20.80%', 'n/a', '28.40%']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['Seq2Seq LF+AttReg+Iter'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NumWord || (Linear)</th> <th>NumWord || (ALL)</th> <th>Dolphin18K || (Linear)</th> </tr> </thead> <tbody> <tr> <td>Models || Wang et al. (2017)</td> <td>19.70%</td> <td>14.60%</td> <td>10.20%</td> </tr> <tr> <td>Models || Seq2Seq Equ</td> <td>26.80%</td> <td>20.10%</td> <td>13.10%</td> </tr> <tr> <td>Models || Seq2Seq LF</td> <td>50.80%</td> <td>45.20%</td> <td>13.90%</td> </tr> <tr> <td>Models || Seq2Seq LF+AttReg</td> <td>56.70%</td> <td>54.00%</td> <td>15.10%</td> </tr> <tr> <td>Models || Seq2Seq LF+AttReg+Iter</td> <td>61.60%</td> <td>57.10%</td> <td>16.80%</td> </tr> <tr> <td>Models || Shi et al. (2015)</td> <td>63.60%</td> <td>60.20%</td> <td>n/a</td> </tr> <tr> <td>Models || Huang et al. (2017)</td> <td>20.80%</td> <td>n/a</td> <td>28.40%</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1039 | 7 | acl2018 | Overall results are shown in Table 6. From the table, we can see that our final model (Seq2Seq LF+AttReg+Iter) outperforms the neural-based baseline models (Wang et al. 
(2017) and Seq2Seq Equ). On Number word problem dataset, our model already outperforms the state-of-the-art feature-based model (Huang et al., 2017) by 40.8% and is comparable to the ruled-based model (Shi et al., 2015). Advantage of intermediate forms:. From the first two rows, we can see that the seq2seq model which is trained to generate intermediate forms (Seq2Seq LF) greatly outperforms the same model trained to generate equations (Seq2Seq Equ). The use of intermediate forms helps more on NumWord than on Dolphin18K. This result is expected as the Dolphin18K dataset is more challenging, containing many other types of difficulties discussed in Section 6.3. | [1, 1, 1, 0, 1, 1, 2] | ['Overall results are shown in Table 6.', 'From the table, we can see that our final model (Seq2Seq LF+AttReg+Iter) outperforms the neural-based baseline models (Wang et al. (2017) and Seq2Seq Equ).', 'On Number word problem dataset, our model already outperforms the state-of-the-art feature-based model (Huang et al., 2017) by 40.8% and is comparable to the ruled-based model (Shi et al., 2015).', 'Advantage of intermediate forms:.', 'From the first two rows, we can see that the seq2seq model which is trained to generate intermediate forms (Seq2Seq LF) greatly outperforms the same model trained to generate equations (Seq2Seq Equ).', 'The use of intermediate forms helps more on NumWord than on Dolphin18K.', 'This result is expected as the Dolphin18K dataset is more challenging, containing many other types of difficulties discussed in Section 6.3.'] | [None, ['Seq2Seq LF+AttReg+Iter', 'Wang et al. (2017)', 'Seq2Seq Equ'], ['NumWord', 'Seq2Seq LF+AttReg+Iter', 'Huang et al. (2017)', 'Shi et al. (2015)'], None, ['Seq2Seq LF', 'Seq2Seq Equ'], ['NumWord', 'Dolphin18K'], ['Dolphin18K']] | 1 |
P18-1044table_7 | The comparisons of Gen+Adv with Gen and the data augmentation model (Gen+Aug). ‡ denotes that the improvement is statistically significant at p < 0.05, compared with Gen+Aug. | 1 | [['Gen'], ['Gen+Aug'], ['Gen+Adv']] | 1 | [['Case'], ['Zero']] | [['91.5', '56.2'], ['91.2', '57'], ['92.0‡', '58.4‡']] | column | ['F1', 'F1'] | ['Gen+Adv'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Case</th> <th>Zero</th> </tr> </thead> <tbody> <tr> <td>Gen</td> <td>91.5</td> <td>56.2</td> </tr> <tr> <td>Gen+Aug</td> <td>91.2</td> <td>57</td> </tr> <tr> <td>Gen+Adv</td> <td>92.0‡</td> <td>58.4‡</td> </tr> </tbody></table> | Table 7 | table_7 | P18-1044 | 8 | acl2018 | Table 7 shows the results of the data augmentation model and the GAN-based model. Our Gen+Adv model performs better than the data augmented model. Note that our data augmentation model does not use raw corpora directly. | [1, 1, 2] | ['Table 7 shows the results of the data augmentation model and the GAN-based model.', 'Our Gen+Adv model performs better than the data augmented model.', 'Note that our data augmentation model does not use raw corpora directly.'] | [['Gen', 'Gen+Aug', 'Gen+Adv'], ['Gen+Adv', 'Gen+Aug'], ['Gen+Aug']] | 1 |
P18-1047table_2 | Results of different models in NYT dataset and WebNLG dataset. | 2 | [['Model', 'NovelTagging'], ['Model', 'OneDecoder'], ['Model', 'MultiDecoder']] | 2 | [['NYT', 'Precision'], ['NYT', 'Recall'], ['NYT', 'F1'], ['WebNLG', 'Precision'], ['WebNLG', 'Recall'], ['WebNLG', 'F1']] | [['0.624', '0.317', '0.42', '0.525', '0.193', '0.283'], ['0.594', '0.531', '0.56', '0.322', '0.289', '0.305'], ['0.61', '0.566', '0.587', '0.377', '0.364', '0.371']] | column | ['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1'] | ['MultiDecoder'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NYT || Precision</th> <th>NYT || Recall</th> <th>NYT || F1</th> <th>WebNLG || Precision</th> <th>WebNLG || Recall</th> <th>WebNLG || F1</th> </tr> </thead> <tbody> <tr> <td>Model || NovelTagging</td> <td>0.624</td> <td>0.317</td> <td>0.42</td> <td>0.525</td> <td>0.193</td> <td>0.283</td> </tr> <tr> <td>Model || OneDecoder</td> <td>0.594</td> <td>0.531</td> <td>0.56</td> <td>0.322</td> <td>0.289</td> <td>0.305</td> </tr> <tr> <td>Model || MultiDecoder</td> <td>0.61</td> <td>0.566</td> <td>0.587</td> <td>0.377</td> <td>0.364</td> <td>0.371</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1047 | 8 | acl2018 | Table 2 shows the Precision, Recall and F1 value of NovelTagging model (Zheng et al., 2017) and our OneDecoder and MultiDecoder models. As we can see, in NYT dataset, our MultiDecoder model achieves the best F1 score, which is 0.587. There is 39.8% improvement compared with the NovelTagging model, which is 0.420. Besides, our OneDecoder model also outperforms the NovelTagging model. In the WebNLG dataset, MultiDecoder model achieves the highest F1 score (0.371). MultiDecoder and OneDecoder models outperform the NovelTagging model with 31.1% and 7.8% improvements, respectively. These observations verify the effectiveness of our models. 
We can also observe that, in both NYT and WebNLG dataset, the NovelTagging model achieves the highest precision value and lowest recall value. By contrast, our models are much more balanced. | [1, 1, 1, 1, 1, 1, 1, 1, 1] | ['Table 2 shows the Precision, Recall and F1 value of NovelTagging model (Zheng et al., 2017) and our OneDecoder and MultiDecoder models.', 'As we can see, in NYT dataset, our MultiDecoder model achieves the best F1 score, which is 0.587.', 'There is 39.8% improvement compared with the NovelTagging model, which is 0.420.', 'Besides, our OneDecoder model also outperforms the NovelTagging model.', 'In the WebNLG dataset, MultiDecoder model achieves the highest F1 score (0.371).', 'MultiDecoder and OneDecoder models outperform the NovelTagging model with 31.1% and 7.8% improvements, respectively.', 'These observations verify the effectiveness of our models.', 'We can also observe that, in both NYT and WebNLG dataset, the NovelTagging model achieves the highest precision value and lowest recall value.', 'By contrast, our models are much more balanced.'] | [['Precision', 'Recall', 'F1', 'NovelTagging', 'OneDecoder', 'MultiDecoder'], ['NYT', 'MultiDecoder', 'F1'], ['NYT', 'MultiDecoder', 'NovelTagging', 'F1'], ['NYT', 'OneDecoder', 'NovelTagging'], ['WebNLG', 'MultiDecoder', 'F1'], ['MultiDecoder', 'OneDecoder', 'NovelTagging'], ['MultiDecoder', 'OneDecoder'], ['NYT', 'WebNLG', 'NovelTagging', 'Precision', 'Recall'], ['MultiDecoder', 'OneDecoder', 'Precision', 'Recall', 'F1']] | 1 |
P18-1048table_1 | Trigger identification performance | 2 | [['Method', 'Joint (Local+Global)'], ['Method', 'MSEP-EMD'], ['Method', 'DM-CNN'], ['Method', 'DM-CNN*'], ['Method', 'Bi-RNN'], ['Method', 'Hybrid: Bi-LSTM+CNN'], ['Method', 'SELF: Bi-LSTM+GAN']] | 1 | [['P (%)'], ['R (%)'], ['F (%)']] | [['76.9', '65', '70.4'], ['75.6', '69.8', '72.6'], ['80.4', '67.7', '73.5'], ['79.7', '69.6', '74.3'], ['68.5', '75.7', '71.9'], ['80.8', '71.5', '75.9'], ['75.3', '78.8', '77']] | column | ['P (%)', 'R (%)', 'F (%)'] | ['SELF: Bi-LSTM+GAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F (%)</th> </tr> </thead> <tbody> <tr> <td>Method || Joint (Local+Global)</td> <td>76.9</td> <td>65</td> <td>70.4</td> </tr> <tr> <td>Method || MSEP-EMD</td> <td>75.6</td> <td>69.8</td> <td>72.6</td> </tr> <tr> <td>Method || DM-CNN</td> <td>80.4</td> <td>67.7</td> <td>73.5</td> </tr> <tr> <td>Method || DM-CNN*</td> <td>79.7</td> <td>69.6</td> <td>74.3</td> </tr> <tr> <td>Method || Bi-RNN</td> <td>68.5</td> <td>75.7</td> <td>71.9</td> </tr> <tr> <td>Method || Hybrid: Bi-LSTM+CNN</td> <td>80.8</td> <td>71.5</td> <td>75.9</td> </tr> <tr> <td>Method || SELF: Bi-LSTM+GAN</td> <td>75.3</td> <td>78.8</td> <td>77</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1048 | 6 | acl2018 | Table 1 shows the trigger identification performance. It can be observed that SELF outperforms other models, with a performance gain of no less than 1.1% F-score. Frankly, the performance mainly benefits from the higher recall (78.8%). But in fact the relatively comparable precision (75.3%) to the recall reinforces the advantages. By contrast, although most of the compared models achieve much higher precision over SELF, they suffer greatly from the substantial gaps between precision and recall. The advantage is offset by the greater loss of recall. 
| [1, 1, 1, 1, 1, 1] | ['Table 1 shows the trigger identification performance.', 'It can be observed that SELF outperforms other models, with a performance gain of no less than 1.1% F-score.', 'Frankly, the performance mainly benefits from the higher recall (78.8%).', 'But in fact the relatively comparable precision (75.3%) to the recall reinforces the advantages.', 'By contrast, although most of the compared models achieve much higher precision over SELF, they suffer greatly from the substantial gaps between precision and recall.', 'The advantage is offset by the greater loss of recall.'] | [None, ['SELF: Bi-LSTM+GAN', 'F (%)'], ['SELF: Bi-LSTM+GAN', 'R (%)'], ['SELF: Bi-LSTM+GAN', 'R (%)', 'P (%)'], ['Joint (Local+Global)', 'MSEP-EMD', 'DM-CNN', 'Bi-RNN', 'Hybrid: Bi-LSTM+CNN', 'P (%)', 'R (%)'], ['Joint (Local+Global)', 'MSEP-EMD', 'DM-CNN', 'Bi-RNN', 'Hybrid: Bi-LSTM+CNN', 'P (%)', 'R (%)']] | 1 |
P18-1049table_1 | Results on the test set. The GCL models use the same hyperparameters, if possible. The two models on the top do not use neural networks. The results in the two lower blocks all use double-check. “Two more hidden layers” means adding two dense layers on top of the pre-trained model without using GCL. The last row corresponds to connecting the output layer of a pre-trained model to GCL layers with stateless controller. | 2 | [['Model', 'CAEVO (not NN model)'], ['Model', 'CATENA (not NN model)'], ['Model', 'Cheng et al. 2017'], ['Model', 'Meng et al. 2017'], ['Model', 'pairwise'], ['Model', 'Two more hidden layers'], ['Model', 'GCL w/ state-tracking controller'], ['Model', 'GCL w/ stateless controller'], ['Model', 'GCL w/ pre-trained output layer']] | 1 | [['Micro-F1'], ['Macro-F1']] | [['0.507', '-'], ['0.511', '-'], ['0.5203', '-'], ['-', '0.519'], ['0.535', '0.528'], ['0.539', '0.532'], ['0.545', '0.538'], ['0.546', '0.538'], ['0.541', '0.536']] | column | ['Micro-F1', 'Macro-F1'] | ['GCL w/ state-tracking controller', 'GCL w/ stateless controller', 'GCL w/ pre-trained output layer'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro-F1</th> <th>Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Model || CAEVO (not NN model)</td> <td>0.507</td> <td>-</td> </tr> <tr> <td>Model || CATENA (not NN model)</td> <td>0.511</td> <td>-</td> </tr> <tr> <td>Model || Cheng et al. 2017</td> <td>0.5203</td> <td>-</td> </tr> <tr> <td>Model || Meng et al. 
2017</td> <td>-</td> <td>0.519</td> </tr> <tr> <td>Model || pairwise</td> <td>0.535</td> <td>0.528</td> </tr> <tr> <td>Model || Two more hidden layers</td> <td>0.539</td> <td>0.532</td> </tr> <tr> <td>Model || GCL w/ state-tracking controller</td> <td>0.545</td> <td>0.538</td> </tr> <tr> <td>Model || GCL w/ stateless controller</td> <td>0.546</td> <td>0.538</td> </tr> <tr> <td>Model || GCL w/ pre-trained output layer</td> <td>0.541</td> <td>0.536</td> </tr> </tbody></table> | Table 1 | table_1 | P18-1049 | 7 | acl2018 | The middle block of Table 1 shows the performance of the pairwise model after applying double-checking. Since all pairs are flipped, double-checking combines results from (ei , ej ) and (ej , ei), picking the label with the higher probability score, which typically boosts performance. The results without double-checking show similar trends. The bottom block of Table 1 presents the results, showing that all models from the present paper outperform existing models from the literature. One may argue the combined system adds more hidden layers over a pre-trained model, which contributes to the improvement in performance. We show a comparison to a baseline model which adds two dense layers on top of the pairwise model, without the GCL. The configuration of the two layers is the same as we used for the GCL models. The result shows that the performance is slightly higher than what we get from the pairwise model, but the difference is smaller than what we get from GCL models -suggesting that the performance improvement with GCL models is not just due to more parameters. We also tried adding an LSTM layer on top of the pre-trained model, and found the system cannot converge. 
| [1, 2, 2, 1, 2, 1, 2, 1, 2] | ['The middle block of Table 1 shows the performance of the pairwise model after applying double-checking.', 'Since all pairs are flipped, double-checking combines results from (ei , ej ) and (ej , ei), picking the label with the higher probability score, which typically boosts performance.', 'The results without double-checking show similar trends.', 'The bottom block of Table 1 presents the results, showing that all models from the present paper outperform existing models from the literature.', 'One may argue the combined system adds more hidden layers over a pre-trained model, which contributes to the improvement in performance.', 'We show a comparison to a baseline model which adds two dense layers on top of the pairwise model, without the GCL.', 'The configuration of the two layers is the same as we used for the GCL models.', 'The result shows that the performance is slightly higher than what we get from the pairwise model, but the difference is smaller than what we get from GCL models -suggesting that the performance improvement with GCL models is not just due to more parameters.', 'We also tried adding an LSTM layer on top of the pre-trained model, and found the system cannot converge.'] | [['pairwise'], ['pairwise'], ['pairwise'], ['GCL w/ state-tracking controller', 'GCL w/ pre-trained output layer', 'GCL w/ stateless controller'], ['Two more hidden layers', 'GCL w/ pre-trained output layer'], ['Model'], ['GCL w/ state-tracking controller', 'GCL w/ stateless controller', 'GCL w/ pre-trained output layer'], ['pairwise', 'GCL w/ state-tracking controller', 'GCL w/ stateless controller', 'GCL w/ pre-trained output layer'], ['GCL w/ pre-trained output layer']] | 1 |
P18-1050table_5 | Results on TimeBank corpus | 2 | [['Models', 'Choubey and Huang (2017)'], ['Models', 'Choubey and Huang (2017) + CP score']] | 1 | [['Acc.(%)']] | [['51.2'], ['52.3']] | column | ['Acc.(%)'] | ['Choubey and Huang (2017) + CP score'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.(%)</th> </tr> </thead> <tbody> <tr> <td>Models || Choubey and Huang (2017)</td> <td>51.2</td> </tr> <tr> <td>Models || Choubey and Huang (2017) + CP score</td> <td>52.3</td> </tr> </tbody></table> | Table 5 | table_5 | P18-1050 | 8 | acl2018 | To facilitate direct comparisons, we used the same state-of-the-art temporal relation classification system as described in our previous work Choubey and Huang (2017) and considered all the 14 relations in classification. Choubey and Huang (2017) forms three sequences (i.e., word forms, POS tags, and dependency relations) of context words that align with the dependency path between two event mentions and uses three bidirectional LSTMs to get the embedding of each sequence. The final fully connected layer maps the concatenated embeddings of all sequences to 14 fine-grained temporal relations. We applied the same model here, but if an event pair appears in our learned list of event pairs, we concatenated the CP score of the event pair as additional evidence in the final layer. To be consistent with Choubey and Huang (2017), we used the same train/test splitting, the same parameters for the neural network and only considered intra-sentence event pairs. Table 5 shows that by incorporating our learned event knowledge, the overall prediction accuracy was improved by 1.1%. Not surprisingly, out of the 14 temporal relations, the performance on the relation before was improved the most by 4.9%. 
| [2, 2, 2, 2, 2, 1, 2] | ['To facilitate direct comparisons, we used the same state-of-the-art temporal relation classification system as described in our previous work Choubey and Huang (2017) and considered all the 14 relations in classification.', 'Choubey and Huang (2017) forms three sequences (i.e., word forms, POS tags, and dependency relations) of context words that align with the dependency path between two event mentions and uses three bidirectional LSTMs to get the embedding of each sequence.', 'The final fully connected layer maps the concatenated embeddings of all sequences to 14 fine-grained temporal relations.', 'We applied the same model here, but if an event pair appears in our learned list of event pairs, we concatenated the CP score of the event pair as additional evidence in the final layer.', 'To be consistent with Choubey and Huang (2017), we used the same train/test splitting, the same parameters for the neural network and only considered intra-sentence event pairs.', 'Table 5 shows that by incorporating our learned event knowledge, the overall prediction accuracy was improved by 1.1%.', 'Not surprisingly, out of the 14 temporal relations, the performance on the relation before was improved the most by 4.9%.'] | [['Choubey and Huang (2017)', 'Choubey and Huang (2017) + CP score'], ['Choubey and Huang (2017)'], None, ['Choubey and Huang (2017) + CP score'], ['Choubey and Huang (2017)', 'Choubey and Huang (2017) + CP score'], ['Choubey and Huang (2017)', 'Choubey and Huang (2017) + CP score'], None] | 1 |
P18-1053table_4 | Performance of our model with different random seeds. | 1 | [['Our model']] | 1 | [['Min F'], ['Median F'], ['Max F'], ['σ']] | [['56.5', '57.1', '57.5', '0.00253']] | column | ['Min F', 'Median F', 'Max F', 'σ'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Min F</th> <th>Median F</th> <th>Max F</th> <th>σ</th> </tr> </thead> <tbody> <tr> <td>Our model</td> <td>56.5</td> <td>57.1</td> <td>57.5</td> <td>0.00253</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1053 | 7 | acl2018 | Table 4 shows the performance of our model with different random seeds on the test dataset. We report the minimum, the maximum, the median F-scores results and the standard deviation σ of F-scores. We run the model with 38 different random seeds. The maximum F-score is 57.5% and the minimum on is 56.5%. | [1, 1, 2, 1] | ['Table 4 shows the performance of our model with different random seeds on the test dataset.', 'We report the minimum, the maximum, the median F-scores results and the standard deviation σ of F-scores.', 'We run the model with 38 different random seeds.', 'The maximum F-score is 57.5% and the minimum on is 56.5%.'] | [['Our model'], ['Min F', 'Median F', 'Max F', 'σ'], None, ['Max F', 'Min F']] | 1 |
P18-1061table_2 | Full length ROUGE F1 evaluation (%) on CNN/Daily Mail test set. Results with ‡ mark are taken from the corresponding papers. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our ROUGE scores have a 95% confidence interval of at most ±0.22 as reported by the official ROUGE script. The improvement is statistically significant with respect to the results with superscript mark. | 2 | [['Models', 'LEAD3'], ['Models', 'TEXTRANK'], ['Models', 'CRSUM'], ['Models', 'NN-SE'], ['Models', 'PGN ‡'], ['Models', 'LEAD3 ‡ *'], ['Models', 'SUMMARUNNER ‡ *'], ['Models', 'NEUSUM']] | 1 | [['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']] | [['40.24-', '17.70-', '36.45-'], ['40.20-', '17.56-', '36.44-'], ['40.52-', '18.08-', '36.81-'], ['41.13-', '18.59-', '37.40-'], ['39.53-', '17.28-', '36.38-'], ['39.2', '15.7', '35.5'], ['39.6', '16.2', '35.3'], ['41.59', '19.01', '37.98']] | column | ['ROUGE-1', 'ROUGE-2', 'ROUGE-L'] | ['NEUSUM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Models || LEAD3</td> <td>40.24-</td> <td>17.70-</td> <td>36.45-</td> </tr> <tr> <td>Models || TEXTRANK</td> <td>40.20-</td> <td>17.56-</td> <td>36.44-</td> </tr> <tr> <td>Models || CRSUM</td> <td>40.52-</td> <td>18.08-</td> <td>36.81-</td> </tr> <tr> <td>Models || NN-SE</td> <td>41.13-</td> <td>18.59-</td> <td>37.40-</td> </tr> <tr> <td>Models || PGN ‡</td> <td>39.53-</td> <td>17.28-</td> <td>36.38-</td> </tr> <tr> <td>Models || LEAD3 ‡ *</td> <td>39.2</td> <td>15.7</td> <td>35.5</td> </tr> <tr> <td>Models || SUMMARUNNER ‡ *</td> <td>39.6</td> <td>16.2</td> <td>35.3</td> </tr> <tr> <td>Models || NEUSUM</td> <td>41.59</td> <td>19.01</td> <td>37.98</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1061 | 7 | acl2018 | We use the official ROUGE script4 (version 1.5.5) to 
evaluate the summarization output. Table 2 summarizes the results on CNN/Daily Mail data set using full length ROUGE-F1 evaluation. It includes two unsupervised baselines, LEAD3 and TEXTRANK. The table also includes three state-of-the-art neural network based extractive models, i.e., CRSUM, NN-SE and SUMMARUNNER. In addition, we report the state-of-the-art abstractive PGN model. The result of SUMMARUNNER is on the anonymized dataset and not strictly comparable to our results on the non-anonymized version dataset. Therefore, we also include the result of LEAD3 on the anonymized dataset as a reference. NEUSUM achieves 19.01 ROUGE-2 F1 score on the CNN/Daily Mail dataset. Compared to the unsupervised baseline methods, NEUSUM performs better by a large margin. In terms of ROUGE-2 F1, NEUSUM outperforms the strong baseline LEAD3 by 1.31 points. NEUSUM also outperforms the neural network based models. Compared to the state-of-the-art extractive model NN-SE (Cheng and Lapata, 2016), NEUSUM performs significantly better in terms of ROUGE-1, ROUGE-2 and ROUGE-L F1 scores. 
| [2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1] | ['We use the official ROUGE script (version 1.5.5) to evaluate the summarization output.', 'Table 2 summarizes the results on CNN/Daily Mail data set using full length ROUGE-F1 evaluation.', 'It includes two unsupervised baselines, LEAD3 and TEXTRANK.', 'The table also includes three state-of-the-art neural network based extractive models, i.e., CRSUM, NN-SE and SUMMARUNNER.', 'In addition, we report the state-of-the-art abstractive PGN model.', 'The result of SUMMARUNNER is on the anonymized dataset and not strictly comparable to our results on the non-anonymized version dataset.', 'Therefore, we also include the result of LEAD3 on the anonymized dataset as a reference.', 'NEUSUM achieves 19.01 ROUGE-2 F1 score on the CNN/Daily Mail dataset.', 'Compared to the unsupervised baseline methods, NEUSUM performs better by a large margin.', 'In terms of ROUGE-2 F1, NEUSUM outperforms the strong baseline LEAD3 by 1.31 points.', 'NEUSUM also outperforms the neural network based models.', 'Compared to the state-of-the-art extractive model NN-SE (Cheng and Lapata, 2016), NEUSUM performs significantly better in terms of ROUGE-1, ROUGE-2 and ROUGE-L F1 scores.'] | [None, ['ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['LEAD3', 'TEXTRANK'], ['CRSUM', 'NN-SE', 'SUMMARUNNER ‡ *'], ['PGN ‡'], ['SUMMARUNNER ‡ *'], ['LEAD3 ‡ *'], ['NEUSUM', 'ROUGE-2'], ['NEUSUM', 'LEAD3', 'TEXTRANK', 'CRSUM', 'NN-SE', 'SUMMARUNNER ‡ *', 'PGN ‡'], ['NEUSUM', 'ROUGE-2', 'LEAD3'], ['NEUSUM', 'LEAD3', 'TEXTRANK', 'CRSUM', 'NN-SE', 'SUMMARUNNER ‡ *', 'PGN ‡'], ['NEUSUM', 'NN-SE', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L']] | 1 |
P18-1063table_5 | Speed comparison with See et al. (2017). | 2 | [['Models', '(See et al., 2017)'], ['Models', 'rnn-ext + abs + RL'], ['Models', 'rnn-ext + abs + RL + rerank']] | 2 | [['Speed', 'total time (hr)'], ['Speed', 'words / sec']] | [['12.9', '14.8'], ['0.68', '361.3'], ['2.00 (1.46 +0.54)', '109.8']] | column | ['total time (hr)', 'words / sec'] | ['rnn-ext + abs + RL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Speed || total time (hr)</th> <th>Speed || words / sec</th> </tr> </thead> <tbody> <tr> <td>Models || (See et al., 2017)</td> <td>12.9</td> <td>14.8</td> </tr> <tr> <td>Models || rnn-ext + abs + RL</td> <td>0.68</td> <td>361.3</td> </tr> <tr> <td>Models || rnn-ext + abs + RL + rerank</td> <td>2.00 (1.46 +0.54)</td> <td>109.8</td> </tr> </tbody></table> | Table 5 | table_5 | P18-1063 | 9 | acl2018 | In Table 5, we show the substantial test-time speed-up of our model compared to See et al. (2017). We calculate the total decoding time for producing all summaries for the test set. Due to the fact that the main test-time speed bottleneck of RNN language generation model is that the model is constrained to generate one word at a time, the total decoding time is dependent on the number of total words generated; we hence also report the decoded words per second for a fair comparison. Our model without reranking is extremely fast. From Table 5 we can see that we achieve a speed up of 18x in time and 24x in word generation rate. Even after adding the (optional) reranker, we still maintain a 6-7x speed-up (and hence a user can choose to use the reranking component depending on their downstream application’s speed requirements). | [1, 2, 2, 1, 1, 1] | ['In Table 5, we show the substantial test-time speed-up of our model compared to See et al. 
(2017).', 'We calculate the total decoding time for producing all summaries for the test set.', 'Due to the fact that the main test-time speed bottleneck of RNN language generation model is that the model is constrained to generate one word at a time, the total decoding time is dependent on the number of total words generated; we hence also report the decoded words per second for a fair comparison.', 'Our model without reranking is extremely fast.', 'From Table 5 we can see that we achieve a speed up of 18x in time and 24x in word generation rate.', 'Even after adding the (optional) reranker, we still maintain a 6-7x speed-up (and hence a user can choose to use the reranking component depending on their downstream application’s speed requirements).'] | [['total time (hr)', 'words / sec'], None, None, ['rnn-ext + abs + RL'], ['rnn-ext + abs + RL', 'total time (hr)', 'words / sec'], ['rnn-ext + abs + RL + rerank', 'total time (hr)', 'words / sec']] | 1 |
P18-1064table_4 | Gigaword Human Evaluation: pairwise comparison between our 3-way multi-task (MTL) model w.r.t. our baseline. | 2 | [['Models', 'MTL wins'], ['Models', 'Baseline wins'], ['Models', 'Non-distinguish']] | 1 | [['Relevance'], ['Readability'], ['Total']] | [['33', '32', '65'], ['22', '22', '44'], ['45', '46', '91']] | column | ['Relevance', 'Readability', 'Total'] | ['MTL wins'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Relevance</th> <th>Readability</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Models || MTL wins</td> <td>33</td> <td>32</td> <td>65</td> </tr> <tr> <td>Models || Baseline wins</td> <td>22</td> <td>22</td> <td>44</td> </tr> <tr> <td>Models || Non-distinguish</td> <td>45</td> <td>46</td> <td>91</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1064 | 7 | acl2018 | We also show human evaluation results on the Gigaword dataset in Table 4 (again based on pairwise comparisons for 100 samples), where we see that our MTL model is better than our state-of-the-art baseline on both relevance and readability. | [1] | ['We also show human evaluation results on the Gigaword dataset in Table 4 (again based on pairwise comparisons for 100 samples), where we see that our MTL model is better than our state-of-the-art baseline on both relevance and readability.'] | [['MTL wins', 'Baseline wins']] | 1 |
P18-1064table_6 | Performance of our pointer-based entailment generation (EG) models compared with previous SotA work. M, C, R, B are short for Meteor, CIDEr-D, ROUGE-L, and BLEU-4, resp. | 2 | [['Models', 'Pasunuru&Bansal (2017)'], ['Models', 'Our 1-layer pointer EG'], ['Models', 'Our 2-layer pointer EG']] | 1 | [['M'], ['C'], ['R'], ['B']] | [['29.6', '117.8', '62.4', '40.6'], ['32.4', '139.3', '65.1', '43.6'], ['32.3', '140.0', '64.4', '43.7']] | column | ['M', 'C', 'R', 'B'] | ['Our 1-layer pointer EG', 'Our 2-layer pointer EG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>M</th> <th>C</th> <th>R</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>Models || Pasunuru&Bansal (2017)</td> <td>29.6</td> <td>117.8</td> <td>62.4</td> <td>40.6</td> </tr> <tr> <td>Models || Our 1-layer pointer EG</td> <td>32.4</td> <td>139.3</td> <td>65.1</td> <td>43.6</td> </tr> <tr> <td>Models || Our 2-layer pointer EG</td> <td>32.3</td> <td>140.0</td> <td>64.4</td> <td>43.7</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1064 | 8 | acl2018 | Table 6 compares our model’s performance to Pasunuru and Bansal (2017). Our pointer mechanism gives a performance boost, since the entailment generation task involves copying from the given premise sentence, whereas the 2-layer model seems comparable to the 1-layer model. Also, the supplementary shows some output examples from our entailment generation model. 
| [1, 1, 2] | [' Table 6 compares our model’s performance to Pasunuru and Bansal (2017).', 'Our pointer mechanism gives a performance boost, since the entailment generation task involves copying from the given premise sentence, whereas the 2-layer model seems comparable to the 1-layer model.', 'Also, the supplementary shows some output examples from our entailment generation model.'] | [['Our 1-layer pointer EG', 'Our 2-layer pointer EG', 'Pasunuru&Bansal (2017)'], ['Our 2-layer pointer EG', 'Our 1-layer pointer EG'], ['Our 1-layer pointer EG', 'Our 2-layer pointer EG']] | 1 |
P18-1064table_9 | Entailment classification results of our baseline vs. EG-multi-task model (p < 0.001). | 2 | [['Models', 'Baseline'], ['Models', 'Multi-Task (EG)']] | 1 | [['Average Entailment Probability']] | [['0.907'], ['0.912']] | column | ['Average Entailment Probability'] | ['Multi-Task (EG)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Average Entailment Probability</th> </tr> </thead> <tbody> <tr> <td>Models || Baseline</td> <td>0.907</td> </tr> <tr> <td>Models || Multi-Task (EG)</td> <td>0.912</td> </tr> </tbody></table> | Table 9 | table_9 | P18-1064 | 8 | acl2018 | We employ a state-of-the-art entailment classifier (Chen et al., 2017), and calculate the average of the entailment probability of each of the output summary’s sentences being entailed by the input source document. We do this for output summaries of our baseline and 2-way-EG multi-task model (with entailment generation). As can be seen in Table 9, our multi-task model improves upon the baseline in the aspect of being entailed by the source document (with statistical significance p < 0.001). | [2, 2, 1] | ['We employ a state-of-the-art entailment classifier (Chen et al., 2017), and calculate the average of the entailment probability of each of the output summary’s sentences being entailed by the input source document.', ' We do this for output summaries of our baseline and 2-way-EG multi-task model (with entailment generation).', 'As can be seen in Table 9, our multi-task model improves upon the baseline in the aspect of being entailed by the source document (with statistical significance p < 0.001).'] | [['Average Entailment Probability'], ['Baseline', 'Multi-Task (EG)'], ['Multi-Task (EG)', 'Baseline']] | 1
P18-1067table_6 | Overview of Macro-weighted Average F1 Scores of SVM and PSL Models. The top portion of the table shows the results of the three baselines. The bottom portion shows a subset of the PSL models (parentheses indicate features added onto the previous models). | 2 | [['MODEL', 'SVM BOW'], ['MODEL', 'PSL BOW'], ['MODEL', 'MAJORITY VOTE'], ['MODEL', 'M1 (UNIGRAMS)'], ['MODEL', 'M3 (+ POLITICAL INFO)'], ['MODEL', 'M5 (+ FRAMES)'], ['MODEL', 'M9 (+ BIGRAMS)'], ['MODEL', 'M13 (ALL FEATURES)']] | 1 | [['MFD'], ['AR']] | [['18.7', '-'], ['21.88', '-'], ['12.5', '10.86'], ['7.17', '8.68'], ['22.01', '30.45'], ['28.94', '37.44'], ['67.93', '66.5'], ['72.49', '69.38']] | column | ['F1', 'F1'] | ['M1 (UNIGRAMS)', 'M3 (+ POLITICAL INFO)', 'M5 (+ FRAMES)', 'M9 (+ BIGRAMS)', 'M13 (ALL FEATURES)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MFD</th> <th>AR</th> </tr> </thead> <tbody> <tr> <td>MODEL || SVM BOW</td> <td>18.7</td> <td>-</td> </tr> <tr> <td>MODEL || PSL BOW</td> <td>21.88</td> <td>-</td> </tr> <tr> <td>MODEL || MAJORITY VOTE</td> <td>12.5</td> <td>10.86</td> </tr> <tr> <td>MODEL || M1 (UNIGRAMS)</td> <td>7.17</td> <td>8.68</td> </tr> <tr> <td>MODEL || M3 (+ POLITICAL INFO)</td> <td>22.01</td> <td>30.45</td> </tr> <tr> <td>MODEL || M5 (+ FRAMES)</td> <td>28.94</td> <td>37.44</td> </tr> <tr> <td>MODEL || M9 (+ BIGRAMS)</td> <td>67.93</td> <td>66.5</td> </tr> <tr> <td>MODEL || M13 (ALL FEATURES)</td> <td>72.49</td> <td>69.38</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1067 | 7 | acl2018 | Table 6 shows an overview of the average results of our supervised experiments for five of the PSL models. The first column lists the SVM or PSL model. The second column presents the results of a given model when using the MFD as the source of the unigrams for the initial model (M1). The final column shows the results when the AR unigrams are used as the initial source of supervision. 
The first two rows show the results of predicting the morals present in tweets using a bag-of-words (BoW) approach. Both the SVM and PSL models perform poorly due to the eleven predictive classes and noisy input features. The third row shows the results when taking a majority vote over the presence of MFD unigrams, similar to previous works. This approach is simpler and less noisy than M1, the PSL model closest to this approach. | [1, 1, 1, 1, 1, 1, 1, 1] | [' Table 6 shows an overview of the average results of our supervised experiments for five of the PSL models.', 'The first column lists the SVM or PSL model.', 'The second column presents the results of a given model when using the MFD as the source of the unigrams for the initial model (M1).', 'The final column shows the results when the AR unigrams are used as the initial source of supervision.', 'The first two rows show the results of predicting the morals present in tweets using a bag-of-words (BoW) approach.', 'Both the SVM and PSL models perform poorly due to the eleven predictive classes and noisy input features.', 'The third row shows the results when taking a majority vote over the presence of MFD unigrams, similar to previous works.', 'This approach is simpler and less noisy than M1, the PSL model closest to this approach.'] | [['SVM BOW', 'PSL BOW', 'M1 (UNIGRAMS)', 'M3 (+ POLITICAL INFO)', 'M5 (+ FRAMES)', 'M9 (+ BIGRAMS)', 'M13 (ALL FEATURES)'], ['SVM BOW', 'PSL BOW', 'M1 (UNIGRAMS)', 'M3 (+ POLITICAL INFO)', 'M5 (+ FRAMES)', 'M9 (+ BIGRAMS)', 'M13 (ALL FEATURES)'], ['MFD'], ['AR'], ['SVM BOW', 'PSL BOW'], ['SVM BOW', 'PSL BOW'], ['MAJORITY VOTE', 'MFD'], ['MAJORITY VOTE', 'M1 (UNIGRAMS)']] | 1 |
P18-1067table_9 | Overview of Macro-weighted Average F1 Scores of Joint PSL Model M13. BASELINE is the MORAL prediction result. JOINT is the result of jointly predicting the MORAL and uninitialized FRAME predicates. SKYLINE shows the results when using all features with initialized frames. | 2 | [['PSL MODEL', 'BASELINE'], ['PSL MODEL', 'JOINT'], ['PSL MODEL', 'SKYLINE']] | 1 | [['MFD'], ['AR']] | [['55.49', '55.88'], ['51.22', '58.75'], ['72.49', '69.38']] | column | ['F1', 'F1'] | ['SKYLINE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MFD</th> <th>AR</th> </tr> </thead> <tbody> <tr> <td>PSL MODEL || BASELINE</td> <td>55.49</td> <td>55.88</td> </tr> <tr> <td>PSL MODEL || JOINT</td> <td>51.22</td> <td>58.75</td> </tr> <tr> <td>PSL MODEL || SKYLINE</td> <td>72.49</td> <td>69.38</td> </tr> </tbody></table> | Table 9 | table_9 | P18-1067 | 9 | acl2018 | Table 9 shows the macro-weighted average F1 scores for three different models. The BASELINE model shows the results of predicting only the MORAL of the tweet using the non-joint model M13, which uses all features with frames initialized. The JOINT model is designed to predict both the moral foundation and frame of a tweet simultaneously (as shown in Table 5), with no frame initialization. Finally, the SKYLINE model is M13 with all features, where the frames are initialized with their known values. The joint model using AR unigrams outperforms the baseline, showing that there is some benefit to modeling moral foundations and frames together, as well as using domain-specific unigrams. However, it is unable to beat the MFD-based unigrams model. This is likely due to the large amount of noise introduced by incorrect frame predictions into the joint model. As expected, the joint model does not outperform the skyline model which is able to use the known values of the frames in order to accurately classify the moral foundations associated with the tweets. 
| [1, 2, 2, 2, 1, 2, 2, 1] | ['Table 9 shows the macro-weighted average F1 scores for three different models.', 'The BASELINE model shows the results of predicting only the MORAL of the tweet using the non-joint model M13, which uses all features with frames initialized.', 'The JOINT model is designed to predict both the moral foundation and frame of a tweet simultaneously (as shown in Table 5), with no frame initialization.', 'Finally, the SKYLINE model is M13 with all features, where the frames are initialized with their known values.', 'The joint model using AR unigrams outperforms the baseline, showing that there is some benefit to modeling moral foundations and frames together, as well as using domain-specific unigrams.', 'However, it is unable to beat the MFD-based unigrams model.', 'This is likely due to the large amount of noise introduced by incorrect frame predictions into the joint model.', 'As expected, the joint model does not outperform the skyline model which is able to use the known values of the frames in order to accurately classify the moral foundations associated with the tweets.'] | [['BASELINE', 'JOINT', 'SKYLINE'], ['BASELINE'], ['JOINT'], ['SKYLINE'], ['BASELINE', 'AR', 'MFD'], ['MFD'], ['JOINT'], ['JOINT', 'SKYLINE']] | 1
P18-1068table_3 | DJANGO results. Accuracies in the first and second block are taken from Ling et al. (2016) and Yin and Neubig (2017). | 2 | [['Method', 'Retrieval System'], ['Method', 'Phrasal SMT'], ['Method', 'Hierarchical SMT'], ['Method', 'SEQ2SEQ+UNK replacement'], ['Method', 'SEQ2TREE+UNK replacement'], ['Method', 'LPN+COPY (Ling et al. 2016)'], ['Method', 'SNM+COPY (Yin and Neubig 2017)'], ['Method', 'ONESTAGE'], ['Method', 'COARSE2FINE'], ['Method', 'COARSE2FINE - sketch encoder'], ['Method', 'COARSE2FINE + oracle sketch']] | 1 | [['Accuracy']] | [['14.7'], ['31.5'], ['9.5'], ['45.1'], ['39.4'], ['62.3'], ['71.6'], ['69.5'], ['74.1'], ['72.1'], ['83']] | column | ['Accuracy'] | ['COARSE2FINE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Method || Retrieval System</td> <td>14.7</td> </tr> <tr> <td>Method || Phrasal SMT</td> <td>31.5</td> </tr> <tr> <td>Method || Hierarchical SMT</td> <td>9.5</td> </tr> <tr> <td>Method || SEQ2SEQ+UNK replacement</td> <td>45.1</td> </tr> <tr> <td>Method || SEQ2TREE+UNK replacement</td> <td>39.4</td> </tr> <tr> <td>Method || LPN+COPY (Ling et al. 2016)</td> <td>62.3</td> </tr> <tr> <td>Method || SNM+COPY (Yin and Neubig 2017)</td> <td>71.6</td> </tr> <tr> <td>Method || ONESTAGE</td> <td>69.5</td> </tr> <tr> <td>Method || COARSE2FINE</td> <td>74.1</td> </tr> <tr> <td>Method || COARSE2FINE - sketch encoder</td> <td>72.1</td> </tr> <tr> <td>Method || COARSE2FINE + oracle sketch</td> <td>83</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1068 | 8 | acl2018 | Table 3 reports results on DJANGO where we observe similar tendencies. COARSE2FINE outperforms ONESTAGE by a wide margin. It is also superior to the best reported result in the literature (SNM+COPY; see the second block in the table). Again we observe that the sketch encoder is beneficial and that there is an 8.9 point difference in accuracy between COARSE2FINE and the oracle. 
| [1, 1, 1, 1] | ['Table 3 reports results on DJANGO where we observe similar tendencies.', 'COARSE2FINE outperforms ONESTAGE by a wide margin.', 'It is also superior to the best reported result in the literature (SNM+COPY; see the second block in the table).', 'Again we observe that the sketch encoder is beneficial and that there is an 8.9 point difference in accuracy between COARSE2FINE and the oracle.'] | [None, ['COARSE2FINE', 'ONESTAGE'], ['COARSE2FINE', 'SNM+COPY (Yin and Neubig 2017)'], ['COARSE2FINE - sketch encoder', 'COARSE2FINE + oracle sketch', 'COARSE2FINE']] | 1 |
P18-1069table_5 | Importance scores of confidence metrics (normalized by maximum value on each dataset). Best results are shown in bold. Same shorthands apply as in Table 3. | 2 | [['Metric', 'IFTTT'], ['Metric', 'DJANGO']] | 1 | [['Dout'], ['Noise'], ['PR'], ['PPL'], ['LM'], ['#UNK'], ['Var'], ['Ent']] | [['0.39', '1', '0.89', '0.27', '0.26', '0.46', '0.43', '0.34'], ['1', '0.59', '0.22', '0.58', '0.49', '0.14', '0.24', '0.25']] | column | ['Dout', 'Noise', 'PR', 'PPL', 'LM', '#UNK', 'Var', 'Ent'] | ['Metric'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dout</th> <th>Noise</th> <th>PR</th> <th>PPL</th> <th>LM</th> <th>#UNK</th> <th>Var</th> <th>Ent</th> </tr> </thead> <tbody> <tr> <td>Metric || IFTTT</td> <td>0.39</td> <td>1</td> <td>0.89</td> <td>0.27</td> <td>0.26</td> <td>0.46</td> <td>0.43</td> <td>0.34</td> </tr> <tr> <td>Metric || DJANGO</td> <td>1</td> <td>0.59</td> <td>0.22</td> <td>0.58</td> <td>0.49</td> <td>0.14</td> <td>0.24</td> <td>0.25</td> </tr> </tbody></table> | Table 5 | table_5 | P18-1069 | 8 | acl2018 | Table 5 shows the relative importance of individual metrics in the regression model. As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016). The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays the most important role. On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs. Dout is short for dropout, PR for posterior probability, PPL for perplexity, LM for probability based on a language model,
#UNK for number of unknown tokens, Var for variance of top candidates, and Ent for Entropy. | [1, 2, 1, 1, 2] | ['Table 5 shows the relative importance of individual metrics in the regression model.', 'As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016).', 'The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays the most important role.', 'On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.', 'Dout is short for dropout, PR for posterior probability, PPL for perplexity, LM for probability based on a language model,\r\n#UNK for number of unknown tokens, Var for variance of top candidates, and Ent for Entropy.'] | [['Metric'], None, ['Noise', 'Dout', 'PR', 'PPL'], ['IFTTT', '#UNK', 'Var'], ['Dout', 'PR', 'PPL', 'LM', '#UNK', 'Var', 'Ent']] | 1 |
P18-1073table_3 | Accuracy (%) of the proposed method in comparison with previous work. *Results obtained with the official implementation from the authors. †Results obtained with the framework from Artetxe et al. (2018a). The remaining results were reported in the original papers. For methods that do not require supervision, we report the average accuracy across 10 runs. ‡For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2). | 4 | [['Supervision', '5k dict', 'Method', 'Mikolov et al. (2013)'], ['Supervision', '5k dict', 'Method', 'Faruqui and Dyer (2014)'], ['Supervision', '5k dict', 'Method', 'Shigeto et al. (2015)'], ['Supervision', '5k dict', 'Method', 'Dinu et al. (2015)'], ['Supervision', '5k dict', 'Method', 'Lazaridou et al. (2015)'], ['Supervision', '5k dict', 'Method', 'Xing et al. (2015)'], ['Supervision', '5k dict', 'Method', 'Zhang et al. (2016)'], ['Supervision', '5k dict', 'Method', 'Artetxe et al. (2016)'], ['Supervision', '5k dict', 'Method', 'Artetxe et al. (2017)'], ['Supervision', '5k dict', 'Method', 'Smith et al. (2017)'], ['Supervision', '5k dict', 'Method', 'Artetxe et al. (2018a)'], ['Supervision', '25 dict', 'Method', 'Artetxe et al. (2017)'], ['Supervision', 'Init.', 'Method', 'Smith et al. (2017) cognates'], ['Supervision', 'heurist', 'Method', 'Artetxe et al. (2017) num.'], ['Supervision', 'None', 'Method', 'Zhang et al. (2017a) λ = 1'], ['Supervision', 'None', 'Method', 'Zhang et al. (2017a) λ = 10'], ['Supervision', 'None', 'Method', 'Conneau et al. (2018) code‡'], ['Supervision', 'None', 'Method', 'Conneau et al. 
(2018) paper‡'], ['Supervision', 'None', 'Method', 'Proposed method']] | 1 | [['EN-IT'], ['EN-DE'], ['EN-FI'], ['EN-ES']] | [['34.93†', '35.00†', '25.91†', '27.73†'], ['38.40*', '37.13*', '27.60*', '26.80*'], ['41.53†', '43.07†', '31.04†', '33.73†'], ['37.7', '38.93*', '29.14*', '30.40*'], ['40.2', '-', '-', '-'], ['36.87†', '41.27†', '28.23†', '31.20†'], ['36.73†', '40.80†', '28.16†', '31.07†'], ['39.27', '41.87* ', '30.62*', '31.40*'], ['39.67', '40.87', '28.72', '-'], ['43.1', '43.33†', '29.42†', '35.13†'], ['45.27', '44.13', '32.94', '36.6'], ['37.27', '39.6', '28.16', '-'], ['39.9', '-', '-', '-'], ['39.4', '40.27', '26.47', '-'], ['0.00*', '0.00*', '0.00*', '0.00*'], ['0.00*', '0.00*', '0.01*', '0.01*'], ['45.15*', '46.83*', '0.38*', '35.38*'], ['45.1', '0.01*', '0.01*', '35.44*'], ['48.13', '48.19', '32.63', '37.33']] | column | ['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy'] | ['Proposed method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-IT</th> <th>EN-DE</th> <th>EN-FI</th> <th>EN-ES</th> </tr> </thead> <tbody> <tr> <td>Supervision || 5k dict || Method || Mikolov et al. (2013)</td> <td>34.93†</td> <td>35.00†</td> <td>25.91†</td> <td>27.73†</td> </tr> <tr> <td>Supervision || 5k dict || Method || Faruqui and Dyer (2014)</td> <td>38.40*</td> <td>37.13*</td> <td>27.60*</td> <td>26.80*</td> </tr> <tr> <td>Supervision || 5k dict || Method || Shigeto et al. (2015)</td> <td>41.53†</td> <td>43.07†</td> <td>31.04†</td> <td>33.73†</td> </tr> <tr> <td>Supervision || 5k dict || Method || Dinu et al. (2015)</td> <td>37.7</td> <td>38.93*</td> <td>29.14*</td> <td>30.40*</td> </tr> <tr> <td>Supervision || 5k dict || Method || Lazaridou et al. (2015)</td> <td>40.2</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Supervision || 5k dict || Method || Xing et al. (2015)</td> <td>36.87†</td> <td>41.27†</td> <td>28.23†</td> <td>31.20†</td> </tr> <tr> <td>Supervision || 5k dict || Method || Zhang et al. 
(2016)</td> <td>36.73†</td> <td>40.80†</td> <td>28.16†</td> <td>31.07†</td> </tr> <tr> <td>Supervision || 5k dict || Method || Artetxe et al. (2016)</td> <td>39.27</td> <td>41.87*</td> <td>30.62*</td> <td>31.40*</td> </tr> <tr> <td>Supervision || 5k dict || Method || Artetxe et al. (2017)</td> <td>39.67</td> <td>40.87</td> <td>28.72</td> <td>-</td> </tr> <tr> <td>Supervision || 5k dict || Method || Smith et al. (2017)</td> <td>43.1</td> <td>43.33†</td> <td>29.42†</td> <td>35.13†</td> </tr> <tr> <td>Supervision || 5k dict || Method || Artetxe et al. (2018a)</td> <td>45.27</td> <td>44.13</td> <td>32.94</td> <td>36.6</td> </tr> <tr> <td>Supervision || 25 dict || Method || Artetxe et al. (2017)</td> <td>37.27</td> <td>39.6</td> <td>28.16</td> <td>-</td> </tr> <tr> <td>Supervision || Init. || Method || Smith et al. (2017) cognates</td> <td>39.9</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Supervision || heurist || Method || Artetxe et al. (2017) num.</td> <td>39.4</td> <td>40.27</td> <td>26.47</td> <td>-</td> </tr> <tr> <td>Supervision || None || Method || Zhang et al. (2017a) λ = 1</td> <td>0.00*</td> <td>0.00*</td> <td>0.00*</td> <td>0.00*</td> </tr> <tr> <td>Supervision || None || Method || Zhang et al. (2017a) λ = 10</td> <td>0.00*</td> <td>0.00*</td> <td>0.01*</td> <td>0.01*</td> </tr> <tr> <td>Supervision || None || Method || Conneau et al. (2018) code‡</td> <td>45.15*</td> <td>46.83*</td> <td>0.38*</td> <td>35.38*</td> </tr> <tr> <td>Supervision || None || Method || Conneau et al. (2018) paper‡</td> <td>45.1</td> <td>0.01*</td> <td>0.01*</td> <td>35.44*</td> </tr> <tr> <td>Supervision || None || Method || Proposed method</td> <td>48.13</td> <td>48.19</td> <td>32.63</td> <td>37.33</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1073 | 7 | acl2018 | Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision. 
We focus on the widely used English-Italian dataset of Dinu et al. (2015) and its extensions. Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches. The only exception is English-Finnish, where Artetxe et al. (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair. At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al. (2017), the only other system based on self-learning, with the additional advantage of being fully unsupervised. | [1, 1, 1, 1, 1] | ['Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.', 'We focus on the widely used English-Italian dataset of Dinu et al. (2015) and its extensions.', 'Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.', 'The only exception is English-Finnish, where Artetxe et al. (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.', 'At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al. (2017), the only other system based on self-learning, with the additional advantage of being fully unsupervised.'] | [['Proposed method'], ['Dinu et al. (2015)', 'EN-IT'], ['Proposed method'], ['Proposed method', 'EN-FI', 'Artetxe et al. (2018a)'], ['Proposed method', 'Artetxe et al. (2017)']] | 1
P18-1074table_6 | Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout). | 2 | [['Model', 'Basic'], ['Model', 'Basic + C'], ['Model', 'Basic + CL'], ['Model', 'Basic + CLS'], ['Model', 'Basic + CLSH'], ['Model', 'Basic + CLSHD']] | 1 | [['0'], ['10'], ['100'], ['200'], ['All']] | [['2.06', '20.03', '47.98', '51.52', '77.63'], ['1.69', '24.22', '48.53', '56.26', '83.38'], ['9.62', '25.97', '49.54', '56.29', '83.37'], ['3.21', '25.43', '50.67', '56.34', '84.02'], ['7.7', '30.48', '53.73', '58.09', '84.68'], ['12.12', '35.82', '57.33', '63.27', '86']] | column | ['F-score', 'F-score', 'F-score', 'F-score', 'F-score'] | ['0', '10', '100', '200', 'All'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>0</th> <th>10</th> <th>100</th> <th>200</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>Model || Basic</td> <td>2.06</td> <td>20.03</td> <td>47.98</td> <td>51.52</td> <td>77.63</td> </tr> <tr> <td>Model || Basic + C</td> <td>1.69</td> <td>24.22</td> <td>48.53</td> <td>56.26</td> <td>83.38</td> </tr> <tr> <td>Model || Basic + CL</td> <td>9.62</td> <td>25.97</td> <td>49.54</td> <td>56.29</td> <td>83.37</td> </tr> <tr> <td>Model || Basic + CLS</td> <td>3.21</td> <td>25.43</td> <td>50.67</td> <td>56.34</td> <td>84.02</td> </tr> <tr> <td>Model || Basic + CLSH</td> <td>7.7</td> <td>30.48</td> <td>53.73</td> <td>58.09</td> <td>84.68</td> </tr> <tr> <td>Model || Basic + CLSHD</td> <td>12.12</td> <td>35.82</td> <td>57.33</td> <td>63.27</td> <td>86</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1074 | 8 | acl2018 | As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data. For example, the language-specific layer slightly impairs the performance with only 10 training sentences. 
However, this is unsurprising as it introduces additional parameters that are only trained by the target task data. | [1, 1, 2] | ['As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data.', 'For example, the language-specific layer slightly impairs the performance with only 10 training sentences.', 'However, this is unsurprising as it introduces additional parameters that are only trained by the target task data.'] | [['Basic + C', 'Basic + CL', 'Basic + CLS', 'Basic + CLSH', 'Basic + CLSHD', '0', '10', '100', '200', 'All'], ['Basic + CLS', 'Basic + CLSH', 'Basic + CLSHD', '10'], None] | 1 |
P18-1075table_2 | We report F1 results for medical BLI with the cosine similarity and the classifier based systems. We present baseline and our proposed domain adaptation method using both general and medical lexicons. | 1 | [['Baseline'], ['Baseline BNC lexicon'], ['Adapted medical lexicon'], ['Adapted BNC lexicon']] | 2 | [['cosine similarity', 'F1 (top)'], ['cosine similarity', 'F1 (all)'], ['classifier', 'F1 (top)'], ['classifier', 'F1 (all)']] | [['13.43', '9.84', '37.73', '36.61'], ['-', '-', '20.73', '21.78'], ['14.18', '14.15', '40.71', '38.09'], ['16.29', '16.71', '22.1', '21.5']] | column | ['F1 (top)', 'F1 (all)', 'F1 (top)', 'F1 (all)'] | ['Adapted BNC lexicon'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>cosine similarity || F1 (top)</th> <th>cosine similarity || F1 (all)</th> <th>classifier || F1 (top)</th> <th>classifier || F1 (all)</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>13.43</td> <td>9.84</td> <td>37.73</td> <td>36.61</td> </tr> <tr> <td>Baseline BNC lexicon</td> <td>-</td> <td>-</td> <td>20.73</td> <td>21.78</td> </tr> <tr> <td>Adapted medical lexicon</td> <td>14.18</td> <td>14.15</td> <td>40.71</td> <td>38.09</td> </tr> <tr> <td>Adapted BNC lexicon</td> <td>16.29</td> <td>16.71</td> <td>22.1</td> <td>21.5</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1075 | 6 | acl2018 | Table 2 compares its performance with our adapted BWEs, with both cosine similarity and classification based systems. "top"F1 scores are based on the most probable word as prediction only; "all"F1 scores use all words as prediction whose probability is above the threshold. It can be seen that the cosine similarity based system using adapted BWEs clearly outperforms the nonadapted BWEs which were trained in a resource poor setup.4. 
Moreover, the best performance was reached using the general seed lexicon for the mapping which is due to the fact that general domain words have better quality embeddings in the MWE models, which in turn gives a better quality mapping. The classification based system performs significantly better comparing to cosine similarity by exploiting the seed lexicon better. Using adapted BWEs as input word embeddings for the system further improvements were achieved which shows the better quality of our BWEs. Simulating an even poorer setup by using a general lexicon, the performance gain of the classifier is lower. This shows the significance of the medical seed lexicon for this system. On the other hand, adapted BWEs have better performance compared to non-adapted ones using the best translation while they have just slightly lower F1 using multiple translations. This result shows that while with adapted BWEs the system predicts better "top" translations, it has a harder time when predicting "all" due to the increased vocabulary size. 
| [1, 1, 1, 2, 1, 1, 2, 1, 1, 1] | ['Table 2 compares its performance with our adapted BWEs, with both cosine similarity and classification based systems.', '"top"F1 scores are based on the most probable word as prediction only; "all"F1 scores use all words as prediction whose probability is above the threshold.', 'It can be seen that the cosine similarity based system using adapted BWEs clearly outperforms the nonadapted BWEs which were trained in a resource poor setup.4.', 'Moreover, the best performance was reached using the general seed lexicon for the mapping which is due to the fact that general domain words have better quality embeddings in the MWE models, which in turn gives a better quality mapping.', 'The classification based system performs significantly better comparing to cosine similarity by exploiting the seed lexicon better.', 'Using adapted BWEs as input word embeddings for the system further improvements were achieved which shows the better quality of our BWEs.', 'Simulating an even poorer setup by using a general lexicon, the performance gain of the classifier is lower.', 'This shows the significance of the medical seed lexicon for this system.', 'On the other hand, adapted BWEs have better performance compared to non-adapted ones using the best translation while they have just slightly lower F1 using multiple translations.', 'This result shows that while with adapted BWEs the system predicts better "top" translations, it has a harder time when predicting "all" due to the increased vocabulary size.'] | [['Adapted medical lexicon', 'Adapted BNC lexicon', 'cosine similarity', 'classifier'], ['F1 (top)', 'F1 (all)'], ['cosine similarity', 'F1 (top)', 'F1 (all)', 'Baseline', 'Adapted BNC lexicon'], [None], ['classifier', 'cosine similarity'], ['classifier', 'Adapted BNC lexicon'], [None], ['Adapted medical lexicon'], ['Adapted BNC lexicon', 'Baseline BNC lexicon', 'classifier', 'F1 (all)'], ['Adapted BNC lexicon', 'F1 (top)', 'F1 (all)']]
| 1 |
P18-1075table_5 | Results with the semi-supervised system for BLI. Differences comparing to previous results are indicated in brackets. Baseline results are compared to rerun experiments of Heyman et al. (2017) using BWEs instead of MWEs. | 1 | [['Baseline+BNC'], ['Baseline+medical'], ['Adapted+BNC'], ['Adapted+medical']] | 1 | [['F1 (top)'], ['F1 (all)']] | [['35.04 (-0.66)', '34.98 (-1.40)'], ['36.20 (0.50)', '36.55 (0.16)'], ['41.01 (0.30)', '39.04 (0.95)'], ['41.44 (0.73)', '37.51 (-0.57)']] | column | ['F1 (top)', 'F1 (all)'] | ['Adapted+BNC', 'Adapted+medical'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (top)</th> <th>F1 (all)</th> </tr> </thead> <tbody> <tr> <td>Baseline+BNC</td> <td>35.04 (-0.66)</td> <td>34.98 (-1.40)</td> </tr> <tr> <td>Baseline+medical</td> <td>36.20 (0.50)</td> <td>36.55 (0.16)</td> </tr> <tr> <td>Adapted+BNC</td> <td>41.01 (0.30)</td> <td>39.04 (0.95)</td> </tr> <tr> <td>Adapted+medical</td> <td>41.44 (0.73)</td> <td>37.51 (-0.57)</td> </tr> </tbody></table> | Table 5 | table_5 | P18-1075 | 9 | acl2018 | Results in Table 5 show that adding semisup to the classifier further increases performance for BLI as well. For the baseline system, when using only in-domain text for creating BWEs, only the medical unlabeled set was effective, general domain word pairs could not be exploited due to the lack of general semantic knowledge in the BWE model. On the other hand, by using our domain adapted BWEs, which contain both general domain and in-domain semantical knowledge, we can exploit word pairs from both domains. Results for adapted BWEs increased in 3 out of 4 cases, where the only exception is when using multiple translations for a given source word (which may have been caused by the bigger vocabulary size). 
| [1, 1, 1, 1] | ['Results in Table 5 show that adding semisup to the classifier further increases performance for BLI as well.', 'For the baseline system, when using only in-domain text for creating BWEs, only the medical unlabeled set was effective, general domain word pairs could not be exploited due to the lack of general semantic knowledge in the BWE model.', 'On the other hand, by using our domain adapted BWEs, which contain both general domain and in-domain semantical knowledge, we can exploit word pairs from both domains.', 'Results for adapted BWEs increased in 3 out of 4 cases, where the only exception is when using multiple translations for a given source word (which may have been caused by the bigger vocabulary size).'] | [None, ['Baseline+BNC', 'Baseline+medical'], ['Adapted+BNC', 'Adapted+medical'], ['F1 (top)', 'F1 (all)', 'Adapted+BNC', 'Adapted+medical']] | 1 |
P18-1076table_4 | Results for different combinations of interactions between document (D) and question (Q) context (ctx) and context + knowledge (ctx+kn) representations. (CN5Sel, 50 facts) | 2 | [['Drepr to Qrepr interaction', 'Dctx Qctx (w/o know)'], ['Drepr to Qrepr interaction', 'Dctx+kn Qctx+kn'], ['Drepr to Qrepr interaction', 'Dctx Qctx+kn'], ['Drepr to Qrepr interaction', 'Dctx+kn Qctx'], ['Drepr to Qrepr interaction', 'Full model'], ['Drepr to Qrepr interaction', 'w/o Dctx Qctx'], ['Drepr to Qrepr interaction', 'w/o Dctx+kn Qctx+kn'], ['Drepr to Qrepr interaction', 'w/o Dctx Qctx+kn'], ['Drepr to Qrepr interaction', 'w/o Dctx+kn Qctx']] | 2 | [['NE', 'Dev'], ['NE', 'Test'], ['CN', 'Dev'], ['CN', 'Test']] | [['75.5', '70.3', '68.2', '64.8'], ['76.45', '69.68', '70.85', '66.32'], ['77.1', '69.72', '70.8', '66.32'], ['75.65', '70.88', '71.2', '67.96'], ['76.8', '70.24', '71.85', '67.64'], ['75.95', '70.24', '70.65', '67.12'], ['76.2', '69.8', '70.75', '67'], ['76.55', '70.52', '71.75', '66.32'], ['76.05', '70.84', '70.8', '66.8']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Dctx+kn Qctx+kn', 'Dctx Qctx+kn', 'Dctx+kn Qctx'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NE || Dev</th> <th>NE || Test</th> <th>CN || Dev</th> <th>CN || Test</th> </tr> </thead> <tbody> <tr> <td>Drepr to Qrepr interaction || Dctx Qctx (w/o know)</td> <td>75.5</td> <td>70.3</td> <td>68.2</td> <td>64.8</td> </tr> <tr> <td>Drepr to Qrepr interaction || Dctx+kn Qctx+kn</td> <td>76.45</td> <td>69.68</td> <td>70.85</td> <td>66.32</td> </tr> <tr> <td>Drepr to Qrepr interaction || Dctx Qctx+kn</td> <td>77.1</td> <td>69.72</td> <td>70.8</td> <td>66.32</td> </tr> <tr> <td>Drepr to Qrepr interaction || Dctx+kn Qctx</td> <td>75.65</td> <td>70.88</td> <td>71.2</td> <td>67.96</td> </tr> <tr> <td>Drepr to Qrepr interaction || Full model</td> <td>76.8</td> <td>70.24</td> <td>71.85</td> <td>67.64</td> </tr> <tr> <td>Drepr to Qrepr 
interaction || w/o Dctx Qctx</td> <td>75.95</td> <td>70.24</td> <td>70.65</td> <td>67.12</td> </tr> <tr> <td>Drepr to Qrepr interaction || w/o Dctx+kn Qctx+kn</td> <td>76.2</td> <td>69.8</td> <td>70.75</td> <td>67</td> </tr> <tr> <td>Drepr to Qrepr interaction || w/o Dctx Qctx+kn</td> <td>76.55</td> <td>70.52</td> <td>71.75</td> <td>66.32</td> </tr> <tr> <td>Drepr to Qrepr interaction || w/o Dctx+kn Qctx</td> <td>76.05</td> <td>70.84</td> <td>70.8</td> <td>66.8</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1076 | 7 | acl2018 | Table 4 shows that the combination of different interactions between ctx and ctx+kn representations leads to clear improvement over the w/o knowledge setup, in particular for the Common Nouns dataset. We also performed ablations for a model with 100 facts (see Supplement). | [1, 0] | ['Table 4 shows that the combination of different interactions between ctx and ctx+kn representations leads to clear improvement over the w/o knowledge setup, in particular for the Common Nouns dataset.', 'We also performed ablations for a model with 100 facts (see Supplement).'] | [['CN', 'Dctx+kn Qctx+kn', 'Dctx Qctx+kn', 'Dctx+kn Qctx'], None] | 1 |
P18-1076table_6 | Comparison of KnReader to existing end-to-end neural models on the benchmark datasets. | 3 | [['Models', '-', 'Human (ctx + q)'], ['Models', 'Single interaction', 'LSTMs (ctx + q) (Hill et al. 2015)'], ['Models', 'Single interaction', 'AS Reader'], ['Models', 'Single interaction', 'AS Reader (our impl)'], ['Models', 'Single interaction', 'KnReader (ours)'], ['Models', 'Multiple interactions', 'MemNNs (Weston et al. 2015)'], ['Models', 'Multiple interactions', 'EpiReader (Trischler et al. 2016)'], ['Models', 'Multiple interactions', 'GA Reader (Dhingra et al. 2017)'], ['Models', 'Multiple interactions', 'IAA Reader (Sordoni et al. 2016)'], ['Models', 'Multiple interactions', 'AoA Reader (Cui et al. 2017)'], ['Models', 'Multiple interactions', 'GA Reader (+feat)'], ['Models', 'Multiple interactions', 'NSE (Munkhdalai and Yu 2016)']] | 2 | [['NE', 'dev'], ['NE', 'test'], ['CN', 'dev'], ['CN', 'test']] | [['-', '81.6', '-', '81.6'], ['51.2', '41.8', '62.6', '56'], ['73.8', '68.6', '68.8', '63.4'], ['75.5', '70.3', '68.2', '64.8'], ['77.4', '71.4', '71.8', '67.6'], ['70.4', '66.6', '64.2', '63'], ['74.9', '69', '71.5', '67.4'], ['77.2', '71.4', '71.6', '68'], ['75.3', '69.7', '72.1', '69.2'], ['75.2', '68.6', '72.2', '69.4'], ['77.8', '72', '74.4', '70.7'], ['77', '71.4', '74.3', '71.9']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['KnReader (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NE || dev</th> <th>NE || test</th> <th>CN || dev</th> <th>CN || test</th> </tr> </thead> <tbody> <tr> <td>Models || - || Human (ctx + q)</td> <td>-</td> <td>81.6</td> <td>-</td> <td>81.6</td> </tr> <tr> <td>Models || Single interaction || LSTMs (ctx + q) (Hill et al.
2015)</td> <td>51.2</td> <td>41.8</td> <td>62.6</td> <td>56</td> </tr> <tr> <td>Models || Single interaction || AS Reader</td> <td>73.8</td> <td>68.6</td> <td>68.8</td> <td>63.4</td> </tr> <tr> <td>Models || Single interaction || AS Reader (our impl)</td> <td>75.5</td> <td>70.3</td> <td>68.2</td> <td>64.8</td> </tr> <tr> <td>Models || Single interaction || KnReader (ours)</td> <td>77.4</td> <td>71.4</td> <td>71.8</td> <td>67.6</td> </tr> <tr> <td>Models || Multiple interactions || MemNNs (Weston et al. 2015)</td> <td>70.4</td> <td>66.6</td> <td>64.2</td> <td>63</td> </tr> <tr> <td>Models || Multiple interactions || EpiReader (Trischler et al. 2016)</td> <td>74.9</td> <td>69</td> <td>71.5</td> <td>67.4</td> </tr> <tr> <td>Models || Multiple interactions || GA Reader (Dhingra et al. 2017)</td> <td>77.2</td> <td>71.4</td> <td>71.6</td> <td>68</td> </tr> <tr> <td>Models || Multiple interactions || IAA Reader (Sordoni et al. 2016)</td> <td>75.3</td> <td>69.7</td> <td>72.1</td> <td>69.2</td> </tr> <tr> <td>Models || Multiple interactions || AoA Reader (Cui et al. 2017)</td> <td>75.2</td> <td>68.6</td> <td>72.2</td> <td>69.4</td> </tr> <tr> <td>Models || Multiple interactions || GA Reader (+feat)</td> <td>77.8</td> <td>72</td> <td>74.4</td> <td>70.7</td> </tr> <tr> <td>Models || Multiple interactions || NSE (Munkhdalai and Yu 2016)</td> <td>77</td> <td>71.4</td> <td>74.3</td> <td>71.9</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1076 | 7 | acl2018 | Table 6 compares our model (Knowledgeable Reader) to previous work on the CBT datasets. We show the results of our model with the settings that performed best on the Dev sets of the two datasets NE and CN: for NE, (Dctx+kn, Qctx) with 100 facts; for CN the Full model with 50 facts, both with CN5Sel. 
Note that our work focuses on the impact of external knowledge and employs a single interaction (single-hop) between the document context and the question so we primarily compare to and aim at improving over similar models. KnReader clearly outperforms prior single-hop models on both datasets. While we do not improve over the state of the art, our model stands well among other models that perform multiple hops. | [1, 1, 2, 1, 1] | ['Table 6 compares our model (Knowledgeable Reader) to previous work on the CBT datasets.', 'We show the results of our model with the settings that performed best on the Dev sets of the two datasets NE and CN: for NE, (Dctx+kn, Qctx) with 100 facts; for CN the Full model with 50 facts, both with CN5Sel.', 'Note that our work focuses on the impact of external knowledge and employs a single interaction (single-hop) between the document context and the question so we primarily compare to and aim at improving over similar models.', 'KnReader clearly outperforms prior single-hop models on both datasets.', 'While we do not improve over the state of the art, our model stands well among other models that perform multiple hops.'] | [['KnReader (ours)'], ['KnReader (ours)', 'NE', 'CN'], ['Single interaction'], ['KnReader (ours)', 'Single interaction', 'NE', 'CN'], ['KnReader (ours)', 'Multiple interactions']] | 1 |
P18-1084table_2 | Results on cross-lingual image description retrieval. NN-based models are above the dashed line. Best overall results are in bold. Best results with non-deep models are underlined. | 2 | [['Model', 'DPCCA (Variant A)'], ['Model', 'DPCCA (Variant B)'], ['Model', 'DPCCA(B)+DCCA NOI (concat)'], ['Model', 'DCCA NOI (Wang et al. 2015b)'], ['Model', 'DCCA SDL (Chang et al. 2017)'], ['Model', 'DCCA (Wang et al. 2015a)'], ['Model', 'DCCAE (Wang et al. 2015a)'], ['Model', 'IMG PIVOT (Gella et al. 2017)'], ['Model', 'BCN (Rajendran et al. 2016)'], ['Model', 'PCCA (Rao 1969)'], ['Model', 'CCA (Hotelling 1936)'], ['Model', 'GCCA (Funaki and Nakayama 2015)'], ['Model', 'NCCA (Michaeli et al. 2016)'], ['Model', 'PPCCA (Mukuta and Harada 2014)']] | 2 | [['R@1', 'EN→DE'], ['R@1', 'DE→EN'], ['BLEU+1', 'EN→DE'], ['BLEU+1', 'DE→EN']] | [['0.795', '0.779', '0.836', '0.827'], ['0.809', '0.794', '0.848', '0.839'], ['0.826', '0.791', '0.863', '0.837'], ['0.812', '0.788', '0.849', '0.83'], ['0.507', '0.487', '0.552', '0.533'], ['0.619', '0.621', '0.664', '0.673'], ['0.564', '0.542', '0.607', '0.598'], ['0.772', '0.763', '0.789', '0.781'], ['0.579', '0.57', '0.628', '0.629'], ['0.785', '0.737', '0.825', '0.787'], ['0.764', '0.704', '0.803', '0.754'], ['0.699', '0.69', '0.742', '0.743'], ['0.157', '0.165', '0.205', '0.213'], ['0.035', '0.05', '0.063', '0.086']] | column | ['R@1', 'R@1', 'BLEU+1', 'BLEU+1'] | ['DPCCA(B)+DCCA NOI (concat)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R@1 || EN→DE</th> <th>R@1 || DE→EN</th> <th>BLEU+1 || EN→DE</th> <th>BLEU+1 || DE→EN</th> </tr> </thead> <tbody> <tr> <td>Model || DPCCA (Variant A)</td> <td>0.795</td> <td>0.779</td> <td>0.836</td> <td>0.827</td> </tr> <tr> <td>Model || DPCCA (Variant B)</td> <td>0.809</td> <td>0.794</td> <td>0.848</td> <td>0.839</td> </tr> <tr> <td>Model || DPCCA(B)+DCCA NOI (concat)</td> <td>0.826</td> <td>0.791</td> <td>0.863</td> <td>0.837</td> </tr> <tr> 
<td>Model || DCCA NOI (Wang et al. 2015b)</td> <td>0.812</td> <td>0.788</td> <td>0.849</td> <td>0.83</td> </tr> <tr> <td>Model || DCCA SDL (Chang et al. 2017)</td> <td>0.507</td> <td>0.487</td> <td>0.552</td> <td>0.533</td> </tr> <tr> <td>Model || DCCA (Wang et al. 2015a)</td> <td>0.619</td> <td>0.621</td> <td>0.664</td> <td>0.673</td> </tr> <tr> <td>Model || DCCAE (Wang et al. 2015a)</td> <td>0.564</td> <td>0.542</td> <td>0.607</td> <td>0.598</td> </tr> <tr> <td>Model || IMG PIVOT (Gella et al. 2017)</td> <td>0.772</td> <td>0.763</td> <td>0.789</td> <td>0.781</td> </tr> <tr> <td>Model || BCN (Rajendran et al. 2016)</td> <td>0.579</td> <td>0.57</td> <td>0.628</td> <td>0.629</td> </tr> <tr> <td>Model || PCCA (Rao 1969)</td> <td>0.785</td> <td>0.737</td> <td>0.825</td> <td>0.787</td> </tr> <tr> <td>Model || CCA (Hotelling 1936)</td> <td>0.764</td> <td>0.704</td> <td>0.803</td> <td>0.754</td> </tr> <tr> <td>Model || GCCA (Funaki and Nakayama 2015)</td> <td>0.699</td> <td>0.69</td> <td>0.742</td> <td>0.743</td> </tr> <tr> <td>Model || NCCA (Michaeli et al. 2016)</td> <td>0.157</td> <td>0.165</td> <td>0.205</td> <td>0.213</td> </tr> <tr> <td>Model || PPCCA (Mukuta and Harada 2014)</td> <td>0.035</td> <td>0.05</td> <td>0.063</td> <td>0.086</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1084 | 8 | acl2018 | We report two standard evaluation metrics: 1) Recall at 1 (R@1) scores, and 2) the sentence-level BLEU+1 metric (Lin and Och, 2004), a variant of BLEU which smooths terms for higher-order n-grams, making it more suitable for evaluating short sentences. The scores for the retrieval task with all models are summarized in Table 2. The results clearly demonstrate the superiority of DPCCA (with a slight advantage to the more complex Variant B) and of the concatenation of their representation with that of the DCCA NOI (strongest) baseline. 
| [2, 1, 1] | ['We report two standard evaluation metrics: 1) Recall at 1 (R@1) scores, and 2) the sentence-level BLEU+1 metric (Lin and Och, 2004), a variant of BLEU which smooths terms for higher-order n-grams, making it more suitable for evaluating short sentences.', 'The scores for the retrieval task with all models are summarized in Table 2.', 'The results clearly demonstrate the superiority of DPCCA (with a slight advantage to the more complex Variant B) and of the concatenation of their representation with that of the DCCA NOI (strongest) baseline.'] | [['R@1', 'BLEU+1'], None, ['DPCCA(B)+DCCA NOI (concat)']] | 1 |
P18-1084table_3 | Results on EN and DE SimLex-999 (POS-based evaluation). All scores are Spearman’s rank correlations. INIT EMB refers to initial pre-trained monolingual word embeddings (see §6). | 2 | [['Model', 'DPCCA (Variant A)'], ['Model', 'DPCCA (Variant B)'], ['Model', 'DCCA NOI (Wang et al. 2015b)'], ['Model', 'DCCA (Wang et al. 2015a)'], ['Model', 'PCCA (Rao 1969)'], ['Model', 'CCA (Hotelling 1936)'], ['Model', 'GCCA (Funaki and Nakayama 2015)'], ['Model', 'INIT EMB']] | 2 | [['English-German', 'EN-Adj'], ['English-German', 'EN-Verbs'], ['English-German', 'EN-Nouns'], ['English-German', 'DE-Adj'], ['English-German', 'DE-Verbs'], ['English-German', 'DE-Nouns']] | [['0.64', '0.311', '0.369', '0.43', '0.321', '0.404'], ['0.626', '0.316', '0.382', '0.462', '0.319', '0.399'], ['0.611', '0.308', '0.361', '0.441', '0.297', '0.398'], ['0.618', '0.261', '0.327', '0.404', '0.29', '0.362'], ['0.614', '0.296', '0.34', '0.305', '0.143', '0.34'], ['0.557', '0.297', '0.321', '0.284', '0.157', '0.346'], ['0.636', '0.28', '0.378', '0.446', '0.277', '0.398'], ['0.582', '0.16', '0.306', '0.407', '0.164', '0.285']] | column | ['correlation', 'correlation', 'correlation', 'correlation', 'correlation', 'correlation'] | ['DPCCA (Variant A)', 'DPCCA (Variant B)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>English-German || EN-Adj</th> <th>English-German || EN-Verbs</th> <th>English-German || EN-Nouns</th> <th>English-German || DE-Adj</th> <th>English-German || DE-Verbs</th> <th>English-German || DE-Nouns</th> </tr> </thead> <tbody> <tr> <td>Model || DPCCA (Variant A)</td> <td>0.64</td> <td>0.311</td> <td>0.369</td> <td>0.43</td> <td>0.321</td> <td>0.404</td> </tr> <tr> <td>Model || DPCCA (Variant B)</td> <td>0.626</td> <td>0.316</td> <td>0.382</td> <td>0.462</td> <td>0.319</td> <td>0.399</td> </tr> <tr> <td>Model || DCCA NOI (Wang et al. 
2015b)</td> <td>0.611</td> <td>0.308</td> <td>0.361</td> <td>0.441</td> <td>0.297</td> <td>0.398</td> </tr> <tr> <td>Model || DCCA (Wang et al. 2015a)</td> <td>0.618</td> <td>0.261</td> <td>0.327</td> <td>0.404</td> <td>0.29</td> <td>0.362</td> </tr> <tr> <td>Model || PCCA (Rao 1969)</td> <td>0.614</td> <td>0.296</td> <td>0.34</td> <td>0.305</td> <td>0.143</td> <td>0.34</td> </tr> <tr> <td>Model || CCA (Hotelling 1936)</td> <td>0.557</td> <td>0.297</td> <td>0.321</td> <td>0.284</td> <td>0.157</td> <td>0.346</td> </tr> <tr> <td>Model || GCCA (Funaki and Nakayama 2015)</td> <td>0.636</td> <td>0.28</td> <td>0.378</td> <td>0.446</td> <td>0.277</td> <td>0.398</td> </tr> <tr> <td>Model || INIT EMB</td> <td>0.582</td> <td>0.16</td> <td>0.306</td> <td>0.407</td> <td>0.164</td> <td>0.285</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1084 | 9 | acl2018 | The results on the POS classes represented in SimLex-999 (nouns, verbs, adjectives, Table 3) form our main finding: conditioning the multilingual representations on a shared image leads to improvements in verb and adjective representations. While for nouns one of the DPCCA variants is the best performing model for both languages, the gaps from the best performing baselines are much smaller. This is interesting since, e.g., verbs are more abstract than nouns (Hartmann and Søgaard, 2017; Hill et al., 2014).
| [1, 1, 2] | ['The results on the POS classes represented in SimLex-999 (nouns, verbs, adjectives, Table 3) form our main finding: conditioning the multilingual representations on a shared image leads to improvements in verb and adjective representations.', 'While for nouns one of the DPCCA variants is the best performing model for both languages, the gaps from the best performing baselines are much smaller.', 'This is interesting since, e.g., verbs are more abstract than nouns (Hartmann and Søgaard, 2017; Hill et al., 2014).'] | [['EN-Adj', 'EN-Verbs', 'DE-Adj', 'DE-Verbs'], ['DPCCA (Variant A)', 'DPCCA (Variant B)', 'EN-Nouns', 'DE-Nouns'], None] | 1 |
P18-1084table_4 | Results (Spearman rank correlation) of our models and the strongest baselines on Multilingual SimLex-999 (all data). | 2 | [['Model', 'DPCCA (A)'], ['Model', 'DPCCA (B)'], ['Model', 'PCCA'], ['Model', 'DCCA NOI'], ['Model', 'GCCA'], ['Model', 'INIT EMB']] | 2 | [['EN-DE WIW', 'EN'], ['EN-DE WIW', 'DE'], ['EN-IT WIW', 'EN'], ['EN-IT WIW', 'IT'], ['EN-RU WIW', 'EN'], ['EN-RU WIW', 'RU']] | [['0.398', '0.4', '0.412', '0.429', '0.404', '0.407'], ['0.405', '0.4', '0.413', '0.427', '0.413', '0.402'], ['0.374', '0.301', '0.37', '0.386', '0.374', '0.374'], ['0.39', '0.398', '0.413', '0.422', '0.407', '0.398'], ['0.395', '0.386', '0.414', '0.407', '0.412', '0.396'], ['0.321', '0.278', '0.321', '0.361', '0.321', '0.385']] | column | ['correlation', 'correlation', 'correlation', 'correlation', 'correlation', 'correlation'] | ['DPCCA (A)', 'DPCCA (B)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-DE WIW || EN</th> <th>EN-DE WIW || DE</th> <th>EN-IT WIW || EN</th> <th>EN-IT WIW || IT</th> <th>EN-RU WIW || EN</th> <th>EN-RU WIW || RU</th> </tr> </thead> <tbody> <tr> <td>Model || DPCCA (A)</td> <td>0.398</td> <td>0.4</td> <td>0.412</td> <td>0.429</td> <td>0.404</td> <td>0.407</td> </tr> <tr> <td>Model || DPCCA (B)</td> <td>0.405</td> <td>0.4</td> <td>0.413</td> <td>0.427</td> <td>0.413</td> <td>0.402</td> </tr> <tr> <td>Model || PCCA</td> <td>0.374</td> <td>0.301</td> <td>0.37</td> <td>0.386</td> <td>0.374</td> <td>0.374</td> </tr> <tr> <td>Model || DCCA NOI</td> <td>0.39</td> <td>0.398</td> <td>0.413</td> <td>0.422</td> <td>0.407</td> <td>0.398</td> </tr> <tr> <td>Model || GCCA</td> <td>0.395</td> <td>0.386</td> <td>0.414</td> <td>0.407</td> <td>0.412</td> <td>0.396</td> </tr> <tr> <td>Model || INIT EMB</td> <td>0.321</td> <td>0.278</td> <td>0.321</td> <td>0.361</td> <td>0.321</td> <td>0.385</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1084 | 9 | acl2018 | Further, Table 4 presents results on all 
SimLex word pairs. The POS class result patterns for EN-IT and EN-RU are very similar to the patterns in Table 3 and are provided in the supplementary material. First, the results over the initial monolingual embeddings before training (INIT EMB) clearly indicate that multilingual information is beneficial for the word similarity task. We observe improvements with all models (the only exception being extremely lowscoring PPCCA and NCCA, not shown). Moreover, by additionally grounding concepts from two languages in the visual modality it is possible to further boost word similarity scores. | [1, 2, 1, 1, 2] | ['Further, Table 4 presents results on all SimLex word pairs.', 'The POS class result patterns for EN-IT and EN-RU are very similar to the patterns in Table 3 and are provided in the supplementary material.', 'First, the results over the initial monolingual embeddings before training (INIT EMB) clearly indicate that multilingual information is beneficial for the word similarity task.', 'We observe improvements with all models (the only exception being extremely lowscoring PPCCA and NCCA, not shown).', 'Moreover, by additionally grounding concepts from two languages in the visual modality it is possible to further boost word similarity scores.'] | [None, ['EN', 'IT', 'RU'], ['INIT EMB'], ['DPCCA (A)', 'DPCCA (B)', 'PCCA', 'DCCA NOI', 'GCCA'], None] | 1 |
P18-1085table_3 | SimLex-999 results (Spearman’s ρ). Best results overall are in bold. Best results per section are underlined. Bracketed numbers signify the number of images used. Some rows are copied across sections for ease of reading. | 2 | [['Model', 'Glove'], ['Model', 'Picturebook'], ['Model', 'Glove + Picturebook'], ['Model', 'Picturebook (Visual)'], ['Model', 'Picturebook (Semantic)'], ['Model', 'Picturebook (1)'], ['Model', 'Picturebook (2)'], ['Model', 'Picturebook (3)'], ['Model', 'Picturebook (5)'], ['Model', 'Picturebook (10)']] | 1 | [['all'], ['adjs'], ['nouns'], ['verbs'], ['conc-q1'], ['conc-q2'], ['conc-q3'], ['conc-q4'], ['hard']] | [['40.8', '62.2', '42.8', '19.6', '43.3', '41.6', '42.3', '40.2', '27.2'], ['37.3', '11.7', '48.2', '17.3', '14.4', '27.5', '46.2', '60.7', '28.8'], ['45.5', '46.2', '52.1', '22.8', '36.7', '41.7', '50.4', '57.3', '32.5'], ['31.3', '11.1', '38.8', '20.4', '13.9', '26.1', '38.7', '47.7', '23.9'], ['37.3', '11.7', '48.2', '17.3', '14.4', '27.5', '46.2', '60.7', '28.8'], ['24.5', '2.6', '33.5', '12.1', '4.7', '17.8', '32.8', '47.8', '13.6'], ['28.4', '6.5', '38.9', '9', '5', '21.3', '34.3', '55.1', '15.7'], ['30.3', '11.9', '41.9', '3.1', '2.6', '24.3', '37.5', '58.3', '18.4'], ['34.4', '6.8', '44.5', '18', '9', '27.9', '42.8', '58.3', '25.9'], ['37.3', '11.7', '48.2', '17.3', '14.4', '27.5', '46.2', '60.7', '28.8']] | column | ['Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s'] | ['Glove + Picturebook'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>all</th> <th>adjs</th> <th>nouns</th> <th>verbs</th> <th>conc-q1</th> <th>conc-q2</th> <th>conc-q3</th> <th>conc-q4</th> <th>hard</th> </tr> </thead> <tbody> <tr> <td>Model || Glove</td> <td>40.8</td> <td>62.2</td> <td>42.8</td> <td>19.6</td> <td>43.3</td> <td>41.6</td> <td>42.3</td> <td>40.2</td> <td>27.2</td> </tr> <tr> <td>Model || Picturebook</td>
<td>37.3</td> <td>11.7</td> <td>48.2</td> <td>17.3</td> <td>14.4</td> <td>27.5</td> <td>46.2</td> <td>60.7</td> <td>28.8</td> </tr> <tr> <td>Model || Glove + Picturebook</td> <td>45.5</td> <td>46.2</td> <td>52.1</td> <td>22.8</td> <td>36.7</td> <td>41.7</td> <td>50.4</td> <td>57.3</td> <td>32.5</td> </tr> <tr> <td>Model || Picturebook (Visual)</td> <td>31.3</td> <td>11.1</td> <td>38.8</td> <td>20.4</td> <td>13.9</td> <td>26.1</td> <td>38.7</td> <td>47.7</td> <td>23.9</td> </tr> <tr> <td>Model || Picturebook (Semantic)</td> <td>37.3</td> <td>11.7</td> <td>48.2</td> <td>17.3</td> <td>14.4</td> <td>27.5</td> <td>46.2</td> <td>60.7</td> <td>28.8</td> </tr> <tr> <td>Model || Picturebook (1)</td> <td>24.5</td> <td>2.6</td> <td>33.5</td> <td>12.1</td> <td>4.7</td> <td>17.8</td> <td>32.8</td> <td>47.8</td> <td>13.6</td> </tr> <tr> <td>Model || Picturebook (2)</td> <td>28.4</td> <td>6.5</td> <td>38.9</td> <td>9</td> <td>5</td> <td>21.3</td> <td>34.3</td> <td>55.1</td> <td>15.7</td> </tr> <tr> <td>Model || Picturebook (3)</td> <td>30.3</td> <td>11.9</td> <td>41.9</td> <td>3.1</td> <td>2.6</td> <td>24.3</td> <td>37.5</td> <td>58.3</td> <td>18.4</td> </tr> <tr> <td>Model || Picturebook (5)</td> <td>34.4</td> <td>6.8</td> <td>44.5</td> <td>18</td> <td>9</td> <td>27.9</td> <td>42.8</td> <td>58.3</td> <td>25.9</td> </tr> <tr> <td>Model || Picturebook (10)</td> <td>37.3</td> <td>11.7</td> <td>48.2</td> <td>17.3</td> <td>14.4</td> <td>27.5</td> <td>46.2</td> <td>60.7</td> <td>28.8</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1085 | 5 | acl2018 | Table 3 displays our results, from which several observations can be made. First, we observe that combining Glove and Picturebook leads to improved similarity across most categories. For adjectives and the most abstract category, Glove performs significantly better, while for the most concrete category Picturebook is significantly better. This result confirms that Glove and Picturebook capture very different properties of words. 
Next we observe that the performance of Picturebook gets progressively better across each concreteness quartile rating, with a 20 point improvement over Glove for the most concrete category. For the hardest subset of words, Picturebook performs slightly better than Glove while Glove performs better across all pairs. We also compare to a convolutional network trained with visual similarity. We observe a performance difference between our visual and semantic embeddings: on all categories except verbs, the semantic embeddings outperform visual ones, even on the most concrete categories. This indicates the importance of the type of similarity used for training the model. Finally we note that adding more images nearly consistently improves similarity scores across categories. Kiela et al (2016) showed that after 10-20 images, performance tends to saturate. All subsequent experiments use 10 images with semantic Picturebook. | [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2] | ['Table 3 displays our results, from which several observations can be made.', 'First, we observe that combining Glove and Picturebook leads to improved similarity across most categories.', 'For adjectives and the most abstract category, Glove performs significantly better, while for the most concrete category Picturebook is significantly better.', 'This result confirms that Glove and Picturebook capture very different properties of words.', 'Next we observe that the performance of Picturebook gets progressively better across each concreteness quartile rating, with a 20 point improvement over Glove for the most concrete category.', 'For the hardest subset of words, Picturebook performs slightly better than Glove while Glove performs better across all pairs.', 'We also compare to a convolutional network trained with visual similarity.', 'We observe a performance difference between our visual and semantic embeddings: on all categories except verbs, the semantic embeddings outperform visual ones, even on the most 
concrete categories.', 'This indicates the importance of the type of similarity used for training the model.', 'Finally we note that adding more images nearly consistently improves similarity scores across categories.', 'Kiela et al (2016) showed that after 10-20 images, performance tends to saturate. All subsequent experiments use 10 images with semantic Picturebook.'] | [None, ['Glove + Picturebook'], ['adjs', 'Glove', 'conc-q4', 'Picturebook'], ['Glove', 'Picturebook'], ['Picturebook', 'conc-q1', 'conc-q2', 'conc-q3', 'conc-q4'], ['hard', 'Picturebook', 'all', 'Glove'], ['Picturebook (Visual)', 'Picturebook (Semantic)'], ['Picturebook (Visual)', 'Picturebook (Semantic)', 'all', 'adjs', 'nouns', 'conc-q1', 'conc-q2', 'conc-q3', 'conc-q4', 'hard'], ['Picturebook (Visual)', 'Picturebook (Semantic)'], ['Picturebook (1)', 'Picturebook (2)', 'Picturebook (3)', 'Picturebook (5)', 'Picturebook (10)'], ['Picturebook (10)']] | 1 |
P18-1085table_4 | Classification accuracies are reported for SNLI and MultiNLI. For SICK we report Pearson, Spearman and MSE. Higher is better for all metrics except MSE. Best results overall per column are bolded. Best results per section are underlined. | 2 | [['Model', 'Glove (bow)'], ['Model', 'Picturebook (bow)'], ['Model', 'Glove + Picturebook (bow)'], ['Model', 'BiLSTM-Max (Conneau et al. 2017a)'], ['Model', 'Glove'], ['Model', 'Picturebook'], ['Model', 'Glove + Picturebook'], ['Model', 'Glove + Picturebook + Contextual Gating']] | 2 | [['SNLI', 'dev'], ['SNLI', 'test'], ['MultiNLI', 'dev-mat'], ['MultiNLI', 'dev-mis'], ['SICK Relatedness', 'test-p'], ['SICK Relatedness', 'test-s'], ['SICK Relatedness', 'test-mse']] | [['85.2', '84.2', '70.5', '69.9', '86.8', '79.8', '25.2'], ['84', '83.8', '67.9', '67.1', '85.8', '79.3', '27'], ['86.2', '85.2', '71.3', '70.9', '87.2', '80.9', '24.4'], ['85', '84.5', '-', '-', '-', '-', '-'], ['86.8', '86.3', '74.1', '74.5', '-', '-', '-'], ['85.2', '85.1', '70.7', '70.3', '-', '-', '-'], ['86.7', '86.1', '73.7', '73.7', '-', '-', '-'], ['86.9', '86.5', '74.2', '74.4', '-', '-', '-']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Glove + Picturebook (bow)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNLI || dev</th> <th>SNLI || test</th> <th>MultiNLI || dev-mat</th> <th>MultiNLI || dev-mis</th> <th>SICK Relatedness || test-p</th> <th>SICK Relatedness || test-s</th> <th>SICK Relatedness || test-mse</th> </tr> </thead> <tbody> <tr> <td>Model || Glove (bow)</td> <td>85.2</td> <td>84.2</td> <td>70.5</td> <td>69.9</td> <td>86.8</td> <td>79.8</td> <td>25.2</td> </tr> <tr> <td>Model || Picturebook (bow)</td> <td>84</td> <td>83.8</td> <td>67.9</td> <td>67.1</td> <td>85.8</td> <td>79.3</td> <td>27</td> </tr> <tr> <td>Model || Glove + Picturebook (bow)</td> <td>86.2</td> <td>85.2</td> <td>71.3</td> <td>70.9</td> <td>87.2</td>
<td>80.9</td> <td>24.4</td> </tr> <tr> <td>Model || BiLSTM-Max (Conneau et al. 2017a)</td> <td>85</td> <td>84.5</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Glove</td> <td>86.8</td> <td>86.3</td> <td>74.1</td> <td>74.5</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Picturebook</td> <td>85.2</td> <td>85.1</td> <td>70.7</td> <td>70.3</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Glove + Picturebook</td> <td>86.7</td> <td>86.1</td> <td>73.7</td> <td>73.7</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Glove + Picturebook + Contextual Gating</td> <td>86.9</td> <td>86.5</td> <td>74.2</td> <td>74.4</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1085 | 6 | acl2018 | Table 4 displays our results. For BoW models, adding Picturebook embeddings to Glove results in significant gains across all three tasks. For BiLSTM-Max, our contextual gating sets a new state-of-the-art on SNLI sentence encoding methods (methods without interaction layers), outperforming the recently proposed methods of Im and Cho (2017); Shen et al.(2018). It is worth noting the effect that different encoders have when using our embeddings. While non-contextual gating is sufficient to improve bag-of-words methods, with BiLSTM-Max it slightly hurts performance over the Glove baseline. Adding contextual gating was necessary to improve over the Glove baseline on SNLI. Finally we note the strength of our own Glove baseline over the reported results of Conneau et al.(2017a), from which we improve on their accuracy from 85.0 to 86.8 on the development set. 
| [1, 1, 2, 2, 1, 1, 1] | ['Table 4 displays our results.', 'For BoW models, adding Picturebook embeddings to Glove results in significant gains across all three tasks.', 'For BiLSTM-Max, our contextual gating sets a new state-of-the-art on SNLI sentence encoding methods (methods without interaction layers), outperforming the recently proposed methods of Im and Cho (2017); Shen et al.(2018).', 'It is worth noting the effect that different encoders have when using our embeddings.', 'While non-contextual gating is sufficient to improve bag-of-words methods, with BiLSTM-Max it slightly hurts performance over the Glove baseline.', 'Adding contextual gating was necessary to improve over the Glove baseline on SNLI.', 'Finally we note the strength of our own Glove baseline over the reported results of Conneau et al.(2017a), from which we improve on their accuracy from 85.0 to 86.8 on the development set.'] | [None, ['Glove (bow)', 'Picturebook (bow)', 'Glove + Picturebook (bow)'], ['BiLSTM-Max (Conneau et al. 2017a)', 'SNLI'], None, ['Glove (bow)', 'Picturebook (bow)', 'Glove + Picturebook (bow)', 'BiLSTM-Max (Conneau et al. 2017a)', 'Glove'], ['Glove + Picturebook + Contextual Gating', 'SNLI'], ['BiLSTM-Max (Conneau et al. 2017a)', 'Glove']] | 1 |
P18-1085table_6 | COCO test-set results for image-sentence retrieval experiments. Our models use VSE++. R@K is Recall@K (high is good). Med r is the median rank (low is good). | 2 | [['Model', 'VSE++ (Faghri et al. 2017)'], ['Model', 'Glove'], ['Model', 'Picturebook'], ['Model', 'Glove + Picturebook'], ['Model', 'Glove + Picturebook + Contextual Gating']] | 2 | [['Image Annotation', 'R@1'], ['Image Annotation', 'R@5'], ['Image Annotation', 'R@10'], ['Image Annotation', 'Med r'], ['Image Search', 'R@1'], ['Image Search', 'R@5'], ['Image Search', 'R@10'], ['Image Search', 'Med r']] | [['64.6', '-', '95.7', '1', '52', '-', '92', '1'], ['64.6', '88.9', '95.5', '1', '53.7', '86.5', '94.4', '1'], ['62.4', '90.2', '95.3', '1', '54.2', '86.4', '94.3', '1'], ['61.8', '89.2', '95', '1', '54.1', '86.7', '94.7', '1'], ['63.4', '90.3', '96.5', '1', '55.2', '87.2', '94.4', '1']] | column | ['R@1', 'R@5', 'R@10', 'Med r', 'R@1', 'R@5', 'R@10', 'Med r'] | ['Glove', 'Glove + Picturebook', 'Glove + Picturebook + Contextual Gating'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Image Annotation || R@1</th> <th>Image Annotation || R@5</th> <th>Image Annotation || R@10</th> <th>Image Annotation || Med r</th> <th>Image Search || R@1</th> <th>Image Search || R@5</th> <th>Image Search || R@10</th> <th>Image Search || Med r</th> </tr> </thead> <tbody> <tr> <td>Model || VSE++ (Faghri et al. 
2017)</td> <td>64.6</td> <td>-</td> <td>95.7</td> <td>1</td> <td>52</td> <td>-</td> <td>92</td> <td>1</td> </tr> <tr> <td>Model || Glove</td> <td>64.6</td> <td>88.9</td> <td>95.5</td> <td>1</td> <td>53.7</td> <td>86.5</td> <td>94.4</td> <td>1</td> </tr> <tr> <td>Model || Picturebook</td> <td>62.4</td> <td>90.2</td> <td>95.3</td> <td>1</td> <td>54.2</td> <td>86.4</td> <td>94.3</td> <td>1</td> </tr> <tr> <td>Model || Glove + Picturebook</td> <td>61.8</td> <td>89.2</td> <td>95</td> <td>1</td> <td>54.1</td> <td>86.7</td> <td>94.7</td> <td>1</td> </tr> <tr> <td>Model || Glove + Picturebook + Contextual Gating</td> <td>63.4</td> <td>90.3</td> <td>96.5</td> <td>1</td> <td>55.2</td> <td>87.2</td> <td>94.4</td> <td>1</td> </tr> </tbody></table> | Table 6 | table_6 | P18-1085 | 7 | acl2018 | Table 6 displays our results on this task. Our Glove baseline was able to match or outperform the reported results in Faghri et al.(2017) with the exception of Recall@10 for image annotation, where it performs slightly worse. Glove+Picturebook improves over the Glove baseline for image search but falls short on image annotation. However, using contextual gating results in improvements over the baseline on all metrics except R@1 for image annotation. | [1, 1, 1, 1] | ['Table 6 displays our results on this task.', 'Our Glove baseline was able to match or outperform the reported results in Faghri et al.(2017) with the exception of Recall@10 for image annotation, where it performs slightly worse.', 'Glove+Picturebook improves over the Glove baseline for image search but falls short on image annotation.', 'However, using contextual gating results in improvements over the baseline on all metrics except R@1 for image annotation.'] | [None, ['Glove', 'VSE++ (Faghri et al. 2017)'], ['Glove + Picturebook', 'Image Search', 'Glove', 'R@10'], ['Glove + Picturebook + Contextual Gating', 'Image Annotation', 'R@5', 'R@10', 'Image Search', 'R@1']] | 1 |
P18-1085table_7 | Machine Translation results on the Multi30k English → German task. We note that our models do not use BPE, and we perform better in BLEU relative to METEOR. | 2 | [['Model', 'BPE (Caglayan et al. 2017)'], ['Model', 'Baseline'], ['Model', 'Picturebook'], ['Model', 'Picturebook + Inverse Picturebook'], ['Model', 'Picturebook + Inverse Picturebook + Gating']] | 2 | [['Test2016', 'BLEU'], ['Test2016', 'METEOR'], ['Test2017', 'BLEU'], ['Test2017', 'METEOR'], ['MSCOCO', 'BLEU'], ['MSCOCO', 'METEOR']] | [['38.1', '57.3', '30.8', '51.6', '26.4', '46.8'], ['38.9', '56.5', '32.6', '50.7', '26.8', '45.4'], ['39.6', '56.9', '31.8', '50.1', '27.7', '45.8'], ['40.2', '57.2', '32.3', '50.7', '27.8', '46.3'], ['40', '57.3', '33', '51.1', '27.9', '46.5']] | column | ['BLEU', 'METEOR', 'BLEU', 'METEOR', 'BLEU', 'METEOR'] | ['Picturebook'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test2016 || BLEU</th> <th>Test2016 || METEOR</th> <th>Test2017 || BLEU</th> <th>Test2017 || METEOR</th> <th>MSCOCO || BLEU</th> <th>MSCOCO || METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || BPE (Caglayan et al. 2017)</td> <td>38.1</td> <td>57.3</td> <td>30.8</td> <td>51.6</td> <td>26.4</td> <td>46.8</td> </tr> <tr> <td>Model || Baseline</td> <td>38.9</td> <td>56.5</td> <td>32.6</td> <td>50.7</td> <td>26.8</td> <td>45.4</td> </tr> <tr> <td>Model || Picturebook</td> <td>39.6</td> <td>56.9</td> <td>31.8</td> <td>50.1</td> <td>27.7</td> <td>45.8</td> </tr> <tr> <td>Model || Picturebook + Inverse Picturebook</td> <td>40.2</td> <td>57.2</td> <td>32.3</td> <td>50.7</td> <td>27.8</td> <td>46.3</td> </tr> <tr> <td>Model || Picturebook + Inverse Picturebook + Gating</td> <td>40</td> <td>57.3</td> <td>33</td> <td>51.1</td> <td>27.9</td> <td>46.5</td> </tr> </tbody></table> | Table 7 | table_7 | P18-1085 | 9 | acl2018 | On the English → German tasks, we find our Picturebook model to perform on average 0.8 BLEU or 0.7 METEOR over our baseline. 
On the German task, compared to the previously best published results (Caglayan et al., 2017) we do better in BLEU but slightly worse in METEOR. We suspect this is due to the fact that we did not use BPE. | [1, 1, 2] | ['On the English → German tasks, we find our Picturebook model to perform on average 0.8 BLEU or 0.7 METEOR over our baseline.', 'On the German task, compared to the previously best published results (Caglayan et al., 2017) we do better in BLEU but slightly worse in METEOR.', 'We suspect this is due to the fact that we did not use BPE.'] | [['Picturebook', 'BLEU', 'METEOR'], ['BPE (Caglayan et al. 2017)', 'Picturebook', 'Picturebook + Inverse Picturebook', 'Picturebook + Inverse Picturebook + Gating', 'BLEU', 'METEOR'], None] | 1
P18-1087table_3 | Experimental results (%). The results with symbol “\” are retrieved from the original papers, and those starred (∗) ones are from Dong et al. (2014). The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM. | 3 | [['Baselines', 'Models', 'SVM'], ['Baselines', 'Models', 'AdaRNN'], ['Baselines', 'Models', 'AE-LSTM'], ['Baselines', 'Models', 'ATAE-LSTM'], ['Baselines', 'Models', 'IAN'], ['Baselines', 'Models', 'CNN-ASP'], ['Baselines', 'Models', 'TD-LSTM'], ['Baselines', 'Models', 'MemNet'], ['Baselines', 'Models', 'BILSTM-ATT-G'], ['Baselines', 'Models', 'RAM'], ['CPT Alternatives', 'Models', 'LSTM-ATT-CNN'], ['CPT Alternatives', 'Models', 'LSTM-FC-CNN-LF'], ['CPT Alternatives', 'Models', 'LSTM-FC-CNN-AS'], ['Ablated TNet', 'Models', 'TNet w/o transformation'], ['Ablated TNet', 'Models', 'TNet w/o context'], ['Ablated TNet', 'Models', 'TNet-LF w/o position'], ['Ablated TNet', 'Models', 'TNet-AS w/o position'], ['TNet variants', 'Models', 'TNet-LF'], ['TNet variants', 'Models', 'TNet-AS']] | 2 | [['LAPTOP', 'ACC'], ['LAPTOP', 'Macro-F1'], ['REST', 'ACC'], ['REST', 'Macro-F1'], ['TWITTER', 'ACC'], ['TWITTER', 'Macro-F1']] | [['70.49\\', '-', '80.16\\', '-', '63.40∗', '63.30∗'], ['-', '-', '-', '-', '66.30\\', '65.90\\'], ['68.90\\', '-', '76.60\\', '-', '-', '-'], ['68.70\\', '-', '77.20\\', '-', '-', '-'], ['72.10\\', '-', '78.60\\', '-', '-', '-'], ['72.46', '65.31', '77.82', '65.11', '73.27', '71.77'], ['71.83', '68.43', '78', '66.73', '66.62', '64.01'], ['70.33', '64.09', '78.16', '65.83', '68.5', '66.91'], ['74.37', '69.9', '80.38', '70.78', '72.7', '70.84'], ['75.01', '70.51', '79.79', '68.86', '71.88', '70.33'], ['73.37', '68.03', '78.95', '68.71', '70.09', '67.68'], ['75.59', '70.6', '80.41', '70.23', '73.7', '72.82'], ['75.78', '70.72', '80.23', '70.06', '74.28', '72.6'], ['73.3', '68.25', '78.9', '65.86', '72.1', '70.57'], ['73.91', '68.87', '80.07', 
'69.01', '74.51', '73.05'], ['75.13', '70.63', '79.86', '69.69', '73.83', '72.49'], ['75.27', '70.03', '79.79', '69.78', '73.84', '72.47'], ['76.01†,‡', '71.47†,‡', '80.79†,‡', '70.84‡', '74.68†,‡', '73.36†,‡'], ['76.54†,‡', '71.75†,‡', '80.69†,‡', '71.27†,‡', '74.97†,‡', '73.60†,‡']] | column | ['ACC', 'Macro-F1', 'ACC', 'Macro-F1', 'ACC', 'Macro-F1'] | ['TNet-LF', 'TNet-AS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAPTOP || ACC</th> <th>LAPTOP || Macro-F1</th> <th>REST || ACC</th> <th>REST || Macro-F1</th> <th>TWITTER || ACC</th> <th>TWITTER || Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Baselines || Models || SVM</td> <td>70.49\</td> <td>-</td> <td>80.16\</td> <td>-</td> <td>63.40∗</td> <td>63.30∗</td> </tr> <tr> <td>Baselines || Models || AdaRNN</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>66.30\</td> <td>65.90\</td> </tr> <tr> <td>Baselines || Models || AE-LSTM</td> <td>68.90\</td> <td>-</td> <td>76.60\</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Baselines || Models || ATAE-LSTM</td> <td>68.70\</td> <td>-</td> <td>77.20\</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Baselines || Models || IAN</td> <td>72.10\</td> <td>-</td> <td>78.60\</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Baselines || Models || CNN-ASP</td> <td>72.46</td> <td>65.31</td> <td>77.82</td> <td>65.11</td> <td>73.27</td> <td>71.77</td> </tr> <tr> <td>Baselines || Models || TD-LSTM</td> <td>71.83</td> <td>68.43</td> <td>78</td> <td>66.73</td> <td>66.62</td> <td>64.01</td> </tr> <tr> <td>Baselines || Models || MemNet</td> <td>70.33</td> <td>64.09</td> <td>78.16</td> <td>65.83</td> <td>68.5</td> <td>66.91</td> </tr> <tr> <td>Baselines || Models || BILSTM-ATT-G</td> <td>74.37</td> <td>69.9</td> <td>80.38</td> <td>70.78</td> <td>72.7</td> <td>70.84</td> </tr> <tr> <td>Baselines || Models || RAM</td> <td>75.01</td> <td>70.51</td> <td>79.79</td> <td>68.86</td> <td>71.88</td> <td>70.33</td> </tr> <tr> 
<td>CPT Alternatives || Models || LSTM-ATT-CNN</td> <td>73.37</td> <td>68.03</td> <td>78.95</td> <td>68.71</td> <td>70.09</td> <td>67.68</td> </tr> <tr> <td>CPT Alternatives || Models || LSTM-FC-CNN-LF</td> <td>75.59</td> <td>70.6</td> <td>80.41</td> <td>70.23</td> <td>73.7</td> <td>72.82</td> </tr> <tr> <td>CPT Alternatives || Models || LSTM-FC-CNN-AS</td> <td>75.78</td> <td>70.72</td> <td>80.23</td> <td>70.06</td> <td>74.28</td> <td>72.6</td> </tr> <tr> <td>Ablated TNet || Models || TNet w/o transformation</td> <td>73.3</td> <td>68.25</td> <td>78.9</td> <td>65.86</td> <td>72.1</td> <td>70.57</td> </tr> <tr> <td>Ablated TNet || Models || TNet w/o context</td> <td>73.91</td> <td>68.87</td> <td>80.07</td> <td>69.01</td> <td>74.51</td> <td>73.05</td> </tr> <tr> <td>Ablated TNet || Models || TNet-LF w/o position</td> <td>75.13</td> <td>70.63</td> <td>79.86</td> <td>69.69</td> <td>73.83</td> <td>72.49</td> </tr> <tr> <td>Ablated TNet || Models || TNet-AS w/o position</td> <td>75.27</td> <td>70.03</td> <td>79.79</td> <td>69.78</td> <td>73.84</td> <td>72.47</td> </tr> <tr> <td>TNet variants || Models || TNet-LF</td> <td>76.01†,‡</td> <td>71.47†,‡</td> <td>80.79†,‡</td> <td>70.84‡</td> <td>74.68†,‡</td> <td>73.36†,‡</td> </tr> <tr> <td>TNet variants || Models || TNet-AS</td> <td>76.54†,‡</td> <td>71.75†,‡</td> <td>80.69†,‡</td> <td>71.27†,‡</td> <td>74.97†,‡</td> <td>73.60†,‡</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1087 | 7 | acl2018 | As shown in Table 3, both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model. Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER. The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical sentences. 
Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER. On the other hand, the performance of those comparison methods is mostly unstable. For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their capability in capturing the context features. Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be error-prone, which will affect those methods such as AdaRNN using dependency information. To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3). After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet. It shows that the integration of target information into the word-level representations is crucial for good performance. Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST, while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy). Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the context-preserving component becomes less important for such data. TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving. 
As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison. | [1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 2] | ['As shown in Table 3, both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.', 'Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.', 'The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical sentences.', 'Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.', 'On the other hand, the performance of those comparison methods is mostly unstable.', 'For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their capability in capturing the context features.', 'Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be error-prone, which will affect those methods such as AdaRNN using dependency information.', 'To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3).', 'After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.', 'It shows that the integration of target information into the word-level representations is crucial for good performance.', 'Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST, while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).', 'Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the context-preserving component becomes less important for such data.', 'TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.', 'As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.'] | [['TNet-LF', 'TNet-AS'], ['TNet-LF', 'TNet-AS', 'LAPTOP', 'REST', 'TWITTER'], ['TNet variants', 'CNN-ASP'], ['CNN-ASP', 'TWITTER', 'TNet-LF', 'TNet-AS'], None, ['BILSTM-ATT-G', 'RAM', 'LAPTOP', 'REST'], ['AdaRNN'], ['TNet w/o transformation', 'TNet w/o context', 'TNet-LF w/o position', 'TNet-AS w/o position'], ['TNet-LF', 'TNet-AS', 'TNet w/o transformation', 'ACC', 'Macro-F1'], None, ['TNet-LF', 'TNet-AS', 'TNet w/o context', 'LAPTOP', 'REST', 'TWITTER'], ['TWITTER'], ['TNet w/o context', 'TNet w/o transformation'], ['TNet-LF w/o position', 'TNet-AS w/o position']] | 1
P18-1090table_2 | Human evaluations of the proposed method and baselines. Sentiment evaluates sentiment transformation. Semantic evaluates content preservation. | 2 | [['Yelp', 'CAAE (Shen et al. 2017)'], ['Yelp', 'MDAL (Fu et al. 2018)'], ['Yelp', 'Proposed Method'], ['Amazon', 'CAAE (Shen et al. 2017)'], ['Amazon', 'MDAL (Fu et al. 2018)'], ['Amazon', 'Proposed Method']] | 1 | [['Sentiment'], ['Semantic'], ['G-score']] | [['7.67', '3.87', '5.45'], ['7.12', '3.68', '5.12'], ['6.99', '5.08', '5.96'], ['8.61', '3.15', '5.21'], ['7.93', '3.22', '5.05'], ['7.92', '4.67', '6.08']] | column | ['Sentiment', 'Semantic', 'G-score'] | ['Proposed Method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment</th> <th>Semantic</th> <th>G-score</th> </tr> </thead> <tbody> <tr> <td>Yelp || CAAE (Shen et al. 2017)</td> <td>7.67</td> <td>3.87</td> <td>5.45</td> </tr> <tr> <td>Yelp || MDAL (Fu et al. 2018)</td> <td>7.12</td> <td>3.68</td> <td>5.12</td> </tr> <tr> <td>Yelp || Proposed Method</td> <td>6.99</td> <td>5.08</td> <td>5.96</td> </tr> <tr> <td>Amazon || CAAE (Shen et al. 2017)</td> <td>8.61</td> <td>3.15</td> <td>5.21</td> </tr> <tr> <td>Amazon || MDAL (Fu et al. 2018)</td> <td>7.93</td> <td>3.22</td> <td>5.05</td> </tr> <tr> <td>Amazon || Proposed Method</td> <td>7.92</td> <td>4.67</td> <td>6.08</td> </tr> </tbody></table> | Table 2 | table_2 | P18-1090 | 7 | acl2018 | Table 2 shows the human evaluation results. It can be clearly seen that the proposed method obviously improves semantic preservation. The semantic score is increased from 3.87 to 5.08 on the Yelp dataset, and from 3.22 to 4.67 on the Amazon dataset. In general, our proposed model achieves the best overall performance. Furthermore, it also needs to be noticed that with the large improvement in content preservation, the sentiment accuracy of the proposed method is lower than that of CAAE on the two datasets. 
It shows that simultaneously promoting sentiment transformation and content preservation remains to be studied further. | [1, 1, 1, 1, 1, 2] | ['Table 2 shows the human evaluation results.', 'It can be clearly seen that the proposed method obviously improves semantic preservation.', 'The semantic score is increased from 3.87 to 5.08 on the Yelp dataset, and from 3.22 to 4.67 on the Amazon dataset.', 'In general, our proposed model achieves the best overall performance.', 'Furthermore, it also needs to be noticed that with the large improvement in content preservation, the sentiment accuracy of the proposed method is lower than that of CAAE on the two datasets.', 'It shows that simultaneously promoting sentiment transformation and content preservation remains to be studied further.'] | [None, ['Proposed Method', 'CAAE (Shen et al. 2017)', 'MDAL (Fu et al. 2018)', 'Semantic'], ['Proposed Method', 'CAAE (Shen et al. 2017)', 'MDAL (Fu et al. 2018)', 'Semantic', 'Yelp', 'Amazon'], ['Proposed Method', 'Semantic', 'G-score'], ['Sentiment', 'CAAE (Shen et al. 2017)', 'Proposed Method'], ['Sentiment']] | 1 |
P18-1093table_3 | Experimental results on Reddit datasets. Best result is in boldface and second best is underlined. Best performing baseline is in italics. | 2 | [['Model', 'NBOW'], ['Model', 'Vanilla CNN'], ['Model', 'Vanilla LSTM'], ['Model', 'Attention LSTM'], ['Model', 'GRNN (Zhang et al.)'], ['Model', 'CNN-LSTM-DNN (Ghosh and Veale)'], ['Model', 'SIARN (this paper)'], ['Model', 'MIARN (this paper)']] | 2 | [['Reddit (/r/movies)', 'P'], ['Reddit (/r/movies)', 'R'], ['Reddit (/r/movies)', 'F1'], ['Reddit (/r/movies)', 'Acc'], ['Reddit (/r/technology)', 'P'], ['Reddit (/r/technology)', 'R'], ['Reddit (/r/technology)', 'F1'], ['Reddit (/r/technology)', 'Acc']] | [['67.33', '66.56', '66.82', '67.52', '65.45', '65.62', '65.52', '66.55'], ['65.97', '65.97', '65.97', '66.24', '65.88', '62.9', '62.85', '66.8'], ['67.57', '67.67', '67.32', '67.34', '66.94', '67.22', '67.03', '67.92'], ['68.11', '67.87', '67.94', '68.37', '68.2', '68.78', '67.44', '67.22'], ['66.16', '66.16', '66.16', '66.42', '66.56', '66.73', '66.66', '67.65'], ['68.27', '67.87', '67.95', '68.5', '66.14', '66.73', '65.74', '66'], ['69.59', '69.48', '69.52', '69.84', '69.35', '70.05', '69.22', '69.57'], ['69.68', '69.37', '69.54', '69.9', '68.97', '69.3', '69.09', '69.91']] | column | ['P', 'R', 'F1', 'Acc', 'P', 'R', 'F1', 'Acc'] | ['SIARN (this paper)', 'MIARN (this paper)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Reddit (/r/movies) || P</th> <th>Reddit (/r/movies) || R</th> <th>Reddit (/r/movies) || F1</th> <th>Reddit (/r/movies) || Acc</th> <th>Reddit (/r/technology) || P</th> <th>Reddit (/r/technology) || R</th> <th>Reddit (/r/technology) || F1</th> <th>Reddit (/r/technology) || Acc</th> </tr> </thead> <tbody> <tr> <td>Model || NBOW</td> <td>67.33</td> <td>66.56</td> <td>66.82</td> <td>67.52</td> <td>65.45</td> <td>65.62</td> <td>65.52</td> <td>66.55</td> </tr> <tr> <td>Model || Vanilla CNN</td> <td>65.97</td> <td>65.97</td> <td>65.97</td> 
<td>66.24</td> <td>65.88</td> <td>62.9</td> <td>62.85</td> <td>66.8</td> </tr> <tr> <td>Model || Vanilla LSTM</td> <td>67.57</td> <td>67.67</td> <td>67.32</td> <td>67.34</td> <td>66.94</td> <td>67.22</td> <td>67.03</td> <td>67.92</td> </tr> <tr> <td>Model || Attention LSTM</td> <td>68.11</td> <td>67.87</td> <td>67.94</td> <td>68.37</td> <td>68.2</td> <td>68.78</td> <td>67.44</td> <td>67.22</td> </tr> <tr> <td>Model || GRNN (Zhang et al.)</td> <td>66.16</td> <td>66.16</td> <td>66.16</td> <td>66.42</td> <td>66.56</td> <td>66.73</td> <td>66.66</td> <td>67.65</td> </tr> <tr> <td>Model || CNN-LSTM-DNN (Ghosh and Veale)</td> <td>68.27</td> <td>67.87</td> <td>67.95</td> <td>68.5</td> <td>66.14</td> <td>66.73</td> <td>65.74</td> <td>66</td> </tr> <tr> <td>Model || SIARN (this paper)</td> <td>69.59</td> <td>69.48</td> <td>69.52</td> <td>69.84</td> <td>69.35</td> <td>70.05</td> <td>69.22</td> <td>69.57</td> </tr> <tr> <td>Model || MIARN (this paper)</td> <td>69.68</td> <td>69.37</td> <td>69.54</td> <td>69.9</td> <td>68.97</td> <td>69.3</td> <td>69.09</td> <td>69.91</td> </tr> </tbody></table> | Table 3 | table_3 | P18-1093 | 8 | acl2018 | Table 3 reports a performance comparison of all benchmarked models on the Reddit datasets. Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ∼ 2% margin improvement over the best baselines. Notably, the baselines we compare against are extremely competitive state-of-the-art neural network models. This further reinforces the effectiveness of our proposed approach. | [1, 1, 1, 1] | ['Table 3 reports a performance comparison of all benchmarked models on the Reddit datasets.', 'Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ∼ 
2% margin improvement over the best baselines.', 'Notably, the baselines we compare against are extremely competitive state-of-the-art neural network models.', 'This further reinforces the effectiveness of our proposed approach.'] | [None, ['SIARN (this paper)', 'MIARN (this paper)', 'Reddit (/r/movies)', 'Reddit (/r/technology)'], ['SIARN (this paper)', 'MIARN (this paper)', 'CNN-LSTM-DNN (Ghosh and Veale)', 'GRNN (Zhang et al.)', 'Vanilla CNN', 'NBOW'], ['SIARN (this paper)', 'MIARN (this paper)']] | 1 |
P18-1093table_4 | Experimental results on Debates datasets. Best result is in boldface and second best is underlined. Best performing baseline is in italics. | 2 | [['Model', 'NBOW'], ['Model', 'Vanilla CNN'], ['Model', 'Vanilla LSTM'], ['Model', 'Attention LSTM'], ['Model', 'GRNN (Zhang et al.)'], ['Model', 'CNN-LSTM-DNN (Ghosh and Veale)'], ['Model', 'SIARN (this paper)'], ['Model', 'MIARN (this paper)']] | 2 | [['Debates (IAC-V1)', 'P'], ['Debates (IAC-V1)', 'R'], ['Debates (IAC-V1)', 'F1'], ['Debates (IAC-V1)', 'Acc'], ['Debates (IAC-V2)', 'P'], ['Debates (IAC-V2)', 'R'], ['Debates (IAC-V2)', 'F1'], ['Debates (IAC-V2)', 'Acc']] | [['57.17', '57.03', '57', '57.51', '66.01', '66.03', '66.02', '66.09'], ['58.21', '58', '57.95', '58.55', '68.45', '68.18', '68.21', '68.56'], ['54.87', '54.89', '54.84', '54.92', '68.3', '63.96', '60.78', '62.66'], ['58.98', '57.93', '57.23', '59.07', '70.04', '69.62', '69.63', '69.96'], ['56.21', '56.21', '55.96', '55.96', '62.26', '61.87', '61.21', '61.37'], ['55.5', '54.6', '53.31', '55.96', '64.31', '64.33', '64.31', '64.38'], ['63.94', '63.45', '62.52', '62.69', '72.17', '71.81', '71.85', '72.1'], ['63.88', '63.71', '63.18', '63.21', '72.92', '72.93', '72.75', '72.75']] | column | ['P', 'R', 'F1', 'Acc', 'P', 'R', 'F1', 'Acc'] | ['MIARN (this paper)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Debates (IAC-V1) || P</th> <th>Debates (IAC-V1) || R</th> <th>Debates (IAC-V1) || F1</th> <th>Debates (IAC-V1) || Acc</th> <th>Debates (IAC-V2) || P</th> <th>Debates (IAC-V2) || R</th> <th>Debates (IAC-V2) || F1</th> <th>Debates (IAC-V2) || Acc</th> </tr> </thead> <tbody> <tr> <td>Model || NBOW</td> <td>57.17</td> <td>57.03</td> <td>57</td> <td>57.51</td> <td>66.01</td> <td>66.03</td> <td>66.02</td> <td>66.09</td> </tr> <tr> <td>Model || Vanilla CNN</td> <td>58.21</td> <td>58</td> <td>57.95</td> <td>58.55</td> <td>68.45</td> <td>68.18</td> <td>68.21</td> <td>68.56</td> </tr> <tr> <td>Model 
|| Vanilla LSTM</td> <td>54.87</td> <td>54.89</td> <td>54.84</td> <td>54.92</td> <td>68.3</td> <td>63.96</td> <td>60.78</td> <td>62.66</td> </tr> <tr> <td>Model || Attention LSTM</td> <td>58.98</td> <td>57.93</td> <td>57.23</td> <td>59.07</td> <td>70.04</td> <td>69.62</td> <td>69.63</td> <td>69.96</td> </tr> <tr> <td>Model || GRNN (Zhang et al.)</td> <td>56.21</td> <td>56.21</td> <td>55.96</td> <td>55.96</td> <td>62.26</td> <td>61.87</td> <td>61.21</td> <td>61.37</td> </tr> <tr> <td>Model || CNN-LSTM-DNN (Ghosh and Veale)</td> <td>55.5</td> <td>54.6</td> <td>53.31</td> <td>55.96</td> <td>64.31</td> <td>64.33</td> <td>64.31</td> <td>64.38</td> </tr> <tr> <td>Model || SIARN (this paper)</td> <td>63.94</td> <td>63.45</td> <td>62.52</td> <td>62.69</td> <td>72.17</td> <td>71.81</td> <td>71.85</td> <td>72.1</td> </tr> <tr> <td>Model || MIARN (this paper)</td> <td>63.88</td> <td>63.71</td> <td>63.18</td> <td>63.21</td> <td>72.92</td> <td>72.93</td> <td>72.75</td> <td>72.75</td> </tr> </tbody></table> | Table 4 | table_4 | P18-1093 | 8 | acl2018 | Table 4 reports a performance comparison of all benchmarked models on the Debates datasets. The performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit). For example, MIARN outperforms GRNN and CNN-LSTM-DNN by 8% to 10% on both IAC-V1 and IAC-V2. | [1, 1, 1] | ['Table 4 reports a performance comparison of all benchmarked models on the Debates datasets.', 'The performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit).', 'For example, MIARN outperforms GRNN and CNN-LSTM-DNN by 8% to 10% on both IAC-V1 and IAC-V2.'] | [None, ['Debates (IAC-V1)', 'Debates (IAC-V2)'], ['Debates (IAC-V1)', 'Debates (IAC-V2)', 'MIARN (this paper)', 'GRNN (Zhang et al.)', 'CNN-LSTM-DNN (Ghosh and Veale)']] | 1 |
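Each record above stores its table payload as Python-literal strings (`row_headers`, `column_headers`, `contents`, the annotation lists). As a minimal sketch of how those fields can be consumed, assuming the literals are copied verbatim from the P18-1085 Table 7 row above and with variable names chosen here for illustration, the standard library's `ast.literal_eval` parses them safely and a shape check confirms one value per (group, metric) column header:

```python
import ast

# `column_headers` field of the P18-1085 table_7 row (Python-literal string).
column_headers = ast.literal_eval(
    "[['Test2016', 'BLEU'], ['Test2016', 'METEOR'], ['Test2017', 'BLEU'], "
    "['Test2017', 'METEOR'], ['MSCOCO', 'BLEU'], ['MSCOCO', 'METEOR']]"
)
# `contents` field of the same row: one list of cell strings per model.
contents = ast.literal_eval(
    "[['38.1', '57.3', '30.8', '51.6', '26.4', '46.8'], "
    "['38.9', '56.5', '32.6', '50.7', '26.8', '45.4'], "
    "['39.6', '56.9', '31.8', '50.1', '27.7', '45.8'], "
    "['40.2', '57.2', '32.3', '50.7', '27.8', '46.3'], "
    "['40', '57.3', '33', '51.1', '27.9', '46.5']]"
)

# Every data row should have exactly one value per column header.
assert all(len(row) == len(column_headers) for row in contents)
print(len(contents), "rows x", len(column_headers), "columns")  # prints: 5 rows x 6 columns
```

`ast.literal_eval` is used instead of `eval` because it only accepts literal expressions, so a malformed or hostile field cannot execute code; the same check applies unchanged to any other row of the dump.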