Schema (field: type, observed length or value range):

table_id_paper: string (length 15)
caption: string (length 14 to 1.88k)
row_header_level: int32 (1 to 9)
row_headers: large_string (length 15 to 1.75k)
column_header_level: int32 (1 to 6)
column_headers: large_string (length 7 to 1.01k)
contents: large_string (length 18 to 2.36k)
metrics_loc: string (2 distinct values)
metrics_type: large_string (length 5 to 532)
target_entity: large_string (length 2 to 330)
table_html_clean: large_string (length 274 to 7.88k)
table_name: string (9 distinct values)
table_id: string (9 distinct values)
paper_id: string (length 8)
page_no: int32 (1 to 13)
dir: string (8 distinct values)
description: large_string (length 103 to 3.8k)
class_sentence: string (length 3 to 120)
sentences: large_string (length 110 to 3.92k)
header_mention: string (length 12 to 1.8k)
valid: int32 (0 or 1)
D19-1259table_4
Human performance (single-annotator).
2
[['Setting', 'Reasoning-Free'], ['Setting', 'Reasoning-Required']]
1
[['Accuracy (%)'], ['Macro-F1 (%)']]
[['90.4', '84.18'], ['78', '72.19']]
column
['Accuracy (%)', 'Macro-F1 (%)']
['Reasoning-Free']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> <th>Macro-F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Setting || Reasoning-Free</td> <td>90.4</td> <td>84.18</td> </tr> <tr> <td>Setting || Reasoning-Required</td> <td>78</td> <td>72.19</td> </tr> </tbody></table>
Table 4
table_4
D19-1259
7
emnlp2019
5 Experiments. 5.1 Human Performance. Human performance is measured during the annotation: as shown in Algorithm 1, annotations of annotator 1 and annotator 2 are used to calculate reasoning-free and reasoning-required human performance, respectively, against the discussed ground truth labels. Human performance on the test set of PQA-L is shown in Table 4. We only test single-annotator performance due to limited resources. Kwiatkowski et al. (2019) show that an ensemble of annotators performs significantly better than a single annotator, so the results reported in Table 4 are the lower bounds of human performance. Under the reasoning-free setting, where the annotator can see the conclusions, a single human achieves 90.4% accuracy and 84.2% macro-F1.
[2, 2, 2, 1, 2, 2, 1]
['5 Experiments.', '5.1 Human Performance.', 'Human performance is measured during the annotation: as shown in Algorithm 1, annotations of annotator 1 and annotator 2 are used to calculate reasoning-free and reasoning-required human performance, respectively, against the discussed ground truth labels.', 'Human performance on the test set of PQA-L is shown in Table 4.', 'We only test single-annotator performance due to limited resources.', 'Kwiatkowski et al. (2019) show that an ensemble of annotators performs significantly better than a single annotator, so the results reported in Table 4 are the lower bounds of human performance.', 'Under the reasoning-free setting, where the annotator can see the conclusions, a single human achieves 90.4% accuracy and 84.2% macro-F1.']
[None, None, None, None, None, None, ['Reasoning-Free', 'Accuracy (%)', 'Macro-F1 (%)']]
1
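The list-valued fields in each record (row_headers, column_headers, contents) are stored as stringified Python lists, and multi-level header paths are flattened with the same ' || ' separator that table_html_clean uses. A minimal decoding sketch, assuming each record has been loaded as raw strings (the values here are copied from the D19-1259 record above):

```python
import ast

# Raw field values from the D19-1259table_4 record.
row_headers = ast.literal_eval(
    "[['Setting', 'Reasoning-Free'], ['Setting', 'Reasoning-Required']]"
)
column_headers = ast.literal_eval("[['Accuracy (%)'], ['Macro-F1 (%)']]")
contents = ast.literal_eval("[['90.4', '84.18'], ['78', '72.19']]")

# Flatten header paths with ' || ', matching table_html_clean, then
# index cells as table[row][column]. Cell values stay strings.
rows = [" || ".join(path) for path in row_headers]
cols = [" || ".join(path) for path in column_headers]
table = {r: dict(zip(cols, vals)) for r, vals in zip(rows, contents)}

print(table["Setting || Reasoning-Free"]["Accuracy (%)"])  # prints 90.4
```

Note that contents rows are parallel to row_headers and each row is parallel to column_headers, so the zip-based reconstruction only works when those lengths agree.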
D19-1268table_2
Evaluation results on link prediction
2
[['Model', 'SE'], ['Model', 'SME'], ['Model', 'TransE'], ['Model', 'TransH'], ['Model', 'TransR'], ['Model', 'TranSparse'], ['Model', 'STransE'], ['Model', 'ITransF'], ['Model', 'HolE'], ['Model', 'ComplEx'], ['Model', 'ANALOGY'], ['Model', 'ProjE'], ['Model', 'RTransE'], ['Model', 'PTransE (ADD, 2-step)'], ['Model', 'PTransE (MUL, 2-step)'], ['Model', 'PTransE (ADD, 3-step)'], ['Model', 'PaSKoGE'], ['Model', 'RPE (ACOM)'], ['Model', 'RPE (MCOM)'], ['Model', 'RotatE'], ['Model', 'OPTransE']]
3
[['WN18', 'Mean Rank', 'Raw'], ['WN18', 'Mean Rank', 'Filtered'], ['WN18', 'Hits@10(%)', 'Raw'], ['WN18', 'Hits@10(%)', 'Filtered'], ['FB15K', 'Mean Rank', 'Raw'], ['FB15K', 'Mean Rank', 'Filtered'], ['FB15K', 'Hits@10(%)', 'Raw'], ['FB15K', 'Hits@10(%)', 'Filtered']]
[['1011', '985', '68.5', '80.5', '273', '162', '28.8', '39.8'], ['545', '533', '65.1', '74.1', '274', '154', '30.7', '40.8'], ['263', '251', '75.4', '89.2', '243', '125', '34.9', '47.1'], ['318', '303', '75.4', '86.7', '212', '87', '45.7', '64.4'], ['238', '225', '79.8', '92', '198', '77', '48.2', '68.7'], ['223', '211', '80.1', '93.2', '187', '82', '53.5', '79.5'], ['217', '206', '80.9', '93.4', '219', '69', '51.6', '79.7'], ['-', '205', '-', '94.2', '-', '65', '-', '81'], ['-', '-', '-', '94.9', '-', '-', '-', '73.9'], ['-', '-', '-', '94.7', '-', '-', '-', '84'], ['-', '-', '-', '94.7', '-', '-', '-', '85.4'], ['277', '260', '79.4', '94.9', '124', '34', '54.7', '88.4'], ['-', '-', '-', '-', '-', '50', '-', '76.2'], ['235', '221', '81.3', '92.7', '200', '54', '51.8', '83.4'], ['243', '230', '79.5', '90.9', '216', '67', '47.4', '77.7'], ['238', '219', '81.1', '94.2', '207', '58', '51.4', '84.6'], ['-', '-', '81.3', '95', '-', '-', '53.1', '88'], ['-', '-', '-', '-', '171', '41', '52', '85.5'], ['-', '-', '-', '-', '183', '43', '52.2', '81.7'], ['-', '309', '-', '95.9', '-', '40', '-', '88.4'], ['211', '199', '83.2', '95.7', '136', '33', '58', '89.9']]
column
['Mean Rank', 'Mean Rank', 'Hits@10(%)', 'Hits@10(%)', 'Mean Rank', 'Mean Rank', 'Hits@10(%)', 'Hits@10(%)']
['OPTransE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WN18 || Mean Rank || Raw</th> <th>WN18 || Mean Rank || Filtered</th> <th>WN18 || Hits@10(%) || Raw</th> <th>WN18 || Hits@10(%) || Filtered</th> <th>FB15K || Mean Rank || Raw</th> <th>FB15K || Mean Rank || Filtered</th> <th>FB15K || Hits@10(%) || Raw</th> <th>FB15K || Hits@10(%) || Filtered</th> </tr> </thead> <tbody> <tr> <td>Model || SE</td> <td>1011</td> <td>985</td> <td>68.5</td> <td>80.5</td> <td>273</td> <td>162</td> <td>28.8</td> <td>39.8</td> </tr> <tr> <td>Model || SME</td> <td>545</td> <td>533</td> <td>65.1</td> <td>74.1</td> <td>274</td> <td>154</td> <td>30.7</td> <td>40.8</td> </tr> <tr> <td>Model || TransE</td> <td>263</td> <td>251</td> <td>75.4</td> <td>89.2</td> <td>243</td> <td>125</td> <td>34.9</td> <td>47.1</td> </tr> <tr> <td>Model || TransH</td> <td>318</td> <td>303</td> <td>75.4</td> <td>86.7</td> <td>212</td> <td>87</td> <td>45.7</td> <td>64.4</td> </tr> <tr> <td>Model || TransR</td> <td>238</td> <td>225</td> <td>79.8</td> <td>92</td> <td>198</td> <td>77</td> <td>48.2</td> <td>68.7</td> </tr> <tr> <td>Model || TranSparse</td> <td>223</td> <td>211</td> <td>80.1</td> <td>93.2</td> <td>187</td> <td>82</td> <td>53.5</td> <td>79.5</td> </tr> <tr> <td>Model || STransE</td> <td>217</td> <td>206</td> <td>80.9</td> <td>93.4</td> <td>219</td> <td>69</td> <td>51.6</td> <td>79.7</td> </tr> <tr> <td>Model || ITransF</td> <td>-</td> <td>205</td> <td>-</td> <td>94.2</td> <td>-</td> <td>65</td> <td>-</td> <td>81</td> </tr> <tr> <td>Model || HolE</td> <td>-</td> <td>-</td> <td>-</td> <td>94.9</td> <td>-</td> <td>-</td> <td>-</td> <td>73.9</td> </tr> <tr> <td>Model || ComplEx</td> <td>-</td> <td>-</td> <td>-</td> <td>94.7</td> <td>-</td> <td>-</td> <td>-</td> <td>84</td> </tr> <tr> <td>Model || ANALOGY</td> <td>-</td> <td>-</td> <td>-</td> <td>94.7</td> <td>-</td> <td>-</td> <td>-</td> <td>85.4</td> </tr> <tr> <td>Model || ProjE</td> <td>277</td> <td>260</td> <td>79.4</td> 
<td>94.9</td> <td>124</td> <td>34</td> <td>54.7</td> <td>88.4</td> </tr> <tr> <td>Model || RTransE</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>50</td> <td>-</td> <td>76.2</td> </tr> <tr> <td>Model || PTransE (ADD, 2-step)</td> <td>235</td> <td>221</td> <td>81.3</td> <td>92.7</td> <td>200</td> <td>54</td> <td>51.8</td> <td>83.4</td> </tr> <tr> <td>Model || PTransE (MUL, 2-step)</td> <td>243</td> <td>230</td> <td>79.5</td> <td>90.9</td> <td>216</td> <td>67</td> <td>47.4</td> <td>77.7</td> </tr> <tr> <td>Model || PTransE (ADD, 3-step)</td> <td>238</td> <td>219</td> <td>81.1</td> <td>94.2</td> <td>207</td> <td>58</td> <td>51.4</td> <td>84.6</td> </tr> <tr> <td>Model || PaSKoGE</td> <td>-</td> <td>-</td> <td>81.3</td> <td>95</td> <td>-</td> <td>-</td> <td>53.1</td> <td>88</td> </tr> <tr> <td>Model || RPE (ACOM)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>171</td> <td>41</td> <td>52</td> <td>85.5</td> </tr> <tr> <td>Model || RPE (MCOM)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>183</td> <td>43</td> <td>52.2</td> <td>81.7</td> </tr> <tr> <td>Model || RotatE</td> <td>-</td> <td>309</td> <td>-</td> <td>95.9</td> <td>-</td> <td>40</td> <td>-</td> <td>88.4</td> </tr> <tr> <td>Model || OPTransE</td> <td>211</td> <td>199</td> <td>83.2</td> <td>95.7</td> <td>136</td> <td>33</td> <td>58</td> <td>89.9</td> </tr> </tbody></table>
Table 2
table_2
D19-1268
7
emnlp2019
4.4 Results Table 2 shows the performances of different methods on the link prediction task according to various metrics. Numbers in bold mean the best results among all methods and the underlined ones mean the second best. The evaluation results of baselines are from their original work, and "-" in the table means there are no reported results in prior work. Note that we implement ProjE and PTransE on WN18 using the public codes. From Table 2 we could observe that: (1) PTransE performs better than its basic model TransE, and RPE outperforms its original method TransR. This indicates that additional information from relation paths between entity pairs is helpful for link prediction. Note that OPTransE outperforms baselines which do not take relation paths into consideration in most cases. These results demonstrate the effectiveness of OPTransE to take advantage of the path features in the KG. (2) OPTransE performs better than previous path-based models like RTransE, PTransE, PaSKoGE and RPE on all metrics. This implies that the order of relations in paths is of great importance for reasoning, and learning representations of ordered relation paths can significantly improve the accuracy of link prediction. Moreover, the proposed pooling strategy which aims to extract nonlinear features from different relation paths also contributes to the improvements of performance.
[1, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2]
['4.4 Results Table 2 shows the performances of different methods on the link prediction task according to various metrics.', 'Numbers in bold mean the best results among all methods and the underlined ones mean the second best.', 'The evaluation results of baselines are from their original work, and "-" in the table means there are no reported results in prior work.', 'Note that we implement ProjE and PTransE on WN18 using the public codes.', 'From Table 2 we could observe that: (1) PTransE performs better than its basic model TransE, and RPE outperforms its original method TransR.', 'This indicates that additional information from relation paths between entity pairs is helpful for link prediction.', 'Note that OPTransE outperforms baselines which do not take relation paths into consideration in most cases.', 'These results demonstrate the effectiveness of OPTransE to take advantage of the path features in the KG.', '(2) OPTransE performs better than previous path-based models like RTransE, PTransE, PaSKoGE and RPE on all metrics.', 'This implies that the order of relations in paths is of great importance for reasoning, and learning representations of ordered relation paths can significantly improve the accuracy of link prediction.', 'Moreover, the proposed pooling strategy which aims to extract nonlinear features from different relation paths also contributes to the improvements of performance.']
[None, None, None, None, ['PTransE (ADD, 2-step)', 'PTransE (MUL, 2-step)', 'PTransE (ADD, 3-step)', 'OPTransE', 'TransE', 'RPE (ACOM)', 'RPE (MCOM)', 'TransR'], None, ['OPTransE'], ['OPTransE'], ['OPTransE', 'RTransE', 'PTransE (ADD, 2-step)', 'PTransE (MUL, 2-step)', 'PTransE (ADD, 3-step)', 'PaSKoGE', 'RPE (ACOM)', 'RPE (MCOM)'], None, None]
1
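The two metrics compared in the link-prediction table above are simple functions of the rank each model assigns to the gold entity among all candidates: Mean Rank averages those ranks (lower is better), and Hits@10 is the percentage of test triples whose gold entity ranks in the top 10 (higher is better). An illustrative sketch with made-up ranks, not taken from any of these papers:

```python
def mean_rank(ranks):
    """Average rank of the gold entity over the test set (lower is better)."""
    return sum(ranks) / len(ranks)

def hits_at_k(ranks, k=10):
    """Percentage of test triples whose gold entity ranks within the top k."""
    return 100.0 * sum(1 for r in ranks if r <= k) / len(ranks)

# Hypothetical gold-entity ranks for five test triples.
ranks = [1, 3, 8, 40, 500]
print(mean_rank(ranks))   # 110.4
print(hits_at_k(ranks))   # 60.0
```

The example also shows why the two metrics can disagree: a single very poorly ranked triple (rank 500) inflates Mean Rank badly while barely moving Hits@10.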
D19-1272table_2
Overall average results by model (with % changes from the input)
2
[['Model', 'Input'], ['Model', 'SMERTI-Transformer'], ['Model', 'SMERTI-RNN'], ['Model', 'W2V-STEM'], ['Model', 'GWN-STEM'], ['Model', 'NWN-STEM']]
1
[['SPA'], ['SLOR'], ['CSS'], ['STES']]
[['-', '0.5962', '0.1166', '-'], ['0.6606', '0.5255 (-11.86%)', '0.2857 (+145.03%)', '0.4337'], ['0.6574', '0.5122 (-14.09%)', '0.2927 (+151.03%)', '0.4354'], ['0.6667', '0.4672 (-21.64%)', '0.2851 (+144.51%)', '0.4197'], ['0.8903', '0.4864 (-18.42%)', '0.1419 (+21.70%)', '0.2934'], ['0.9116', '0.4832 (-18.95%)', '0.1335 (+14.49%)', '0.2814']]
column
['SPA', 'SLOR', 'CSS', 'STES']
['SMERTI-Transformer', 'SMERTI-RNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SPA</th> <th>SLOR</th> <th>CSS</th> <th>STES</th> </tr> </thead> <tbody> <tr> <td>Model || Input</td> <td>-</td> <td>0.5962</td> <td>0.1166</td> <td>-</td> </tr> <tr> <td>Model || SMERTI-Transformer</td> <td>0.6606</td> <td>0.5255 (-11.86%)</td> <td>0.2857 (+145.03%)</td> <td>0.4337</td> </tr> <tr> <td>Model || SMERTI-RNN</td> <td>0.6574</td> <td>0.5122 (-14.09%)</td> <td>0.2927 (+151.03%)</td> <td>0.4354</td> </tr> <tr> <td>Model || W2V-STEM</td> <td>0.6667</td> <td>0.4672 (-21.64%)</td> <td>0.2851 (+144.51%)</td> <td>0.4197</td> </tr> <tr> <td>Model || GWN-STEM</td> <td>0.8903</td> <td>0.4864 (-18.42%)</td> <td>0.1419 (+21.70%)</td> <td>0.2934</td> </tr> <tr> <td>Model || NWN-STEM</td> <td>0.9116</td> <td>0.4832 (-18.95%)</td> <td>0.1335 (+14.49%)</td> <td>0.2814</td> </tr> </tbody></table>
Table 2
table_2
D19-1272
10
emnlp2019
Table 2 shows overall average results by model. As seen in Table 2, both SMERTI variations achieve higher STES and outperform the other models overall, with the WordNet models performing the worst. SMERTI excels especially on fluency and content similarity. The transformer variation achieves slightly higher SLOR, while the RNN variation achieves slightly higher CSS. These results correspond well with our automatic evaluation results in Table 2. We look at the Pearson correlation values between RE Match, Fluency, and Sentiment Preservation with CSS, SLOR, and SPA, respectively. These are 0.9952, 0.9327, and 0.8768, respectively, demonstrating that our automatic metrics are highly effective and correspond well with human ratings.
[1, 1, 2, 1, 0, 0, 0]
['Table 2 shows overall average results by model.', 'As seen in Table 2, both SMERTI variations achieve higher STES and outperform the other models overall, with the WordNet models performing the worst.', 'SMERTI excels especially on fluency and content similarity.', 'The transformer variation achieves slightly higher SLOR, while the RNN variation achieves slightly higher CSS.', 'These results correspond well with our automatic evaluation results in Table 2.', 'We look at the Pearson correlation values between RE Match, Fluency, and Sentiment Preservation with CSS, SLOR, and SPA, respectively.', 'These are 0.9952, 0.9327, and 0.8768, respectively, demonstrating that our automatic metrics are highly effective and correspond well with human ratings.']
[None, ['SMERTI-Transformer', 'SMERTI-RNN', 'STES'], ['SMERTI-Transformer', 'SMERTI-RNN'], ['SMERTI-Transformer', 'SLOR', 'SMERTI-RNN', 'CSS'], None, None, None]
1
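Per the caption of the record above, the parenthesized deltas in its SLOR and CSS columns are percentage changes from the Input row. A quick sketch of that formula, checked against the SMERTI-Transformer cells (Input SLOR 0.5962, Input CSS 0.1166):

```python
def pct_change(new, base):
    """Percentage change of a metric relative to a baseline value."""
    return 100.0 * (new - base) / base

# SMERTI-Transformer SLOR 0.5255 vs. Input 0.5962 -> reported -11.86%.
print(round(pct_change(0.5255, 0.5962), 2))  # -11.86
# SMERTI-Transformer CSS 0.2857 vs. Input 0.1166 -> reported +145.03%.
print(round(pct_change(0.2857, 0.1166), 2))  # 145.03
```

Recomputing the deltas like this is a cheap consistency check on the transcribed contents field.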
D19-1278table_2
Model performance (Precision, Recall, F1) on PMB data (v.2.1.0, test set); models were trained on gold standard data.
1
[['van Noord et al. (2018)'], ['seq2seq+copy'], ['seq2graph']]
1
[['P'], ['R'], ['F1'], ['illformed']]
[['-', '-', '72.8', '20%'], ['75.57', '67.27', '71.18', '4.12%'], ['75.51', '71.69', '73.55', '0.40%']]
column
['P', 'R', 'F1', 'illformed']
['seq2graph']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>illformed</th> </tr> </thead> <tbody> <tr> <td>van Noord et al. (2018)</td> <td>-</td> <td>-</td> <td>72.8</td> <td>20%</td> </tr> <tr> <td>seq2seq+copy</td> <td>75.57</td> <td>67.27</td> <td>71.18</td> <td>4.12%</td> </tr> <tr> <td>seq2graph</td> <td>75.51</td> <td>71.69</td> <td>73.55</td> <td>0.40%</td> </tr> </tbody></table>
Table 2
table_2
D19-1278
8
emnlp2019
5 Results System comparison Table 2 summarizes our results on the PMB gold data (v.2.1.0, test set). We compare our graph decoder against the system of van Noord et al. (2018) and our implementation of a seq2seq model, enhanced with a copy mechanism. Overall, we see that our graph decoder outperforms both models. Moreover, it reduces the number of ill-formed representations without any specific constraints or post-processing in order to ensure the well-formedness of the semantics of the output.
[1, 1, 1, 1]
['5 Results System comparison Table 2 summarizes our results on the PMB gold data (v.2.1.0, test set).', 'We compare our graph decoder against the system of van Noord et al. (2018) and our implementation of a seq2seq model, enhanced with a copy mechanism.', 'Overall, we see that our graph decoder outperforms both models.', 'Moreover, it reduces the number of ill-formed representations without any specific constraints or post-processing in order to ensure the well-formedness of the semantics of the output.']
[None, ['van Noord et al. (2018)', 'seq2seq+copy'], ['van Noord et al. (2018)', 'seq2seq+copy', 'seq2graph'], ['seq2graph']]
1
D19-1282table_1
Comparisons with large pre-trained language model fine-tuning with different amount of training data.
2
[['Model', 'Random guess'], ['Model', 'GPT-FINETUNING'], ['Model', 'GPT-KAGNET'], ['Model', 'BERT-BASE-FINETUNING'], ['Model', 'BERT-BASE-KAGNET'], ['Model', 'BERT-LARGE-FINETUNING'], ['Model', 'BERT-LARGE-KAGNET'], ['Model', 'Human Performance']]
2
[['10(%) of Ihtrain', 'IHdev-Acc. (%)'], ['10(%) of Ihtrain', 'IHtest-Acc. (%)'], ['50(%) of Ihtrain', 'IHdev-Acc. (%)'], ['50(%) of Ihtrain', 'IHtest-Acc. (%)'], ['100(%) of Ihtrain', 'IHdev-Acc. (%)'], ['100(%) of Ihtrain', 'IHtest-Acc. (%)']]
[['20', '20', '20', '20', '20', '20'], ['27.55', '26.51', '32.46', '31.28', '47.35', '45.58'], ['28.13', '26.98', '33.72', '32.33', '48.95', '46.79'], ['30.11', '29.78', '38.66', '36.83', '53.48', '53.26'], ['31.05', '30.94', '40.32', '39.01', '55.57', '56.19'], ['35.71', '32.88', '55.45', '49.88', '60.61', '55.84'], ['36.82', '33.91', '58.73', '51.13', '62.35', '57.16'], ['-', '88.9', '-', '88.9', '-', '88.9']]
column
['IHdev-Acc. (%)', 'IHtest-Acc. (%)', 'IHdev-Acc. (%)', 'IHtest-Acc. (%)', 'IHdev-Acc. (%)', 'IHtest-Acc. (%)']
['GPT-KAGNET', 'BERT-BASE-KAGNET', 'BERT-LARGE-KAGNET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>10(%) of Ihtrain || IHdev-Acc. (%)</th> <th>10(%) of Ihtrain || IHtest-Acc. (%)</th> <th>50(%) of Ihtrain || IHdev-Acc. (%)</th> <th>50(%) of Ihtrain || IHtest-Acc. (%)</th> <th>100(%) of Ihtrain || IHdev-Acc. (%)</th> <th>100(%) of Ihtrain || IHtest-Acc. (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Random guess</td> <td>20</td> <td>20</td> <td>20</td> <td>20</td> <td>20</td> <td>20</td> </tr> <tr> <td>Model || GPT-FINETUNING</td> <td>27.55</td> <td>26.51</td> <td>32.46</td> <td>31.28</td> <td>47.35</td> <td>45.58</td> </tr> <tr> <td>Model || GPT-KAGNET</td> <td>28.13</td> <td>26.98</td> <td>33.72</td> <td>32.33</td> <td>48.95</td> <td>46.79</td> </tr> <tr> <td>Model || BERT-BASE-FINETUNING</td> <td>30.11</td> <td>29.78</td> <td>38.66</td> <td>36.83</td> <td>53.48</td> <td>53.26</td> </tr> <tr> <td>Model || BERT-BASE-KAGNET</td> <td>31.05</td> <td>30.94</td> <td>40.32</td> <td>39.01</td> <td>55.57</td> <td>56.19</td> </tr> <tr> <td>Model || BERT-LARGE-FINETUNING</td> <td>35.71</td> <td>32.88</td> <td>55.45</td> <td>49.88</td> <td>60.61</td> <td>55.84</td> </tr> <tr> <td>Model || BERT-LARGE-KAGNET</td> <td>36.82</td> <td>33.91</td> <td>58.73</td> <td>51.13</td> <td>62.35</td> <td>57.16</td> </tr> <tr> <td>Model || Human Performance</td> <td>-</td> <td>88.9</td> <td>-</td> <td>88.9</td> <td>-</td> <td>88.9</td> </tr> </tbody></table>
Table 1
table_1
D19-1282
6
emnlp2019
We conduct the experiments with our in-house splits to investigate whether our KAGNET can also work well on other universal language encoders (GPT and BERT-BASE), particularly with different fractions of the dataset (say 10%, 50%, 100% of the training data). Table 1 shows that our KAGNET-based methods using fixed pre-trained language encoders outperform fine-tuning themselves in all settings. Furthermore, we find that the improvements in a small data situation (10%) are relatively limited, and we believe an important future research direction is thus few-shot learning for commonsense reasoning.
[2, 1, 1]
['We conduct the experiments with our in-house splits to investigate whether our KAGNET can also work well on other universal language encoders (GPT and BERT-BASE), particularly with different fractions of the dataset (say 10%, 50%, 100% of the training data).', 'Table 1 shows that our KAGNET-based methods using fixed pre-trained language encoders outperform fine-tuning themselves in all settings.', 'Furthermore, we find that the improvements in a small data situation (10%) are relatively limited, and we believe an important future research direction is thus few-shot learning for commonsense reasoning.']
[None, ['GPT-KAGNET', 'BERT-BASE-KAGNET', 'BERT-LARGE-KAGNET', 'GPT-FINETUNING', 'BERT-BASE-FINETUNING', 'BERT-LARGE-FINETUNING'], ['10(%) of Ihtrain']]
1
D19-1284table_4
Results on WIKISQL. We compare accuracy under weakly-supervised and fully-supervised settings. Our method outperforms previous weakly-supervised methods and most published fully-supervised methods.
3
[['Model', 'Weakly-supervised setting', 'REINFORCE (Williams, 1992)'], ['Model', 'Weakly-supervised setting', 'Iterative ML (Liang et al., 2017)'], ['Model', 'Weakly-supervised setting', 'Hard EM (Liang et al., 2018)'], ['Model', 'Weakly-supervised setting', 'Beam-based MML (Liang et al., 2018)'], ['Model', 'Weakly-supervised setting', 'MAPO (Liang et al., 2018)'], ['Model', 'Weakly-supervised setting', 'MAPOX (Agarwal et al., 2019)'], ['Model', 'Weakly-supervised setting', 'MAPOX+MeRL (Agarwal et al., 2019)'], ['Model', 'Weakly-supervised setting', 'MML'], ['Model', 'Weakly-supervised setting', 'Ours'], ['Model', 'Fully-supervised setting', 'SQLNet (Xu et al., 2018)'], ['Model', 'Fully-supervised setting', 'TypeSQL (Yu et al., 2018b)'], ['Model', 'Fully-supervised setting', 'Coarse2Fine (Dong and Lapata, 2018)'], ['Model', 'Fully-supervised setting', 'SQLova (Hwang et al., 2019)'], ['Model', 'Fully-supervised setting', 'X-SQL (He et al., 2019)']]
2
[['Accuracy', 'dev'], ['Accuracy', 'test']]
[['< 10', '-'], ['70.1', '-'], ['70.2', '-'], ['70.7', '-'], ['71.8', '72.4'], ['74.5', '74.2'], ['74.9', '74.8'], ['70.6', '70.5'], ['84.4', '83.9'], ['69.8', '68'], ['74.5', '73.5'], ['79', '78.5'], ['87.2', '86.2'], ['89.5', '88.7']]
column
['Accuracy', 'Accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || dev</th> <th>Accuracy || test</th> </tr> </thead> <tbody> <tr> <td>Model || Weakly-supervised setting || REINFORCE (Williams, 1992)</td> <td>&lt; 10</td> <td>-</td> </tr> <tr> <td>Model || Weakly-supervised setting || Iterative ML (Liang et al., 2017)</td> <td>70.1</td> <td>-</td> </tr> <tr> <td>Model || Weakly-supervised setting || Hard EM (Liang et al., 2018)</td> <td>70.2</td> <td>-</td> </tr> <tr> <td>Model || Weakly-supervised setting || Beam-based MML (Liang et al., 2018)</td> <td>70.7</td> <td>-</td> </tr> <tr> <td>Model || Weakly-supervised setting || MAPO (Liang et al., 2018)</td> <td>71.8</td> <td>72.4</td> </tr> <tr> <td>Model || Weakly-supervised setting || MAPOX (Agarwal et al., 2019)</td> <td>74.5</td> <td>74.2</td> </tr> <tr> <td>Model || Weakly-supervised setting || MAPOX+MeRL (Agarwal et al., 2019)</td> <td>74.9</td> <td>74.8</td> </tr> <tr> <td>Model || Weakly-supervised setting || MML</td> <td>70.6</td> <td>70.5</td> </tr> <tr> <td>Model || Weakly-supervised setting || Ours</td> <td>84.4</td> <td>83.9</td> </tr> <tr> <td>Model || Fully-supervised setting || SQLNet (Xu et al., 2018)</td> <td>69.8</td> <td>68</td> </tr> <tr> <td>Model || Fully-supervised setting || TypeSQL (Yu et al., 2018b)</td> <td>74.5</td> <td>73.5</td> </tr> <tr> <td>Model || Fully-supervised setting || Coarse2Fine (Dong and Lapata, 2018)</td> <td>79</td> <td>78.5</td> </tr> <tr> <td>Model || Fully-supervised setting || SQLova (Hwang et al., 2019)</td> <td>87.2</td> <td>86.2</td> </tr> <tr> <td>Model || Fully-supervised setting || X-SQL (He et al., 2019)</td> <td>89.5</td> <td>88.7</td> </tr> </tbody></table>
Table 4
table_4
D19-1284
7
emnlp2019
Table 4 shows our training method significantly outperforms all the weakly-supervised learning algorithms, including a 10% gain over the previous state of the art. These results indicate that precomputing a solution set and training a model through hard updates play a significant role in the performance. Given that our method does not require SQL executions at training time (unlike MAPO), it provides a simpler, more effective and time-efficient strategy. Compared to previous models with full supervision, our results are still on par and outperform most of the published results.
[1, 2, 2, 1]
['Table 4 shows our training method significantly outperforms all the weakly-supervised learning algorithms, including a 10% gain over the previous state of the art.', 'These results indicate that precomputing a solution set and training a model through hard updates play a significant role in the performance.', 'Given that our method does not require SQL executions at training time (unlike MAPO), it provides a simpler, more effective and time-efficient strategy.', 'Compared to previous models with full supervision, our results are still on par and outperform most of the published results.']
[['Fully-supervised setting', 'Weakly-supervised setting'], None, None, ['Ours', 'Fully-supervised setting']]
1
D19-1291table_3
Results for Intra-turn Relation Prediction with Gold and Predicted Premises
2
[['Method', 'All relations'], ['Method', 'Menini et al. (2018)'], ['Method', 'Menini et al. (2018) + RST Features'], ['Method', 'RST Features'], ['Method', 'Morio and Fujita (2018)'], ['Method', 'BERT Devlin et al. (2019)'], ['Method', 'IMHO Context Fine-Tuned BERT'], ['Method', ' + RST Ensemble']]
2
[['Precision', 'Gold'], ['Precision', 'Pred'], ['Recall', 'Gold'], ['Recall', 'Pred'], ['F-Score', 'Gold'], ['F-Score', 'Pred']]
[['5', '-', '100', '-', '9', '-'], ['7', '5.9', '82', '80', '13', '11'], ['7.4', '6.1', '83', '81', '13.7', '11.4'], ['6.3', '5.7', '79.5', '77', '11.8', '10.6'], ['10', '-', '48.8', '-', '16.6', '-'], ['12', '11', '67', '60', '20.3', '18.5'], ['14.3', '13.2', '69', '65', '23.7', '21.8'], ['16.7', '15.5', '73', '70.2', '27.2', '25.4']]
column
['Precision', 'Precision', 'Recall', 'Recall', 'F-Score', 'F-Score']
['IMHO Context Fine-Tuned BERT', ' + RST Ensemble']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision || Gold</th> <th>Precision || Pred</th> <th>Recall || Gold</th> <th>Recall || Pred</th> <th>F-Score || Gold</th> <th>F-Score || Pred</th> </tr> </thead> <tbody> <tr> <td>Method || All relations</td> <td>5</td> <td>-</td> <td>100</td> <td>-</td> <td>9</td> <td>-</td> </tr> <tr> <td>Method || Menini et al. (2018)</td> <td>7</td> <td>5.9</td> <td>82</td> <td>80</td> <td>13</td> <td>11</td> </tr> <tr> <td>Method || Menini et al. (2018) + RST Features</td> <td>7.4</td> <td>6.1</td> <td>83</td> <td>81</td> <td>13.7</td> <td>11.4</td> </tr> <tr> <td>Method || RST Features</td> <td>6.3</td> <td>5.7</td> <td>79.5</td> <td>77</td> <td>11.8</td> <td>10.6</td> </tr> <tr> <td>Method || Morio and Fujita (2018)</td> <td>10</td> <td>-</td> <td>48.8</td> <td>-</td> <td>16.6</td> <td>-</td> </tr> <tr> <td>Method || BERT Devlin et al. (2019)</td> <td>12</td> <td>11</td> <td>67</td> <td>60</td> <td>20.3</td> <td>18.5</td> </tr> <tr> <td>Method || IMHO Context Fine-Tuned BERT</td> <td>14.3</td> <td>13.2</td> <td>69</td> <td>65</td> <td>23.7</td> <td>21.8</td> </tr> <tr> <td>Method || + RST Ensemble</td> <td>16.7</td> <td>15.5</td> <td>73</td> <td>70.2</td> <td>27.2</td> <td>25.4</td> </tr> </tbody></table>
Table 3
table_3
D19-1291
7
emnlp2019
Intra-turn Relations. We report the results of our binary classification task in Table 3 in terms of Precision, Recall and F-score for the “true” class, i.e., when a relation is present. We report results given both gold premises and predicted premises (using our best model from Section 5.1). Our best results are obtained from ensembling the RST classifier with BERT fine-tuned on IMHO+context, a statistically significant (p < 0.001) improvement over all other models. We obtain comparable performance to previous work on relation prediction in other argumentative datasets (Niculae et al., 2017; Morio and Fujita, 2018).
[2, 1, 1, 1, 1]
['Intra-turn Relations.', 'We report the results of our binary classification task in Table 3 in terms of Precision, Recall and F-score for the “true” class, i.e., when a relation is present.', 'We report results given both gold premises and predicted premises (using our best model from Section 5.1).', 'Our best results are obtained from ensembling the RST classifier with BERT fine-tuned on IMHO+context, a statistically significant (p < 0.001) improvement over all other models.', 'We obtain comparable performance to previous work on relation prediction in other argumentative datasets (Niculae et al., 2017; Morio and Fujita, 2018).']
[None, ['Precision', 'Recall', 'F-Score'], ['Gold', 'Pred'], [' + RST Ensemble'], ['IMHO Context Fine-Tuned BERT', 'Morio and Fujita (2018)']]
1
D19-1294table_2
Performance for classifying review segments as good or bad for recommendation justification.
2
[['Method', 'BOW-Xgboost'], ['Method', 'CNN'], ['Method', 'LSTM-MaxPool'], ['Method', 'BERT'], ['Method', 'BERT-SA (one epoch)'], ['Method', 'BERT-SA (three epoch)']]
1
[['F1'], ['Recall'], ['Precision']]
[['0.559', '0.679', '0.475'], ['0.644', '0.596', '0.7'], ['0.675', '0.703', '0.65'], ['0.747', '0.7', '0.8'], ['0.481', '0.975', '0.32'], ['0.491', '1', '0.325']]
column
['F1', 'Recall', 'Precision']
['BERT-SA (three epoch)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>Recall</th> <th>Precision</th> </tr> </thead> <tbody> <tr> <td>Method || BOW-Xgboost</td> <td>0.559</td> <td>0.679</td> <td>0.475</td> </tr> <tr> <td>Method || CNN</td> <td>0.644</td> <td>0.596</td> <td>0.7</td> </tr> <tr> <td>Method || LSTM-MaxPool</td> <td>0.675</td> <td>0.703</td> <td>0.65</td> </tr> <tr> <td>Method || BERT</td> <td>0.747</td> <td>0.7</td> <td>0.8</td> </tr> <tr> <td>Method || BERT-SA (one epoch)</td> <td>0.481</td> <td>0.975</td> <td>0.32</td> </tr> <tr> <td>Method || BERT-SA (three epoch)</td> <td>0.491</td> <td>1</td> <td>0.325</td> </tr> </tbody></table>
Table 2
table_2
D19-1294
3
emnlp2019
Table 2 presents results for our binary classification task. The BERT classifier has higher F1-score and precision than the other classifiers. The BERT-SA model after three epochs only achieves an F1 score of 0.491, which confirms the difference between sentiment analysis and our good/bad task, i.e., even if the segment has positive sentiment, it might not be suitable as a justification.
[1, 1, 1]
['Table 2 presents results for our binary classification task.', 'The BERT classifier has higher F1-score and precision than the other classifiers.', 'The BERT-SA model after three epochs only achieves an F1 score of 0.491, which confirms the difference between sentiment analysis and our good/bad task, i.e., even if the segment has positive sentiment, it might not be suitable as a justification.']
[None, ['BERT', 'F1', 'Precision'], ['BERT-SA (three epoch)', 'F1']]
1
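The three columns in the table above are tied together by the standard F1 identity, F1 = 2PR / (P + R), which makes a handy sanity check when transcribing such tables. Checked here against the BERT and BERT-SA (three epoch) rows:

```python
def f1(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# BERT row: P=0.8, R=0.7 -> reported F1 is 0.747.
print(round(f1(0.8, 0.7), 3))    # 0.747
# BERT-SA (three epoch) row: P=0.325, R=1.0 -> reported F1 is 0.491.
print(round(f1(0.325, 1.0), 3))  # 0.491
```

Both reported cells match the identity to three decimal places, as the BERT-SA row illustrates, perfect recall with low precision still yields a low F1.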
D19-1296table_1
Performance results of the compared methods.
1
[['SECTION'], ['INFOBOX'], ['RELATED'], ['ORACLE RE'], ['PROP']]
1
[['Acc'], ['Prec'], ['Rec'], ['F1']]
[['50.56', '100', '1.12', '2.21'], ['53.71', '100', '7.41', '13.81'], ['68.86', '66.23', '76.97', '71.2'], ['75.89', '100', '51.77', '68.22'], ['81.45', '98.28', '64.02', '77.53']]
column
['Acc', 'Prec', 'Rec', 'F1']
['PROP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> <th>Prec</th> <th>Rec</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>SECTION</td> <td>50.56</td> <td>100</td> <td>1.12</td> <td>2.21</td> </tr> <tr> <td>INFOBOX</td> <td>53.71</td> <td>100</td> <td>7.41</td> <td>13.81</td> </tr> <tr> <td>RELATED</td> <td>68.86</td> <td>66.23</td> <td>76.97</td> <td>71.2</td> </tr> <tr> <td>ORACLE RE</td> <td>75.89</td> <td>100</td> <td>51.77</td> <td>68.22</td> </tr> <tr> <td>PROP</td> <td>81.45</td> <td>98.28</td> <td>64.02</td> <td>77.53</td> </tr> </tbody></table>
Table 1
table_1
D19-1296
6
emnlp2019
Results . Table 1 lists the accuracy (Acc), precision (Prec), recall (Rec), and F1 of the compared methods. The SECTION method achieved 100% precision, which indicates that our technique of exploiting causality-describing sections in Wikipedia could accurately extract causalities. As the method’s recall indicates, however, it covered only a small portion of our target causalities. The INFOBOX method also achieved 100% precision, though its coverage was also quite limited. The RELATED method exhibited the highest recall, but the precision was unacceptably low for the subsequent manual labor that would be required to construct the CKB. The ORACLE RE method achieved 100% precision by design. Its recall was rather low because 67.3% of the entity pairs in the data (§3.1) consisted of entities that did NOT co-occur in a sentence. This means that most RE methods that work sentence-wise will miss a large portion of causalities, regardless of their accuracy. Finally, PROP achieved the best F1 score, though its recall still had room for improvement. The fact that PROP outperformed the baselines, especially ORACLE RE, clearly shows the effectiveness of our method.
[2, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Results .', 'Table 1 lists the accuracy (Acc), precision (Prec), recall (Rec), and F1 of the compared methods.', 'The SECTION method achieved 100% precision, which indicates that our technique of exploiting causality-describing sections in Wikipedia could accurately extract causalities.', 'As the method’s recall indicates, however, it covered only a small portion of our target causalities.', 'The INFOBOX method also achieved 100% precision, though its coverage was also quite limited.', 'The RELATED method exhibited the highest recall, but the precision was unacceptably low for the subsequent manual labor that would be required to construct the CKB.', 'The ORACLE RE method achieved 100% precision by design. Its recall was rather low because 67.3% of the entity pairs in the data (§3.1) consisted of entities that did NOT co-occur in a sentence.', 'This means that most RE methods that work sentence-wise will miss a large portion of causalities, regardless of their accuracy.', 'Finally, PROP achieved the best F1 score, though its recall still had room for improvement.', 'The fact that PROP outperformed the baselines, especially ORACLE RE, clearly shows the effectiveness of our method.']
[None, ['Acc', 'Prec', 'Rec', 'F1'], ['SECTION', 'Prec'], None, ['INFOBOX', 'Prec'], ['RELATED'], ['ORACLE RE', 'Prec'], ['ORACLE RE'], ['PROP', 'F1'], ['PROP', 'ORACLE RE']]
1
D19-1298table_2
Results on the arXiv dataset. For models with an ∗, we report results from (Cohan et al., 2018). Models are traditional extractive in the first block, neural abstractive in the second block, and neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm; see Section 4.2. Results that are not significantly distinguished from the best systems are bold.
2
[['Model', 'SumBasic*'], ['Model', 'LSA*'], ['Model', 'LexRank*'], ['Model', 'Attn-Seq2Seq*'], ['Model', 'Pntr-Gen-Seq2Seq*'], ['Model', 'Discourse-aware*'], ['Model', 'Baseline'], ['Model', 'Cheng & Lapata'], ['Model', 'SummaRuNNer'], ['Model', 'Ours-attentive context'], ['Model', 'Ours-concat'], ['Model', 'Lead'], ['Model', 'Oracle']]
1
[['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L'], ['METEOR']]
[['29.47', '6.95', '26.3', '-'], ['29.91', '7.42', '25.67', '-'], ['33.85', '10.73', '28.99', '-'], ['29.3', '6', '25.56', '-'], ['32.06', '9.04', '25.16', '-'], ['35.8', '11.05', '31.8', '-'], ['42.91', '16.65', '28.53', '21.35'], ['42.24', '15.97', '27.88', '20.97'], ['42.81', '16.52', '28.23', '21.35'], ['43.58', '17.37', '29.3', '21.71'], ['43.62', '17.36', '29.14', '21.78'], ['33.66', '8.94', '22.19', '16.45'], ['53.88', '23.05', '34.9', '24.11']]
column
['ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'METEOR']
['Ours-attentive context', 'Ours-concat']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || SumBasic*</td> <td>29.47</td> <td>6.95</td> <td>26.3</td> <td>-</td> </tr> <tr> <td>Model || LSA*</td> <td>29.91</td> <td>7.42</td> <td>25.67</td> <td>-</td> </tr> <tr> <td>Model || LexRank*</td> <td>33.85</td> <td>10.73</td> <td>28.99</td> <td>-</td> </tr> <tr> <td>Model || Attn-Seq2Seq*</td> <td>29.3</td> <td>6</td> <td>25.56</td> <td>-</td> </tr> <tr> <td>Model || Pntr-Gen-Seq2Seq*</td> <td>32.06</td> <td>9.04</td> <td>25.16</td> <td>-</td> </tr> <tr> <td>Model || Discourse-aware*</td> <td>35.8</td> <td>11.05</td> <td>31.8</td> <td>-</td> </tr> <tr> <td>Model || Baseline</td> <td>42.91</td> <td>16.65</td> <td>28.53</td> <td>21.35</td> </tr> <tr> <td>Model || Cheng &amp; Lapata</td> <td>42.24</td> <td>15.97</td> <td>27.88</td> <td>20.97</td> </tr> <tr> <td>Model || SummaRuNNer</td> <td>42.81</td> <td>16.52</td> <td>28.23</td> <td>21.35</td> </tr> <tr> <td>Model || Ours-attentive context</td> <td>43.58</td> <td>17.37</td> <td>29.3</td> <td>21.71</td> </tr> <tr> <td>Model || Ours-concat</td> <td>43.62</td> <td>17.36</td> <td>29.14</td> <td>21.78</td> </tr> <tr> <td>Model || Lead</td> <td>33.66</td> <td>8.94</td> <td>22.19</td> <td>16.45</td> </tr> <tr> <td>Model || Oracle</td> <td>53.88</td> <td>23.05</td> <td>34.9</td> <td>24.11</td> </tr> </tbody></table>
Table 2
table_2
D19-1298
7
emnlp2019
The performance of all models on arXiv and PubMed is shown in Table 2 and Table 3, respectively. Following the work (Kedzie et al., 2018), we use approximate randomization as the statistical significance test method (Riezler and Maxwell, 2005) with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 (p < 0.01). As we can see in these tables, on both datasets, the neural extractive models outperform the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L. Presumably, this is due to the neural training process, which relies on a gold standard based on ROUGE-1. Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work. Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries. Compared with other neural extractive models, our models (both with attentive context and concatenation decoder) have better performances on all three ROUGE scores, as well as METEOR. In particular, the improvements over the Baseline model show that a combination of local and global contextual information does help to identify the most important sentences (more on this in the next section). Interestingly, just the Baseline model already achieves a slightly better performance than previous works, possibly because the auto-regressive approach used in those models is even more detrimental for long documents.
[2, 2, 1, 2, 2, 1, 1, 2, 1]
['The performance of all models on arXiv and PubMed is shown in Table 2 and Table 3, respectively.', 'Following the work (Kedzie et al., 2018), we use approximate randomization as the statistical significance test method (Riezler and Maxwell, 2005) with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 (p < 0.01).', 'As we can see in these tables, on both datasets, the neural extractive models outperform the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L.', 'Presumably, this is due to the neural training process, which relies on a gold standard based on ROUGE-1.', 'Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work.', 'Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries.', 'Compared with other neural extractive models, our models (both with attentive context and concatenation decoder) have better performances on all three ROUGE scores, as well as METEOR.', 'In particular, the improvements over the Baseline model show that a combination of local and global contextual information does help to identify the most important sentences (more on this in the next section).', 'Interestingly, just the Baseline model already achieves a slightly better performance than previous works, possibly because the auto-regressive approach used in those models is even more detrimental for long documents.']
[None, None, ['Baseline', 'Cheng & Lapata', 'SummaRuNNer', 'Ours-attentive context', 'Ours-concat', 'SumBasic*', 'LSA*', 'LexRank*', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['ROUGE-1'], None, ['Baseline', 'Cheng & Lapata', 'SummaRuNNer', 'Ours-attentive context', 'Ours-concat', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'Attn-Seq2Seq*', 'Pntr-Gen-Seq2Seq*', 'Discourse-aware*'], ['Baseline', 'Cheng & Lapata', 'SummaRuNNer', 'Ours-attentive context', 'Ours-concat', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'METEOR'], ['Baseline'], ['Baseline']]
1
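The significance test cited in this record, approximate randomization with a Bonferroni correction (Riezler and Maxwell, 2005), can be sketched over paired per-document scores. This is a hedged illustration, not the authors' implementation; the function name and the smoothed p-value are my choices:

```python
import random

def approx_randomization(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided approximate randomization test on paired per-document scores.

    Under the null hypothesis the system labels are exchangeable, so we
    randomly swap each pair and count how often the shuffled difference in
    mean scores is at least as large as the observed difference.
    """
    n = len(scores_a)
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    hits = 0
    for _ in range(trials):
        sa = sb = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:  # swap the pair with probability 0.5
                a, b = b, a
            sa += a
            sb += b
        if abs(sa - sb) / n >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # smoothed p-value
```

A Bonferroni correction for k pairwise system comparisons then amounts to requiring p < 0.01 / k rather than p < 0.01.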
D19-1300table_1
Results on the combined CNN/DailyMail test set. We report F1 scores of ROUGE-1 (R1), ROUGE2 (R2), and ROUGE-L (RL). The result of Lead-3 is taken from Dong et al. (2018).
2
[['Model', 'Lead-3'], ['Model', 'SummaRuNNer'], ['Model', 'DQN'], ['Model', 'Refresh'], ['Model', 'RNES'], ['Model', 'BANDITSUM'], ['Model', 'HER']]
2
[[' ROUGE', 'R1'], ['ROUGE', ' R2'], ['ROUGE', ' RL']]
[['40', '17.5', '36.2'], ['39.6', '16.2', '35.3'], ['39.4', '16.1', '35.6'], ['40', '18.2', '36.6'], ['41.3', '18.9', '37.6'], ['41.5', '18.7', '37.6'], ['42.3', '18.9', '37.9']]
column
['R1', 'R2', 'RL']
['HER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE || R1</th> <th>ROUGE || R2</th> <th>ROUGE || RL</th> </tr> </thead> <tbody> <tr> <td>Model || Lead-3</td> <td>40</td> <td>17.5</td> <td>36.2</td> </tr> <tr> <td>Model || SummaRuNNer</td> <td>39.6</td> <td>16.2</td> <td>35.3</td> </tr> <tr> <td>Model || DQN</td> <td>39.4</td> <td>16.1</td> <td>35.6</td> </tr> <tr> <td>Model || Refresh</td> <td>40</td> <td>18.2</td> <td>36.6</td> </tr> <tr> <td>Model || RNES</td> <td>41.3</td> <td>18.9</td> <td>37.6</td> </tr> <tr> <td>Model || BANDITSUM</td> <td>41.5</td> <td>18.7</td> <td>37.6</td> </tr> <tr> <td>Model || HER</td> <td>42.3</td> <td>18.9</td> <td>37.9</td> </tr> </tbody></table>
Table 1
table_1
D19-1300
7
emnlp2019
4 Experimental Results . 4.1 Quantitative Analysis . We first report the ROUGE metrics on the combined CNN/DailyMail test sets in Table 1 and the separate results in Table 2. We can draw several observations from these two tables. Firstly, our model generally performs the best and even surpasses 42 on the ROUGE-1 score on the combined CNN/DailyMail dataset. It also shows better results on the separate datasets. We argue that global and local features from rough reading can help extract summaries by capturing deep contextual relations, and the designed structure in careful reading makes it more flexible in selecting sentence sets. Hence a two-stage framework based on the human’s reading cognition is more appropriate for extractive summarization. Secondly, directly optimizing the evaluation metric by combining cross-entropy loss with rewards may improve the extractive results. RL-based methods, Refresh (Narayan et al., 2018) and RNES (Wu and Hu, 2018), perform better than the sequence labeling methods like SummaRuNNer (Nallapati et al., 2017). BANDITSUM (Dong et al., 2018) generally performs better than the other baselines, and it reports that framing extractive summarization as a contextual bandit is more suitable than a sequential labeling setting and also has more search space than other RL-based methods (Narayan et al., 2018; Yao et al., 2018; Wu and Hu, 2018).
[2, 2, 1, 1, 1, 2, 2, 2, 2, 1, 1]
['4 Experimental Results .', '4.1 Quantitative Analysis .', 'We first report the ROUGE metrics on the combined CNN/DailyMail test sets in Table 1 and the separate results in Table 2.', 'We can draw several observations from these two tables.', 'Firstly, our model generally performs the best and even surpasses 42 on the ROUGE-1 score on the combined CNN/DailyMail dataset.', 'It also shows better results on the separate datasets.', 'We argue that global and local features from rough reading can help extract summaries by capturing deep contextual relations, and the designed structure in careful reading makes it more flexible in selecting sentence sets.', 'Hence a two-stage framework based on the human’s reading cognition is more appropriate for extractive summarization.', 'Secondly, directly optimizing the evaluation metric by combining cross-entropy loss with rewards may improve the extractive results.', 'RL-based methods, Refresh (Narayan et al., 2018) and RNES (Wu and Hu, 2018), perform better than the sequence labeling methods like SummaRuNNer (Nallapati et al., 2017).', 'BANDITSUM (Dong et al., 2018) generally performs better than the other baselines, and it reports that framing extractive summarization as a contextual bandit is more suitable than a sequential labeling setting and also has more search space than other RL-based methods (Narayan et al., 2018; Yao et al., 2018; Wu and Hu, 2018).']
[None, None, [' ROUGE'], None, ['HER', 'R1'], None, None, None, None, ['Refresh', 'RNES'], ['BANDITSUM']]
1
D19-1300table_3
The results of ablation test on the test split of the combined CNN/DailyMail dataset. L and F are short for local net and rough reading.
2
[['Model', 'HER'], ['Model', 'HER-3'], ['Model', 'HER-3 w/o policy'], ['Model', 'HER-3 w/o policy&L'], ['Model', 'HER-3 w/o policy&F']]
2
[['ROUGE', 'R1'], ['ROUGE', ' R2'], ['ROUGE', ' RL']]
[['42.3', '18.9', '37.9'], ['42', '18.5', '37.6'], ['41.7', '18.3', '37.1'], ['41.2', '18.4', '37'], ['40.6', '18.2', '36.9']]
column
['R1', 'R2', 'RL']
['HER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE || R1</th> <th>ROUGE || R2</th> <th>ROUGE || RL</th> </tr> </thead> <tbody> <tr> <td>Model || HER</td> <td>42.3</td> <td>18.9</td> <td>37.9</td> </tr> <tr> <td>Model || HER-3</td> <td>42</td> <td>18.5</td> <td>37.6</td> </tr> <tr> <td>Model || HER-3 w/o policy</td> <td>41.7</td> <td>18.3</td> <td>37.1</td> </tr> <tr> <td>Model || HER-3 w/o policy&amp;L</td> <td>41.2</td> <td>18.4</td> <td>37</td> </tr> <tr> <td>Model || HER-3 w/o policy&amp;F</td> <td>40.6</td> <td>18.2</td> <td>36.9</td> </tr> </tbody></table>
Table 3
table_3
D19-1300
7
emnlp2019
4.2 Ablation Test. Next, we conduct an ablation test by removing the modules of the proposed HER step by step. Firstly, we replace the automatic termination mechanism with a fixed extracting strategy that always selects three sentences for every document and we present the model as HER-3. Based on HER-3, we also remove the bandit policy, local net, and general net gradually, and denote them as HER-3 w/o policy, HER-3 w/o policy & local net and HER-3 w/o policy & rough reading individually. The results are reported in Table 3 and prove the effectiveness of each proposed module. Firstly, HER constructed with an automatic termination mechanism is more flexible and reliable in extracting varying numbers of sentences for different documents. Secondly, HER uses ε-greedy to select sentences in order to raise the exploration chances of discovering important but easily ignored information. Thirdly, general cognition from the rough reading process is useful in extractive summarization.
[2, 1, 2, 1, 1, 1, 2, 2]
['4.2 Ablation Test.', 'Next, we conduct an ablation test by removing the modules of the proposed HER step by step.', 'Firstly, we replace the automatic termination mechanism with a fixed extracting strategy that always selects three sentences for every document and we present the model as HER-3.', 'Based on HER-3, we also remove the bandit policy, local net, and general net gradually, and denote them as HER-3 w/o policy, HER-3 w/o policy & local net and HER-3 w/o policy & rough reading individually.', 'The results are reported in Table 3 and prove the effectiveness of each proposed module.', 'Firstly, HER constructed with an automatic termination mechanism is more flexible and reliable in extracting varying numbers of sentences for different documents.', 'Secondly, HER uses ε-greedy to select sentences in order to raise the exploration chances of discovering important but easily ignored information.', 'Thirdly, general cognition from the rough reading process is useful in extractive summarization.']
[None, ['HER'], ['HER-3'], ['HER-3', 'HER-3 w/o policy', 'HER-3 w/o policy&L', 'HER-3 w/o policy&F'], None, ['HER'], None, None]
1
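The ε-greedy sentence selection mentioned in this record can be sketched in a few lines. This is a generic illustration of the strategy, not the HER implementation, and the function name is hypothetical:

```python
import random

def epsilon_greedy_select(scores, epsilon, rng=None):
    """With probability epsilon explore (uniform random sentence index);
    otherwise exploit (index of the highest-scoring sentence)."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.randrange(len(scores))
    return max(range(len(scores)), key=scores.__getitem__)
```

Exploration gives low-scoring but potentially important sentences a chance of being selected, which is the motivation stated in the record ("important but easily ignored information").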
D19-1301table_1
ROUGE scores on the English evaluation sets of both Gigaword and DUC2004. On Gigaword, the fulllength F-1 based ROUGE scores are reported. On DUC2004, the recall based ROUGE scores are reported. “-” denotes no score is available in that work.
2
[['System', 'ABS (Rush et al. 2015)'], ['System', 'ABS+ (Rush et al. 2015)'], ['System', 'RAS-Elman (Chopra et al. 2016)'], ['System', 'words-lvt5k-1sent (Nallapati et al. 2016)'], ['System', 'SEASSbeam (Zhou et al. 2017)'], ['System', 'RNNMRT (Ayana et al. 2016)'], ['System', 'Actor-Critic (Li et al. 2018)'], ['System', 'StructuredLoss (Edunov et al. 2018)'], ['System', 'DRGD (Li et al. 2017)'], ['System', 'ConvS2S (Gehring et al. 2017)'], ['System', 'ConvS2SReinforceTopic (Wang et al. 2018)'], ['System', 'FactAware (Cao et al. 2018)'], ['System', 'Transformer'], ['System', 'Transformer+ContrastiveAttention']]
2
[['Gigaword', 'R-1'], ['Gigaword', 'R-2'], ['Gigaword', 'R-L'], ['DUC2004', 'R-1'], ['DUC2004', 'R-2'], ['DUC2004', 'R-L']]
[['29.55', '11.32', '26.42', '26.55', '7.06', '22.05'], ['29.76', '11.88', '26.96', '28.18', '8.49', '23.81'], ['33.78', '15.97', '31.15', '28.97', '8.26', '24.06'], ['35.3', '16.64', '32.62', '28.61', '9.42', '25.24'], ['36.15', '17.54', '33.63', '29.21', '9.56', '25.51'], ['36.54', '16.59', '33.44', '30.41', '10.87', '26.79'], ['36.05', '17.35', '33.49', '29.41', '9.84', '25.85'], ['36.7', '17.88', '34.29', '-', '-', '-'], ['36.27', '17.57', '33.62', '31.79', '10.75', '27.48'], ['35.88', '17.48', '33.29', '30.44', '10.84', '26.9'], ['36.92', '18.29', '34.58', '31.15', '10.85', '27.68'], ['37.27', '17.65', '34.24', '-', '-', '-'], ['37.87', '18.69', '35.22', '31.38', '10.89', '27.18'], ['38.72', '19.09', '35.82', '32.22', '11.04', '27.59']]
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L']
['Transformer+ContrastiveAttention', 'Transformer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gigaword || R-1</th> <th>Gigaword || R-2</th> <th>Gigaword || R-L</th> <th>DUC2004 || R-1</th> <th>DUC2004 || R-2</th> <th>DUC2004 || R-L</th> </tr> </thead> <tbody> <tr> <td>System || ABS (Rush et al. 2015)</td> <td>29.55</td> <td>11.32</td> <td>26.42</td> <td>26.55</td> <td>7.06</td> <td>22.05</td> </tr> <tr> <td>System || ABS+ (Rush et al. 2015)</td> <td>29.76</td> <td>11.88</td> <td>26.96</td> <td>28.18</td> <td>8.49</td> <td>23.81</td> </tr> <tr> <td>System || RAS-Elman (Chopra et al. 2016)</td> <td>33.78</td> <td>15.97</td> <td>31.15</td> <td>28.97</td> <td>8.26</td> <td>24.06</td> </tr> <tr> <td>System || words-lvt5k-1sent (Nallapati et al. 2016)</td> <td>35.3</td> <td>16.64</td> <td>32.62</td> <td>28.61</td> <td>9.42</td> <td>25.24</td> </tr> <tr> <td>System || SEASSbeam (Zhou et al. 2017)</td> <td>36.15</td> <td>17.54</td> <td>33.63</td> <td>29.21</td> <td>9.56</td> <td>25.51</td> </tr> <tr> <td>System || RNNMRT (Ayana et al. 2016)</td> <td>36.54</td> <td>16.59</td> <td>33.44</td> <td>30.41</td> <td>10.87</td> <td>26.79</td> </tr> <tr> <td>System || Actor-Critic (Li et al. 2018)</td> <td>36.05</td> <td>17.35</td> <td>33.49</td> <td>29.41</td> <td>9.84</td> <td>25.85</td> </tr> <tr> <td>System || StructuredLoss (Edunov et al. 2018)</td> <td>36.7</td> <td>17.88</td> <td>34.29</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || DRGD (Li et al. 2017)</td> <td>36.27</td> <td>17.57</td> <td>33.62</td> <td>31.79</td> <td>10.75</td> <td>27.48</td> </tr> <tr> <td>System || ConvS2S (Gehring et al. 2017)</td> <td>35.88</td> <td>17.48</td> <td>33.29</td> <td>30.44</td> <td>10.84</td> <td>26.9</td> </tr> <tr> <td>System || ConvS2SReinforceTopic (Wang et al. 2018)</td> <td>36.92</td> <td>18.29</td> <td>34.58</td> <td>31.15</td> <td>10.85</td> <td>27.68</td> </tr> <tr> <td>System || FactAware (Cao et al. 2018)</td> <td>37.27</td> <td>17.65</td> <td>34.24</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Transformer</td> <td>37.87</td> <td>18.69</td> <td>35.22</td> <td>31.38</td> <td>10.89</td> <td>27.18</td> </tr> <tr> <td>System || Transformer+ContrastiveAttention</td> <td>38.72</td> <td>19.09</td> <td>35.82</td> <td>32.22</td> <td>11.04</td> <td>27.59</td> </tr> </tbody></table>
Table 1
table_1
D19-1301
6
emnlp2019
4.3 Results. 4.3.1 English Results. The experimental results on the English evaluation sets are listed in Table 1. We report the full-length F-1 scores of ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L) on the evaluation set of the annotated Gigaword, while reporting the recall-based scores of R-1, R-2, and R-L on the evaluation set of DUC2004 to follow the setting of the previous works. The results of our work are shown at the bottom of Table 1. The performances of the related works are reported in the upper part of Table 1 for comparison. ABS and ABS+ are the pioneer works of using neural models for abstractive text summarization. RAS-Elman extends ABS/ABS+ with an attentive CNN encoder; words-lvt5k-1sent uses a large vocabulary and linguistic features such as POS and NER tags. RNNMRT, Actor-Critic, and StructuredLoss are sequence-level training methods to overcome the problem of the usual teacher-forcing methods. DRGD uses a recurrent latent random model to improve summarization quality. FactAware generates summary words conditioned on both the source text and the fact descriptions extracted from OpenIE or dependencies. Besides the above RNN-based related works, CNN-based architectures of ConvS2S and ConvS2SReinforceTopic are included for comparison. Table 1 shows that we build a strong baseline using Transformer alone, which obtains the state-of-the-art performance on the Gigaword evaluation set and obtains comparable performance to the state-of-the-art on DUC2004. When we introduce the contrastive attention mechanism into Transformer, it significantly improves the performance of Transformer, and greatly advances the state-of-the-art on both the Gigaword evaluation set and DUC2004, as shown in the row of “Transformer+Contrastive Attention”.
[2, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1]
['4.3 Results.', '4.3.1 English Results.', 'The experimental results on the English evaluation sets are listed in Table 1.', 'We report the full-length F-1 scores of ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L) on the evaluation set of the annotated Gigaword, while reporting the recall-based scores of R-1, R-2, and R-L on the evaluation set of DUC2004 to follow the setting of the previous works.', 'The results of our work are shown at the bottom of Table 1.', 'The performances of the related works are reported in the upper part of Table 1 for comparison.', 'ABS and ABS+ are the pioneer works of using neural models for abstractive text summarization.', 'RAS-Elman extends ABS/ABS+ with an attentive CNN encoder; words-lvt5k-1sent uses a large vocabulary and linguistic features such as POS and NER tags.', 'RNNMRT, Actor-Critic, and StructuredLoss are sequence-level training methods to overcome the problem of the usual teacher-forcing methods.', 'DRGD uses a recurrent latent random model to improve summarization quality.', 'FactAware generates summary words conditioned on both the source text and the fact descriptions extracted from OpenIE or dependencies.', 'Besides the above RNN-based related works, CNN-based architectures of ConvS2S and ConvS2SReinforceTopic are included for comparison.', 'Table 1 shows that we build a strong baseline using Transformer alone, which obtains the state-of-the-art performance on the Gigaword evaluation set and obtains comparable performance to the state-of-the-art on DUC2004.', 'When we introduce the contrastive attention mechanism into Transformer, it significantly improves the performance of Transformer, and greatly advances the state-of-the-art on both the Gigaword evaluation set and DUC2004, as shown in the row of “Transformer+Contrastive Attention”.']
[None, None, None, ['R-1', 'R-2', 'R-L'], ['Transformer+ContrastiveAttention'], None, ['ABS (Rush et al. 2015)', 'ABS+ (Rush et al. 2015)'], ['RAS-Elman (Chopra et al. 2016)'], ['RNNMRT (Ayana et al. 2016)', 'Actor-Critic (Li et al. 2018)', 'StructuredLoss (Edunov et al. 2018)'], ['DRGD (Li et al. 2017)'], ['FactAware (Cao et al. 2018)'], ['ConvS2SReinforceTopic (Wang et al. 2018)', 'ConvS2S (Gehring et al. 2017)'], ['Transformer'], ['Transformer+ContrastiveAttention']]
1
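The record above distinguishes full-length F-1 ROUGE (Gigaword) from recall-based ROUGE (DUC2004). A simplified ROUGE-1 sketch makes the difference concrete; this is unigram counting only, without the stemming and other details of the official toolkit:

```python
from collections import Counter

def rouge1(candidate_tokens, reference_tokens):
    """Simplified ROUGE-1: clipped unigram overlap as recall, precision, F1."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())  # per-unigram min of the two counts
    recall = overlap / sum(ref.values()) if ref else 0.0
    precision = overlap / sum(cand.values()) if cand else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1
```

The recall variant rewards covering the reference regardless of summary length (hence DUC2004's fixed length cap), while the F-1 variant also penalizes verbose candidates.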
D19-1301table_2
The full-length F-1 based ROUGE scores on the Chinese evaluation set of LCSTS.
2
[['System', 'RNN context (Hu et al., 2015)'], ['System', 'CopyNet (Gu et al., 2016)'], ['System', 'RNNMRT (Ayana et al., 2016)'], ['System', 'RNNdistraction (Chen et al., 2016)'], ['System', 'DRGD (Li et al., 2017)'], ['System', 'Actor-Critic (Li et al., 2018)'], ['System', 'Global (Lin et al., 2018)'], ['System', 'Transformer'], ['System', 'Transformer+ContrastiveAttention']]
1
[[' R-1'], [' R-2'], [' R-L']]
[['29.9', '17.4', '27.2'], ['34.4', '21.6', '31.3'], ['38.2', '25.2', '35.4'], ['35.2', '22.6', '32.5'], ['36.99', '24.15', '34.21'], ['37.51', '24.68', '35.02'], ['39.4', '26.9', '36.5'], ['41.93', '28.28', '38.32'], ['44.35', '30.65', '40.58']]
column
['R-1', 'R-2', 'R-L']
['Transformer+ContrastiveAttention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>System || RNN context (Hu et al., 2015)</td> <td>29.9</td> <td>17.4</td> <td>27.2</td> </tr> <tr> <td>System || CopyNet (Gu et al., 2016)</td> <td>34.4</td> <td>21.6</td> <td>31.3</td> </tr> <tr> <td>System || RNNMRT (Ayana et al., 2016)</td> <td>38.2</td> <td>25.2</td> <td>35.4</td> </tr> <tr> <td>System || RNNdistraction (Chen et al., 2016)</td> <td>35.2</td> <td>22.6</td> <td>32.5</td> </tr> <tr> <td>System || DRGD (Li et al., 2017)</td> <td>36.99</td> <td>24.15</td> <td>34.21</td> </tr> <tr> <td>System || Actor-Critic (Li et al., 2018)</td> <td>37.51</td> <td>24.68</td> <td>35.02</td> </tr> <tr> <td>System || Global (Lin et al., 2018)</td> <td>39.4</td> <td>26.9</td> <td>36.5</td> </tr> <tr> <td>System || Transformer</td> <td>41.93</td> <td>28.28</td> <td>38.32</td> </tr> <tr> <td>System || Transformer+ContrastiveAttention</td> <td>44.35</td> <td>30.65</td> <td>40.58</td> </tr> </tbody></table>
Table 2
table_2
D19-1301
7
emnlp2019
4.3.2 Chinese Results. Table 2 presents the evaluation results on LCSTS. The upper rows list the performances of the related works, the bottom rows list the performances of our Transformer baseline and the integration of the contrastive attention mechanism into Transformer. We only take character sequences as source-summary pairs and evaluate the performance based on reference characters for strict comparison to the related works. Table 2 shows that Transformer also sets a strong baseline on LCSTS that surpasses the performances of the previous works. When Transformer is equipped with our proposed contrastive attention mechanism, the performance is significantly improved and drastically advances the state-of-the-art on LCSTS.
[2, 1, 2, 2, 1, 1]
['4.3.2 Chinese Results.', 'Table 2 presents the evaluation results on LCSTS.', 'The upper rows list the performances of the related works, the bottom rows list the performances of our Transformer baseline and the integration of the contrastive attention mechanism into Transformer.', 'We only take character sequences as source-summary pairs and evaluate the performance based on reference characters for strict comparison to the related works.', 'Table 2 shows that Transformer also sets a strong baseline on LCSTS that surpasses the performances of the previous works.', 'When Transformer is equipped with our proposed contrastive attention mechanism, the performance is significantly improved and drastically advances the state-of-the-art on LCSTS.']
[None, None, ['Transformer'], None, ['Transformer'], None]
1
D19-1307table_4
Human evaluation on extractive summaries. Our system receives significantly higher human ratings on average. “Best%”: in how many percentage of documents a system receives the highest human rating.
1
[['Avg. Human Rating'], ['Best%']]
1
[['Ours'], ['Refresh'], ['ExtAbsRL']]
[['2.52', '2.27', '1.66'], ['70', '33.3', '6.7']]
row
['Avg. Human Rating', 'Best%']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ours</th> <th>Refresh</th> <th>ExtAbsRL</th> </tr> </thead> <tbody> <tr> <td>Avg. Human Rating</td> <td>2.52</td> <td>2.27</td> <td>1.66</td> </tr> <tr> <td>Best%</td> <td>70</td> <td>33.3</td> <td>6.7</td> </tr> </tbody></table>
Table 4
table_4
D19-1307
9
emnlp2019
Table 4 presents the human evaluation results. Summaries generated by NeuralTD receive significantly higher human evaluation scores than those by Refresh and ExtAbsRL. Also, the average human rating for Refresh is significantly higher than that for ExtAbsRL.
[1, 1, 1]
['Table 4 presents the human evaluation results.', 'Summaries generated by NeuralTD receive significantly higher human evaluation scores than those by Refresh and ExtAbsRL.', 'Also, the average human rating for Refresh is significantly higher than that for ExtAbsRL.']
[None, ['Ours', ' Refresh', ' ExtAbsRL'], ['Avg. Human Rating', ' Refresh', ' ExtAbsRL']]
1
D19-1307table_5
Performance of ExtAbsRL with different reward functions, measured in terms of ROUGE (center) and human judgements (right). Using our learned reward yields significantly (p = 0.0057) higher average human rating. “Pref%”: in how many percentage of documents a system receives the higher human rating.
2
[['Reward', 'R-L (original)'], ['Reward', 'Learned (ours)']]
1
[['R-1'], [' R-2'], [' R-L'], [' Human'], [' Pref%']]
[['40.9', '17.8', '38.5', '1.75', '15'], ['39.2', '17.4', '37.5', '2.2', '75']]
column
['R-1', 'R-2', 'R-L', 'Human', 'Pref%']
['Learned (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> <th>Human</th> <th>Pref%</th> </tr> </thead> <tbody> <tr> <td>Reward || R-L (original)</td> <td>40.9</td> <td>17.8</td> <td>38.5</td> <td>1.75</td> <td>15</td> </tr> <tr> <td>Reward || Learned (ours)</td> <td>39.2</td> <td>17.4</td> <td>37.5</td> <td>2.2</td> <td>75</td> </tr> </tbody></table>
Table 5
table_5
D19-1307
9
emnlp2019
7.3 Abstractive Summarisation. Table 5 compares the ROUGE scores of using different rewards to train the extractor in ExtAbsRL (the abstractor is pre-trained, and is applied to rephrase the extracted sentences). Again, when ROUGE is used as rewards, the generated summaries have higher ROUGE scores. We randomly sampled 20 documents from the test set in CNN/DailyMail and asked three users to rate the quality of the two summaries generated with different rewards. We asked the users to consider not only the informativeness and conciseness of summaries, but also their grammaticality and faithfulness (whether the information in the summary is consistent with that in the original news). It is clear from Table 5 that using the learned reward helps the RL-based system generate summaries with significantly higher human ratings. Furthermore, we note that the overall human ratings for the abstractive summaries are lower than the extractive summaries (compared to Table 4). Qualitative analysis suggests that the poor overall rating may be caused by occasional information inconsistencies between a summary and its source text: for instance, a sentence in the source article reads “after Mayweather was almost two hours late for his workout , Pacquiao has promised to be on time”, but the generated summary outputs “Mayweather has promised to be on time for the fight”. High redundancy is another reason for the low human ratings: ExtAbsRL generates six summaries with redundant sentences when applying ROUGE-L as reward, while the number drops to two when the learned reward is applied.
[2, 1, 1, 2, 2, 1, 2, 1, 2]
['7.3 Abstractive Summarisation.', 'Table 5 compares the ROUGE scores of using different rewards to train the extractor in ExtAbsRL (the abstractor is pre-trained, and is applied to rephrase the extracted sentences).', 'Again, when ROUGE is used as rewards, the generated summaries have higher ROUGE scores.', 'We randomly sampled 20 documents from the test set in CNN/DailyMail and asked three users to rate the quality of the two summaries generated with different rewards.', 'We asked the users to consider not only the informativeness and conciseness of summaries, but also their grammaticality and faithfulness (whether the information in the summary is consistent with that in the original news).', 'It is clear from Table 5 that using the learned reward helps the RL-based system generate summaries with significantly higher human ratings.', 'Furthermore, we note that the overall human ratings for the abstractive summaries are lower than the extractive summaries (compared to Table 4).', 'Qualitative analysis suggests that the poor overall rating may be caused by occasional information inconsistencies between a summary and its source text: for instance, a sentence in the source article reads “after Mayweather was almost two hours late for his workout , Pacquiao has promised to be on time”, but the generated summary outputs “Mayweather has promised to be on time for the fight”.', 'High redundancy is another reason for the low human ratings: ExtAbsRL generates six summaries with redundant sentences when applying ROUGE-L as reward, while the number drops to two when the learned reward is applied.']
[None, None, ['Learned (ours)', 'R-1', ' R-2', ' R-L'], None, None, [' Human', 'R-L (original)'], [' Human'], None, None]
1
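ROUGE-L, reported throughout this record, scores a candidate text by the longest common subsequence (LCS) it shares with the reference. A minimal sentence-level sketch (plain whitespace tokenization, no stemming — an illustrative re-implementation, not the official ROUGE toolkit the papers use):

```python
def lcs_len(a, b):
    # classic O(len(a)*len(b)) dynamic program for LCS length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    # F1 over LCS length, with precision/recall relative to each side
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l_f1("the cat was on the mat", "the cat sat on the mat"), 3))  # → 0.833
```

Here the LCS is "the cat on the mat" (5 of 6 tokens on each side), so precision = recall = F1 = 5/6.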
D19-1308table_4
Comparison of single-expert selector with state-of-the-art abstractive summarization methods on CNN-DM. R stands for ROUGE (Lin, 2004)
2
[['Method', 'PG (See et al., 2017)'], ['Method', 'Bottom-Up (Gehrmann et al., 2018)'], ['Method', 'DCA (Celikyilmaz et al., 2018)'], ['Method', 'SELECTOR & 10-Beam PG (Ours)']]
1
[[' R-1'], [' R-2'], [' R-L']]
[['39.53', '17.28', '36.38'], ['41.22', '18.68', '38.34'], ['41.69', '19.47', '37.92'], ['41.72', '18.74', '38.79']]
column
['R-1', 'R-2', 'R-L']
['SELECTOR & 10-Beam PG (Ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Method || PG (See et al., 2017)</td> <td>39.53</td> <td>17.28</td> <td>36.38</td> </tr> <tr> <td>Method || Bottom-Up (Gehrmann et al., 2018)</td> <td>41.22</td> <td>18.68</td> <td>38.34</td> </tr> <tr> <td>Method || DCA (Celikyilmaz et al., 2018)</td> <td>41.69</td> <td>19.47</td> <td>37.92</td> </tr> <tr> <td>Method || SELECTOR &amp; 10-Beam PG (Ours)</td> <td>41.72</td> <td>18.74</td> <td>38.79</td> </tr> </tbody></table>
Table 4
table_4
D19-1308
8
emnlp2019
Comparison with State-of-the-art . Table 4 compares the performance of SELECTOR with the state-of-the-art bottom-up content selection of Gehrmann et al. (2018) in abstractive summarization. SELECTOR passes focus embeddings at the decoding step, whereas the bottom-up selection method only uses the masked words for the copy mechanism. We set K, the number of mixtures of SELECTOR, to 1 to directly compare it with the previous work (Bottom-Up (Gehrmann et al., 2018)). We observe that SELECTOR not only outperforms Bottom-Up in every metric, but also achieves a new state-of-the-art ROUGE-1 and ROUGE-L on CNN-DM. Moreover, our method scores state-of-the-art BLEU-4 in question generation on SQuAD (Table 1).
[2, 1, 2, 1, 1, 0]
['Comparison with State-of-the-art .', 'Table 4 compares the performance of SELECTOR with the state-of-the-art bottom-up content selection of Gehrmann et al. (2018) in abstractive summarization.', 'SELECTOR passes focus embeddings at the decoding step, whereas the bottom-up selection method only uses the masked words for the copy mechanism.', 'We set K, the number of mixtures of SELECTOR, to 1 to directly compare it with the previous work (Bottom-Up (Gehrmann et al., 2018)).', 'We observe that SELECTOR not only outperforms Bottom-Up in every metric, but also achieves a new state-of-the-art ROUGE-1 and ROUGE-L on CNN-DM.', 'Moreover, our method scores state-of-the-art BLEU-4 in question generation on SQuAD (Table 1).']
[None, ['SELECTOR & 10-Beam PG (Ours)'], ['SELECTOR & 10-Beam PG (Ours)'], ['SELECTOR & 10-Beam PG (Ours)', 'Bottom-Up (Gehrmann et al., 2018)'], ['SELECTOR & 10-Beam PG (Ours)', 'Bottom-Up (Gehrmann et al., 2018)', ' R-1', ' R-L'], None]
1
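The claims in this record's description can be checked mechanically against its contents field: taking the per-metric argmax over the four systems in Table 4 of D19-1308 confirms that SELECTOR is best on R-1 and R-L while DCA keeps the best R-2. A small sketch (values transcribed from the record; method names shortened):

```python
# ROUGE scores (R-1, R-2, R-L) from Table 4 of D19-1308
rows = {
    "PG": (39.53, 17.28, 36.38),
    "Bottom-Up": (41.22, 18.68, 38.34),
    "DCA": (41.69, 19.47, 37.92),
    "SELECTOR & 10-Beam PG": (41.72, 18.74, 38.79),
}
# best-scoring method for each of the three metrics
best = [max(rows, key=lambda m: rows[m][i]) for i in range(3)]
print(best)  # → ['SELECTOR & 10-Beam PG', 'DCA', 'SELECTOR & 10-Beam PG']
```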
D19-1308table_5
Training time: Comparison of training time on CNN-DM. See 4.6 for implementation details.
2
[['Method', 'PG'], ['Method', '3-M. Decoder'], ['Method', '5-M. Decoder'], ['Method', 'SELECTOR (Ours)'], ['Method', '3-M. SELECTOR (Ours)'], ['Method', '5-M. SELECTOR (Ours)']]
1
[[' Training time (ms. / step)']]
[['641.2'], [' 1804.1 (× 2.81)'], [' 2367.6 (× 4.37)'], [' 692.1 (× 1.08)'], [' 740.8 (× 1.16)'], [' 747.6 (× 1.17)']]
column
['Training time (ms. / step)']
['SELECTOR (Ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training time (ms. / step)</th> </tr> </thead> <tbody> <tr> <td>Method || PG</td> <td>641.2</td> </tr> <tr> <td>Method || 3-M. Decoder</td> <td>1804.1 (× 2.81)</td> </tr> <tr> <td>Method || 5-M. Decoder</td> <td>2367.6 (× 4.37)</td> </tr> <tr> <td>Method || SELECTOR (Ours)</td> <td>692.1 (× 1.08)</td> </tr> <tr> <td>Method || 3-M. SELECTOR (Ours)</td> <td>740.8 (× 1.16)</td> </tr> <tr> <td>Method || 5-M. SELECTOR (Ours)</td> <td>747.6 (× 1.17)</td> </tr> </tbody></table>
Table 5
table_5
D19-1308
9
emnlp2019
Efficient Training . Table 5 shows that SELECTOR trains up to 3.7 times faster than mixture decoder (Shen et al., 2019). Training time of mixture decoder linearly increases with the number of decoders, while parallel focus inference of SELECTOR makes additional training time negligible.
[2, 1, 1]
['Efficient Training .', 'Table 5 shows that SELECTOR trains up to 3.7 times faster than mixture decoder (Shen et al., 2019).', 'Training time of mixture decoder linearly increases with the number of decoders, while parallel focus inference of SELECTOR makes additional training time negligible.']
[None, ['SELECTOR (Ours)', '3-M. Decoder', '5-M. Decoder'], ['3-M. Decoder', '5-M. Decoder', 'SELECTOR (Ours)']]
1
D19-1312table_4
Evaluation for seen and unseen entities.
3
[['Seen', 'Type', 'demonstrative'], ['Seen', 'Type', 'description'], ['Seen', 'Type', 'name'], ['Seen', 'Type', 'pronoun'], ['Seen', 'Type', 'total'], ['Unseen', 'Type', 'demonstrative'], ['Unseen', 'Type', 'description'], ['Unseen', 'Type', 'name'], ['Unseen', 'Type', 'pronoun'], ['Unseen', 'Type', 'total']]
1
[['Acc.'], [' Support']]
[['0.00%', '22'], ['48.72%', '862'], ['79.11%', '2547'], ['90.00%', '160'], ['71.82%', '3591'], ['0.00%', '3'], ['20.54%', '409'], ['74.74%', '2423'], ['88.33%', '120'], ['67.72%', '2955']]
column
['Acc.', 'Support']
['Seen']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>Support</th> </tr> </thead> <tbody> <tr> <td>Seen || Type || demonstrative</td> <td>0.00%</td> <td>22</td> </tr> <tr> <td>Seen || Type || description</td> <td>48.72%</td> <td>862</td> </tr> <tr> <td>Seen || Type || name</td> <td>79.11%</td> <td>2547</td> </tr> <tr> <td>Seen || Type || pronoun</td> <td>90.00%</td> <td>160</td> </tr> <tr> <td>Seen || Type || total</td> <td>71.82%</td> <td>3591</td> </tr> <tr> <td>Unseen || Type || demonstrative</td> <td>0.00%</td> <td>3</td> </tr> <tr> <td>Unseen || Type || description</td> <td>20.54%</td> <td>409</td> </tr> <tr> <td>Unseen || Type || name</td> <td>74.74%</td> <td>2423</td> </tr> <tr> <td>Unseen || Type || pronoun</td> <td>88.33%</td> <td>120</td> </tr> <tr> <td>Unseen || Type || total</td> <td>67.72%</td> <td>2955</td> </tr> </tbody></table>
Table 4
table_4
D19-1312
8
emnlp2019
6.2 Seen Entities vs. Unseen Entities. In the evaluation, we also distinguished the results for seen and unseen entities. We trained the model on a training set containing 64,353 referring expressions and evaluated the model on a test set with 3,591 referring expressions related to seen entities and 2,955 expressions related to unseen entities. Table 4 shows the evaluation results. From Table 4, it is easy to see that the model performs better when generating referring expressions for seen entities. Among the four referring expression types, the accuracy of the description type drops dramatically for unseen entities, from 48.72% to 20.54%. This is probably due to the fact that, compared with name and pronoun, the description type is often harder to identify and more flexible. For instance, one of the gold-standard descriptions in the test set is the comic character , amazing-man. The model’s generation for this referring expression is amazingman.
[2, 1, 1, 1, 1, 1, 2, 2, 2]
['6.2 Seen Entities vs. Unseen Entities.', 'In the evaluation, we also distinguished the results for seen and unseen entities.', 'We trained the model on a training set containing 64,353 referring expressions and evaluated the model on a test set with 3,591 referring expressions related to seen entities and 2,955 expressions related to unseen entities.', 'Table 4 shows the evaluation results.', 'From Table 4, it is easy to see that the model performs better when generating referring expressions for seen entities.', 'Among the four referring expression types, the accuracy of the description type drops dramatically for unseen entities, from 48.72% to 20.54%.', 'This is probably due to the fact that, compared with name and pronoun, the description type is often harder to identify and more flexible.', 'For instance, one of the gold-standard descriptions in the test set is the comic character , amazing-man.', 'The model’s generation for this referring expression is amazingman.']
[['Seen', 'Unseen'], ['Seen', 'Unseen'], ['Seen', 'Unseen'], None, ['Seen'], ['Seen', 'Unseen', 'description'], ['name', 'pronoun', 'description'], None, None]
1
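The per-type rows and the total row in this record are mutually consistent: the total accuracy is the support-weighted mean of the per-type accuracies. A small check using the seen-entity numbers from Table 4 of D19-1312 (the unseen block works the same way):

```python
# (accuracy %, support) per referring-expression type, seen entities
seen = {"demonstrative": (0.00, 22), "description": (48.72, 862),
        "name": (79.11, 2547), "pronoun": (90.00, 160)}
support = sum(n for _, n in seen.values())
total_acc = sum(acc * n for acc, n in seen.values()) / support
print(support, round(total_acc, 2))  # → 3591 71.82, matching the "total" row
```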
D19-1313table_3
Performances on Twitter dataset.
2
[['Models', 'Seq2Seq-BS'], ['Models', 'VAE-SVG-eq'], ['Models', 'GAN'], ['Models', 'DP-GAN'], ['Models', 'IRL'], ['Models', 'D-PAGE'], ['Models', 'PG-BS'], ['Models', 'ours']]
2
[['Twitter (Quality)', 'BLEU2'], ['Twitter (Quality)', 'BLEU3'], ['Twitter (Quality)', 'BLEU4'], ['Twitter (Quality)', 'METEOR'], ['Twitter (Diversity)', 'self-BLEU2'], ['Twitter (Diversity)', 'self-BLEU3'], ['Twitter (Diversity)', 'self-BLEU4']]
[['32.69', '28.25', '24.99', '22.51', '82.79', '80.29', '77.98'], ['26.43', '23.04', '20.57', '18.34', '91.46', '90.51', '89.67'], ['23.1', '20.45', '18.47', '15.33', '94.35', '93.75', '93.22'], ['33.07', '28.68', '25.49', '22.84', '82.52', '80.05', '77.63'], ['32.96', '28.6', '25.39', '22.72', '83.53', '81.22', '78.95'], ['32.95', '28.82', '25.88', '22.59', '88.35', '86.45', '84.76'], ['33.86', '29.42', '26.15', '23.52', '82.57', '79.99', '77.48'], ['34.23', '29.66', '26.38', '24.29', '65.83', '61.17', '57.45']]
column
['BLEU2', 'BLEU3', 'BLEU4', 'METEOR', 'self-BLEU2', 'self-BLEU3', 'self-BLEU4']
['ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter (Quality) || BLEU2</th> <th>Twitter (Quality) || BLEU3</th> <th>Twitter (Quality) || BLEU4</th> <th>Twitter (Quality) || METEOR</th> <th>Twitter (Diversity) || self-BLEU2</th> <th>Twitter (Diversity) || self-BLEU3</th> <th>Twitter (Diversity) || self-BLEU4</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq-BS</td> <td>32.69</td> <td>28.25</td> <td>24.99</td> <td>22.51</td> <td>82.79</td> <td>80.29</td> <td>77.98</td> </tr> <tr> <td>Models || VAE-SVG-eq</td> <td>26.43</td> <td>23.04</td> <td>20.57</td> <td>18.34</td> <td>91.46</td> <td>90.51</td> <td>89.67</td> </tr> <tr> <td>Models || GAN</td> <td>23.1</td> <td>20.45</td> <td>18.47</td> <td>15.33</td> <td>94.35</td> <td>93.75</td> <td>93.22</td> </tr> <tr> <td>Models || DP-GAN</td> <td>33.07</td> <td>28.68</td> <td>25.49</td> <td>22.84</td> <td>82.52</td> <td>80.05</td> <td>77.63</td> </tr> <tr> <td>Models || IRL</td> <td>32.96</td> <td>28.6</td> <td>25.39</td> <td>22.72</td> <td>83.53</td> <td>81.22</td> <td>78.95</td> </tr> <tr> <td>Models || D-PAGE</td> <td>32.95</td> <td>28.82</td> <td>25.88</td> <td>22.59</td> <td>88.35</td> <td>86.45</td> <td>84.76</td> </tr> <tr> <td>Models || PG-BS</td> <td>33.86</td> <td>29.42</td> <td>26.15</td> <td>23.52</td> <td>82.57</td> <td>79.99</td> <td>77.48</td> </tr> <tr> <td>Models || ours</td> <td>34.23</td> <td>29.66</td> <td>26.38</td> <td>24.29</td> <td>65.83</td> <td>61.17</td> <td>57.45</td> </tr> </tbody></table>
Table 3
table_3
D19-1313
7
emnlp2019
Table 3 shows scores on the Twitter dataset. Our model achieves the best BLEU scores and outperforms all baselines on METEOR. For diversity, our model again performs considerably better than the other models. Since some of the sentence pairs in the Twitter dataset may be mislabelled by the labeling algorithm, it is harder to generate high-quality paraphrases on Twitter. Paraphrases generated by our model have not only more diverse expressions but also better quality compared to those generated by other models.
[1, 1, 1, 2, 1]
['Table 3 shows scores on the Twitter dataset.', 'Our model achieves the best BLEU scores and outperforms all baselines on METEOR.', 'For diversity, our model again performs considerably better than the other models.', 'Since some of the sentence pairs in the Twitter dataset may be mislabelled by the labeling algorithm, it is harder to generate high-quality paraphrases on Twitter.', 'Paraphrases generated by our model have not only more diverse expressions but also better quality compared to those generated by other models.']
[None, ['ours', 'BLEU2', 'BLEU3', 'BLEU4', 'self-BLEU2', 'self-BLEU3', 'self-BLEU4', 'METEOR'], ['ours', 'Twitter (Diversity)'], ['Twitter (Quality)'], ['ours', 'Twitter (Diversity)', 'Twitter (Quality)']]
1
D19-1313table_4
Human evaluation results.
2
[['Models', 'D-PAGE'], ['Models', 'PG-BS'], ['Models', 'DP-GAN'], ['Models', 'ours']]
2
[['Quora', 'Fluency'], ['Quora', 'Consistency'], ['Twitter', 'Fluency'], ['Twitter', ' Consistency']]
[['4.21', '3.44', '3.66', '3.08'], ['4.2', '3.34', '3.85', '3.17'], ['4.27', '3.49', '4.09', '3.3'], ['4.57', '3.82', '4.24', '3.59']]
column
['fluency', 'consistency', 'fluency', 'consistency']
['ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quora || Fluency</th> <th>Quora || Consistency</th> <th>Twitter || Fluency</th> <th>Twitter || Consistency</th> </tr> </thead> <tbody> <tr> <td>Models || D-PAGE</td> <td>4.21</td> <td>3.44</td> <td>3.66</td> <td>3.08</td> </tr> <tr> <td>Models || PG-BS</td> <td>4.2</td> <td>3.34</td> <td>3.85</td> <td>3.17</td> </tr> <tr> <td>Models || DP-GAN</td> <td>4.27</td> <td>3.49</td> <td>4.09</td> <td>3.3</td> </tr> <tr> <td>Models || ours</td> <td>4.57</td> <td>3.82</td> <td>4.24</td> <td>3.59</td> </tr> </tbody></table>
Table 4
table_4
D19-1313
7
emnlp2019
For human evaluation, we randomly select 100 input sentences from the test set of each dataset and get the generated results of different models for these inputs. We follow the human evaluation guideline in (Li et al., 2018). The sentence pairs are scored for two aspects of the generated results: fluency (whether the generated sentence is fluent and grammatical) and consistency (the meaning similarity between the input sentence and the generated sentence). Each output is given two ratings, scaling from 1 to 5, on fluency and consistency separately. An input and outputs generated for the input from different systems form a test group. The outputs from different systems in the test group are shuffled. The test groups are randomly assigned to annotators. Every test group is evaluated by two annotators and scores for each output are averaged. The agreement between annotators is moderate (kappa=0.54). The final score for each system is the average score for all outputs. Table 4 shows our model achieves better scores in both meaning similarity and fluency than baseline models.
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1]
['For human evaluation, we randomly select 100 input sentences from the test set of each dataset and get the generated results of different models for these inputs.', 'We follow the human evaluation guideline in (Li et al., 2018).', 'The sentence pairs are scored for two aspects of the generated results: fluency (whether the generated sentence is fluent and grammatical) and consistency (the meaning similarity between the input sentence and the generated sentence).', 'Each output is given two ratings, scaling from 1 to 5, on fluency and consistency separately.', 'An input and outputs generated for the input from different systems form a test group.', 'The outputs from different systems in the test group are shuffled.', 'The test groups are randomly assigned to annotators.', 'Every test group is evaluated by two annotators and scores for each output are averaged.', 'The agreement between annotators is moderate (kappa=0.54).', 'The final score for each system is the average score for all outputs.', 'Table 4 shows our model achieves better scores in both meaning similarity and fluency than baseline models.']
[None, None, ['Fluency', 'Consistency'], ['Fluency', 'Consistency'], None, None, None, None, None, None, ['ours', 'Fluency', 'Consistency']]
1
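Each record stores its table twice: as nested Python lists (contents) and as flat pandas-style HTML (table_html_clean). A stdlib-only sketch for recovering rows from the HTML form; the short two-row string below is a trimmed stand-in for the full markup in these records:

```python
from html.parser import HTMLParser

class TableRows(HTMLParser):
    # collects each <tr> as a list of its cell texts (<td> or <th>)
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []
    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)
    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr":
            self.rows.append(self._row)
            self._row = None

markup = ("<table><thead><tr><th></th><th>Fluency</th></tr></thead>"
          "<tbody><tr><td>ours</td><td>4.57</td></tr></tbody></table>")
p = TableRows()
p.feed(markup)
p.close()
print(p.rows)  # → [['', 'Fluency'], ['ours', '4.57']]
```

In practice `pandas.read_html` does the same job in one call; the parser above just avoids the dependency.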
D19-1315table_1
Comparison of human evaluation results with their consistency. HG-KG, HG-CC and KG-CC indicate the consistency scores between the headline generation and key phrase generation, headline generation and category classification and key phrase generation and category classification, respectively.
1
[['Baseline'], ['Proposed'], ['Gold']]
1
[['HG-KG'], [' HG-CC'], ['KG-CC'], ['3 Outputs']]
[['56.80%', '37.60%', '37.60%', '30.00%'], ['58.80%', '39.60%', '39.20%', '32.40%'], ['65.20%', '44.40%', '48.80%', '35.20%']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Proposed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HG-KG</th> <th>HG-CC</th> <th>KG-CC</th> <th>3 Outputs</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>56.80%</td> <td>37.60%</td> <td>37.60%</td> <td>30.00%</td> </tr> <tr> <td>Proposed</td> <td>58.80%</td> <td>39.60%</td> <td>39.20%</td> <td>32.40%</td> </tr> <tr> <td>Gold</td> <td>65.20%</td> <td>44.40%</td> <td>48.80%</td> <td>35.20%</td> </tr> </tbody></table>
Table 1
table_1
D19-1315
6
emnlp2019
First, we conduct a human evaluation to measure the consistency among the three outputs. Table 1 shows the evaluation results with their consistency. The scores are the percentage of articles evaluated as consistent by a majority of workers. We consider all three outputs as consistent if all pairs of outputs are consistent. The results indicate that the proposed method improves the consistency of the three generated outputs.
[2, 1, 2, 2, 1]
['First, we conduct a human evaluation to measure the consistency among the three outputs.', 'Table 1 shows the evaluation results with their consistency.', 'The scores are the percentage of articles evaluated as consistent by a majority of workers.', 'We consider all three outputs as consistent if all pairs of outputs are consistent.', 'The results indicate that the proposed method improves the consistency of the three generated outputs.']
[None, None, None, None, ['Proposed', '3 Outputs']]
1
D19-1315table_2
Comparison of human evaluation results for headlines. Scores are an average of ten crowd-sourcing workers with five scale rating.
1
[['Baseline'], ['Proposed'], ['Gold']]
1
[['Adequacy'], [' Fluency'], [' Occupation Adequacy']]
[['3.34', '3.69', '3.45'], ['3.76', '3.86', '3.89'], ['4.09', '4.12', '4.13']]
column
['Adequacy', 'Fluency', 'Occupation Adequacy']
['Proposed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Adequacy</th> <th>Fluency</th> <th>Occupation Adequacy</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>3.34</td> <td>3.69</td> <td>3.45</td> </tr> <tr> <td>Proposed</td> <td>3.76</td> <td>3.86</td> <td>3.89</td> </tr> <tr> <td>Gold</td> <td>4.09</td> <td>4.12</td> <td>4.13</td> </tr> </tbody></table>
Table 2
table_2
D19-1315
6
emnlp2019
Second, to evaluate the quality of the generated headlines, we implement a human evaluation to measure adequacy and fluency. Table 2 shows the evaluation results for adequacy and fluency. The proposed method improves the adequacy by 0.42pt and the occupation adequacy by 0.44pt. The proposed method can generate more adequate outputs, particularly for the occupation.
[2, 1, 1, 1]
['Second, to evaluate the quality of the generated headlines, we implement a human evaluation to measure adequacy and fluency.', 'Table 2 shows the evaluation results for adequacy and fluency.', 'The proposed method improves the adequacy by 0.42pt and the occupation adequacy by 0.44pt.', 'The proposed method can generate more adequate outputs, particularly for the occupation.']
[None, ['Adequacy', ' Fluency'], ['Adequacy', ' Occupation Adequacy'], ['Proposed']]
1
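The 0.42pt and 0.44pt improvements quoted in the description follow directly from the table values. A one-line check (the dictionary keys are shorthand labels, not field names from the record):

```python
# human-evaluation means from Table 2 of D19-1315
baseline = {"adequacy": 3.34, "fluency": 3.69, "occupation_adequacy": 3.45}
proposed = {"adequacy": 3.76, "fluency": 3.86, "occupation_adequacy": 3.89}
gains = {k: round(proposed[k] - baseline[k], 2) for k in baseline}
print(gains)  # → {'adequacy': 0.42, 'fluency': 0.17, 'occupation_adequacy': 0.44}
```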
D19-1315table_3
Automatic evaluation results based on the ROUGE metrics and accuracy (%) of classification of the job advertisement dataset. R-1, R-2 and R-L indicate the F1 scores of ROUGE-1, ROUGE-2 and ROUGE-L, respectively. The proposed method (MTL + SD + HCL) achieved the best scores (bold) for all tasks, and the ROUGE scores are significantly different from the baseline model (p < 0.05) for both headline and key phrase generation (indicated as *). Lead-1 is the baseline score, which uses the first sentence of the input article as a predicted headline.
1
[['Baseline (Pointer-Generator Network)'], ['Multi-Task Learning (MTL)'], ['MTL + Scheduling (SD)'], ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], ['Lead-1']]
2
[['Headline generation', 'R-1'], ['Headline generation', 'R-2'], ['Headline generation', 'R-L'], ['Key phrase generation', 'R-1'], ['Key phrase generation', 'R-2'], ['Key phrase generation', 'R-L'], [' Classification', 'Accuracy']]
[['25.1', '5.3', '21.1', '30.9', '10.6', '28.7', '62.8'], ['26.2', '5.8', '21.6', '32.3', '10.9', '30', '64.1'], ['26.3', '6', '21.8', '32.3', '10.4', '29.9', '63.9'], [' *26.9', ' *6.1', ' *22.4', ' *32.8', ' *11.2', ' *30.5', ' *64.4'], ['19', '4.3', '13.5', ' -', '-', '-', '-']]
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'Accuracy']
['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Headline generation || R-1</th> <th>Headline generation || R-2</th> <th>Headline generation || R-L</th> <th>Key phrase generation || R-1</th> <th>Key phrase generation || R-2</th> <th>Key phrase generation || R-L</th> <th>Classification || Accuracy</th> </tr> </thead> <tbody> <tr> <td>Baseline (Pointer-Generator Network)</td> <td>25.1</td> <td>5.3</td> <td>21.1</td> <td>30.9</td> <td>10.6</td> <td>28.7</td> <td>62.8</td> </tr> <tr> <td>Multi-Task Learning (MTL)</td> <td>26.2</td> <td>5.8</td> <td>21.6</td> <td>32.3</td> <td>10.9</td> <td>30</td> <td>64.1</td> </tr> <tr> <td>MTL + Scheduling (SD)</td> <td>26.3</td> <td>6</td> <td>21.8</td> <td>32.3</td> <td>10.4</td> <td>29.9</td> <td>63.9</td> </tr> <tr> <td>Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))</td> <td>*26.9</td> <td>*6.1</td> <td>*22.4</td> <td>*32.8</td> <td>*11.2</td> <td>*30.5</td> <td>*64.4</td> </tr> <tr> <td>Lead-1</td> <td>19</td> <td>4.3</td> <td>13.5</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 3
table_3
D19-1315
7
emnlp2019
Automatic evaluation of job advertisement corpus. We implement an automatic evaluation using the ROUGE metrics (Lin, 2004) and accuracy. We conduct the experiment ten times, and calculate the average score. Table 3 shows the effect of the proposed methods: multi-task learning (MTL), scheduling strategy (SD) and hierarchical consistency loss (HCL). From this result, the proposed method (MTL + SD + HCL) achieves the best score on all three tasks. MTL and HCL improve for all three tasks, and SD improves the score of the headline generation. Automatic evaluation of the CNN-DM dataset.
[2, 2, 2, 1, 1, 1, 0]
['Automatic evaluation of job advertisement corpus.', 'We implement an automatic evaluation using the ROUGE metrics (Lin, 2004) and accuracy.', 'We conduct the experiment ten times, and calculate the average score.', 'Table 3 shows the effect of the proposed methods: multi-task learning (MTL), scheduling strategy (SD) and hierarchical consistency loss (HCL).', 'From this result, the proposed method (MTL + SD + HCL) achieves the best score on all three tasks.', 'MTL and HCL improve for all three tasks, and SD improves the score of the headline generation.', 'Automatic evaluation of the CNN-DM dataset.']
[None, None, None, ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], None]
1
D19-1315table_4
Automatic evaluation results based on the ROUGE metrics and accuracy (%) of classification of the CNN and DailyMail datasets. The metrics are the same as in Table 3. The proposed method (MTL + SD + HCL) improved the scores for all tasks. Scores marked with * are significantly different from the baseline model (p < 0.05). The lead is the score that uses the first three sentences of the input article as a predicted summary, and the first sentence of the input article as a predicted headline.
2
[['CNN', 'Baseline'], ['CNN', 'Proposed (MTL + SD + HCL)'], ['CNN', 'Lead'], ['DailyMail', 'Baseline'], ['DailyMail', 'Proposed (MTL + SD + HCL)'], ['DailyMail', 'Lead']]
2
[['Summarization', 'R-1'], ['Summarization', 'R-2'], ['Summarization', 'R-L'], [' Headline Generation', 'R-1'], ['Headline Generation', 'R-2'], ['Headline Generation', 'R-L'], [' Classification', 'Accuracy']]
[['30.7', '10.6', '27.3', '19.5', '5', '17', '43.8'], [' *31.0', ' *10.9', ' *27.8', '19.6', '5', '17.1', '43.9'], ['33.4', '12.2', '26.1', '17.2', '5', '11.1', ' -'], ['38.4', '15.8', '35', '43.1', '25.3', '39.6', '89'], [' *38.9', ' *16.3', ' *35.4', ' *43.7', '25.5', ' *40.1', '89.8'], ['43.8', '19.2', '37.3', '27.7', '10.9', '21.7', ' -']]
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'Accuracy']
['Proposed (MTL + SD + HCL)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Summarization || R-1</th> <th>Summarization || R-2</th> <th>Summarization || R-L</th> <th>Headline Generation || R-1</th> <th>Headline Generation || R-2</th> <th>Headline Generation || R-L</th> <th>Classification || Accuracy</th> </tr> </thead> <tbody> <tr> <td>CNN || Baseline</td> <td>30.7</td> <td>10.6</td> <td>27.3</td> <td>19.5</td> <td>5</td> <td>17</td> <td>43.8</td> </tr> <tr> <td>CNN || Proposed (MTL + SD + HCL)</td> <td>*31.0</td> <td>*10.9</td> <td>*27.8</td> <td>19.6</td> <td>5</td> <td>17.1</td> <td>43.9</td> </tr> <tr> <td>CNN || Lead</td> <td>33.4</td> <td>12.2</td> <td>26.1</td> <td>17.2</td> <td>5</td> <td>11.1</td> <td>-</td> </tr> <tr> <td>DailyMail || Baseline</td> <td>38.4</td> <td>15.8</td> <td>35</td> <td>43.1</td> <td>25.3</td> <td>39.6</td> <td>89</td> </tr> <tr> <td>DailyMail || Proposed (MTL + SD + HCL)</td> <td>*38.9</td> <td>*16.3</td> <td>*35.4</td> <td>*43.7</td> <td>25.5</td> <td>*40.1</td> <td>89.8</td> </tr> <tr> <td>DailyMail || Lead</td> <td>43.8</td> <td>19.2</td> <td>37.3</td> <td>27.7</td> <td>10.9</td> <td>21.7</td> <td>-</td> </tr> </tbody></table>
Table 4
table_4
D19-1315
7
emnlp2019
Automatic evaluation of the CNN-DM dataset. Table 4 shows the results on the CNN and DailyMail datasets. For both datasets, the proposed method improves the ROUGE scores for summarization and headline generation.
[2, 1, 1]
['Automatic evaluation of the CNN-DM dataset.', 'Table 4 shows the results on the CNN and DailyMail datasets.', 'For both datasets, the proposed method improves the ROUGE scores for summarization and headline generation.']
[None, ['CNN', 'DailyMail'], ['Proposed (MTL + SD + HCL)']]
1
D19-1316table_2
Generation Performance (numbers in brackets correspond to the relaxed measures).
2
[['Method', 'Retrieval-based'], ['Method', 'Seq2seq'], ['Method', 'Case frame-based']]
2
[['PART-TIME', 'Recall'], ['PART-TIME', 'Precision'], ['PART-TIME', 'F-measure'], [' JOB SMOKING', 'Recall'], ['JOB SMOKING', 'Precision'], ['JOB SMOKING', 'F-measure']]
[[' 0.23 (0.25)', ' 0.61 (0.67)', ' 0.34 (0.37)', ' 0.28 (0.30)', ' 0.72 (0.78)', ' 0.41 (0.44)'], [' 0.06 (0.07)', ' 0.07 (0.08)', ' 0.07 (0.08)', ' 0.10 (0.13)', ' 0.11 (0.13)', ' 0.10 (0.13)'], [' 0.10 (0.10)', ' 0.62 (0.62)', ' 0.16 (0.16)', ' 0.05 (0.05)', ' 0.75 (0.75)', ' 0.09 (0.09)']]
column
['Recall', 'Precision', 'F-measure', 'Recall', 'Precision', 'F-measure']
['Retrieval-based']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PART-TIME || Recall</th> <th>PART-TIME || Precision</th> <th>PART-TIME || F-measure</th> <th>JOB SMOKING || Recall</th> <th>JOB SMOKING || Precision</th> <th>JOB SMOKING || F-measure</th> </tr> </thead> <tbody> <tr> <td>Method || Retrieval-based</td> <td>0.23 (0.25)</td> <td>0.61 (0.67)</td> <td>0.34 (0.37)</td> <td>0.28 (0.30)</td> <td>0.72 (0.78)</td> <td>0.41 (0.44)</td> </tr> <tr> <td>Method || Seq2seq</td> <td>0.06 (0.07)</td> <td>0.07 (0.08)</td> <td>0.07 (0.08)</td> <td>0.10 (0.13)</td> <td>0.11 (0.13)</td> <td>0.10 (0.13)</td> </tr> <tr> <td>Method || Case frame-based</td> <td>0.10 (0.10)</td> <td>0.62 (0.62)</td> <td>0.16 (0.16)</td> <td>0.05 (0.05)</td> <td>0.75 (0.75)</td> <td>0.09 (0.09)</td> </tr> </tbody></table>
Table 2
table_2
D19-1316
7
emnlp2019
6.2 Results. Table 2 shows the results. It turns out that the simple application of the sequence-to-sequence model does not work well at all on this task; note that it is provided with the information about which sentence should have a feedback comment (i.e., tested on only the sentences having feedback comments). Nevertheless, its performance is very poor. This suggests that it requires modifications to achieve better performance with the sequence-to-sequence framework. Case frame-based successfully generates feedback comments in some cases. However, its recall is quite low. In contrast, the neural retrieval-based method achieves a far better performance in recall, achieving a precision comparable to that of case frame-based. At the same time, Table 2 shows that there is still room for improvement. Subsect. 7.1 will investigate the generation results to reveal what has been solved by the methods.
[2, 1, 1, 1, 2, 1, 1, 1, 1]
['6.2 Results.', 'Table 2 shows the results.', 'It turns out that the simple application of the sequence-to-sequence model does not work well at all on this task; note that it is provided with the information about which sentence should have a feedback comment (i.e., tested on only the sentences having feedback comments).', 'Nevertheless, its performance is very poor.', 'This suggests that it requires modifications to achieve better performance with the sequence-to-sequence framework.', 'Case frame-based successfully generates feedback comments in some cases.', 'However, its recall is quite low. In contrast, the neural retrieval-based method achieves a far better performance in recall, achieving a precision comparable to that of case frame-based.', 'At the same time, Table 2 shows that there is still room for improvement.', 'Subsect. 7.1 will investigate the generation results to reveal what has been solved by the methods.']
[None, None, ['Seq2seq'], ['Seq2seq'], None, ['Case frame-based'], ['Case frame-based', 'Recall'], ['Retrieval-based', 'Recall', 'Precision'], None]
1
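The F-measure column in this record is the harmonic mean of precision and recall. A sketch using the Retrieval-based row's two-decimal published values; the results land within ±0.01 of the table's 0.34 and 0.41 only because the published P/R inputs are themselves rounded:

```python
def f_measure(precision, recall):
    # harmonic mean of precision and recall (F1)
    return 2 * precision * recall / (precision + recall)

# (precision, recall) for Retrieval-based on PART-TIME and JOB SMOKING
print(round(f_measure(0.61, 0.23), 2))  # → 0.33 (table reports 0.34)
print(round(f_measure(0.72, 0.28), 2))  # → 0.4 (table reports 0.41)
```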
D19-1317table_4
The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)
1
[['s2s (Du et al., 2017)'], ['NQG++ (Zhou et al., 2017)'], ['M2S+cp (Song et al., 2018)'], ['s2s+MP+GSA (Zhao et al., 2018)'], ['Hybrid model (Sun et al., 2018)'], ['ASs2s (Kim et al., 2019)'], ['Our model']]
2
[['Du Split (Du et al., 2017)', 'B1'], ['Du Split (Du et al., 2017)', 'B2'], ['Du Split (Du et al., 2017)', 'B3'], ['Du Split (Du et al., 2017)', 'B4'], ['Du Split (Du et al., 2017)', 'MET'], ['Du Split (Du et al., 2017)', 'R-L'], [' Zhou Split (Zhou et al. 2017)', 'B1'], [' Zhou Split (Zhou et al. 2017)', 'B2'], [' Zhou Split (Zhou et al. 2017)', 'B3'], [' Zhou Split (Zhou et al. 2017)', 'B4'], [' Zhou Split (Zhou et al. 2017)', 'MET'], [' Zhou Split (Zhou et al. 2017)', 'R-L']]
[['43.09', '25.96', '17.5', '12.28', '16.62', '39.75', '-', ' -', ' -', ' -', ' -', ' -'], ['-', '-', ' -', ' -', ' -', '-', '-', '-', ' -', '13.29', ' -', ' -'], ['-', '-', ' -', '13.98', '18.77', '42.72', '-', '-', ' -', '13.91', ' -', ' -'], ['43.47', '28.23', '20.4', '15.32', '19.29', '43.91', '44.51', '29.07', '21.06', '15.82', '19.67', '44.24'], ['-', '-', ' -', ' -', ' -', '-', '43.02', '28.14', '20.51', '15.64', ' -', ' -'], ['-', '-', ' -', '16.2', '19.92', '43.96', '-', '-', ' -', '16.17', ' -', ' -'], ['45.66', '30.21', '21.82', '16.27', '20.36', '44.35', '44.4', '29.48', '21.54', '16.37', '20.68', '44.73']]
column
['B1', 'B2', 'B3', 'B4', 'MET', 'R-L', 'B1', 'B2', 'B3', 'B4', 'MET', 'R-L']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Du Split (Du et al., 2017) || B1</th> <th>Du Split (Du et al., 2017) || B2</th> <th>Du Split (Du et al., 2017) || B3</th> <th>Du Split (Du et al., 2017) || B4</th> <th>Du Split (Du et al., 2017) || MET</th> <th>Du Split (Du et al., 2017) || R-L</th> <th>Zhou Split (Zhou et al. 2017) || B1</th> <th>Zhou Split (Zhou et al. 2017) || B2</th> <th>Zhou Split (Zhou et al. 2017) || B3</th> <th>Zhou Split (Zhou et al. 2017) || B4</th> <th>Zhou Split (Zhou et al. 2017) || MET</th> <th>Zhou Split (Zhou et al. 2017) || R-L</th> </tr> </thead> <tbody> <tr> <td>s2s (Du et al., 2017)</td> <td>43.09</td> <td>25.96</td> <td>17.5</td> <td>12.28</td> <td>16.62</td> <td>39.75</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>NQG++ (Zhou et al., 2017)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>13.29</td> <td>-</td> <td>-</td> </tr> <tr> <td>M2S+cp (Song et al., 2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>13.98</td> <td>18.77</td> <td>42.72</td> <td>-</td> <td>-</td> <td>-</td> <td>13.91</td> <td>-</td> <td>-</td> </tr> <tr> <td>s2s+MP+GSA (Zhao et al., 2018)</td> <td>43.47</td> <td>28.23</td> <td>20.4</td> <td>15.32</td> <td>19.29</td> <td>43.91</td> <td>44.51</td> <td>29.07</td> <td>21.06</td> <td>15.82</td> <td>19.67</td> <td>44.24</td> </tr> <tr> <td>Hybrid model (Sun et al., 2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>43.02</td> <td>28.14</td> <td>20.51</td> <td>15.64</td> <td>-</td> <td>-</td> </tr> <tr> <td>ASs2s (Kim et al., 2019)</td> <td>-</td> <td>-</td> <td>-</td> <td>16.2</td> <td>19.92</td> <td>43.96</td> <td>-</td> <td>-</td> <td>-</td> <td>16.17</td> <td>-</td> <td>-</td> </tr> <tr> <td>Our model</td> <td>45.66</td> <td>30.21</td> <td>21.82</td> <td>16.27</td> <td>20.36</td> <td>44.35</td> <td>44.4</td> <td>29.48</td> <td>21.54</td> 
<td>16.37</td> <td>20.68</td> <td>44.73</td> </tr> </tbody></table>
Table 4
table_4
D19-1317
6
emnlp2019
4.1 Main Results . Table 4 shows automatic evaluation results for our model and baselines (copied from their papers). Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models (Zhou et al., 2017; Sun et al., 2018) on both dataset splits. Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework.
[2, 1, 1, 2, 2, 2]
['4.1 Main Results .', 'Table 4 shows automatic evaluation results for our model and baselines (copied from their papers).', 'Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models (Zhou et al., 2017; Sun et al., 2018) on both dataset splits.', 'Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments.', 'Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences.', 'All compared baseline models which only consider unstructured sentences can be further enhanced under our framework.']
[None, ['Our model'], ['Our model', 's2s+MP+GSA (Zhao et al., 2018)', 'Hybrid model (Sun et al., 2018)'], None, ['Our model'], None]
1
D19-1321table_5
Automatic evaluation for recipe text generation. Checklist was trained with its own source code. We also re-printed results from (Kiddon et al., 2016) (i.e., Checklist §). We applied bootstrap resampling (Koehn, 2004) for significance test. Scores that are significantly worse than the best results (in bold) are marked with * for p-value < 0.05 or ** for p-value < 0.01.
2
[['Models', 'Checklist §'], ['Models', 'Checklist'], ['Models', 'CVAE'], ['Models', 'Pointer-S2S'], ['Models', 'Link-S2S'], ['Models', 'PHVM (ours)']]
1
[['BLEU (%)'], ['Coverage (%)'], ['Length'], ['Distinct-4 (%)'], ['Repetition-4 (%)']]
[['3', '67.9', '-', '-', '-'], ['2.6**', '66.9*', '67.59', '30.67**', '39.1**'], ['4.6', '63.0**', '57.49**', '52.53**', '38.7**'], ['4.3', '70.4**', '59.18**', '30.72**', '36.4**'], ['1.9**', '53.8**', '40.34**', '24.93**', '31.6**'], ['4.6', '73.2', '70.92', '67.86', '17.3']]
column
['BLEU (%)', 'Coverage (%)', 'Length', 'Distinct-4 (%)', 'Repetition-4 (%)']
['PHVM (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU (%)</th> <th>Coverage (%)</th> <th>Length</th> <th>Distinct-4 (%)</th> <th>Repetition-4 (%)</th> </tr> </thead> <tbody> <tr> <td>Models || Checklist §</td> <td>3</td> <td>67.9</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Models || Checklist</td> <td>2.6**</td> <td>66.9*</td> <td>67.59</td> <td>30.67**</td> <td>39.1**</td> </tr> <tr> <td>Models || CVAE</td> <td>4.6</td> <td>63.0**</td> <td>57.49**</td> <td>52.53**</td> <td>38.7**</td> </tr> <tr> <td>Models || Pointer-S2S</td> <td>4.3</td> <td>70.4**</td> <td>59.18**</td> <td>30.72**</td> <td>36.4**</td> </tr> <tr> <td>Models || Link-S2S</td> <td>1.9**</td> <td>53.8**</td> <td>40.34**</td> <td>24.93**</td> <td>31.6**</td> </tr> <tr> <td>Models || PHVM (ours)</td> <td>4.6</td> <td>73.2</td> <td>70.92</td> <td>67.86</td> <td>17.3</td> </tr> </tbody></table>
Table 5
table_5
D19-1321
9
emnlp2019
Table 5 shows the experimental results. Our model outperforms baselines in terms of coverage and diversity; it manages to use more given ingredients and generates more diversified cooking steps. We also found that Checklist / Link-S2S produces the general phrase “all ingredients” in 14.9% / 24.5% of the generated recipes, while CVAE / Pointer-S2S / PHVM produce the phrase in 7.8% / 6.3% / 5.0% of recipes respectively.
[1, 1, 2]
['Table 5 shows the experimental results.', 'Our model outperforms baselines in terms of coverage and diversity; it manages to use more given ingredients and generates more diversified cooking steps.', 'We also found that Checklist / Link-S2S produces the general phrase “all ingredients” in 14.9% / 24.5% of the generated recipes, while CVAE / Pointer-S2S / PHVM produce the phrase in 7.8% / 6.3% / 5.0% of recipes respectively.']
[None, ['PHVM (ours)', 'Coverage (%)'], ['Checklist', 'Link-S2S', 'CVAE', 'Pointer-S2S', 'PHVM (ours)']]
1
D19-1324table_5
Experimental results on the NYT50 dataset. ROUGE-1, -2 and -L F1 is reported. JECS substantially outperforms our Lead-based systems and our extractive model.
2
[['Model', 'Lead'], ['Model', 'LEADDEDUP'], ['Model', 'LEADCOMP'], ['Model', 'EXTRACTION'], ['Model', 'JECS']]
1
[['R-1'], ['R-2'], ['R-L']]
[['41.8', '22.6', '35'], ['42', '22.8', '35'], ['42.4', '22.7', '35.4'], ['44.3', '25.5', '37.1'], ['45.5', '25.3', '38.2']]
column
['R-1', 'R-2', 'R-L']
['JECS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Model || Lead</td> <td>41.8</td> <td>22.6</td> <td>35</td> </tr> <tr> <td>Model || LEADDEDUP</td> <td>42</td> <td>22.8</td> <td>35</td> </tr> <tr> <td>Model || LEADCOMP</td> <td>42.4</td> <td>22.7</td> <td>35.4</td> </tr> <tr> <td>Model || EXTRACTION</td> <td>44.3</td> <td>25.5</td> <td>37.1</td> </tr> <tr> <td>Model || JECS</td> <td>45.5</td> <td>25.3</td> <td>38.2</td> </tr> </tbody></table>
Table 5
table_5
D19-1324
7
emnlp2019
We also report the results on the full CNNDM and NYT although they are less compressible. Table 5 shows the experimental results on these datasets. Our models still yield strong performance compared to baselines and past work on the CNNDM dataset. The EXTRACTION model achieves comparable results to past successful extractive approaches on CNNDM and JECS improves on this across the datasets. In some cases, our model slightly underperforms on ROUGE-2. One possible reason is that we remove stop words when constructing our oracles, which could underestimate the importance of bigrams containing stopwords for evaluation. Finally, we note that our compressive approach substantially outperforms the compression-augmented LatSum model.
[2, 1, 1, 1, 1, 2, 1]
['We also report the results on the full CNNDM and NYT although they are less compressible.', 'Table 5 shows the experimental results on these datasets.', 'Our models still yield strong performance compared to baselines and past work on the CNNDM dataset.', 'The EXTRACTION model achieves comparable results to past successful extractive approaches on CNNDM and JECS improves on this across the datasets.', 'In some cases, our model slightly underperforms on ROUGE-2.', 'One possible reason is that we remove stop words when constructing our oracles, which could underestimate the importance of bigrams containing stopwords for evaluation.', 'Finally, we note that our compressive approach substantially outperforms the compression-augmented LatSum model.']
[None, None, ['Lead', 'LEADDEDUP', 'LEADCOMP'], ['EXTRACTION', 'JECS'], ['Lead', 'LEADDEDUP', 'LEADCOMP', 'R-2'], None, None]
1
D19-1330table_2
Translation performance on IWSLT datasets. SyncTrans represents our proposed synchronous translation method. All results of our SyncTrans are significantly better than both Indiv and Multi (p < 0.01).
2
[['Method', 'Indiv'], ['Method', 'Indiv + pseudo'], ['Method', 'Multi'], ['Method', 'Multi + pseudo'], ['Method', 'SyncTrans']]
2
[['En-Zh/Ja', 'En-Zh'], ['En-Zh/Ja', 'En-Ja'], ['En-De/Fr', 'En-De'], ['En-De/Fr', 'En-Fr']]
[['15.68', '16.56', '27.11', '40.62'], ['16.72', '18.02', '28.47', '40.39'], ['17.06', '18.31', '27.79', '40.97'], ['17.10', '18.40', '28.56', '40.62'], ['17.97', '19.31', '29.16', '41.53']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['SyncTrans']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En-Zh/Ja || En-Zh</th> <th>En-Zh/Ja || En-Ja</th> <th>En-De/Fr || En-De</th> <th>En-De/Fr || En-Fr</th> </tr> </thead> <tbody> <tr> <td>Method || Indiv</td> <td>15.68</td> <td>16.56</td> <td>27.11</td> <td>40.62</td> </tr> <tr> <td>Method || Indiv + pseudo</td> <td>16.72</td> <td>18.02</td> <td>28.47</td> <td>40.39</td> </tr> <tr> <td>Method || Multi</td> <td>17.06</td> <td>18.31</td> <td>27.79</td> <td>40.97</td> </tr> <tr> <td>Method || Multi + pseudo</td> <td>17.10</td> <td>18.40</td> <td>28.56</td> <td>40.62</td> </tr> <tr> <td>Method || SyncTrans</td> <td>17.97</td> <td>19.31</td> <td>29.16</td> <td>41.53</td> </tr> </tbody></table>
Table 2
table_2
D19-1330
4
emnlp2019
5.1 Results on IWSLT . Table 2 shows the main translation results of En-Zh/Ja and En-De/Fr on IWSLT datasets. We also conduct a typical one-to-many translation adopting the Johnson et al. (2017) method on Transformer as another baseline model, referred to as Multi. Compared with Indiv, we can see that Multi achieves better results in all cases, which can be attributed to the fact that the encoder can be enhanced by extra training data from the other language pair. As for our proposed method, the synchronous translation method performs significantly better than both the Indiv and Multi baseline methods, and it can achieve improvements of up to 2.75 BLEU points (19.31 vs. 16.56) on En-Ja.
[2, 1, 2, 1, 1]
['5.1 Results on IWSLT .', 'Table 2 shows the main translation results of En-Zh/Ja and En-De/Fr on IWSLT datasets.', 'We also conduct a typical one-to-many translation adopting the Johnson et al. (2017) method on Transformer as another baseline model, referred to as Multi.', 'Compared with Indiv, we can see that Multi achieves better results in all cases, which can be attributed to the fact that the encoder can be enhanced by extra training data from the other language pair.', 'As for our proposed method, the synchronous translation method performs significantly better than both the Indiv and Multi baseline methods, and it can achieve improvements of up to 2.75 BLEU points (19.31 vs. 16.56) on En-Ja.']
[None, ['En-Zh/Ja'], ['Multi'], ['Indiv', 'Multi'], ['SyncTrans', 'Indiv', 'Multi', 'En-Ja']]
1
D19-1334table_3
Experimental results for robustness analysis.
2
[['Threshold', 'K = 1'], ['Threshold', 'K = 5'], ['Threshold', 'K = 10'], ['Threshold', 'K = max']]
2
[['MultiHop', 'MRR'], [' Meta-KGR', 'Hits@1'], ['MultiHop', 'MRR'], [' Meta-KGR', 'Hits@1']]
[['20.8', '16.9', '22.3', '19.3'], ['25.7', '20.8', '29.6', '26.6'], ['29.1', '25.0', '31.3', '27.2'], ['42.7', '36.7', '46.9', '41.2']]
column
['MRR', 'Hits@1', 'MRR', 'Hits@1']
[' Meta-KGR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MultiHop || MRR</th> <th>Meta-KGR || Hits@1</th> <th>MultiHop || MRR</th> <th>Meta-KGR || Hits@1</th> </tr> </thead> <tbody> <tr> <td>Threshold || K = 1</td> <td>20.8</td> <td>16.9</td> <td>22.3</td> <td>19.3</td> </tr> <tr> <td>Threshold || K = 5</td> <td>25.7</td> <td>20.8</td> <td>29.6</td> <td>26.6</td> </tr> <tr> <td>Threshold || K = 10</td> <td>29.1</td> <td>25.0</td> <td>31.3</td> <td>27.2</td> </tr> <tr> <td>Threshold || K = max</td> <td>42.7</td> <td>36.7</td> <td>46.9</td> <td>41.2</td> </tr> </tbody></table>
Table 3
table_3
D19-1334
5
emnlp2019
5.4 Robustness Analysis . We can use different frequency thresholds K to select few-shot relations. In this section, we will study the impact of K on the performance of our model. In our experiments, some triples will be removed until every few-shot relation has only K triples. We do link prediction experiments on FB15K-237 and use ConvE as our reward function. The final results are shown in Table 3. K = max means we use the whole datasets in Table 2 and do not remove any triples. From Table 3 we can see our model is robust to K and outperforms MultiHop in every case.
[2, 2, 2, 2, 2, 1, 2, 1]
['5.4 Robustness Analysis .', 'We can use different frequency thresholds K to select few-shot relations.', 'In this section, we will study the impact of K on the performance of our model.', 'In our experiments, some triples will be removed until every few-shot relation has only K triples.', 'We do link prediction experiments on FB15K-237 and use ConvE as our reward function.', 'The final results are shown in Table 3.', 'K = max means we use the whole datasets in Table 2 and do not remove any triples.', 'From Table 3 we can see our model is robust to K and outperforms MultiHop in every case.']
[None, None, None, None, None, None, ['K = max'], [' Meta-KGR', 'K = 1', 'K = 5', 'K = 10', 'K = max', 'MultiHop']]
1
D19-1336table_2
Human evaluation results.
2
[['Model', 'LM (Mikolov et al., 2010)'], ['Model', 'CLM (Mou et al., 2015)'], ['Model', 'CLM+JD (Yu et al., 2018)'], ['Model', 'Pun-GAN'], ['Model', 'Human']]
1
[['Ambiguity'], ['Fluency'], ['Overall']]
[['1.6', '3.1', '2.5'], ['2.0', '2.1', '2.0'], ['3.4', '3.6', '3.5'], ['3.9', '3.7', '3.8'], ['4.3', '4.6', '4.5']]
column
['Ambiguity', 'Fluency', 'Overall']
['Pun-GAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ambiguity</th> <th>Fluency</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Model || LM (Mikolov et al., 2010)</td> <td>1.6</td> <td>3.1</td> <td>2.5</td> </tr> <tr> <td>Model || CLM (Mou et al., 2015)</td> <td>2.0</td> <td>2.1</td> <td>2.0</td> </tr> <tr> <td>Model || CLM+JD (Yu et al., 2018)</td> <td>3.4</td> <td>3.6</td> <td>3.5</td> </tr> <tr> <td>Model || Pun-GAN</td> <td>3.9</td> <td>3.7</td> <td>3.8</td> </tr> <tr> <td>Model || Human</td> <td>4.3</td> <td>4.6</td> <td>4.5</td> </tr> </tbody></table>
Table 2
table_2
D19-1336
5
emnlp2019
3.5 Results . Table 2 shows the results of human evaluation. We find that: 1) Pun-GAN achieves the best ambiguity score. 2) Compared with CLM+JD, which is actually the same as our pre-trained generator, Pun-GAN has a large improvement in unusualness. 3) Pun-GAN can generate more diverse sentences with different tokens and words. This phenomenon accords with previous work on GANs (Wang and Wan, 2018).
[0, 1, 1, 1, 2, 2]
['3.5 Results .', 'Table 2 shows the results of human evaluation.', 'We find that: 1) Pun-GAN achieves the best ambiguity score.', '2) Compared with CLM+JD, which is actually the same as our pre-trained generator, Pun-GAN has a large improvement in unusualness.', '3) Pun-GAN can generate more diverse sentences with different tokens and words.', 'This phenomenon accords with previous work on GANs (Wang and Wan, 2018).']
[None, None, ['Pun-GAN', 'Ambiguity'], ['Pun-GAN', 'CLM+JD (Yu et al., 2018)', 'Fluency'], ['Pun-GAN'], None]
1
D19-1342table_2
Experiment results on SemEval 2014 dataset.
2
[['Model', 'AT-LSTM'], ['Model', 'ATAE-LSTM'], ['Model', 'GCAE'], ['Model', 'AT-GRU'], ['Model', 'AT-GRU 2-label'], ['Model', 'D-AT-GRU w/o orthogonal'], ['Model', 'D-AT-GRU']]
1
[['Overall'], ['Conflict']]
[['77.13%', '11.54%'], ['78.00%', '23.08%'], ['78.30%', '25.00%'], ['77.22%', '19.23%'], ['77.02%', '26.92%'], ['77.22%', '26.92%'], ['78.50%', '40.38%']]
column
['accuracy', 'accuracy']
['D-AT-GRU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Overall</th> <th>Conflict</th> </tr> </thead> <tbody> <tr> <td>Model || AT-LSTM</td> <td>77.13%</td> <td>11.54%</td> </tr> <tr> <td>Model || ATAE-LSTM</td> <td>78.00%</td> <td>23.08%</td> </tr> <tr> <td>Model || GCAE</td> <td>78.30%</td> <td>25.00%</td> </tr> <tr> <td>Model || AT-GRU</td> <td>77.22%</td> <td>19.23%</td> </tr> <tr> <td>Model || AT-GRU 2-label</td> <td>77.02%</td> <td>26.92%</td> </tr> <tr> <td>Model || D-AT-GRU w/o orthogonal</td> <td>77.22%</td> <td>26.92%</td> </tr> <tr> <td>Model || D-AT-GRU</td> <td>78.50%</td> <td>40.38%</td> </tr> </tbody></table>
Table 2
table_2
D19-1342
4
emnlp2019
Table 2 shows that D-AT-GRU model outperforms all baseline methods. Given that AT-LSTM (Wang et al., 2016) has strong correlation to our base model (AT-GRU), their work can be categorized as a baseline to our model. In contrast, the results prove that the additional components are helpful to recognize conflict opinions. We also compare our model to the recently proposed GCAE (Xue and Li, 2018), which is based on gated CNN. D-AT-GRU performs competitively with GCAE overall and significantly better on conflict category.
[1, 1, 1, 1, 1]
['Table 2 shows that D-AT-GRU model outperforms all baseline methods.', 'Given that AT-LSTM (Wang et al., 2016) has strong correlation to our base model (AT-GRU), their work can be categorized as a baseline to our model.', 'In contrast, the results prove that the additional components are helpful to recognize conflict opinions.', 'We also compare our model to the recently proposed GCAE (Xue and Li, 2018), which is based on gated CNN.', 'D-AT-GRU performs competitively with GCAE overall and significantly better on conflict category.']
[['D-AT-GRU'], ['AT-LSTM', 'AT-GRU'], ['D-AT-GRU', 'Conflict'], ['GCAE'], ['D-AT-GRU', 'GCAE']]
1
D19-1345table_2
text classification datasets. Model with ”*” means that all word vectors are initialized by Glove word embeddings. We run all models 10 times and report mean results.
2
[['Model', 'CNN'], ['Model', 'LSTM'], ['Model', 'Graph-CNN'], ['Model', 'Text-GCN'], ['Model', 'CNN*'], ['Model', 'LSTM*'], ['Model', 'Bi-LSTM*'], ['Model', 'fastText*'], ['Model', 'Text-GCN*'], ['Model', 'Our Model*']]
1
[['R8'], ['R52'], ['Ohsumed']]
[['94.0±0.5', '85.3±0.5', '43.9±1.0'], ['93.7±0.8', '85.6±1.0', '41.1±1.0'], ['97.0±0.2', '92.8±0.2', '63.9±0.5'], ['97.1±0.1', '93.6±0.2', '68.4±0.6'], ['95.7±0.5', '87.6±0.5', '58.4±1.0'], ['96.1±0.2', '90.5±0.8', '51.1±1.5'], ['96.3±0.3', '90.5±0.9', '49.3±1.0'], ['96.1±0.2', '92.8±0.1', '57.7±0.5'], ['97.0±0.1', '93.7±0.1', '67.7±0.3'], ['97.8±0.2', '94.6±0.3', '69.4±0.6']]
column
['accuracy', 'accuracy', 'accuracy']
['Our Model*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R8</th> <th>R52</th> <th>Ohsumed</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>94.0±0.5</td> <td>85.3±0.5</td> <td>43.9±1.0</td> </tr> <tr> <td>Model || LSTM</td> <td>93.7±0.8</td> <td>85.6±1.0</td> <td>41.1±1.0</td> </tr> <tr> <td>Model || Graph-CNN</td> <td>97.0±0.2</td> <td>92.8±0.2</td> <td>63.9±0.5</td> </tr> <tr> <td>Model || Text-GCN</td> <td>97.1±0.1</td> <td>93.6±0.2</td> <td>68.4±0.6</td> </tr> <tr> <td>Model || CNN*</td> <td>95.7±0.5</td> <td>87.6±0.5</td> <td>58.4±1.0</td> </tr> <tr> <td>Model || LSTM*</td> <td>96.1±0.2</td> <td>90.5±0.8</td> <td>51.1±1.5</td> </tr> <tr> <td>Model || Bi-LSTM*</td> <td>96.3±0.3</td> <td>90.5±0.9</td> <td>49.3±1.0</td> </tr> <tr> <td>Model || fastText*</td> <td>96.1±0.2</td> <td>92.8±0.1</td> <td>57.7±0.5</td> </tr> <tr> <td>Model || Text-GCN*</td> <td>97.0±0.1</td> <td>93.7±0.1</td> <td>67.7±0.3</td> </tr> <tr> <td>Model || Our Model*</td> <td>97.8±0.2</td> <td>94.6±0.3</td> <td>69.4±0.6</td> </tr> </tbody></table>
Table 2
table_2
D19-1345
4
emnlp2019
3.3 Experimental Results. Table 2 reports the results of our models against other baseline methods. We can see that our model can achieve the state-of-the-art result. We note that the results of graph-based models are better than traditional models like CNN, LSTM, and fastText. That is likely due to the characteristics of the graph structure. Graph structure allows a different number of neighbor nodes to exist, which enables word nodes to learn more accurate representations through different collocations. Besides, the relationship between words can be recorded in the edge weights and shared globally. These are all impossible for traditional models.
[2, 1, 1, 1, 2, 2, 2]
['3.3 Experimental Results.', 'Table 2 reports the results of our models against other baseline methods.', 'We can see that our model can achieve the state-of-the-art result.', 'We note that the results of graph-based models are better than traditional models like CNN, LSTM, and fastText.', 'That is likely due to the characteristics of the graph structure. Graph structure allows a different number of neighbor nodes to exist, which enables word nodes to learn more accurate representations through different collocations.', 'Besides, the relationship between words can be recorded in the edge weights and shared globally.', 'These are all impossible for traditional models.']
[None, ['Our Model*'], ['Our Model*'], ['Graph-CNN', 'CNN*', 'LSTM*', 'fastText*'], None, None, None]
1
D19-1350table_1
Perplexity and topic coherence results of difference models. ‘frequency-based vocab.’ denotes that the vocabulary is constructed by filtering out rare words while ‘RL-based vocab.’ denotes that the vocabulary is dynamically generated by our model using RL.
3
[['#Topics = 30, frequency-based vocab.', 'Methods', 'LDA'], ['#Topics = 30, frequency-based vocab.', 'Methods', 'NVDM'], ['#Topics = 30, frequency-based vocab.', 'Methods', 'NGTM'], ['#Topics = 30, frequency-based vocab.', 'Methods', 'Scholar'], ['#Topics = 30, RL-based vocab.', 'Methods', 'LDA'], ['#Topics = 30, RL-based vocab.', 'Methods', 'NVDM'], ['#Topics = 30, RL-based vocab.', 'Methods', 'NGTM'], ['#Topics = 30, RL-based vocab.', 'Methods', 'Scholar'], ['#Topics = 30, RL-based vocab.', 'Methods', 'VTMRL'], ['#Topics = 50, frequency-based vocab.', 'Methods', 'LDA'], ['#Topics = 50, frequency-based vocab.', 'Methods', 'NVDM'], ['#Topics = 50, frequency-based vocab.', 'Methods', 'NGTM'], ['#Topics = 50, frequency-based vocab.', 'Methods', 'Scholar'], ['#Topics = 50, RL-based vocab.', 'Methods', 'LDA'], ['#Topics = 50, RL-based vocab.', 'Methods', 'NVDM'], ['#Topics = 50, RL-based vocab.', 'Methods', 'NGTM'], ['#Topics = 50, RL-based vocab.', 'Methods', 'Scholar'], ['#Topics = 50, RL-based vocab.', 'Methods', 'VTMRL']]
2
[['20News', 'PPL'], [' 20News', ' Cv'], ['NIPS', 'PPL'], [' NIPS', 'Cv']]
[['1,213.1', '0.503', '1,042.7', '0.507'], ['980.8', '0.497', '931.6', '0.492'], ['929.3', '0.479', '938.9', '0.503'], ['1,345.9', '0.537', '1,350.9', '0.512'], ['1,451.7', '0.522', '1,093.1', '0.534'], ['845.8', '0.510', '768.7', '0.509'], ['791.5', '0.517', '757.2', '0.527'], ['1,158.4', '0.560', '1,273.6', '0.548'], ['803.7', '0.577', '730.6', '0.568'], ['1,015.9', '0.501', '995.5', '0.503'], ['1,014.0', '0.471', '927.6', '0.506'], ['903.5', '0.491', '908.8', '0.498'], ['1,514.5', '0.521', '1,373.2', '0.508'], ['1,251.6', '0.518', '921.4', '0.527'], ['837.9', '0.502', '767.0', '0.514'], ['772.2', '0.514', '749.7', '0.511'], ['1,335.9', '0.526', '1,299.8', '0.530'], ['725.2', '0.559', '712.2', '0.566']]
column
['PPL', 'Cv', 'PPL', 'Cv']
['VTMRL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>20News || PPL</th> <th>20News || Cv</th> <th>NIPS || PPL</th> <th>NIPS || Cv</th> </tr> </thead> <tbody> <tr> <td>#Topics = 30, frequency-based vocab. || Methods || LDA</td> <td>1,213.1</td> <td>0.503</td> <td>1,042.7</td> <td>0.507</td> </tr> <tr> <td>#Topics = 30, frequency-based vocab. || Methods || NVDM</td> <td>980.8</td> <td>0.497</td> <td>931.6</td> <td>0.492</td> </tr> <tr> <td>#Topics = 30, frequency-based vocab. || Methods || NGTM</td> <td>929.3</td> <td>0.479</td> <td>938.9</td> <td>0.503</td> </tr> <tr> <td>#Topics = 30, frequency-based vocab. || Methods || Scholar</td> <td>1,345.9</td> <td>0.537</td> <td>1,350.9</td> <td>0.512</td> </tr> <tr> <td>#Topics = 30, RL-based vocab. || Methods || LDA</td> <td>1,451.7</td> <td>0.522</td> <td>1,093.1</td> <td>0.534</td> </tr> <tr> <td>#Topics = 30, RL-based vocab. || Methods || NVDM</td> <td>845.8</td> <td>0.510</td> <td>768.7</td> <td>0.509</td> </tr> <tr> <td>#Topics = 30, RL-based vocab. || Methods || NGTM</td> <td>791.5</td> <td>0.517</td> <td>757.2</td> <td>0.527</td> </tr> <tr> <td>#Topics = 30, RL-based vocab. || Methods || Scholar</td> <td>1,158.4</td> <td>0.560</td> <td>1,273.6</td> <td>0.548</td> </tr> <tr> <td>#Topics = 30, RL-based vocab. || Methods || VTMRL</td> <td>803.7</td> <td>0.577</td> <td>730.6</td> <td>0.568</td> </tr> <tr> <td>#Topics = 50, frequency-based vocab. || Methods || LDA</td> <td>1,015.9</td> <td>0.501</td> <td>995.5</td> <td>0.503</td> </tr> <tr> <td>#Topics = 50, frequency-based vocab. || Methods || NVDM</td> <td>1,014.0</td> <td>0.471</td> <td>927.6</td> <td>0.506</td> </tr> <tr> <td>#Topics = 50, frequency-based vocab. || Methods || NGTM</td> <td>903.5</td> <td>0.491</td> <td>908.8</td> <td>0.498</td> </tr> <tr> <td>#Topics = 50, frequency-based vocab. || Methods || Scholar</td> <td>1,514.5</td> <td>0.521</td> <td>1,373.2</td> <td>0.508</td> </tr> <tr> <td>#Topics = 50, RL-based vocab. 
|| Methods || LDA</td> <td>1,251.6</td> <td>0.518</td> <td>921.4</td> <td>0.527</td> </tr> <tr> <td>#Topics = 50, RL-based vocab. || Methods || NVDM</td> <td>837.9</td> <td>0.502</td> <td>767.0</td> <td>0.514</td> </tr> <tr> <td>#Topics = 50, RL-based vocab. || Methods || NGTM</td> <td>772.2</td> <td>0.514</td> <td>749.7</td> <td>0.511</td> </tr> <tr> <td>#Topics = 50, RL-based vocab. || Methods || Scholar</td> <td>1,335.9</td> <td>0.526</td> <td>1,299.8</td> <td>0.530</td> </tr> <tr> <td>#Topics = 50, RL-based vocab. || Methods || VTMRL</td> <td>725.2</td> <td>0.559</td> <td>712.2</td> <td>0.566</td> </tr> </tbody></table>
Table 1
table_1
D19-1350
4
emnlp2019
In our experiments, the models are evaluated based on the perplexity (PPL, lower is better) and topic coherence measure (Cv) based on external corpus (Röder et al., 2015) (higher is better). The results with 30 and 50 topics are shown in Table 1. LDA is a conventional topic model, while all the other models are neural topic models. It can be observed from Table 1 that NVDM and NGTM achieve better perplexities compared to LDA. However, in terms of topic coherence measure, NVDM and NGTM perform slightly worse than LDA. A similar observation has been reported in (Card et al., 2017). Scholar achieves better coherence compared to other neural models. Nevertheless, after using reinforcement learning based on the topic coherence scores in our proposed model, VTMRL outperforms all the other models on the topic coherence measure by a large margin. RL could activate words which are semantically related to topics regardless of their occurrence frequency. The inclusion of some rare words would impact the models’ predictive probabilities. As such, we observe worse perplexity results for models trained with RL-based vocabulary compared to frequency-based vocabulary in 20 Newsgroups, though the converse is true for NIPS. Nevertheless, the coherence scores improve for all the models with RL-based vocabulary.
[2, 1, 2, 1, 1, 1, 1, 1, 2, 2, 1, 1]
['In our experiments, the models are evaluated based on the perplexity (PPL, lower is better) and topic coherence measure (Cv) based on external corpus (Röder et al., 2015) (higher is better).', 'The results with 30 and 50 topics are shown in Table 1.', 'LDA is a conventional topic model, while all the other models are neural topic models.', 'It can be observed from Table 1 that NVDM and NGTM achieve better perplexities compared to LDA.', 'However, in terms of topic coherence measure, NVDM and NGTM perform slightly worse than LDA.', 'A similar observation has been reported in (Card et al., 2017).', 'Scholar achieves better coherence compared to other neural models.', 'Nevertheless, after using reinforcement learning based on the topic coherence scores in our proposed model, VTMRL outperforms all the other models on the topic coherence measure by a large margin.', 'RL could activate words which are semantically related to topics regardless of their occurrence frequency.', 'The inclusion of some rare words would impact the models’ predictive probabilities.', 'As such, we observe worse perplexity results for models trained with RL-based vocabulary compared to frequency-based vocabulary in 20 Newsgroups, though the converse is true for NIPS.', 'Nevertheless, the coherence scores improve for all the models with RL-based vocabulary.']
[['PPL', ' Cv'], ['#Topics = 30, frequency-based vocab.', '#Topics = 30, RL-based vocab.', '#Topics = 50, frequency-based vocab.', '#Topics = 50, RL-based vocab.'], ['LDA'], ['NVDM', 'NGTM', 'PPL', 'LDA'], [' Cv', 'NVDM', 'NGTM', 'LDA'], None, ['Scholar', 'NVDM', 'NGTM'], ['VTMRL'], ['VTMRL'], None, ['PPL', '#Topics = 30, RL-based vocab.', '#Topics = 50, RL-based vocab.', '#Topics = 30, frequency-based vocab.', '#Topics = 50, frequency-based vocab.', '20News', 'NIPS'], ['#Topics = 30, RL-based vocab.', '#Topics = 50, RL-based vocab.', ' Cv']]
1
D19-1359table_3
F1 scores (%) of UKB+SyntagNet against the best supervised systems for English all-words WSD. Reported systems: • Yuan et al. (2016), ∞ Melacci et al. (2018), △ Uslu et al. (2018). Statistically significant differences against our results are underlined according to a χ2 test, p < 0.01.
2
[['System', 'LSTMLP'], ['System', 'IMSC2V+PR'], ['System', 'fastSense'], ['System', 'UKB+SyntagNet']]
1
[['Sens2'], ['Sens3'], ['Sem07'], ['Sem13'], ['Sem15'], ['All']]
[['73.8', '71.8', '63.5', '69.5', '72.6', '71.5'], ['73.8', '71.9', '63.3', '68.2', '72.8', '71.2'], ['73.5', '73.5', '62.4', '66.2', '73.2', '71.1'], ['71.2', '71.6', '59.6', '72.4', '75.6', '71.5']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['UKB+SyntagNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sens2</th> <th>Sens3</th> <th>Sem07</th> <th>Sem13</th> <th>Sem15</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>System || LSTMLP</td> <td>73.8</td> <td>71.8</td> <td>63.5</td> <td>69.5</td> <td>72.6</td> <td>71.5</td> </tr> <tr> <td>System || IMSC2V+PR</td> <td>73.8</td> <td>71.9</td> <td>63.3</td> <td>68.2</td> <td>72.8</td> <td>71.2</td> </tr> <tr> <td>System || fastSense</td> <td>73.5</td> <td>73.5</td> <td>62.4</td> <td>66.2</td> <td>73.2</td> <td>71.1</td> </tr> <tr> <td>System || UKB+SyntagNet</td> <td>71.2</td> <td>71.6</td> <td>59.6</td> <td>72.4</td> <td>75.6</td> <td>71.5</td> </tr> </tbody></table>
Table 3
table_3
D19-1359
5
emnlp2019
Table 3 compares UKB + SyntagNet against the best supervised English WSD systems (Yuan et al., 2016; Melacci et al., 2018; Uslu et al., 2018). None of the differences across datasets between the best performing supervised system and SyntagNet is statistically significant according to chi-square test (p < 0.01), meaning that SyntagNet enables knowledge-based WSD to rival current supervised approaches.
[1, 1]
['Table 3 compares UKB + SyntagNet against the best supervised English WSD systems (Yuan et al., 2016; Melacci et al., 2018; Uslu et al., 2018).', 'None of the differences across datasets between the best performing supervised system and SyntagNet is statistically significant according to chi-square test (p < 0.01), meaning that SyntagNet enables knowledge-based WSD to rival current supervised approaches.']
[['UKB+SyntagNet'], ['UKB+SyntagNet']]
1
D19-1368table_2
Performance on different datasets against baselines, where h@k denotes hits at k. Results are reported on test sets with the best parameters found in grid search for each model.
2
[['FB15K-237', 'SimplE'], ['FB15K-237', 'DistMult'], ['FB15K-237', 'ComplEx'], ['FB15K-237', 'JoBi SimplE'], ['FB15K-237', 'JoBi DistMult'], ['FB15K-237', 'JoBi ComplEx'], ['FB15K', 'DistMult'], ['FB15K', 'ComplEx'], ['FB15K', 'JoBi ComplEx'], ['YAGO3-10', 'DistMult'], ['YAGO3-10', 'ComplEx'], ['YAGO3-10', 'JoBi ComplEx']]
1
[['h@1'], ['h@3'], ['h@10'], ['MRR']]
[['0.160', '0.268', '0.43', '0.248'], ['0.158', '0.271', '0.432', '0.247'], ['0.159', '0.275', '0.441', '0.25'], ['0.188', '0.301', '0.461', '0.277'], ['0.205', '0.316', '0.466', '0.29'], ['0.199', '0.319', '0.479', '0.29'], ['0.587', '0.785', '0.867', '0.697'], ['0.617', '0.803', '0.874', '0.72'], ['0.681', '0.824', '0.883', '0.761'], ['0.252', '0.407', '0.568', '0.357'], ['0.277', '0.44', '0.589', '0.383'], ['0.333', '0.477', '0.617', '0.428']]
column
['h@1', 'h@3', 'h@10', 'MRR']
['JoBi ComplEx']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>h@1</th> <th>h@3</th> <th>h@10</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>FB15K-237 || SimplE</td> <td>0.160</td> <td>0.268</td> <td>0.43</td> <td>0.248</td> </tr> <tr> <td>FB15K-237 || DistMult</td> <td>0.158</td> <td>0.271</td> <td>0.432</td> <td>0.247</td> </tr> <tr> <td>FB15K-237 || ComplEx</td> <td>0.159</td> <td>0.275</td> <td>0.441</td> <td>0.25</td> </tr> <tr> <td>FB15K-237 || JoBi SimplE</td> <td>0.188</td> <td>0.301</td> <td>0.461</td> <td>0.277</td> </tr> <tr> <td>FB15K-237 || JoBi DistMult</td> <td>0.205</td> <td>0.316</td> <td>0.466</td> <td>0.29</td> </tr> <tr> <td>FB15K-237 || JoBi ComplEx</td> <td>0.199</td> <td>0.319</td> <td>0.479</td> <td>0.29</td> </tr> <tr> <td>FB15K || DistMult</td> <td>0.587</td> <td>0.785</td> <td>0.867</td> <td>0.697</td> </tr> <tr> <td>FB15K || ComplEx</td> <td>0.617</td> <td>0.803</td> <td>0.874</td> <td>0.72</td> </tr> <tr> <td>FB15K || JoBi ComplEx</td> <td>0.681</td> <td>0.824</td> <td>0.883</td> <td>0.761</td> </tr> <tr> <td>YAGO3-10 || DistMult</td> <td>0.252</td> <td>0.407</td> <td>0.568</td> <td>0.357</td> </tr> <tr> <td>YAGO3-10 || ComplEx</td> <td>0.277</td> <td>0.44</td> <td>0.589</td> <td>0.383</td> </tr> <tr> <td>YAGO3-10 || JoBi ComplEx</td> <td>0.333</td> <td>0.477</td> <td>0.617</td> <td>0.428</td> </tr> </tbody></table>
Table 2
table_2
D19-1368
4
emnlp2019
Discussion. It could be seen in Table 2 that JoBi ComplEx outperforms both ComplEx and Dist-Mult on all three standard datasets, on all the metrics we consider. For Hits@1, JoBi Complex out-performs baseline ComplEx by 4% on FB15K-237, 6.4% on FB15K and 5.6% on YAGO3-10. Moreover, results in Table 2 demonstrate that JoBi improves performance on DistMult and SimplE. It should be noted that on FB15K-237, all JoBi models outperform all the baseline models, regardless of the base model used.
[2, 1, 1, 1, 1]
['Discussion.', 'It could be seen in Table 2 that JoBi ComplEx outperforms both ComplEx and Dist-Mult on all three standard datasets, on all the metrics we consider.', 'For Hits@1, JoBi Complex out-performs baseline ComplEx by 4% on FB15K-237, 6.4% on FB15K and 5.6% on YAGO3-10.', 'Moreover, results in Table 2 demonstrate that JoBi improves performance on DistMult and SimplE.', 'It should be noted that on FB15K-237, all JoBi models outperform all the baseline models, regardless of the base model used.']
[None, ['JoBi ComplEx', 'ComplEx', 'DistMult'], ['h@1', 'JoBi ComplEx', 'ComplEx', 'FB15K-237', 'FB15K', 'YAGO3-10'], ['JoBi SimplE', 'JoBi DistMult', 'JoBi ComplEx', 'DistMult', 'SimplE'], ['FB15K-237']]
1
D19-1368table_6
Results of ablation study on ComplEx model.
1
[['Baseline'], ['BiasedNeg'], ['Joint'], ['JoBi']]
1
[['h@1'], ['h@3'], ['h@10'], ['MRR']]
[['0.277', '0.44', '0.589', '0.383'], ['0.276', '0.427', '0.568', '0.375'], ['0.287', '0.447', '0.601', '0.392'], ['0.333', '0.477', '0.617', '0.428']]
column
['h@1', 'h@3', 'h@10', 'MRR']
['BiasedNeg', 'Joint', 'JoBi']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>h@1</th> <th>h@3</th> <th>h@10</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>0.277</td> <td>0.44</td> <td>0.589</td> <td>0.383</td> </tr> <tr> <td>BiasedNeg</td> <td>0.276</td> <td>0.427</td> <td>0.568</td> <td>0.375</td> </tr> <tr> <td>Joint</td> <td>0.287</td> <td>0.447</td> <td>0.601</td> <td>0.392</td> </tr> <tr> <td>JoBi</td> <td>0.333</td> <td>0.477</td> <td>0.617</td> <td>0.428</td> </tr> </tbody></table>
Table 6
table_6
D19-1368
5
emnlp2019
In Table 6 it can be seen that Joint on its own gives a slight performance boost over the baseline, and BiasedNeg performs slightly under the baseline on all measures. However, combining our two techniques in JoBi gives 5.6% points improvement on hits@1. This suggests that biased negative sampling increases the efficacy of joint training greatly, but is not very effective on its own.
[1, 1, 2]
['In Table 6 it can be seen that Joint on its own gives a slight performance boost over the baseline, and BiasedNeg performs slightly under the baseline on all measures.', 'However, combining our two techniques in JoBi gives 5.6% points improvement on hits@1.', 'This suggests that biased negative sampling increases the efficacy of joint training greatly, but is not very effective on its own.']
[['Joint', 'Baseline', 'BiasedNeg'], ['JoBi', 'h@1'], None]
1
D19-1370table_2
Results on PTB test with encoder pretraining.
2
[['Method', 'AE'], ['Method', 'VAE'], ['Method', '+ pretrain'], ['Method', '+ pretrain + anneal']]
1
[['PPL#'], ['Recon#'], ['AU'], ['KL'], ['-ELBO']]
[['-', '70.36', '32', '-', '-'], ['101.39', '101.27', '0', '0.00', '101.27'], ['102.26', '101.46', '0', '0.00', '101.46'], ['97.74', '99.67', '2', '1.01', '100.68']]
column
['PPL#', 'Recon#', 'AU', 'KL', '-ELBO']
['+ pretrain + anneal']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL#</th> <th>Recon#</th> <th>AU</th> <th>KL</th> <th>-ELBO</th> </tr> </thead> <tbody> <tr> <td>Method || AE</td> <td>-</td> <td>70.36</td> <td>32</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || VAE</td> <td>101.39</td> <td>101.27</td> <td>0</td> <td>0.00</td> <td>101.27</td> </tr> <tr> <td>Method || + pretrain</td> <td>102.26</td> <td>101.46</td> <td>0</td> <td>0.00</td> <td>101.46</td> </tr> <tr> <td>Method || + pretrain + anneal</td> <td>97.74</td> <td>99.67</td> <td>2</td> <td>1.01</td> <td>100.68</td> </tr> </tbody></table>
Table 2
table_2
D19-1370
3
emnlp2019
2.3 Autoencoder-based Initialization . Based on the observations above we hypothesize that VAEs might benefit from initialization with a non-collapsed encoder, trained via an AE objective. Intuitively, if the encoder is providing useful information from the beginning of training, the decoder is more likely to make use of the latent code. In Table 2 we show the results of exploring this hypothesis on PTB. Even with encoder pretraining, we see that posterior collapse occurs immediately after beginning to update both encoder and decoder using the full ELBO objective. This indicates that the gradients of ELBO point towards a collapsed local optimum, even with biased initialization. When pretraining is combined with annealing, PPL improves substantially. However, the pretraining and anneal combination only has 2 active units and has small KL value – the latent representation is likely unsatisfactory. We speculate that this is because the annealing schedule eventually returns to the full ELBO objective which guides learning towards a (nearly) collapsed latent space. In the next section, we present an alternate approach using the KL thresholding / free bits method.
[2, 2, 2, 1, 1, 2, 1, 1, 2, 2]
['2.3 Autoencoder-based Initialization .', 'Based on the observations above we hypothesize that VAEs might benefit from initialization with a non-collapsed encoder, trained via an AE objective.', 'Intuitively, if the encoder is providing useful information from the beginning of training, the decoder is more likely to make use of the latent code.', 'In Table 2 we show the results of exploring this hypothesis on PTB.', 'Even with encoder pretraining, we see that posterior collapse occurs immediately after beginning to update both encoder and decoder using the full ELBO objective.', 'This indicates that the gradients of ELBO point towards a collapsed local optimum, even with biased initialization.', 'When pretraining is combined with annealing, PPL improves substantially.', 'However, the pretraining and anneal combination only has 2 active units and has small KL value – the latent representation is likely unsatisfactory.', 'We speculate that this is because the annealing schedule eventually returns to the full ELBO objective which guides learning towards a (nearly) collapsed latent space.', 'In the next section, we present an alternate approach using the KL thresholding / free bits method.']
[None, None, None, None, ['VAE', '+ pretrain', '-ELBO'], ['-ELBO'], ['+ pretrain + anneal', 'PPL#'], ['+ pretrain + anneal', 'AU', 'KL'], ['+ pretrain + anneal', '-ELBO'], ['KL']]
1
D19-1372table_3
Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.
2
[['Methods', 'Word2Vec+HCF'], ['Methods', 'CNN'], ['Methods', 'CNN+F'], ['Methods', 'CNN+HN'], ['Methods', 'CNN+F+HN'], ['Methods', 'Transformer']]
1
[['Accuracy'], ['Precision'], ['Recall'], ['F1']]
[['0.797', '0.776', '0.836', '0.705'], ['0.867', '0.88', '0.859', '0.869'], ['0.892', '0.886', '0.907', '0.896'], ['0.892', '0.889', '0.903', '0.896'], ['0.894', '0.866', '0.94', '0.901'], ['0.93', '0.93', '0.931', '0.931']]
column
['Accuracy', 'Precision', 'Recall', 'F1']
['Transformer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Word2Vec+HCF</td> <td>0.797</td> <td>0.776</td> <td>0.836</td> <td>0.705</td> </tr> <tr> <td>Methods || CNN</td> <td>0.867</td> <td>0.88</td> <td>0.859</td> <td>0.869</td> </tr> <tr> <td>Methods || CNN+F</td> <td>0.892</td> <td>0.886</td> <td>0.907</td> <td>0.896</td> </tr> <tr> <td>Methods || CNN+HN</td> <td>0.892</td> <td>0.889</td> <td>0.903</td> <td>0.896</td> </tr> <tr> <td>Methods || CNN+F+HN</td> <td>0.894</td> <td>0.866</td> <td>0.94</td> <td>0.901</td> </tr> <tr> <td>Methods || Transformer</td> <td>0.93</td> <td>0.93</td> <td>0.931</td> <td>0.931</td> </tr> </tbody></table>
Table 3
table_3
D19-1372
4
emnlp2019
The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.
[1, 1, 1]
['The results on the Pun of the Day dataset are shown in Table 3 above.', 'It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed.', 'Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.']
[None, ['CNN+F+HN', 'Transformer'], ['CNN']]
1
D19-1374table_1
Macro-averaged F1 comparison of per-language models and multilingual models over 48 languages. For non-multilingual models, F1 is the average over each per-language model trained.
4
[['Model', 'Meta-LSTM', 'Multilingual?', 'No'], ['Model', 'BERT', 'Multilingual?', 'No'], ['Model', 'Meta-LSTM', 'Multilingual?', 'Yes'], ['Model', 'BERT', 'Multilingual?', 'Yes']]
1
[['Part-of-Speech F1'], [' Morphology F1']]
[['94.5', '92.5'], ['95.1', '93'], ['91.1', '82.9'], ['94.5', '91']]
column
['Part-of-Speech F1', 'Morphology F1']
['BERT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Part-of-Speech F1</th> <th>Morphology F1</th> </tr> </thead> <tbody> <tr> <td>Model || Meta-LSTM || Multilingual? || No</td> <td>94.5</td> <td>92.5</td> </tr> <tr> <td>Model || BERT || Multilingual? || No</td> <td>95.1</td> <td>93</td> </tr> <tr> <td>Model || Meta-LSTM || Multilingual? || Yes</td> <td>91.1</td> <td>82.9</td> </tr> <tr> <td>Model || BERT || Multilingual? || Yes</td> <td>94.5</td> <td>91</td> </tr> </tbody></table>
Table 1
table_1
D19-1374
4
emnlp2019
The results in Table 1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings. However, the number of parameters of the BERT model is very large (179M parameters), making deployment memory-intensive and inference slow: 230ms on an Intel Xeon CPU. Our goal is to produce a model fast enough to run on a single CPU while maintaining the modeling capability of the large model on our tasks.
[1, 1, 2]
['The results in Table 1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings.', 'However, the number of parameters of the BERT model is very large (179M parameters), making deployment memory-intensive and inference slow: 230ms on an Intel Xeon CPU.', 'Our goal is to produce a model fast enough to run on a single CPU while maintaining the modeling capability of the large model on our tasks.']
[['BERT', 'Meta-LSTM'], ['BERT'], None]
1
D19-1376table_3
Unlabeled unsupervised parsing F1 on WSJ40. ‡ trains on the training split of WSJ, while † trains on AllNLI (Htut et al., 2018). The PRPN result is taken from Drozdov et al. (2019).
2
[['Model', 'Right Branching'], ['Model', 'yDIORA'], ['Model', 'zPRPN'], ['Model', 'zPaLM-U']]
1
[['Unlabeled F1']]
[['40.7'], ['60.6'], ['52.4'], ['42.0']]
column
['Unlabeled F1']
['zPaLM-U']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Unlabeled F1</th> </tr> </thead> <tbody> <tr> <td>Model || Right Branching</td> <td>40.7</td> </tr> <tr> <td>Model || yDIORA</td> <td>60.6</td> </tr> <tr> <td>Model || zPRPN</td> <td>52.4</td> </tr> <tr> <td>Model || zPaLM-U</td> <td>42.0</td> </tr> </tbody></table>
Table 3
table_3
D19-1376
5
emnlp2019
In addition to PRPN, we compare to DIORA (Drozdov et al., 2019), which uses an inside-outside dynamic program in an autoencoder. Table 3 shows the F1 results. PaLM outperforms the right branching baseline, but is not as accurate as the other models. This indicates that the type of syntactic trees learned by it, albeit useful to the LM component, do not correspond well to PTB-like syntactic trees.
[0, 1, 1, 2]
['In addition to PRPN, we compare to DIORA (Drozdov et al., 2019), which uses an inside-outside dynamic program in an autoencoder.', 'Table 3 shows the F1 results.', 'PaLM outperforms the right branching baseline, but is not as accurate as the other models.', 'This indicates that the type of syntactic trees learned by it, albeit useful to the LM component, do not correspond well to PTB-like syntactic trees.']
[None, ['Unlabeled F1'], ['zPaLM-U', 'Right Branching'], None]
1
D19-1379table_3
Performance comparison between LM finetuning on target domain unlabeled data of the same size as each test set, “Controlled Unlabeled data (CU),” and transductive LM fine-tuning on each test set (T). Cells show the F1 scores averaged across the target domains.
1
[['BC'], ['BN'], ['MZ'], ['NW'], ['PT'], ['TC'], ['WB']]
2
[['Syntactic chunking', 'CU'], ['Syntactic chunking', 'T'], ['Semantic role labeling', 'CU'], ['Semantic role labeling', 'T']]
[['90.4', '90.8', '78.6', '79.3'], ['91.1', '91.6', '79.8', '80.4'], ['90', '90.4', '77.9', '78.5'], ['92.1', '92.3', '81.1', '81.7'], ['87.1', '87.3', '73.5', '74'], ['87.1', '87.6', '71.3', '71.6'], ['91.8', '92', '76.6', '77.1']]
column
['F1', 'F1', 'F1', 'F1']
['T']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntactic chunking || CU</th> <th>Syntactic chunking || T</th> <th>Semantic role labeling || CU</th> <th>Semantic role labeling || T</th> </tr> </thead> <tbody> <tr> <td>BC</td> <td>90.4</td> <td>90.8</td> <td>78.6</td> <td>79.3</td> </tr> <tr> <td>BN</td> <td>91.1</td> <td>91.6</td> <td>79.8</td> <td>80.4</td> </tr> <tr> <td>MZ</td> <td>90</td> <td>90.4</td> <td>77.9</td> <td>78.5</td> </tr> <tr> <td>NW</td> <td>92.1</td> <td>92.3</td> <td>81.1</td> <td>81.7</td> </tr> <tr> <td>PT</td> <td>87.1</td> <td>87.3</td> <td>73.5</td> <td>74</td> </tr> <tr> <td>TC</td> <td>87.1</td> <td>87.6</td> <td>71.3</td> <td>71.6</td> </tr> <tr> <td>WB</td> <td>91.8</td> <td>92</td> <td>76.6</td> <td>77.1</td> </tr> </tbody></table>
Table 3
table_3
D19-1379
4
emnlp2019
Comparison between unsupervised domain adaptation and transduction. In unsupervised domain adaptation, target domain unlabeled data (the texts whose domain is the same as that of a test set) is used for adaptation. Although the domain is identical between target domain data and a test set, their word distributions are somewhat different. In transductive learning, because an unlabeled test set can be used for training, it is possible to adapt LMs directly to the word distributions of the test set. Here, we investigate whether adapting LMs directly to each test set is more effective than adapting LMs to each target domain unlabeled data. Similarly to our transductive method shown in Figure 1, we first train LMs on the largescale unlabeled corpus (the 1B word benchmark corpus) and then fine-tune them on the unlabeled target domain data8. In addition, we control the sizes of the target domain unlabeled data and test sets. That is, we use the same number of sentences in the unlabeled data of each target domain as in each test set. Table 3 shows the F1 scores averaged across all the target domains. The transductive models (T) consistently outperformed the domain-adapted models (CU). This demonstrates that adapting LMs directly to test sets is more effective than adapting them to target domain unlabeled data.
[2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1]
['Comparison between unsupervised domain adaptation and transduction.', 'In unsupervised domain adaptation, target domain unlabeled data (the texts whose domain is the same as that of a test set) is used for adaptation.', 'Although the domain is identical between target domain data and a test set, their word distributions are somewhat different.', 'In transductive learning, because an unlabeled test set can be used for training, it is possible to adapt LMs directly to the word distributions of the test set.', ' Here, we investigate whether adapting LMs directly to each test set is more effective than adapting LMs to each target domain unlabeled data.', 'Similarly to our transductive method shown in Figure 1, we first train LMs on the largescale unlabeled corpus (the 1B word benchmark corpus) and then fine-tune them on the unlabeled target domain data8.', 'In addition, we control the sizes of the target domain unlabeled data and test sets.', 'That is, we use the same number of sentences in the unlabeled data of each target domain as in each test set.', 'Table 3 shows the F1 scores averaged across all the target domains.', 'The transductive models (T) consistently outperformed the domain-adapted models (CU).', 'This demonstrates that adapting LMs directly to test sets is more effective than adapting them to target domain unlabeled data.']
[None, None, None, None, None, None, None, None, None, ['T', 'CU'], ['T', 'CU']]
1
D19-1379table_4
Performance comparison between LM finetuning on target domain unlabeled data (U) and on the combination of the unlabeled data and test sets (U + T). Cells show the F1 scores averaged across the target domains.
1
[['BC'], ['BN'], ['MZ'], ['NW'], ['PT'], ['TC'], ['WB']]
2
[['Syntactic chunking', 'U'], ['Syntactic chunking', 'U + T'], ['Semantic role labeling', 'U'], ['Semantic role labeling', 'U + T']]
[['90.5', '91.0', '79.0', '79.4'], ['91.3', '91.6', '80.1', '80.6'], ['90.2', '90.6', '78.3', '78.7'], ['92.1', '92.5', '81.5', '81.9'], ['87.3', '87.7', '73.6', '74.3'], ['87.2', '87.6', '71.4', '72.0'], ['91.8', '92.2', '76.8', '77.2']]
column
['F1', 'F1', 'F1', 'F1']
['U', 'U + T']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntactic chunking || U</th> <th>Syntactic chunking || U + T</th> <th>Semantic role labeling || U</th> <th>Semantic role labeling || U + T</th> </tr> </thead> <tbody> <tr> <td>BC</td> <td>90.5</td> <td>91.0</td> <td>79.0</td> <td>79.4</td> </tr> <tr> <td>BN</td> <td>91.3</td> <td>91.6</td> <td>80.1</td> <td>80.6</td> </tr> <tr> <td>MZ</td> <td>90.2</td> <td>90.6</td> <td>78.3</td> <td>78.7</td> </tr> <tr> <td>NW</td> <td>92.1</td> <td>92.5</td> <td>81.5</td> <td>81.9</td> </tr> <tr> <td>PT</td> <td>87.3</td> <td>87.7</td> <td>73.6</td> <td>74.3</td> </tr> <tr> <td>TC</td> <td>87.2</td> <td>87.6</td> <td>71.4</td> <td>72.0</td> </tr> <tr> <td>WB</td> <td>91.8</td> <td>92.2</td> <td>76.8</td> <td>77.2</td> </tr> </tbody></table>
Table 4
table_4
D19-1379
5
emnlp2019
Combination of unsupervised domain adaptation and transduction. In real-world situations, large-scale unlabeled data of target domains is sometimes available. In such cases, LMs can be trained on both the target domain unlabeled data and the test sets. Here, we investigate the effectiveness of using both datasets. Table 4 shows the F1 scores averaged across all the target domains. Fine-tuning the LMs on the target domain unlabeled data as well as each test set (U + T) showed better performance than fine-tuning them only on the target domain unlabeled data (U). This combination of transduction with unsupervised domain adaptation further improves performance.
[2, 2, 2, 2, 1, 1, 2]
['Combination of unsupervised domain adaptation and transduction.', 'In real-world situations, large-scale unlabeled data of target domains is sometimes available.', 'In such cases, LMs can be trained on both the target domain unlabeled data and the test sets.', 'Here, we investigate the effectiveness of using both datasets.', 'Table 4 shows the F1 scores averaged across all the target domains.', 'Fine-tuning the LMs on the target domain unlabeled data as well as each test set (U + T) showed better performance than fine-tuning them only on the target domain unlabeled data (U).', 'This combination of transduction with unsupervised domain adaptation further improves performance.']
[None, None, None, None, None, ['U + T', 'U'], None]
1
D19-1379table_5
Standard benchmark results. Cells show the F1 scores on each test set. The CoNLL-2000 and CoNLL-2005/2012 datasets are used for syntactic chunking and SRL, respectively. Results of the transductive models (TRANS) marked with * are statistically significant compared to the baselines (BASE) using the permutation test (p < 0.05).
2
[['CoNLL', 'BASE'], ['CoNLL', 'TRANS'], ['CoNLL', 'Clark et al. (2018)'], ['CoNLL', 'Peters et al. (2017)'], ['CoNLL', 'Hashimoto et al. (2017)'], ['CoNLL', 'Wang et al. (2019)'], ['CoNLL', 'Li et al. (2019)'], ['CoNLL', 'Ouchi et al. (2018)'], ['CoNLL', 'He et al. (2018)']]
1
[['2000'], ['2005 WSJ'], ['2005 Brown'], ['2012']]
[[' 96.6', ' 87.7', ' 78.3', ' 86.2'], [' 96.7', ' 87.9*', ' 79.5*', ' 86.6*'], [' 97.0', '-', '-', '-'], [' 96.4', '-', '-', '-'], [' 95.8', '-', '-', '-'], ['-', ' 88.2', ' 79.3', ' 86.4'], ['-', ' 87.7', ' 80.5', ' 86.0'], ['-', ' 87.6', ' 78.7', ' 86.2'], ['-', ' 87.4', ' 80.4', ' 85.5']]
column
['F1', 'F1', 'F1', 'F1']
['TRANS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2000</th> <th>2005 WSJ</th> <th>2005 Brown</th> <th>2012</th> </tr> </thead> <tbody> <tr> <td>CoNLL || BASE</td> <td>96.6</td> <td>87.7</td> <td>78.3</td> <td>86.2</td> </tr> <tr> <td>CoNLL || TRANS</td> <td>96.7</td> <td>87.9*</td> <td>79.5*</td> <td>86.6*</td> </tr> <tr> <td>CoNLL || Clark et al. (2018)</td> <td>97.0</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>CoNLL || Peters et al. (2017)</td> <td>96.4</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>CoNLL || Hashimoto et al. (2017)</td> <td>95.8</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>CoNLL || Wang et al. (2019)</td> <td>-</td> <td>88.2</td> <td>79.3</td> <td>86.4</td> </tr> <tr> <td>CoNLL || Li et al. (2019)</td> <td>-</td> <td>87.7</td> <td>80.5</td> <td>86.0</td> </tr> <tr> <td>CoNLL || Ouchi et al. (2018)</td> <td>-</td> <td>87.6</td> <td>78.7</td> <td>86.2</td> </tr> <tr> <td>CoNLL || He et al. (2018)</td> <td>-</td> <td>87.4</td> <td>80.4</td> <td>85.5</td> </tr> </tbody></table>
Table 5
table_5
D19-1379
5
emnlp2019
Effects in standard benchmarks. Some studies indicated that when promising new techniques are only evaluated on very basic models, determining how much (if any) improvement will carry over to stronger models can be difficult (Denkowski and Neubig, 2017; Suzuki et al., 2018). Motivated by such studies, we provide the results in standard benchmark settings. For syntactic chunking, we use the CoNLL-2000 dataset (Sang and Buchholz, 2000) and follow the standard experimental protocol (Hashimoto et al., 2017). For SRL, we use the CoNLL-2005 (Carreras and Màrquez, 2005) and CoNLL-2012 datasets (Pradhan et al., 2012) and follow the standard experimental protocol (Ouchi et al., 2018). Table 5 shows the F1 scores of our models and those of existing models. The results of the baseline model were comparable with those of the state-of-the-art models, and the transductive model consistently outperformed the baseline model. Note that we cannot fairly compare the transductive and existing models due to the difference in settings. These results, however, demonstrate that transductive LM fine-tuning improves state-of-the-art chunking and SRL models.
[0, 0, 0, 2, 2, 1, 1, 2, 1]
['Effects in standard benchmarks.', 'Some studies indicated that when promising new techniques are only evaluated on very basic models, determining how much (if any) improvement will carry over to stronger models can be difficult (Denkowski and Neubig, 2017; Suzuki et al., 2018).', 'Motivated by such studies, we provide the results in standard benchmark settings.', 'For syntactic chunking, we use the CoNLL-2000 dataset (Sang and Buchholz, 2000) and follow the standard experimental protocol (Hashimoto et al., 2017).', 'For SRL, we use the CoNLL-2005 (Carreras and Màrquez, 2005) and CoNLL-2012 datasets (Pradhan et al., 2012) and follow the standard experimental protocol (Ouchi et al., 2018).', 'Table 5 shows the F1 scores of our models and those of existing models.', 'The results of the baseline model were comparable with those of the state-of-the-art models, and the transductive model consistently outperformed the baseline model.', 'Note that we cannot fairly compare the transductive and existing models due to the difference in settings.', 'These results, however, demonstrate that transductive LM fine-tuning improves state-of-the-art chunking and SRL models.']
[None, None, None, None, None, None, ['BASE', 'TRANS'], None, ['TRANS', 'BASE']]
1
D19-1380table_4
Performance in text classification (20-NG, R-8) and sentiment (SST-5) tasks of various models as reported in (Kayal and Tsatsaronis, 2019), where DCT* refers to the implementation in (Kayal and Tsatsaronis, 2019). Our DCT embeddings are denoted as ck in the bottom row. Bold indicates the best result, and italic indicates second-best.
2
[['Model', 'PCA'], ['Model', 'DCT*'], ['Model', 'Avg. vec.'], ['Model', 'p-means'], ['Model', 'ELMo'], ['Model', 'BERT'], ['Model', 'EigenSent'], ['Model', 'EigenSent⊕Avg'], ['Model', 'ck']]
2
[['20-NG', 'P'], ['20-NG', 'R'], ['20-NG', 'F1'], ['R-8', 'P'], ['R-8', 'R'], ['R-8', 'F1'], ['SST-5', 'P'], ['SST-5', 'R'], ['SST-5', 'F1']]
[['55.43', '54.67', '54.77', '83.83', '83.42', '83.41', '26.47', '25.08', '25.23'], ['61.07', '59.16', '59.78', '90.41', '90.78', '90.38', '30.11', '30.09', '29.53'], ['68.72', '68.19', '68.25', '96.34', '96.3', '96.27', '27.88', '26.44', '24.81'], ['72.2', '71.65', '71.79', '96.69', '96.67', '96.65', '33.77', '33.41', '33.26'], ['71.2', '71.79', '71.36', '94.54', '91.32', '91.32', '42.35', '41.51', '41.54'], ['70.89', '70.79', '70.88', '95.52', '95.39', '95.39', '39.92', '39.38', '39.35'], ['66.98', '66.4', '66.54', '95.91', '95.8', '95.76', '35.32', '33.69', '33.91'], ['72.24', '71.62', '71.78', '97.18', '97.13', '97.14', '42.77', '41.67', '41.81'], ['72.2', '71.58', '71.73', '96.98', '96.98', '96.94', '37.67', '34.47', '34.54']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['ck']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>20-NG || P</th> <th>20-NG || R</th> <th>20-NG || F1</th> <th>R-8 || P</th> <th>R-8 || R</th> <th>R-8 || F1</th> <th>SST-5 || P</th> <th>SST-5 || R</th> <th>SST-5 || F1</th> </tr> </thead> <tbody> <tr> <td>Model || PCA</td> <td>55.43</td> <td>54.67</td> <td>54.77</td> <td>83.83</td> <td>83.42</td> <td>83.41</td> <td>26.47</td> <td>25.08</td> <td>25.23</td> </tr> <tr> <td>Model || DCT*</td> <td>61.07</td> <td>59.16</td> <td>59.78</td> <td>90.41</td> <td>90.78</td> <td>90.38</td> <td>30.11</td> <td>30.09</td> <td>29.53</td> </tr> <tr> <td>Model || Avg. vec.</td> <td>68.72</td> <td>68.19</td> <td>68.25</td> <td>96.34</td> <td>96.3</td> <td>96.27</td> <td>27.88</td> <td>26.44</td> <td>24.81</td> </tr> <tr> <td>Model || p-means</td> <td>72.2</td> <td>71.65</td> <td>71.79</td> <td>96.69</td> <td>96.67</td> <td>96.65</td> <td>33.77</td> <td>33.41</td> <td>33.26</td> </tr> <tr> <td>Model || ELMo</td> <td>71.2</td> <td>71.79</td> <td>71.36</td> <td>94.54</td> <td>91.32</td> <td>91.32</td> <td>42.35</td> <td>41.51</td> <td>41.54</td> </tr> <tr> <td>Model || BERT</td> <td>70.89</td> <td>70.79</td> <td>70.88</td> <td>95.52</td> <td>95.39</td> <td>95.39</td> <td>39.92</td> <td>39.38</td> <td>39.35</td> </tr> <tr> <td>Model || EigenSent</td> <td>66.98</td> <td>66.4</td> <td>66.54</td> <td>95.91</td> <td>95.8</td> <td>95.76</td> <td>35.32</td> <td>33.69</td> <td>33.91</td> </tr> <tr> <td>Model || EigenSent⊕Avg</td> <td>72.24</td> <td>71.62</td> <td>71.78</td> <td>97.18</td> <td>97.13</td> <td>97.14</td> <td>42.77</td> <td>41.67</td> <td>41.81</td> </tr> <tr> <td>Model || ck</td> <td>72.2</td> <td>71.58</td> <td>71.73</td> <td>96.98</td> <td>96.98</td> <td>96.94</td> <td>37.67</td> <td>34.47</td> <td>34.54</td> </tr> </tbody></table>
Table 4
table_4
D19-1380
5
emnlp2019
For fair comparison, we use the same sentiment and text classification datasets, the SST-5, 20 newsgroups (20-NG) and Reuters-8 (R-8), as those used in Kayal and Tsatsaronis (2019). We also evaluate using the same pre-trained word embedding, framework and approaches as described in their work. Table 4 shows the best results for the various models as reported in Kayal and Tsatsaronis (2019), in addition to the best performance of our model denoted as ck. Note that the DCT-based model, DCT*, described in Kayal and Tsatsaronis (2019) performed relatively poorly in all tasks, while our model achieved close to state-of-the-art performance in both the 20-NG and R-8 tasks. Our model outperformed EigenSent on all tasks and generally performed better than or on par with p-means, ELMo, BERT, and EigenSent⊕Avg on both the 20-NG and R-8. On the other hand, both EigenSent⊕Avg and ELMo performed better than all other models on SST-5.
[2, 2, 1, 1, 1, 1]
['For fair comparison, we use the same sentiment and text classification datasets, the SST-5, 20 newsgroups (20-NG) and Reuters-8 (R-8), as those used in Kayal and Tsatsaronis (2019).', 'We also evaluate using the same pre-trained word embedding, framework and approaches as described in their work.', 'Table 4 shows the best results for the various models as reported in Kayal and Tsatsaronis (2019), in addition to the best performance of our model denoted as ck.', 'Note that the DCT-based model, DCT*, described in Kayal and Tsatsaronis (2019) performed relatively poorly in all tasks, while our model achieved close to state-of-the-art performance in both the 20-NG and R-8 tasks.', 'Our model outperformed EigenSent on all tasks and generally performed better than or on par with p-means, ELMo, BERT, and EigenSent⊕Avg on both the 20-NG and R-8.', 'On the other hand, both EigenSent⊕Avg and ELMo performed better than all other models on SST-5.']
[None, None, ['ck'], ['DCT*', 'ck', '20-NG', 'R-8'], ['ck', 'p-means', 'ELMo', 'BERT', '20-NG', 'R-8'], ['ELMo', 'SST-5']]
1
D19-1381table_1
Event detection performance on the CG task 2013 test dataset.
2
[['Model', 'TEES'], ['Model', 'SBNN']]
1
[['P'], ['R'], ['F (%)']]
[['61.42', '52.93', '56.86'], ['63.67', '51.43', '56.9']]
column
['P', 'R', 'F (%)']
['TEES']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F (%)</th> </tr> </thead> <tbody> <tr> <td>Model || TEES</td> <td>61.42</td> <td>52.93</td> <td>56.86</td> </tr> <tr> <td>Model || SBNN</td> <td>63.67</td> <td>51.43</td> <td>56.9</td> </tr> </tbody></table>
Table 1
table_1
D19-1381
5
emnlp2019
Table 1 shows the event detection performance of the models on the test set. Our model achieves performance comparable to the state-of-the-art TEES event detection module without the use of any syntactic and hand-engineered features, suggesting it can be applied to other domains with no need for feature engineering. We validated it to have no significant statistical difference with the TEES model (the Approximate Randomisation test (Yeh, 2000; Noreen, 1989)).
[1, 1, 1]
['Table 1 shows the event detection performance of the models on the test set.', 'Our model achieves performance comparable to the state-of-the-art TEES event detection module without the use of any syntactic and hand-engineered features, suggesting it can be applied to other domains with no need for feature engineering.', 'We validated it to have no significant statistical difference with the TEES model (the Approximate Randomisation test (Yeh, 2000; Noreen, 1989)).']
[None, ['TEES'], ['TEES']]
1
D19-1381table_3
Comparison on computational efficiency on the CG task 2013 development dataset.
2
[['Model', 'TEES'], ['Model', 'SBNN k = 8']]
1
[['Number of Classification'], ['Running Time (s)']]
[['6141', '155'], ['4093', '131']]
column
['Number of Classification', 'Running Time (s)']
['SBNN k = 8', 'TEES']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Number of Classification</th> <th>Running Time (s)</th> </tr> </thead> <tbody> <tr> <td>Model || TEES</td> <td>6141</td> <td>155</td> </tr> <tr> <td>Model || SBNN k = 8</td> <td>4093</td> <td>131</td> </tr> </tbody></table>
Table 3
table_3
D19-1381
5
emnlp2019
Table 3 shows the number of classifications (or action scoring function calls in our model) performed by each model with the corresponding actual running time. SBNN performs fewer classifications and in less time than TEES, implying it is more computationally efficient.
[1, 1]
['Table 3 shows the number of classifications (or action scoring function calls in our model) performed by each model with the corresponding actual running time.', 'SBNN performs fewer classifications and in less time than TEES, implying it is more computationally efficient.']
[None, ['SBNN k = 8', 'TEES']]
1
D19-1383table_4
Results on CSPUBSUM
2
[['Model', 'SAF + F Ens (Collins et al., 2017)'], ['Model', 'BERT +Transformer'], ['Model', 'Our model'], ['Model', 'Our model + ABSTRACTROUGE']]
1
[['ROUGE-L']]
[['0.313'], ['0.287'], ['0.306'], ['0.314']]
column
['ROUGE-L']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Model || SAF + F Ens (Collins et al., 2017)</td> <td>0.313</td> </tr> <tr> <td>Model || BERT +Transformer</td> <td>0.287</td> </tr> <tr> <td>Model || Our model</td> <td>0.306</td> </tr> <tr> <td>Model || Our model + ABSTRACTROUGE</td> <td>0.314</td> </tr> </tbody></table>
Table 4
table_4
D19-1383
4
emnlp2019
Table 4 summarizes results on CSPUBSUM. Following Collins et al. (2017) we take the top 10 predicted sentences as the summary and use ROUGE-L scores for evaluation. It is clear that our approach outperforms BERT+TRANSFORMER. The BERT+TRANSFORMER+CRF baseline is not included here because, as mentioned in section 3, we train our model to predict ROUGE, not binary labels as in Collins et al. (2017). As in Collins et al. (2017), we found the ABSTRACTROUGE feature to be useful. Our model augmented with this feature slightly outperforms Collins et al. (2017)’s model, which is a relatively complex ensemble model and uses a number of carefully engineered features for the task. Our model is a single model with only one added feature.
[1, 1, 1, 2, 1, 1, 2]
['Table 4 summarizes results on CSPUBSUM.', 'Following Collins et al. (2017) we take the top 10 predicted sentences as the summary and use ROUGE-L scores for evaluation.', 'It is clear that our approach outperforms BERT+TRANSFORMER.', 'The BERT+TRANSFORMER+CRF baseline is not included here because, as mentioned in section 3, we train our model to predict ROUGE, not binary labels as in Collins et al. (2017).', 'As in Collins et al. (2017), we found the ABSTRACTROUGE feature to be useful.', 'Our model augmented with this feature slightly outperforms Collins et al. (2017)’s model, which is a relatively complex ensemble model and uses a number of carefully engineered features for the task.', 'Our model is a single model with only one added feature.']
[None, ['ROUGE-L'], ['Our model', 'BERT +Transformer'], None, ['Our model + ABSTRACTROUGE'], ['Our model + ABSTRACTROUGE', 'SAF + F Ens (Collins et al., 2017)'], None]
1
D19-1387table_3
ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software. Table cells are filled with — whenever results are not available.
3
[['Model', ' -', 'ORACLE'], ['Model', ' -', 'LEAD-3'], ['Model', 'Extractive', 'COMPRESS (Durrett et al. 2016)'], ['Model', 'Extractive', 'SUMO (Liu et al. 2019)'], ['Model', 'Extractive', 'TransformerEXT'], ['Model', 'Abstractive', 'PTGEN (See et al. 2017)'], ['Model', 'Abstractive', 'PTGEN + COV (See et al. 2017)'], ['Model', 'Abstractive', 'DRM (Paulus et al. 2018)'], ['Model', 'Abstractive', 'TransformerABS'], ['Model', 'BERT-based', 'BERTSUMEXT'], ['Model', 'BERT-based', 'BERTSUMABS'], ['Model', 'BERT-based', 'BERTSUMEXTABS']]
1
[['R1'], ['R2'], ['RL']]
[['49.18', '33.24', '46.02'], ['39.58', '20.11', '35.78'], ['42.2', '24.9', ' -'], ['42.3', '22.7', '38.6'], ['41.95', '22.68', '38.51'], ['42.47', '25.61', ' -'], ['43.71', '26.4', ' -'], ['42.94', '26.02', ' -'], ['35.75', '17.23', '31.41'], ['46.66', '26.35', '42.62'], ['48.92', '30.84', '45.41'], ['49.02', '31.02', '45.55']]
column
['R1', 'R2', 'RL']
['BERT-based']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> </tr> </thead> <tbody> <tr> <td>Model || - || ORACLE</td> <td>49.18</td> <td>33.24</td> <td>46.02</td> </tr> <tr> <td>Model || - || LEAD-3</td> <td>39.58</td> <td>20.11</td> <td>35.78</td> </tr> <tr> <td>Model || Extractive || COMPRESS (Durrett et al. 2016)</td> <td>42.2</td> <td>24.9</td> <td>-</td> </tr> <tr> <td>Model || Extractive || SUMO (Liu et al. 2019)</td> <td>42.3</td> <td>22.7</td> <td>38.6</td> </tr> <tr> <td>Model || Extractive || TransformerEXT</td> <td>41.95</td> <td>22.68</td> <td>38.51</td> </tr> <tr> <td>Model || Abstractive || PTGEN (See et al. 2017)</td> <td>42.47</td> <td>25.61</td> <td>-</td> </tr> <tr> <td>Model || Abstractive || PTGEN + COV (See et al. 2017)</td> <td>43.71</td> <td>26.4</td> <td>-</td> </tr> <tr> <td>Model || Abstractive || DRM (Paulus et al. 2018)</td> <td>42.94</td> <td>26.02</td> <td>-</td> </tr> <tr> <td>Model || Abstractive || TransformerABS</td> <td>35.75</td> <td>17.23</td> <td>31.41</td> </tr> <tr> <td>Model || BERT-based || BERTSUMEXT</td> <td>46.66</td> <td>26.35</td> <td>42.62</td> </tr> <tr> <td>Model || BERT-based || BERTSUMABS</td> <td>48.92</td> <td>30.84</td> <td>45.41</td> </tr> <tr> <td>Model || BERT-based || BERTSUMEXTABS</td> <td>49.02</td> <td>31.02</td> <td>45.55</td> </tr> </tbody></table>
Table 3
table_3
D19-1387
7
emnlp2019
Table 3 presents results on the NYT dataset. Following the evaluation protocol in Durrett et al. (2016), we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries. Again, we report the performance of the ORACLE upper bound and LEAD-3 baseline. The second block in the table contains previously proposed extractive models as well as our own Transformer baseline. COMPRESS (Durrett et al., 2016) is an ILP-based model which combines compression and anaphoricity constraints. The third block includes abstractive models from the literature, and our Transformer baseline. BERT-based models are shown in the fourth block. Again, we observe that they outperform previously proposed approaches. On this dataset, abstractive BERT models generally perform better compared to BERTSUMEXT, almost approaching ORACLE performance.
[1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 3 presents results on the NYT dataset.', 'Following the evaluation protocol in Durrett et al. (2016), we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries.', 'Again, we report the performance of the ORACLE upper bound and LEAD-3 baseline.', 'The second block in the table contains previously proposed extractive models as well as our own Transformer baseline.', 'COMPRESS (Durrett et al., 2016) is an ILP-based model which combines compression and anaphoricity constraints.', 'The third block includes abstractive models from the literature, and our Transformer baseline.', 'BERT-based models are shown in the fourth block.', 'Again, we observe that they outperform previously proposed approaches.', 'On this dataset, abstractive BERT models generally perform better compared to BERTSUMEXT, almost approaching ORACLE performance.']
[None, None, ['ORACLE', 'LEAD-3'], ['Extractive', 'TransformerEXT'], ['COMPRESS (Durrett et al. 2016)'], ['Abstractive', 'TransformerABS'], ['BERT-based'], ['BERT-based'], ['BERTSUMEXTABS', 'BERTSUMEXT', 'ORACLE']]
1
D19-1388table_4
Fluency and consistency comparison by human evaluation.
1
[['Uni-model'], ['Re 3 Sum'], ['PESG']]
1
[['Fluency'], ['Consistency']]
[['1.61', '1.53'], ['1.53', '1.14'], ['1.86*', '1.73*']]
column
['Fluency', 'Consistency']
['PESG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Consistency</th> </tr> </thead> <tbody> <tr> <td>Uni-model</td> <td>1.61</td> <td>1.53</td> </tr> <tr> <td>Re 3 Sum</td> <td>1.53</td> <td>1.14</td> </tr> <tr> <td>PESG</td> <td>1.86*</td> <td>1.73*</td> </tr> </tbody></table>
Table 4
table_4
D19-1388
7
emnlp2019
For the human evaluation, we asked annotators to rate each summary according to its consistency and fluency. The rating score ranges from 1 to 3, with 3 being the best. Table 4 lists the average scores of each model, showing that PESG outperforms the other baseline models in both fluency and consistency. The kappa statistics are 0.33 and 0.29 for fluency and consistency respectively, and that indicates the moderate agreement between annotators. To prove the significance of these results, we also conduct the paired student t-test between our model and Re 3 Sum (row with shaded background). We obtain a p-value of 2 x 10^(-7) and 9 x 10^(-12) for fluency and consistency, respectively.
[2, 2, 1, 2, 2, 2]
['For the human evaluation, we asked annotators to rate each summary according to its consistency and fluency.', 'The rating score ranges from 1 to 3, with 3 being the best.', 'Table 4 lists the average scores of each model, showing that PESG outperforms the other baseline models in both fluency and consistency.', 'The kappa statistics are 0.33 and 0.29 for fluency and consistency respectively, and that indicates the moderate agreement between annotators.', 'To prove the significance of these results, we also conduct the paired student t-test between our model and Re 3 Sum (row with shaded background).', 'We obtain a p-value of 2 x 10^(-7) and 9 x 10^(-12) for fluency and consistency, respectively.']
[None, None, ['PESG', 'Fluency', 'Consistency'], ['Fluency', 'Consistency'], None, ['Fluency', 'Consistency']]
1
D19-1399table_6
Performance on the CoNLL-2003 English dataset.
2
[['Model', 'Peters et al. (2018a) ELMo'], ['Model', 'BiLSTM-CRF + ELMo (L = 2)'], ['Model', 'DGLSTM-CRF + ELMo (L = 2)']]
1
[['Prec.'], ['Rec.'], ['F1']]
[['-', '-', '92.2'], ['92.1', '92.3', '92.2'], ['92.2', '92.5', '92.4']]
column
['Prec.', 'Rec.', 'F1']
['DGLSTM-CRF + ELMo (L = 2)', 'BiLSTM-CRF + ELMo (L = 2)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Peters et al. (2018a) ELMo</td> <td>-</td> <td>-</td> <td>92.2</td> </tr> <tr> <td>Model || BiLSTM-CRF + ELMo (L = 2)</td> <td>92.1</td> <td>92.3</td> <td>92.2</td> </tr> <tr> <td>Model || DGLSTM-CRF + ELMo (L = 2)</td> <td>92.2</td> <td>92.5</td> <td>92.4</td> </tr> </tbody></table>
Table 6
table_6
D19-1399
8
emnlp2019
4.2 Additional Experiments CoNLL-2003 English Table 6 shows the performance on the CoNLL-2003 English dataset. The dependencies are predicted from Spacy (Honnibal and Montani, 2017). With the contextualized word representations, DGLSTM-CRF outperforms BiLSTM-CRF with 0.2 points in F1 (p < 0.09). The improvement is not significant due to the relatively lower quality of the dependency trees. To further study the effect of the dependencies, we modified the predicted dependencies to ensure each entity forms a subtree in the complete dataset. Such modification improves the F1 to 92.7, which is significantly better (p < 0.05) than the BiLSTM-CRF.
[1, 2, 1, 2, 2, 1]
['4.2 Additional Experiments CoNLL-2003 English Table 6 shows the performance on the CoNLL-2003 English dataset.', 'The dependencies are predicted from Spacy (Honnibal and Montani, 2017).', 'With the contextualized word representations, DGLSTM-CRF outperforms BiLSTM-CRF with 0.2 points in F1 (p < 0.09).', 'The improvement is not significant due to the relatively lower quality of the dependency trees.', 'To further study the effect of the dependencies, we modified the predicted dependencies to ensure each entity forms a subtree in the complete dataset.', 'Such modification improves the F1 to 92.7, which is significantly better (p < 0.05) than the BiLSTM-CRF.']
[None, None, ['DGLSTM-CRF + ELMo (L = 2)', 'BiLSTM-CRF + ELMo (L = 2)'], None, None, ['DGLSTM-CRF + ELMo (L = 2)']]
1
D19-1400table_4
AUC performance for various representation methods. AVG refers to a simple unweighted average of word vectors, IDF refers to a document-frequency-based weighting according to equation 1. LANG refers to a weighting scheme that takes the language of origin into consideration, based on Equation 4. The best result per dataset is marked in bold, “*” indicates statistically significant difference of the leading method from both other methods.
3
[['Dataset', 'formality classification', 'Amazon Motors MT'], ['Dataset', 'formality classification', 'Amazon Motors NN'], ['Dataset', 'formality classification', 'Amazon Fashion MT'], ['Dataset', 'formality classification', 'Amazon Fashion NN'], ['Dataset', 'formality classification', 'New York Times MT'], ['Dataset', 'formality classification', 'New York Times NN'], ['Dataset', 'formality classification', 'Answers MT'], ['Dataset', 'formality classification', 'Answers NN'], ['Dataset', 'formality classification', 'Blog MT'], ['Dataset', 'formality classification', 'Blog NN'], ['Dataset', 'formality classification', 'Email MT'], ['Dataset', 'formality classification', 'Email NN'], ['Dataset', 'formality classification', 'News MT'], ['Dataset', 'formality classification', 'News NN'], ['Dataset', 'Sarcasm detection', 'Sarcasm Gen MT'], ['Dataset', 'Sarcasm detection', 'Sarcasm Gen NN'], ['Dataset', 'Sarcasm detection', 'Sarcasm RQ MT'], ['Dataset', 'Sarcasm detection', 'Sarcasm RQ NN'], ['Dataset', 'Sarcasm detection', 'Sarcasm Hyp MT'], ['Dataset', 'Sarcasm detection', 'Sarcasm Hyp NN']]
1
[['AVG'], ['IDF'], ['LANG']]
[['95.26', '95.47', '95.61'], ['76.93', '86.46', '91.76*'], ['90.31', '91.19', '91.67'], ['61.3', '75.83', '84.59*'], ['82.45', '82.27', '84.3'], ['70.64', '75.14', '81.02*'], ['87.88', '87.82', '91.16*'], ['80.1', '80.6', '88.60*'], ['78.63', '77.89', '79.29'], ['65.56', '68.8', '78.01*'], ['87.22', '88.21', '88.91'], ['73.8', '77.7', '88.30*'], ['77.71', '77.84', '78.04'], ['65.3', '69.4', '77.70*'], ['76.68', '76.82', '76.99'], ['61.4', '65.00*', '62.1'], ['76.81', '77.34', '77.16'], ['61.13', '63.70*', '60.85'], ['64.31', '65.12', '65.32'], ['50.94', '53.32', '54.13']]
column
['AUC', 'AUC', 'AUC']
['LANG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AVG</th> <th>IDF</th> <th>LANG</th> </tr> </thead> <tbody> <tr> <td>Dataset || formality classification || Amazon Motors MT</td> <td>95.26</td> <td>95.47</td> <td>95.61</td> </tr> <tr> <td>Dataset || formality classification || Amazon Motors NN</td> <td>76.93</td> <td>86.46</td> <td>91.76*</td> </tr> <tr> <td>Dataset || formality classification || Amazon Fashion MT</td> <td>90.31</td> <td>91.19</td> <td>91.67</td> </tr> <tr> <td>Dataset || formality classification || Amazon Fashion NN</td> <td>61.3</td> <td>75.83</td> <td>84.59*</td> </tr> <tr> <td>Dataset || formality classification || New York Times MT</td> <td>82.45</td> <td>82.27</td> <td>84.3</td> </tr> <tr> <td>Dataset || formality classification || New York Times NN</td> <td>70.64</td> <td>75.14</td> <td>81.02*</td> </tr> <tr> <td>Dataset || formality classification || Answers MT</td> <td>87.88</td> <td>87.82</td> <td>91.16*</td> </tr> <tr> <td>Dataset || formality classification || Answers NN</td> <td>80.1</td> <td>80.6</td> <td>88.60*</td> </tr> <tr> <td>Dataset || formality classification || Blog MT</td> <td>78.63</td> <td>77.89</td> <td>79.29</td> </tr> <tr> <td>Dataset || formality classification || Blog NN</td> <td>65.56</td> <td>68.8</td> <td>78.01*</td> </tr> <tr> <td>Dataset || formality classification || Email MT</td> <td>87.22</td> <td>88.21</td> <td>88.91</td> </tr> <tr> <td>Dataset || formality classification || Email NN</td> <td>73.8</td> <td>77.7</td> <td>88.30*</td> </tr> <tr> <td>Dataset || formality classification || News MT</td> <td>77.71</td> <td>77.84</td> <td>78.04</td> </tr> <tr> <td>Dataset || formality classification || News NN</td> <td>65.3</td> <td>69.4</td> <td>77.70*</td> </tr> <tr> <td>Dataset || Sarcasm detection || Sarcasm Gen MT</td> <td>76.68</td> <td>76.82</td> <td>76.99</td> </tr> <tr> <td>Dataset || Sarcasm detection || Sarcasm Gen NN</td> <td>61.4</td> <td>65.00*</td> <td>62.1</td> </tr> <tr> <td>Dataset || Sarcasm detection || Sarcasm RQ MT</td> <td>76.81</td> <td>77.34</td> <td>77.16</td> </tr> <tr> <td>Dataset || Sarcasm detection || Sarcasm RQ NN</td> <td>61.13</td> <td>63.70*</td> <td>60.85</td> </tr> <tr> <td>Dataset || Sarcasm detection || Sarcasm Hyp MT</td> <td>64.31</td> <td>65.12</td> <td>65.32</td> </tr> <tr> <td>Dataset || Sarcasm detection || Sarcasm Hyp NN</td> <td>50.94</td> <td>53.32</td> <td>54.13</td> </tr> </tbody></table>
Table 4
table_4
D19-1400
8
emnlp2019
Effect of Document Representation. Table 4 considers the effect of the document representation component of the transformation discussed in Section 3.2.3 across the tasks, datasets, and translation methods. The first result column shows the performance of a simple unweighted average of the word vectors. The second result column shows a document-frequency-based weighting according to Equation 1. The third result column shows the performance of a weighting scheme that takes the language of origin into consideration, based on Equation 4. Examining the table, we observe that using a non-uniform weighting scheme generally gives improved performance over the naive unweighted baseline. The effect of the document representation method is significant when paired with the NN translation approach. Thus, the translation and document representation components of the compound transformation are complementary in the sense that when translation is of high quality, a naive document representation suffices. Conversely, when translation quality is sub-optimal, the choice of document representation can significantly impact performance.
[2, 1, 2, 2, 2, 1, 2, 2, 2]
['Effect of Document Representation.', 'Table 4 considers the effect of the document representation component of the transformation discussed in Section 3.2.3 across the tasks, datasets, and translation methods.', 'The first result column shows the performance of a simple unweighted average of the word vectors.', 'The second result column shows a document-frequency-based weighting according to Equation 1.', 'The third result column shows the performance of a weighting scheme that takes the language of origin into consideration, based on Equation 4.', 'Examining the table, we observe that using a non-uniform weighting scheme generally gives improved performance over the naive unweighted baseline.', 'The effect of the document representation method is significant when paired with the NN translation approach.', 'Thus, the translation and document representation components of the compound transformation are complementary in the sense that when translation is of high quality, a naive document representation suffices.', 'Conversely, when translation quality is sub-optimal, the choice of document representation can significantly impact performance.']
[None, None, ['AVG'], ['IDF'], ['LANG'], ['LANG', 'IDF', 'AVG'], None, None, None]
1
D19-1402table_2
Short Text Classification On-device Results & Comparisons to Prior Work
2
[['Model', 'ProSeqo (our on-device model)'], ['Model', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['Model', 'RNN(Khanpour et al., 2016)'], ['Model', 'RNN+Attention(Ortega and Vu, 2017)'], ['Model', 'CNN(Lee and Dernoncourt, 2016)'], ['Model', 'GatedIntentAtten.(Goo et al., 2018)'], ['Model', 'GatedFullAtten.(Goo et al., 2018)'], ['Model', 'JointBiLSTM(Hakkani-Tur et al., 2016)'], ['Model', 'Atten.RNN(Liu and Lane, 2016)']]
1
[['SWDA'], ['MRDA'], ['ATIS'], ['SNIPS']]
[['88.3', '90.1', '97.8', '97.9'], ['83.1', '86.7', '88.9', '93.4'], ['80.1', '86.8', '-', '-'], ['73.8', '84.3', '-', '-'], ['73.1', '84.6', '-', '-'], ['-', '-', '94.1', '96.8'], ['-', '-', '93.6', '97'], ['-', '-', '92.6', '96.9'], ['-', '-', '91.1', '96.7']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['ProSeqo (our on-device model)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SWDA</th> <th>MRDA</th> <th>ATIS</th> <th>SNIPS</th> </tr> </thead> <tbody> <tr> <td>Model || ProSeqo (our on-device model)</td> <td>88.3</td> <td>90.1</td> <td>97.8</td> <td>97.9</td> </tr> <tr> <td>Model || SGNN(Ravi and Kozareva, 2018)(on-device)</td> <td>83.1</td> <td>86.7</td> <td>88.9</td> <td>93.4</td> </tr> <tr> <td>Model || RNN(Khanpour et al., 2016)</td> <td>80.1</td> <td>86.8</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || RNN+Attention(Ortega and Vu, 2017)</td> <td>73.8</td> <td>84.3</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || CNN(Lee and Dernoncourt, 2016)</td> <td>73.1</td> <td>84.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || GatedIntentAtten.(Goo et al., 2018)</td> <td>-</td> <td>-</td> <td>94.1</td> <td>96.8</td> </tr> <tr> <td>Model || GatedFullAtten.(Goo et al., 2018)</td> <td>-</td> <td>-</td> <td>93.6</td> <td>97</td> </tr> <tr> <td>Model || JointBiLSTM(Hakkani-Tur et al., 2016)</td> <td>-</td> <td>-</td> <td>92.6</td> <td>96.9</td> </tr> <tr> <td>Model || Atten.RNN(Liu and Lane, 2016)</td> <td>-</td> <td>-</td> <td>91.1</td> <td>96.7</td> </tr> </tbody></table>
Table 2
table_2
D19-1402
6
emnlp2019
4.1 STC: Comparison with On-Device Work. We compare our on-device model against state-of-the-art on-device short text classification approach SGNN (Ravi and Kozareva, 2018). SGNN was evaluated only on the SWDA and MRDA dialog act tasks (Ravi and Kozareva, 2018) and reached state-of-the-art performance against prior non-on-device neural approaches (Khanpour et al., 2016). We directly compare performance on the same SWDA and MRDA datasets. As shown in Table 2 ProSeqo reaches +5.3% improvement for SWDA and +3.4% accuracy improvement for MRDA. Since we want to compare on-device performance on a wider spectrum of tasks and datasets, we re-implemented SGNN with the same parameters (Ravi and Kozareva, 2018). We run experiments on ATIS and SNIPS intent prediction tasks and reported results in Table 2 in italic to indicate that this is a re-implementation of (Ravi and Kozareva, 2018) and previously these on-device results were not reported. As shown in Table 2, SGNN consistently performs well on dialog act and intent prediction tasks. Overall, ProSeqo reached +8.9% accuracy improvements on ATIS and +4.5% accuracy improvements on SNIPS compared to SGNN. This shows that ProSeqo’s recurrent dynamic projections learn more powerful representations than the static SGNN ones, this also leads to significant performance improvements on multiple tasks.
[2, 1, 2, 1, 1, 2, 1, 1, 1, 1]
['4.1 STC: Comparison with On-Device Work.', 'We compare our on-device model against state-of-the-art on-device short text classification approach SGNN (Ravi and Kozareva, 2018).', 'SGNN was evaluated only on the SWDA and MRDA dialog act tasks (Ravi and Kozareva, 2018) and reached state-of-the-art performance against prior non-on-device neural approaches (Khanpour et al., 2016).', 'We directly compare performance on the same SWDA and MRDA datasets.', 'As shown in Table 2 ProSeqo reaches +5.3% improvement for SWDA and +3.4% accuracy improvement for MRDA.', 'Since we want to compare on-device performance on a wider spectrum of tasks and datasets, we re-implemented SGNN with the same parameters (Ravi and Kozareva, 2018).', 'We run experiments on ATIS and SNIPS intent prediction tasks and reported results in Table 2 in italic to indicate that this is a re-implementation of (Ravi and Kozareva, 2018) and previously these on-device results were not reported.', 'As shown in Table 2, SGNN consistently performs well on dialog act and intent prediction tasks.', 'Overall, ProSeqo reached +8.9% accuracy improvements on ATIS and +4.5% accuracy improvements on SNIPS compared to SGNN.', 'This shows that ProSeqo’s recurrent dynamic projections learn more powerful representations than the static SGNN ones, this also leads to significant performance improvements on multiple tasks.']
[None, ['ProSeqo (our on-device model)', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['SGNN(Ravi and Kozareva, 2018)(on-device)', 'SWDA', 'MRDA'], ['SWDA', 'MRDA'], ['ProSeqo (our on-device model)', 'SWDA', 'MRDA', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['SGNN(Ravi and Kozareva, 2018)(on-device)'], ['ATIS', 'SNIPS'], ['SGNN(Ravi and Kozareva, 2018)(on-device)'], ['ProSeqo (our on-device model)', 'ATIS', 'SNIPS', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['ProSeqo (our on-device model)', 'SGNN(Ravi and Kozareva, 2018)(on-device)']]
1
D19-1402table_3
Long Text Classification On-device Results & Comparisons to Prior Work
2
[['Model', 'ProSeqo (our on-device model)'], ['Model', 'SGNN (Ravi and Kozareva 2018)(on-device)'], ['Model', 'FastText-full (Joulin et al. 2016)'], ['Model', 'CharCNNLargeWithThesau. (Zhang et al. 2015)'], ['Model', 'CNN+NGM (Bui et al. 2018)'], ['Model', 'LSTM-full (Zhang et al. 2015)'], ['Model', 'Hier.-Attention (Yang et al. 2016)'], ['Model', 'Hier.-AVE (Yang et al. 2016)']]
1
[['AG'], ['Y!A'], ['AMZN']]
[['91.5', '72.4', '62.3'], ['57.6', '36.5', '39.3'], ['92.5', '72.3', '60.2'], ['90.6', '71.2', '59.6'], ['86.9', '-', '-'], ['86.1', '70.8', '59.4'], ['-', '-', '63.6'], ['-', '-', '62.9']]
column
['accuracy', 'accuracy', 'accuracy']
['ProSeqo (our on-device model)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AG</th> <th>Y!A</th> <th>AMZN</th> </tr> </thead> <tbody> <tr> <td>Model || ProSeqo (our on-device model)</td> <td>91.5</td> <td>72.4</td> <td>62.3</td> </tr> <tr> <td>Model || SGNN (Ravi and Kozareva 2018)(on-device)</td> <td>57.6</td> <td>36.5</td> <td>39.3</td> </tr> <tr> <td>Model || FastText-full (Joulin et al. 2016)</td> <td>92.5</td> <td>72.3</td> <td>60.2</td> </tr> <tr> <td>Model || CharCNNLargeWithThesau. (Zhang et al. 2015)</td> <td>90.6</td> <td>71.2</td> <td>59.6</td> </tr> <tr> <td>Model || CNN+NGM (Bui et al. 2018)</td> <td>86.9</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LSTM-full (Zhang et al. 2015)</td> <td>86.1</td> <td>70.8</td> <td>59.4</td> </tr> <tr> <td>Model || Hier.-Attention (Yang et al. 2016)</td> <td>-</td> <td>-</td> <td>63.6</td> </tr> <tr> <td>Model || Hier.-AVE (Yang et al. 2016)</td> <td>-</td> <td>-</td> <td>62.9</td> </tr> </tbody></table>
Table 3
table_3
D19-1402
7
emnlp2019
5 LDC: Long Document Classification Results This section focuses on long document classification results. Table 3 shows the results on three tasks and datasets (AG, Y!A, AMZN). Overall, ProSeqo significantly improved upon the on-device neural model SGNN (Ravi and Kozareva, 2018) with +23% to +35.9% accuracy. ProSeqo also reached comparable performance to prior non-on-device neural LSTMs and character CNNs approaches (Zhang et al., 2015; Bui et al., 2018).
[2, 1, 1, 1]
['5 LDC: Long Document Classification Results This section focuses on long document classification results.', 'Table 3 shows the results on three tasks and datasets (AG, Y!A, AMZN).', 'Overall, ProSeqo significantly improved upon the on-device neural model SGNN (Ravi and Kozareva, 2018) with +23% to +35.9% accuracy.', 'ProSeqo also reached comparable performance to prior non-on-device neural LSTMs and character CNNs approaches (Zhang et al., 2015; Bui et al., 2018).']
[None, ['AG', 'Y!A', 'AMZN'], ['ProSeqo (our on-device model)', 'SGNN (Ravi and Kozareva 2018)(on-device)'], ['ProSeqo (our on-device model)', 'LSTM-full (Zhang et al. 2015)', 'CharCNNLargeWithThesau. (Zhang et al. 2015)']]
1
D19-1418table_3
Results on the VQA-CP v2.0 test set.
2
[['Debiasing Method', 'None'], ['Debiasing Method', 'Reweight'], ['Debiasing Method', 'Bias Product'], ['Debiasing Method', 'Learned-Mixin'], ['Debiasing Method', 'Learned-Mixin +H'], ['Debiasing Method', 'Ramakrishnan et al. (2018)'], ['Debiasing Method', 'Grand and Belinkov (2019)']]
1
[['Acc.']]
[['39.18'], ['40.06'], ['39.93'], ['48.69'], ['52.05'], ['41.17'], ['42.33']]
column
['Acc.']
['Learned-Mixin', 'Learned-Mixin +H']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> </tr> </thead> <tbody> <tr> <td>Debiasing Method || None</td> <td>39.18</td> </tr> <tr> <td>Debiasing Method || Reweight</td> <td>40.06</td> </tr> <tr> <td>Debiasing Method || Bias Product</td> <td>39.93</td> </tr> <tr> <td>Debiasing Method || Learned-Mixin</td> <td>48.69</td> </tr> <tr> <td>Debiasing Method || Learned-Mixin +H</td> <td>52.05</td> </tr> <tr> <td>Debiasing Method || Ramakrishnan et al. (2018)</td> <td>41.17</td> </tr> <tr> <td>Debiasing Method || Grand and Belinkov (2019)</td> <td>42.33</td> </tr> </tbody></table>
Table 3
table_3
D19-1418
7
emnlp2019
Table 3 shows the results. The learned-mixin method was highly effective, boosting performance on VQA-CP by about 9 points, and the entropy regularizer can increase this by another 3 points, significantly surpassing prior work. For the learned-mixin ensemble, we find g(x_i) is strongly correlated with the bias’s expected accuracy, with a spearmanr correlation of 0.77 on the test data. Qualitative examples (Figure 2) further suggest the model increases g(x_i) when it knows it can rely on the bias-only model.
[1, 1, 2, 2]
['Table 3 shows the results.', 'The learned-mixin method was highly effective, boosting performance on VQA-CP by about 9 points, and the entropy regularizer can increase this by another 3 points, significantly surpassing prior work.', 'For the learned-mixin ensemble, we find g(x_i) is strongly correlated with the bias’s expected accuracy, with a spearmanr correlation of 0.77 on the test data.', 'Qualitative examples (Figure 2) further suggest the model increases g(x_i) when it knows it can rely on the bias-only model.']
[None, ['Learned-Mixin', 'None'], ['Learned-Mixin'], None]
1
D19-1420table_5
Subjective evaluations on the task of controlling the unselected rationale words. Acc denotes the accuracy in guessing sentiment labels. Accw/o UNK denotes the sentiment accuracy for these samples that are not selected as “UNK” for the secondary task. † denotes p-value < 0.005 in t-test. A desired rationalization method achieves high “UNK” rate and performance randomly for the Acc predictions.
2
[['Model', 'Lei2016'], ['Model', 'Intros+minimax']]
1
[['%UNK'], ['Acc'], ['Acc w/o UNK']]
[['43.5', '63.5', '69'], ['54.0*', '58', '66.3']]
column
['%UNK', 'Acc', 'Acc w/o UNK']
['Intros+minimax']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>%UNK</th> <th>Acc</th> <th>Acc w/o UNK</th> </tr> </thead> <tbody> <tr> <td>Model || Lei2016</td> <td>43.5</td> <td>63.5</td> <td>69</td> </tr> <tr> <td>Model || Intros+minimax</td> <td>54.0*</td> <td>58</td> <td>66.3</td> </tr> </tbody></table>
Table 5
table_5
D19-1420
8
emnlp2019
Table 5 shows the performance of subjective evaluations. Looking at the first column of the table, our model is better in confusing human, which gives a higher rate in selecting "UNK". It confirms that the three-player introspective model selects more comprehensive rationales and leave less informative texts unattended. Furthermore, the results also show that human evaluators offer worse sentiment predictions on the proposed approach, which is also desired and expected.
[1, 1, 2, 1]
['Table 5 shows the performance of subjective evaluations.', 'Looking at the first column of the table, our model is better in confusing human, which gives a higher rate in selecting "UNK".', 'It confirms that the three-player introspective model selects more comprehensive rationales and leave less informative texts unattended.', 'Furthermore, the results also show that human evaluators offer worse sentiment predictions on the proposed approach, which is also desired and expected.']
[None, ['Intros+minimax', '%UNK'], ['Intros+minimax'], ['Intros+minimax', 'Acc']]
1
D19-1422table_6
Main results on WSJ.
2
[['Model', 'Plank et al. (2016)'], ['Model', 'Huang et al. (2015)'], ['Model', 'Ma and Hovy (2016)'], ['Model', 'Liu et al. (2017)'], ['Model', 'Yang et al. (2018)'], ['Model', 'Zhang et al. (2018c)'], ['Model', 'Yasunaga et al. (2018)'], ['Model', 'Xin et al. (2018)'], ['Model', 'Transformer-softmax (Guo et al., 2019)'], ['Model', 'BiLSTM-softmax (Yang et al., 2018)'], ['Model', 'BiLSTM-CRF (Yang et al., 2018)'], ['Model', 'BiLSTM-LAN']]
1
[['Accuracy']]
[['97.22'], ['97.55'], ['97.55'], ['97.53'], ['97.51'], ['97.55'], ['97.58'], ['97.58'], ['97.04'], ['97.51'], ['97.51'], ['97.65']]
column
['Accuracy']
['BiLSTM-LAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Plank et al. (2016)</td> <td>97.22</td> </tr> <tr> <td>Model || Huang et al. (2015)</td> <td>97.55</td> </tr> <tr> <td>Model || Ma and Hovy (2016)</td> <td>97.55</td> </tr> <tr> <td>Model || Liu et al. (2017)</td> <td>97.53</td> </tr> <tr> <td>Model || Yang et al. (2018)</td> <td>97.51</td> </tr> <tr> <td>Model || Zhang et al. (2018c)</td> <td>97.55</td> </tr> <tr> <td>Model || Yasunaga et al. (2018)</td> <td>97.58</td> </tr> <tr> <td>Model || Xin et al. (2018)</td> <td>97.58</td> </tr> <tr> <td>Model || Transformer-softmax (Guo et al., 2019)</td> <td>97.04</td> </tr> <tr> <td>Model || BiLSTM-softmax (Yang et al., 2018)</td> <td>97.51</td> </tr> <tr> <td>Model || BiLSTM-CRF (Yang et al., 2018)</td> <td>97.51</td> </tr> <tr> <td>Model || BiLSTM-LAN</td> <td>97.65</td> </tr> </tbody></table>
Table 6
table_6
D19-1422
6
emnlp2019
Table 6 compares our model with top-performing methods reported in the literature. In particular, Huang et al. (2015) use BiLSTM-CRF. Ma and Hovy (2016), Liu et al. (2017) and Yang et al. (2018) explore character-level representations on BiLSTM-CRF. Zhang et al. (2018c) use S-LSTM-CRF, a graph recurrent network encoder. Yasunaga et al. (2018) demonstrate that adversarial training can improve the tagging accuracy. Xin et al. (2018) propose a compositional character-to-word model combined with LSTM-CRF. BiLSTM-LAN gives a highly competitive result on WSJ without training on external data.
[1, 2, 2, 2, 1, 2, 1]
['Table 6 compares our model with top-performing methods reported in the literature.', 'In particular, Huang et al. (2015) use BiLSTM-CRF.', 'Ma and Hovy (2016), Liu et al. (2017) and Yang et al. (2018) explore character-level representations on BiLSTM-CRF.', 'Zhang et al. (2018c) use S-LSTM-CRF, a graph recurrent network encoder.', 'Yasunaga et al. (2018) demonstrate that adversarial training can improve the tagging accuracy.', 'Xin et al. (2018) propose a compositional character-to-word model combined with LSTM-CRF.', 'BiLSTM-LAN gives a highly competitive result on WSJ without training on external data.']
[None, ['Huang et al. (2015)'], ['Ma and Hovy (2016)', 'Liu et al. (2017)', 'Yang et al. (2018)'], ['Zhang et al. (2018c)'], ['Yasunaga et al. (2018)'], ['Xin et al. (2018)'], ['BiLSTM-LAN']]
1
D19-1426table_1
Human teacher evaluation for learned and random question asking policy.
1
[['LiD'], ['Random']]
1
[['Avg. Reward'], ['Natural'], ['Avg. Rew (simulated)']]
[['0.524', '3.2', '0.607'], ['0.493', '2.9', '0.551']]
column
['Avg. Reward', 'Natural', 'Avg. Rew (simulated)']
['LiD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. Reward</th> <th>Natural</th> <th>Avg. Rew (simulated)</th> </tr> </thead> <tbody> <tr> <td>LiD</td> <td>0.524</td> <td>3.2</td> <td>0.607</td> </tr> <tr> <td>Random</td> <td>0.493</td> <td>2.9</td> <td>0.551</td> </tr> </tbody></table>
Table 1
table_1
D19-1426
9
emnlp2019
20 users were asked to interact with the learned LiD policy to teach a chosen email-classification task. For each task, the system asked a sequence of 10 questions, and the human teacher's responses were incorporated into the system to update the classification model. The users were also asked to teach another task with questions asked through a random policy. Table 1 shows the average cumulative reward for humans interacting with LiD vs a random policy for this experiment. We note that LiD leads to better performance on average. This trend is the same as in the simulated analysis, although we note that the learning is slower with real teachers than in the simulated setting on the same tasks, and the gain in performance is substantially smaller. A contributing reason for this is likely annotator bias (Geva et al., 2019), since in the simulated testing scenarios, the teacher's explanations can often likely come from a small set of turkers whose language explanations for teaching other tasks were used for training the learner's semantic parsing model. We note that the learned policy was rated by human users as more natural than a random policy on a Likert scale (with range 1-5).
[2, 2, 2, 1, 1, 1, 2, 1]
['20 users were asked to interact with the learned LiD policy to teach a chosen email-classification task.', "For each task, the system asked a sequence of 10 questions, and the human teacher's responses were incorporated into the system to update the classification model.", 'The users were also asked to teach another task with questions asked through a random policy.', 'Table 1 shows the average cumulative reward for humans interacting with LiD vs a random policy for this experiment.', 'We note that LiD leads to better performance on average.', 'This trend is the same as in the simulated analysis, although we note that the learning is slower with real teachers than in the simulated setting on the same tasks, and the gain in performance is substantially smaller.', "A contributing reason for this is likely annotator bias (Geva et al., 2019), since in the simulated testing scenarios, the teacher's explanations can often likely come from a small set of turkers whose language explanations for teaching other tasks were used for training the learner's semantic parsing model.", 'We note that the learned policy was rated by human users as more natural than a random policy on a Likert scale (with range 1-5).']
[None, None, None, ['LiD', 'Random', 'Avg. Reward'], ['LiD', 'Avg. Reward'], ['Avg. Reward'], None, ['LiD', 'Natural', 'Random']]
1
D19-1429table_4
Results of baselines and fine-grained knowledge fusion methods on CWS.
2
[['Methods', 'Source only'], ['Methods', 'Target only'], ['Methods', 'BasicKD'], ['Methods', 'sampDomain-q a samp'], ['Methods', 'elemDomain-q a elem'], ['Methods', 'multiDomain-q a multi'], ['Methods', 'Sample-q a samp'], ['Methods', 'elemSample-q a elem'], ['Methods', 'multiSample-q a multi'], ['Methods', 'FGKF']]
2
[['Zhuxian', 'F'], ['Zhuxian', 'ROOV'], ['5% Weibo', 'F'], ['5% Weibo', 'ROOV']]
[['83.86', '62.4', '83.75', '70.74'], ['92.8', '65.81', '84.01', '64.12'], ['94.23', '74.08', '89.21', '76.26'], ['94.55', '74.02', '89.63', '75.93'], ['94.81', '74.75', '89.99', '77.59'], ['94.75', '74.96', '90.06', '77.25'], ['94.57', '74.47', '89.77', '76.81'], ['94.78', '74.52', '90.07', ''], ['94.91', '75.56', '90.2', '77.46'], ['95.01', '77.26', '90.45', '77.27']]
column
['F', 'ROOV', 'F', 'ROOV']
['FGKF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Zhuxian || F</th> <th>Zhuxian || ROOV</th> <th>5% Weibo || F</th> <th>5% Weibo || ROOV</th> </tr> </thead> <tbody> <tr> <td>Methods || Source only</td> <td>83.86</td> <td>62.4</td> <td>83.75</td> <td>70.74</td> </tr> <tr> <td>Methods || Target only</td> <td>92.8</td> <td>65.81</td> <td>84.01</td> <td>64.12</td> </tr> <tr> <td>Methods || BasicKD</td> <td>94.23</td> <td>74.08</td> <td>89.21</td> <td>76.26</td> </tr> <tr> <td>Methods || sampDomain-q a samp</td> <td>94.55</td> <td>74.02</td> <td>89.63</td> <td>75.93</td> </tr> <tr> <td>Methods || elemDomain-q a elem</td> <td>94.81</td> <td>74.75</td> <td>89.99</td> <td>77.59</td> </tr> <tr> <td>Methods || multiDomain-q a multi</td> <td>94.75</td> <td>74.96</td> <td>90.06</td> <td>77.25</td> </tr> <tr> <td>Methods || Sample-q a samp</td> <td>94.57</td> <td>74.47</td> <td>89.77</td> <td>76.81</td> </tr> <tr> <td>Methods || elemSample-q a elem</td> <td>94.78</td> <td>74.52</td> <td>90.07</td> <td></td> </tr> <tr> <td>Methods || multiSample-q a multi</td> <td>94.91</td> <td>75.56</td> <td>90.2</td> <td>77.46</td> </tr> <tr> <td>Methods || FGKF</td> <td>95.01</td> <td>77.26</td> <td>90.45</td> <td>77.27</td> </tr> </tbody></table>
Table 4
table_4
D19-1429
6
emnlp2019
The results in Table 4 show that both the basicKD method and fine-grained methods achieve performance improvements through domain adaptation. Compared with the basicKD method, FGKF behaves better (+1.1% F and +2.8% Roov v.s. basicKD), as it takes multi-level relevance discrepancies into account. The sample-q method performs better than the domain-q method, which shows the domain feature is better represented at the sample level, not at the domain level.
[1, 1, 1]
['The results in Table 4 show that both the basicKD method and fine-grained methods achieve performance improvements through domain adaptation.', 'Compared with the basicKD method, FGKF behaves better (+1.1% F and +2.8% Roov v.s. basicKD), as it takes multi-level relevance discrepancies into account.', 'The sample-q method performs better than the domain-q method, which shows the domain feature is better represented at the sample level, not at the domain level.']
[['BasicKD', 'FGKF'], ['BasicKD', 'FGKF'], ['sampDomain-q a samp', 'elemDomain-q a elem', 'multiDomain-q a multi', 'Sample-q a samp', 'elemSample-q a elem', 'multiSample-q a multi']]
1
D19-1431table_5
Results of ablation study on Hits@10 of 1-shot link prediction in NELL-One.
2
[['Ablation Conf.', 'standard'], ['Ablation Conf.', ' -g'], ['Ablation Conf.', ' -g -r']]
1
[['BG:Pre-Train'], ['BG:In-Train']]
[['0.331', '0.401'], ['0.234', '0.341'], ['0.052', '0.052']]
column
['Hits@10', 'Hits@10']
[' -g -r']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BG:Pre-Train</th> <th>BG:In-Train</th> </tr> </thead> <tbody> <tr> <td>Ablation Conf. || standard</td> <td>0.331</td> <td>0.401</td> </tr> <tr> <td>Ablation Conf. || -g</td> <td>0.234</td> <td>0.341</td> </tr> <tr> <td>Ablation Conf. || -g -r</td> <td>0.052</td> <td>0.052</td> </tr> </tbody></table>
Table 5
table_5
D19-1431
7
emnlp2019
Table 5 shows that removing gradient meta decreases performance by 29.3% and 15% on the two dataset settings, and further removing relation meta decreases the performance by 55% and 72% compared to the standard results. Thus both relation meta and gradient meta contribute significantly, and relation meta contributes more than gradient meta. Without gradient meta and relation meta, there is no relation-specific meta information transferred in the model and it almost doesn't work. This also illustrates that relation-specific meta information is important and effective for the few-shot link prediction task.
[1, 1, 1, 1]
['Table 5 shows that removing gradient meta decreases performance by 29.3% and 15% on the two dataset settings, and further removing relation meta decreases the performance by 55% and 72% compared to the standard results.', 'Thus both relation meta and gradient meta contribute significantly, and relation meta contributes more than gradient meta.', "Without gradient meta and relation meta, there is no relation-specific meta information transferred in the model and it almost doesn't work.", 'This also illustrates that relation-specific meta information is important and effective for the few-shot link prediction task.']
[[' -g', ' -g -r'], [' -g -r'], [' -g -r'], None]
1
D19-1437table_1
BLEU scores on three MT benchmark datasets for FlowSeq with argmax decoding and baselines with purely non-autoregressive decoding method. The first and second block are results of models trained w/w.o. knowledge distillation, respectively.
3
[['Models', 'Raw Data', 'CMLM-base'], ['Models', 'Raw Data', 'LV NAR'], ['Models', 'Raw Data', 'FlowSeq-base'], ['Models', 'Raw Data', 'FlowSeq-large'], ['Models', 'Knowledge Distillation', 'NAT-IR'], ['Models', 'Knowledge Distillation', 'CTC Loss'], ['Models', 'Knowledge Distillation', 'NAT w/ FT'], ['Models', 'Knowledge Distillation', 'NAT-REG'], ['Models', 'Knowledge Distillation', 'CMLM-small'], ['Models', 'Knowledge Distillation', 'CMLM-base'], ['Models', 'Knowledge Distillation', 'FlowSeq-base'], ['Models', 'Knowledge Distillation', 'FlowSeq-large']]
2
[['WMT2014', 'EN-DE'], ['WMT2014', 'DE-EN'], ['WMT2016', 'EN-RO'], ['WMT2016', 'RO-EN'], ['IWSLT2014', 'DE-EN']]
[['10.88', '-', '20.24', '-', '-'], ['11.8', '-', '-', '-', '-'], ['18.55', '23.36', '29.26', '30.16', '24.75'], ['20.85', '25.4', '29.86', '30.69', '-'], ['13.91', '16.77', '24.45', '25.73', '21.86'], ['17.68', '19.8', '19.93', '24.71', '-'], ['17.69', '21.47', '27.29', '29.06', '20.32'], ['20.65', '24.77', '-', '-', '23.89'], ['15.06', '19.26', '20.12', '20.36', '-'], ['18.12', '22.26', '23.65', '22.78', '-'], ['21.45', '26.16', '29.34', '30.44', '27.55'], ['23.72', '28.39', '29.73', '30.72', '-']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['FlowSeq-base', 'FlowSeq-large']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WMT2014 || EN-DE</th> <th>WMT2014 || DE-EN</th> <th>WMT2016 || EN-RO</th> <th>WMT2016 || RO-EN</th> <th>IWSLT2014 || DE-EN</th> </tr> </thead> <tbody> <tr> <td>Models || Raw Data || CMLM-base</td> <td>10.88</td> <td>-</td> <td>20.24</td> <td>-</td> <td>-</td> </tr> <tr> <td>Models || Raw Data || LV NAR</td> <td>11.8</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Models || Raw Data || FlowSeq-base</td> <td>18.55</td> <td>23.36</td> <td>29.26</td> <td>30.16</td> <td>24.75</td> </tr> <tr> <td>Models || Raw Data || FlowSeq-large</td> <td>20.85</td> <td>25.4</td> <td>29.86</td> <td>30.69</td> <td>-</td> </tr> <tr> <td>Models || Knowledge Distillation || NAT-IR</td> <td>13.91</td> <td>16.77</td> <td>24.45</td> <td>25.73</td> <td>21.86</td> </tr> <tr> <td>Models || Knowledge Distillation || CTC Loss</td> <td>17.68</td> <td>19.8</td> <td>19.93</td> <td>24.71</td> <td>-</td> </tr> <tr> <td>Models || Knowledge Distillation || NAT w/ FT</td> <td>17.69</td> <td>21.47</td> <td>27.29</td> <td>29.06</td> <td>20.32</td> </tr> <tr> <td>Models || Knowledge Distillation || NAT-REG</td> <td>20.65</td> <td>24.77</td> <td>-</td> <td>-</td> <td>23.89</td> </tr> <tr> <td>Models || Knowledge Distillation || CMLM-small</td> <td>15.06</td> <td>19.26</td> <td>20.12</td> <td>20.36</td> <td>-</td> </tr> <tr> <td>Models || Knowledge Distillation || CMLM-base</td> <td>18.12</td> <td>22.26</td> <td>23.65</td> <td>22.78</td> <td>-</td> </tr> <tr> <td>Models || Knowledge Distillation || FlowSeq-base</td> <td>21.45</td> <td>26.16</td> <td>29.34</td> <td>30.44</td> <td>27.55</td> </tr> <tr> <td>Models || Knowledge Distillation || FlowSeq-large</td> <td>23.72</td> <td>28.39</td> <td>29.73</td> <td>30.72</td> <td>-</td> </tr> </tbody></table>
Table 1
table_1
D19-1437
7
emnlp2019
Table 1 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block are results using knowledge distillation. Without using knowledge distillation, FlowSeq base model achieves significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. It demonstrates the effectiveness of FlowSeq on modeling the complex interdependence in target languages.
[1, 2, 1, 1]
['Table 1 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass.', 'The first block lists results of models trained on raw data, while the second block are results using knowledge distillation.', 'Without using knowledge distillation, FlowSeq base model achieves significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR.', 'It demonstrates the effectiveness of FlowSeq on modeling the complex interdependence in target languages.']
[['FlowSeq-base', 'FlowSeq-large'], None, ['FlowSeq-base', 'CMLM-base', 'LV NAR'], ['FlowSeq-base', 'FlowSeq-large']]
1
D19-1457table_1
Results on the dependency task (test set).
2
[['Model', 'ProLocal'], ['Model', 'QRN'], ['Model', 'EntNet'], ['Model', 'ProStruct'], ['Model', 'ProGlobal'], ['Model', 'XPAD']]
1
[['P'], ['R'], ['F1']]
[['24.7', '18', '20.8'], ['32.6', '30.3', '31.4'], ['32.8', '38.6', '35.5'], ['76.3', '21.3', '33.4'], ['43.4', '37', '39.9'], ['62', '32.9', '43']]
column
['P', 'R', 'F1']
['XPAD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || ProLocal</td> <td>24.7</td> <td>18</td> <td>20.8</td> </tr> <tr> <td>Model || QRN</td> <td>32.6</td> <td>30.3</td> <td>31.4</td> </tr> <tr> <td>Model || EntNet</td> <td>32.8</td> <td>38.6</td> <td>35.5</td> </tr> <tr> <td>Model || ProStruct</td> <td>76.3</td> <td>21.3</td> <td>33.4</td> </tr> <tr> <td>Model || ProGlobal</td> <td>43.4</td> <td>37</td> <td>39.9</td> </tr> <tr> <td>Model || XPAD</td> <td>62</td> <td>32.9</td> <td>43</td> </tr> </tbody></table>
Table 1
table_1
D19-1457
7
emnlp2019
Table 1 reports results of all models on the new dependency task. XPAD significantly outperforms the strongest baselines, ProGlobal and ProStruct, by more than 3 points F1. XPAD has much higher precision than ProGlobal with similar recall, suggesting that XPAD's dependency-aware decoder helps it select more accurate dependencies. Compared with ProStruct, it yields more than 11.6 points improvement on recall. As XPAD adds a novel dependency layer on top of the ProStruct architecture, we note that all these gains come exclusively from the dependency layer.
[1, 1, 1, 1, 2]
['Table 1 reports results of all models on the new dependency task.', 'XPAD significantly outperforms the strongest baselines, ProGlobal and ProStruct, by more than 3 points F1.', "XPAD has much higher precision than ProGlobal with similar recall, suggesting that XPAD's dependency-aware decoder helps it select more accurate dependencies.", 'Compared with ProStruct, it yields more than 11.6 points improvement on recall.', 'As XPAD adds a novel dependency layer on top of the ProStruct architecture, we note that all these gains come exclusively from the dependency layer.']
[None, ['XPAD', 'ProGlobal', 'ProStruct'], None, ['XPAD', 'ProStruct'], ['XPAD']]
1
D19-1461table_2
Comparison between our models based on fastText and BERT with the BiLSTM used by (Khatri et al., 2018) on Wikipedia Toxic Comments.
1
[['fastText'], ['BERT-based'], ['(Khatri et al. 2018)']]
1
[['OFFENSIVE F1'], ['Weighted F1']]
[['71.40%', '94.80%'], ['83.40%', '96.70%'], ['-', '95.40%']]
column
['OFFENSIVE F1', 'Weighted F1']
['fastText', 'BERT-based']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OFFENSIVE F1</th> <th>Weighted F1</th> </tr> </thead> <tbody> <tr> <td>fastText</td> <td>71.40%</td> <td>94.80%</td> </tr> <tr> <td>BERT-based</td> <td>83.40%</td> <td>96.70%</td> </tr> <tr> <td>(Khatri et al. 2018)</td> <td>-</td> <td>95.40%</td> </tr> </tbody></table>
Table 2
table_2
D19-1461
4
emnlp2019
Experiments . We compare the two aforementioned models with (Khatri et al., 2018) who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors (Pennington et al., 2014). Results are listed in Table 2 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset. We also report the F1 of the OFFENSIVE-class which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to OFFENSIVE-class F1.). Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are SAFE, the weighted-F1 is closer to the F1 score of the SAFE class while we focus on detecting OFFENSIVE content. Our BERT-based model outperforms the method from Khatri et al. (2018); throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently.
[2, 1, 1, 1, 2, 2, 1, 2]
['Experiments .', 'We compare the two aforementioned models with (Khatri et al., 2018) who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors (Pennington et al., 2014).', 'Results are listed in Table 2 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset.', 'We also report the F1 of the OFFENSIVE-class which is the metric we favor within this work, although we report both.', '(Note that throughout the paper, the notation F1 is always referring to OFFENSIVE-class F1.).', 'Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are SAFE, the weighted-F1 is closer to the F1 score of the SAFE class while we focus on detecting OFFENSIVE content.', 'Our BERT-based model outperforms the method from Khatri et al. (2018); throughout the rest of the paper, we use the BERT-based architecture in our experiments.', 'In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently.']
[None, ['fastText', 'BERT-based', '(Khatri et al. 2018)'], ['Weighted F1'], ['OFFENSIVE F1'], ['OFFENSIVE F1'], ['Weighted F1', 'OFFENSIVE F1'], ['BERT-based', '(Khatri et al. 2018)'], None]
1
D19-1463table_3
Results of Per-response accuracy and Per-dialog accuracy (in brackets) on bAbI dialogues. Per-dialog accuracy presents the accuracy of complete dialogues.
2
[['Task', 'T3'], ['Task', 'T4'], ['Task', 'T5'], ['Task', 'T3-OOV'], ['Task', 'T4-OOV'], ['Task', 'T5-OOV']]
2
[['SEQ2SEQ', 'Per-response accuracy'], ['SEQ2SEQ', 'Per-dialog accuracy'], ['SEQ2SEQ+Attn.', 'Per-response accuracy'], ['SEQ2SEQ+Attn.', 'Per-dialog accuracy'], ['Mem2Seq', 'Per-response accuracy'], ['Mem2Seq', 'Per-dialog accuracy'], ['HMNs-CFO', 'Per-response accuracy'], ['HMNs-CFO', 'Per-dialog accuracy'], ['HMN', 'Per-response accuracy'], ['HMN', 'Per-dialog accuracy']]
[['74.8', '0', '74.8', '0', '83.9', '15.6', '93.7', '55.9', '93.6', '56.1'], ['56.5', '0', '56.5', '0', '97', '90.5', '96.8', '89.3', '100', '100'], ['98.9', '82.9', '98.6', '83', '96.2', '46.4', '97.1', '58.2', '98', '69'], ['74.9', '0', '74', '0', '83.6', '18.1', '92.3', '45.2', '92.5', '48.2'], ['56.5', '0', '57', '0', '97', '89.4', '96.1', '90.3', '100', '100'], ['67.2', '0', '67.6', '0', '71.4', '0', '78.3', '0', '84.1', '2.6']]
column
['Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy']
['HMNs-CFO']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SEQ2SEQ || Per-response accuracy</th> <th>SEQ2SEQ || Per-dialog accuracy</th> <th>SEQ2SEQ+Attn. || Per-response accuracy</th> <th>SEQ2SEQ+Attn. || Per-dialog accuracy</th> <th>Mem2Seq || Per-response accuracy</th> <th>Mem2Seq || Per-dialog accuracy</th> <th>HMNs-CFO || Per-response accuracy</th> <th>HMNs-CFO || Per-dialog accuracy</th> <th>HMN || Per-response accuracy</th> <th>HMN || Per-dialog accuracy</th> </tr> </thead> <tbody> <tr> <td>Task || T3</td> <td>74.8</td> <td>0</td> <td>74.8</td> <td>0</td> <td>83.9</td> <td>15.6</td> <td>93.7</td> <td>55.9</td> <td>93.6</td> <td>56.1</td> </tr> <tr> <td>Task || T4</td> <td>56.5</td> <td>0</td> <td>56.5</td> <td>0</td> <td>97</td> <td>90.5</td> <td>96.8</td> <td>89.3</td> <td>100</td> <td>100</td> </tr> <tr> <td>Task || T5</td> <td>98.9</td> <td>82.9</td> <td>98.6</td> <td>83</td> <td>96.2</td> <td>46.4</td> <td>97.1</td> <td>58.2</td> <td>98</td> <td>69</td> </tr> <tr> <td>Task || T3-OOV</td> <td>74.9</td> <td>0</td> <td>74</td> <td>0</td> <td>83.6</td> <td>18.1</td> <td>92.3</td> <td>45.2</td> <td>92.5</td> <td>48.2</td> </tr> <tr> <td>Task || T4-OOV</td> <td>56.5</td> <td>0</td> <td>57</td> <td>0</td> <td>97</td> <td>89.4</td> <td>96.1</td> <td>90.3</td> <td>100</td> <td>100</td> </tr> <tr> <td>Task || T5-OOV</td> <td>67.2</td> <td>0</td> <td>67.6</td> <td>0</td> <td>71.4</td> <td>0</td> <td>78.3</td> <td>0</td> <td>84.1</td> <td>2.6</td> </tr> </tbody></table>
Table 3
table_3
D19-1463
6
emnlp2019
Table 3 shows results of models on bAbI tasks. HMNs and Mem2Seq adopt one hop attention only and note that all results are the best performance of each model in 100 epochs. HMNs achieved the best results on most tasks except T5. HMNs-CFO also outperforms the other models. This demonstrates that both training multiple distributions over heterogeneous information and employment of context-aware memory benefit the end-to-end dialogue system. The improvements in per-dialogue accuracy on out-of-vocabulary tests are even more significant. Figure 4 shows the changes of HMNs and HMNs-CFO's total loss across time. HMNs learns significantly faster.
[1, 1, 1, 1, 2, 1, 2, 2]
['Table 3 shows results of models on bAbI tasks.', 'HMNs and Mem2Seq adopt one hop attention only and note that all results are the best performance of each model in 100 epochs.', 'HMNs achieved the best results on most tasks except T5.', 'HMNs-CFO also outperforms the other models.', 'This demonstrates that both training multiple distributions over heterogeneous information and employment of context-aware memory benefit the end-to-end dialogue system.', 'The improvements in per-dialogue accuracy on out-of-vocabulary tests are even more significant.', "Figure 4 shows the changes of HMNs and HMNs-CFO's total loss across time.", 'HMNs learns significantly faster.']
[None, ['HMNs-CFO', 'Mem2Seq'], ['HMNs-CFO', 'Task', 'T5'], ['HMNs-CFO'], None, ['Per-dialog accuracy', 'T3-OOV', 'T4-OOV', 'T5-OOV'], ['HMNs-CFO'], ['HMNs-CFO']]
1
D19-1463table_4
The results on the DSTC 2
2
[['Model name', 'SEQ2SEQ'], ['Model name', 'SEQ2SEQ+Attn.'], ['Model name', 'SEQ2SEQ+Copy'], ['Model name', 'Mem2Seq'], ['Model name', 'Our model']]
1
[['F1'], ['BLEU']]
[['69.7', '55'], ['67.1', '56.6'], ['71.6', '55.4'], ['75.3', '55.3'], ['77.7', '56.4']]
column
['F1', 'BLEU']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model name || SEQ2SEQ</td> <td>69.7</td> <td>55</td> </tr> <tr> <td>Model name || SEQ2SEQ+Attn.</td> <td>67.1</td> <td>56.6</td> </tr> <tr> <td>Model name || SEQ2SEQ+Copy</td> <td>71.6</td> <td>55.4</td> </tr> <tr> <td>Model name || Mem2Seq</td> <td>75.3</td> <td>55.3</td> </tr> <tr> <td>Model name || Our model</td> <td>77.7</td> <td>56.4</td> </tr> </tbody></table>
Table 4
table_4
D19-1463
6
emnlp2019
Table 4 shows our model gets the best F1 score on dataset DSTC 2, while SEQ2SEQ with attention gets the best BLEU result.
[1]
['Table 4 shows our model gets the best F1 score on dataset DSTC 2, while SEQ2SEQ with attention gets the best BLEU result.']
[['Our model', 'F1', 'BLEU']]
1
D19-1467table_2
Results of the ALSC task in single-task settings in terms of accuracy (%) and Macro-F1 (%).
2
[['Model', 'LSTM'], ['Model', 'AT-LSTM'], ['Model', 'ATAE-LSTM'], ['Model', 'GCAE'], ['Model', 'AT-CAN-Rs'], ['Model', 'AT-CAN-Ro'], ['Model', 'ATAE-CAN-Rs'], ['Model', 'ATAE-CAN-Ro']]
3
[['Rest14', '3-way', 'Acc'], ['Rest14', '3-way', 'F1'], ['Rest14', 'Binary', 'Acc'], ['Rest14', 'Binary', 'F1'], ['Rest15', '3-way', 'Acc'], ['Rest15', '3-way', 'F1'], ['Rest15', 'Binary', 'Acc'], ['Rest15', 'Binary', 'F1']]
[['80.92', '68.3', '85.83', '80.88', '71.24', '49.4', '71.97', '69.97'], ['81.24', '69.19', '87.25', '82.2', '73.37', '51.74', '76.79', '74.61'], ['82.18', '69.18', '88.08', '83.03', '74.56', '51.4', '79.79', '78.69'], ['82.08', '70.2', '87.72', '83.84', '76.69', '53', '79.66', '77.96'], ['82.28', '70.94', '88.43', '84.07', '75.62', '53.56', '78.36', '76.69'], ['82.81', '71.32', '89.37', '85.66', '76.92', '55.67', '79.92', '78.77'], ['81.97', '72.19', '88.9', '84.29', '77.28', '52.45', '81.49', '80.61'], ['83.33', '73.23', '89.02', '84.76', '78.58', '54.72', '81.75', '80.91']]
column
['Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1']
['AT-CAN-Rs', 'AT-CAN-Ro', 'ATAE-CAN-Rs', 'ATAE-CAN-Ro']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || 3-way || Acc</th> <th>Rest14 || 3-way || F1</th> <th>Rest14 || Binary || Acc</th> <th>Rest14 || Binary || F1</th> <th>Rest15 || 3-way || Acc</th> <th>Rest15 || 3-way || F1</th> <th>Rest15 || Binary || Acc</th> <th>Rest15 || Binary || F1</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM</td> <td>80.92</td> <td>68.3</td> <td>85.83</td> <td>80.88</td> <td>71.24</td> <td>49.4</td> <td>71.97</td> <td>69.97</td> </tr> <tr> <td>Model || AT-LSTM</td> <td>81.24</td> <td>69.19</td> <td>87.25</td> <td>82.2</td> <td>73.37</td> <td>51.74</td> <td>76.79</td> <td>74.61</td> </tr> <tr> <td>Model || ATAE-LSTM</td> <td>82.18</td> <td>69.18</td> <td>88.08</td> <td>83.03</td> <td>74.56</td> <td>51.4</td> <td>79.79</td> <td>78.69</td> </tr> <tr> <td>Model || GCAE</td> <td>82.08</td> <td>70.2</td> <td>87.72</td> <td>83.84</td> <td>76.69</td> <td>53</td> <td>79.66</td> <td>77.96</td> </tr> <tr> <td>Model || AT-CAN-Rs</td> <td>82.28</td> <td>70.94</td> <td>88.43</td> <td>84.07</td> <td>75.62</td> <td>53.56</td> <td>78.36</td> <td>76.69</td> </tr> <tr> <td>Model || AT-CAN-Ro</td> <td>82.81</td> <td>71.32</td> <td>89.37</td> <td>85.66</td> <td>76.92</td> <td>55.67</td> <td>79.92</td> <td>78.77</td> </tr> <tr> <td>Model || ATAE-CAN-Rs</td> <td>81.97</td> <td>72.19</td> <td>88.9</td> <td>84.29</td> <td>77.28</td> <td>52.45</td> <td>81.49</td> <td>80.61</td> </tr> <tr> <td>Model || ATAE-CAN-Ro</td> <td>83.33</td> <td>73.23</td> <td>89.02</td> <td>84.76</td> <td>78.58</td> <td>54.72</td> <td>81.75</td> <td>80.91</td> </tr> </tbody></table>
Table 2
table_2
D19-1467
6
emnlp2019
Single-task Settings . Table 2 shows our experimental results of ALSC in single-task settings. Firstly, we observe that by introducing attention regularizations (either Rs or Ro), most of our proposed methods outperform their counterparts. Particularly, AT-CAN-Rs and AT-CAN-Ro outperform AT-LSTM in all results; ATAE-CAN-Rs and ATAE-CAN-Ro also outperform ATAE-LSTM in 15 of 16 results. In the Rest15 dataset, ATAE-CAN-Ro outperforms ATAE-LSTM by up to 5.39% of accuracy and 6.46% of the F1 score in the 3-way classification. Secondly, regularization Ro achieves better performance improvement than Rs in all results. This is because Ro includes both orthogonal and sparse regularizations for non-overlapping multi-aspect sentences. Thirdly, our approaches, especially ATAE-CAN-Ro, outperform the state-of-the-art baseline model GCAE. Finally, the LSTM method outputs the worst results in all cases, because it cannot distinguish different aspects.
[2, 1, 1, 1, 1, 1, 2, 1, 1]
['Single-task Settings .', 'Table 2 shows our experimental results of ALSC in single-task settings.', 'Firstly, we observe that by introducing attention regularizations (either Rs or Ro), most of our proposed methods outperform their counterparts.', 'Particularly, AT-CAN-Rs and AT-CAN-Ro outperform AT-LSTM in all results; ATAE-CAN-Rs and ATAE-CAN-Ro also outperform ATAE-LSTM in 15 of 16 results.', 'In the Rest15 dataset, ATAE-CAN-Ro outperforms ATAE-LSTM by up to 5.39% of accuracy and 6.46% of the F1 score in the 3-way classification.', 'Secondly, regularization Ro achieves better performance improvement than Rs in all results.', 'This is because Ro includes both orthogonal and sparse regularizations for non-overlapping multi-aspect sentences.', 'Thirdly, our approaches, especially ATAE-CAN-Ro, outperform the state-of-the-art baseline model GCAE.', 'Finally, the LSTM method outputs the worst results in all cases, because it cannot distinguish different aspects.']
[None, None, ['AT-CAN-Rs', 'AT-CAN-Ro', 'ATAE-CAN-Rs', 'ATAE-CAN-Ro'], ['AT-CAN-Rs', 'AT-CAN-Ro', 'AT-LSTM', 'ATAE-CAN-Rs', 'ATAE-CAN-Ro', 'ATAE-LSTM'], ['Rest15', 'ATAE-CAN-Ro', 'ATAE-LSTM', 'Acc', 'F1'], ['ATAE-CAN-Rs', 'ATAE-CAN-Ro'], ['ATAE-CAN-Ro'], ['ATAE-CAN-Ro', 'GCAE'], ['LSTM']]
1
D19-1467table_3
Results of the ALSC task in multi-task settings in terms of accuracy (%) and Macro-F1 (%).
2
[['Model', 'M-AT-LSTM'], ['Model', 'M-CAN-Rs'], ['Model', 'M-CAN-Ro'], ['Model', 'M-CAN-2Rs'], ['Model', 'M-CAN-2Ro']]
3
[['Rest14', '3-way', 'Acc'], ['Rest14', '3-way', 'F1'], ['Rest14', 'Binary', 'Acc'], ['Rest14', 'Binary', 'F1'], ['Rest15', '3-way', 'Acc'], ['Rest15', '3-way', 'F1'], ['Rest15', 'Binary', 'Acc'], ['Rest15', 'Binary', 'F1']]
[['82.6', '71.44', '88.55', '83.76', '76.33', '51.64', '79.53', '78.31'], ['83.65', '73.97', '89.26', '85.43', '75.74', '52.43', '79.66', '78.46'], ['83.12', '72.29', '89.61', '85.18', '77.04', '52.69', '79.4', '77.88'], ['83.23', '72.81', '89.37', '85.42', '78.22', '55.8', '80.44', '80.01'], ['84.28', '74.45', '89.96', '86.16', '77.51', '52.78', '82.14', '81.58']]
column
['Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1']
['M-AT-LSTM', 'M-CAN-Rs', 'M-CAN-Ro', 'M-CAN-2Rs', 'M-CAN-2Ro']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || 3-way || Acc</th> <th>Rest14 || 3-way || F1</th> <th>Rest14 || Binary || Acc</th> <th>Rest14 || Binary || F1</th> <th>Rest15 || 3-way || Acc</th> <th>Rest15 || 3-way || F1</th> <th>Rest15 || Binary || Acc</th> <th>Rest15 || Binary || F1</th> </tr> </thead> <tbody> <tr> <td>Model || M-AT-LSTM</td> <td>82.6</td> <td>71.44</td> <td>88.55</td> <td>83.76</td> <td>76.33</td> <td>51.64</td> <td>79.53</td> <td>78.31</td> </tr> <tr> <td>Model || M-CAN-Rs</td> <td>83.65</td> <td>73.97</td> <td>89.26</td> <td>85.43</td> <td>75.74</td> <td>52.43</td> <td>79.66</td> <td>78.46</td> </tr> <tr> <td>Model || M-CAN-Ro</td> <td>83.12</td> <td>72.29</td> <td>89.61</td> <td>85.18</td> <td>77.04</td> <td>52.69</td> <td>79.4</td> <td>77.88</td> </tr> <tr> <td>Model || M-CAN-2Rs</td> <td>83.23</td> <td>72.81</td> <td>89.37</td> <td>85.42</td> <td>78.22</td> <td>55.8</td> <td>80.44</td> <td>80.01</td> </tr> <tr> <td>Model || M-CAN-2Ro</td> <td>84.28</td> <td>74.45</td> <td>89.96</td> <td>86.16</td> <td>77.51</td> <td>52.78</td> <td>82.14</td> <td>81.58</td> </tr> </tbody></table>
Table 3
table_3
D19-1467
7
emnlp2019
Multi-task Settings Table 3 shows experimental results of ALSC in multi-task settings. We first observe that the overall results in multi-task settings outperform the ones in single-task settings, which demonstrates the effectiveness of multi-task learning by introducing the auxiliary ACD task to help the ALSC task. Second, in almost all cases, applying attention regularizations to both tasks gains more performance improvement than only to the ALSC task, which shows that our attention regularization approach can be extended to different tasks which involve aspect-level attention weights, and works well in multi-task settings. For example, for the Binary classification in the Rest15 dataset, M-AT-LSTM outperforms AT-LSTM by 3.57% of accuracy and 4.96% of the F1 score, and M-CAN-2Ro further outperforms M-AT-LSTM by 3.28% of accuracy and 4.0% of the F1 score.
[1, 2, 1, 1]
['Multi-task Settings Table 3 shows experimental results of ALSC in multi-task settings.', 'We first observe that the overall results in multi-task settings outperform the ones in single-task settings, which demonstrates the effectiveness of multi-task learning by introducing the auxiliary ACD task to help the ALSC task.', 'Second, in almost all cases, applying attention regularizations to both tasks gains more performance improvement than only to the ALSC task, which shows that our attention regularization approach can be extended to different tasks which involve aspect-level attention weights, and works well in multi-task settings.', 'For example, for the Binary classification in the Rest15 dataset, M-AT-LSTM outperforms AT-LSTM by 3.57% of accuracy and 4.96% of the F1 score, and M-CAN-2Ro further outperforms M-AT-LSTM by 3.28% of accuracy and 4.0% of the F1 score.']
[None, None, None, ['Binary', 'Rest15', 'M-AT-LSTM', 'Acc', 'F1', 'M-CAN-2Ro']]
1
D19-1467table_4
Results of the ACD task. Rest14 has 5 aspect categories while Rest15 has 13 ones.
2
[['Model', 'M-AT-LSTM'], ['Model', 'M-CAN-2Rs'], ['Model', 'M-CAN-2Ro']]
2
[['Rest14', 'Precision'], ['Rest14', 'Recall'], ['Rest14', 'F1'], ['Rest15', 'Precision'], ['Rest15', 'Recall'], ['Rest15', 'F1']]
[['0.8626', '0.8553', '0.8589', '0.6691', '0.4748', '0.5555'], ['0.8698', '0.8595', '0.8645', '0.6244', '0.5019', '0.5565'], ['0.8907', '0.8627', '0.8765', '0.7127', '0.4865', '0.5782']]
column
['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1']
['M-CAN-2Rs', 'M-CAN-2Ro']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || Precision</th> <th>Rest14 || Recall</th> <th>Rest14 || F1</th> <th>Rest15 || Precision</th> <th>Rest15 || Recall</th> <th>Rest15 || F1</th> </tr> </thead> <tbody> <tr> <td>Model || M-AT-LSTM</td> <td>0.8626</td> <td>0.8553</td> <td>0.8589</td> <td>0.6691</td> <td>0.4748</td> <td>0.5555</td> </tr> <tr> <td>Model || M-CAN-2Rs</td> <td>0.8698</td> <td>0.8595</td> <td>0.8645</td> <td>0.6244</td> <td>0.5019</td> <td>0.5565</td> </tr> <tr> <td>Model || M-CAN-2Ro</td> <td>0.8907</td> <td>0.8627</td> <td>0.8765</td> <td>0.7127</td> <td>0.4865</td> <td>0.5782</td> </tr> </tbody></table>
Table 4
table_4
D19-1467
7
emnlp2019
Table 4 shows the results of the ACD task in multi-task settings. Our proposed regularization terms can also improve the performance of ACD. Regularization Ro achieves the best performance in almost all metrics.
[1, 1, 1]
['Table 4 shows the results of the ACD task in multi-task settings.', 'Our proposed regularization terms can also improve the performance of ACD.', 'Regularization Ro achieves the best performance in almost all metrics.']
[None, ['M-CAN-2Rs', 'M-CAN-2Ro'], ['M-CAN-2Ro']]
1
D19-1470table_3
Results of our model variants on development set. The best MAP results are in bold. “Train Time”: training time per epoch divided by that of model GCN (With BiLSTM).
2
[['Models', 'BiLSTM'], ['Models', 'GLSTM'], ['Models', 'GCN (W/O BiLSTM)'], ['Models', 'GCN (With BiLSTM)']]
2
[['Train Time', '-'], ['MAP', 'Twitter'], ['MAP', 'Reddit']]
[['0.94', '0.617', '0.498'], ['1.25', '0.617', '0.528'], ['1.03', '0.619', '0.53'], ['1', '0.62', '0.533']]
column
['Train Time', 'MAP', 'MAP']
['GCN (With BiLSTM)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train Time || -</th> <th>MAP || Twitter</th> <th>MAP || Reddit</th> </tr> </thead> <tbody> <tr> <td>Models || BiLSTM</td> <td>0.94</td> <td>0.617</td> <td>0.498</td> </tr> <tr> <td>Models || GLSTM</td> <td>1.25</td> <td>0.617</td> <td>0.528</td> </tr> <tr> <td>Models || GCN (W/O BiLSTM)</td> <td>1.03</td> <td>0.619</td> <td>0.53</td> </tr> <tr> <td>Models || GCN (With BiLSTM)</td> <td>1</td> <td>0.62</td> <td>0.533</td> </tr> </tbody></table>
Table 3
table_3
D19-1470
7
emnlp2019
We first compare the effects of varying interaction modeling methods (see Section 3.3) on conversation recommendation. Table 3 displays their results on the development set. In comparison, we consider BiLSTM over turn sequence (only chronological order encoded and henceforth BiLSTM), GLSTM (state number g = 6), GCN (layer number set to 3) without BiLSTM-encoded temporal representations (henceforth GCN (W/O BiLSTM)), and the full GCN described in Section 3.3 (henceforth GCN (With BiLSTM) and layer number set to 1). The above hyper-parameters are tuned based on the training loss. From the results, we find that BiLSTM exhibits the worst results, as it does not encode replying relations. Its differences from the others are larger on Reddit, attributed to the rich replying structure therein (as shown in Figure 4(b)). The best performance is achieved for GCN (With BiLSTM), with relatively less training time. This shows the effectiveness and efficiency of exploring the order of turns with BiLSTM and the user interactions with GCN. In the later analysis, we will only discuss our model that exploits GCN (With BiLSTM) for interaction modeling.
[2, 1, 1, 2, 1, 2, 1, 1, 2]
['We first compare the effects of varying interaction modeling methods (see Section 3.3) on conversation recommendation.', 'Table 3 displays their results on the development set.', 'In comparison, we consider BiLSTM over turn sequence (only chronological order encoded and henceforth BiLSTM), GLSTM (state number g = 6), GCN (layer number set to 3) without BiLSTM-encoded temporal representations (henceforth GCN (W/O BiLSTM)), and the full GCN described in Section 3.3 (henceforth GCN (With BiLSTM) and layer number set to 1).', 'The above hyper-parameters are tuned based on the training loss.', 'From the results, we find that BiLSTM exhibits the worst results, as it does not encode replying relations.', 'Its differences from the others are larger on Reddit, attributed to the rich replying structure therein (as shown in Figure 4(b)).', 'The best performance is achieved for GCN (With BiLSTM), with relatively less training time.', 'This shows the effectiveness and efficiency of exploring the order of turns with BiLSTM and the user interactions with GCN.', 'In the later analysis, we will only discuss our model that exploits GCN (With BiLSTM) for interaction modeling.']
[None, None, ['BiLSTM', 'GLSTM', 'GCN (W/O BiLSTM)', 'GCN (With BiLSTM)'], None, ['BiLSTM', 'MAP'], ['Reddit'], ['GCN (With BiLSTM)', 'MAP', 'Train Time'], ['GCN (With BiLSTM)'], ['GCN (With BiLSTM)']]
1
D19-1470table_4
Main results on conversation recommendation. “nDCG” stands for “nDCG@5”. The best result for each column is in bold. Our model significantly outperforms all the comparisons (p < 0.01, paired t-test).
3
[['Models', 'Baselines', 'RANDOM'], ['Models', 'Baselines', 'POPULARITY'], ['Models', 'Comparisons', 'RSVM'], ['Models', 'Comparisons', 'NCF'], ['Models', 'Comparisons', 'CONVMF'], ['Models', 'Comparisons', 'CR_JTD'], ['Models', 'Ours', 'OURS']]
2
[['Twitter', 'MAP'], ['Twitter', 'P@1'], ['Twitter', 'nDCG'], ['Reddit', 'MAP'], ['Reddit', 'P@1'], ['Reddit', 'nDCG']]
[['0.006', '0.001', '0.002', '0.04', '0.01', '0.022'], ['0.023', '0.005', '0.01', '0.082', '0.033', '0.063'], ['0.554', '0.575', '0.559', '0.453', '0.457', '0.466'], ['0.573', '0.593', '0.576', '0.412', '0.544', '0.461'], ['0.579', '0.596', '0.583', '0.485', '0.532', '0.52'], ['0.591', '0.591', '0.6', '0.453', '0.559', '0.485'], ['0.625', '0.632', '0.626', '0.538', '0.674', '0.59']]
column
['MAP', 'P@1', 'nDCG', 'MAP', 'P@1', 'nDCG']
['OURS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || MAP</th> <th>Twitter || P@1</th> <th>Twitter || nDCG</th> <th>Reddit || MAP</th> <th>Reddit || P@1</th> <th>Reddit || nDCG</th> </tr> </thead> <tbody> <tr> <td>Models || Baselines || RANDOM</td> <td>0.006</td> <td>0.001</td> <td>0.002</td> <td>0.04</td> <td>0.01</td> <td>0.022</td> </tr> <tr> <td>Models || Baselines || POPULARITY</td> <td>0.023</td> <td>0.005</td> <td>0.01</td> <td>0.082</td> <td>0.033</td> <td>0.063</td> </tr> <tr> <td>Models || Comparisons || RSVM</td> <td>0.554</td> <td>0.575</td> <td>0.559</td> <td>0.453</td> <td>0.457</td> <td>0.466</td> </tr> <tr> <td>Models || Comparisons || NCF</td> <td>0.573</td> <td>0.593</td> <td>0.576</td> <td>0.412</td> <td>0.544</td> <td>0.461</td> </tr> <tr> <td>Models || Comparisons || CONVMF</td> <td>0.579</td> <td>0.596</td> <td>0.583</td> <td>0.485</td> <td>0.532</td> <td>0.52</td> </tr> <tr> <td>Models || Comparisons || CR_JTD</td> <td>0.591</td> <td>0.591</td> <td>0.6</td> <td>0.453</td> <td>0.559</td> <td>0.485</td> </tr> <tr> <td>Models || Ours || OURS</td> <td>0.625</td> <td>0.632</td> <td>0.626</td> <td>0.538</td> <td>0.674</td> <td>0.59</td> </tr> </tbody></table>
Table 4
table_4
D19-1470
8
emnlp2019
5.2 Comparisons with Previous Work . Main Results. Table 4 shows the conversation recommendation results with baselines and state-of-the-art methods. Our model exhibits the best results on both datasets, significantly outperforming all the comparison models. It indicates the usefulness of encoding user interactions for conversation recommendation. Particularly, CONVMF is able to encode turns' temporal orders yet ignores how they reply to each other in conversation history. It is outperformed by our model, showing the benefit of capturing users' replying patterns for predicting what conversations will draw their engagement.
[2, 2, 1, 1, 1, 1, 1]
['5.2 Comparisons with Previous Work .', 'Main Results.', 'Table 4 shows the conversation recommendation results with baselines and state-of-the-art methods.', 'Our model exhibits the best results on both datasets, significantly outperforming all the comparison models.', 'It indicates the usefulness of encoding user interactions for conversation recommendation.', "Particularly, CONVMF is able to encode turns' temporal orders yet ignores how they reply to each other in conversation history.", "It is outperformed by our model, showing the benefit of capturing users' replying patterns for predicting what conversations will draw their engagement."]
[None, None, None, ['OURS'], None, ['CONVMF'], ['OURS']]
1
D19-1485table_2
Results of rumor stance classification. FS, FD, FQ and FC denote the F1 scores of supporting, denying, querying and commenting classes respectively. “–” indicates that the original paper does not report the metric.
2
[['Method', 'Affective Feature + SVM (Pamungkas et al., 2018)'], ['Method', 'BranchLSTM (Kochkina et al., 2017)'], ['Method', 'TemporalAttention (Veyseh et al., 2017)'], ['Method', 'Conversational-GCN (Ours, L = 2)']]
2
[['Evaluation Metric', 'Macro-F1'], ['Evaluation Metric', 'FS'], ['Evaluation Metric', 'FD'], ['Evaluation Metric', 'FQ'], ['Evaluation Metric', 'FC'], ['Evaluation Metric', 'Acc.']]
[['0.47', '0.41', '0', '0.58', '0.88', '0.795'], ['0.434', '0.403', '0', '0.462', '0.873', '0.784'], ['0.482', '-', '-', '-', '-', '0.82'], ['0.499', '0.311', '0.194', '0.646', '0.847', '0.751']]
column
['Macro-F1', 'FS', 'FD', 'FQ', 'FC', 'Acc.']
['Conversational-GCN (Ours, L = 2)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Evaluation Metric || Macro-F1</th> <th>Evaluation Metric || FS</th> <th>Evaluation Metric || FD</th> <th>Evaluation Metric || FQ</th> <th>Evaluation Metric || FC</th> <th>Evaluation Metric || Acc.</th> </tr> </thead> <tbody> <tr> <td>Method || Affective Feature + SVM (Pamungkas et al., 2018)</td> <td>0.47</td> <td>0.41</td> <td>0</td> <td>0.58</td> <td>0.88</td> <td>0.795</td> </tr> <tr> <td>Method || BranchLSTM (Kochkina et al., 2017)</td> <td>0.434</td> <td>0.403</td> <td>0</td> <td>0.462</td> <td>0.873</td> <td>0.784</td> </tr> <tr> <td>Method || TemporalAttention (Veyseh et al., 2017)</td> <td>0.482</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.82</td> </tr> <tr> <td>Method || Conversational-GCN (Ours, L = 2)</td> <td>0.499</td> <td>0.311</td> <td>0.194</td> <td>0.646</td> <td>0.847</td> <td>0.751</td> </tr> </tbody></table>
Table 2
table_2
D19-1485
6
emnlp2019
Performance Comparison Table 2 shows the results of different methods for rumor stance classification. Clearly, the macro-averaged F1 of Conversational-GCN is better than that of all baselines. Especially, our method shows the effectiveness of determining denying stance, while other methods cannot give any correct prediction for the denying class (their FD scores are equal to zero). Further, Conversational-GCN also achieves a higher F1 score for querying stance (FQ). Identifying denying and querying stances correctly is crucial for veracity prediction because they play the role of indicators for false and unverified rumors respectively (see Figure 2). Meanwhile, the class-imbalanced problem of the data makes this a challenge. Conversational-GCN effectively encodes structural context for each tweet via aggregating information from its neighbors, learning powerful stance features without feature engineering. It is also more computationally efficient than sequential and temporal-based methods. The information aggregations for all tweets in a conversation work in parallel and thus the running time is not sensitive to the conversation’s depth.
[1, 1, 2, 1, 2, 2, 2, 2, 2]
['Performance Comparison Table 2 shows the results of different methods for rumor stance classification.', 'Clearly, the macro-averaged F1 of Conversational-GCN is better than that of all baselines.', 'Especially, our method shows the effectiveness of determining denying stance, while other methods cannot give any correct prediction for the denying class (their FD scores are equal to zero).', 'Further, Conversational-GCN also achieves a higher F1 score for querying stance (FQ).', 'Identifying denying and querying stances correctly is crucial for veracity prediction because they play the role of indicators for false and unverified rumors respectively (see Figure 2).', 'Meanwhile, the class-imbalanced problem of the data makes this a challenge.', 'Conversational-GCN effectively encodes structural context for each tweet via aggregating information from its neighbors, learning powerful stance features without feature engineering.', 'It is also more computationally efficient than sequential and temporal-based methods.', 'The information aggregations for all tweets in a conversation work in parallel and thus the running time is not sensitive to the conversation’s depth.']
[None, ['Macro-F1', 'Conversational-GCN (Ours, L = 2)'], ['Conversational-GCN (Ours, L = 2)'], ['Conversational-GCN (Ours, L = 2)'], None, None, ['Conversational-GCN (Ours, L = 2)'], None, None]
1
D19-1485table_3
Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models.
3
[['Method Setting', 'Single-task', 'TD-RvNN (Ma et al., 2018b)'], ['Method Setting', 'Single-task', 'Hierarchical GCN-RNN (Ours)'], ['Method Setting', 'Multi-task', 'BranchLSTM+NileTMRG (Kochkina et al., 2018)'], ['Method Setting', 'Multi-task', 'MTL2 (Veracity+Stance) (Kochkina et al., 2018)'], ['Method Setting', 'Multi-task', 'Hierarchical-PSV (Ours, λ = 1)']]
2
[['SemEval dataset', 'Macro-F1'], ['SemEval dataset', 'Acc.'], ['PHEME dataset', 'Macro-F1'], ['PHEME dataset', 'Acc.']]
[['0.509', '0.536', '0.264', '0.341'], ['0.54', '0.536', '0.317', '0.356'], ['0.539', '0.57', '0.297', '0.36'], ['0.558', '0.571', '0.318', '0.357'], ['0.588', '0.643', '0.333', '0.361']]
column
['Macro-F1', 'Acc.', 'Macro-F1', 'Acc.']
['Hierarchical GCN-RNN (Ours)', 'TD-RvNN (Ma et al., 2018b)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SemEval dataset || Macro-F1</th> <th>SemEval dataset || Acc.</th> <th>PHEME dataset || Macro-F1</th> <th>PHEME dataset || Acc.</th> </tr> </thead> <tbody> <tr> <td>Method Setting || Single-task || TD-RvNN (Ma et al., 2018b)</td> <td>0.509</td> <td>0.536</td> <td>0.264</td> <td>0.341</td> </tr> <tr> <td>Method Setting || Single-task || Hierarchical GCN-RNN (Ours)</td> <td>0.54</td> <td>0.536</td> <td>0.317</td> <td>0.356</td> </tr> <tr> <td>Method Setting || Multi-task || BranchLSTM+NileTMRG (Kochkina et al., 2018)</td> <td>0.539</td> <td>0.57</td> <td>0.297</td> <td>0.36</td> </tr> <tr> <td>Method Setting || Multi-task || MTL2 (Veracity+Stance) (Kochkina et al., 2018)</td> <td>0.558</td> <td>0.571</td> <td>0.318</td> <td>0.357</td> </tr> <tr> <td>Method Setting || Multi-task || Hierarchical-PSV (Ours, λ = 1)</td> <td>0.588</td> <td>0.643</td> <td>0.333</td> <td>0.361</td> </tr> </tbody></table>
Table 3
table_3
D19-1485
7
emnlp2019
Performance Comparison Table 3 shows the comparisons of different methods. By comparing single-task methods, Hierarchical GCN-RNN performs better than TD-RvNN, which indicates that our hierarchical framework can effectively model conversation structures to learn high-quality tweet representations. The recursive operation in TD-RvNN is performed in a fixed direction and runs over all tweets, and thus may not obtain enough useful information. Moreover, the training speed of Hierarchical GCN-RNN is significantly faster than that of TD-RvNN: in the condition of batch-wise optimization for training one step over a batch containing 32 conversations, our method takes only 0.18 seconds, while TD-RvNN takes 5.02 seconds.
[1, 1, 1, 1]
['Performance Comparison Table 3 shows the comparisons of different methods.', 'By comparing single-task methods, Hierarchical GCN-RNN performs better than TD-RvNN, which indicates that our hierarchical framework can effectively model conversation structures to learn high-quality tweet representations.', 'The recursive operation in TD-RvNN is performed in a fixed direction and runs over all tweets, and thus may not obtain enough useful information.', 'Moreover, the training speed of Hierarchical GCN-RNN is significantly faster than that of TD-RvNN: in the condition of batch-wise optimization for training one step over a batch containing 32 conversations, our method takes only 0.18 seconds, while TD-RvNN takes 5.02 seconds.']
[None, ['Hierarchical GCN-RNN (Ours)', 'TD-RvNN (Ma et al., 2018b)'], ['TD-RvNN (Ma et al., 2018b)'], ['Hierarchical GCN-RNN (Ours)', 'TD-RvNN (Ma et al., 2018b)']]
1
D19-1488table_2
Test accuracy (%) of different models on six standard datasets. The second best results are underlined. The note ∗ means our model significantly outperforms the baselines based on t-test (p < 0.01).
2
[['Dataset', 'AGNews'], ['Dataset', 'Snippets'], ['Dataset', 'Ohsumed'], ['Dataset', 'TagMyNews'], ['Dataset', 'MR'], ['Dataset', 'Twitter']]
1
[['SVM +TFIDF'], ['SVM +LDA'], ['CNN -rand'], ['CNN -pretrain'], ['LSTM -rand'], ['LSTM -pretrain'], ['PTE'], ['TextGCN'], ['HAN'], ['HGAT']]
[['57.73', '65.16', '32.65', '67.24', '31.24', '66.28', '36', '67.61', '62.64', '72.10*'], ['63.85', '63.91', '48.34', '77.09', '26.38', '75.89', '63.1', '77.82', '58.38', '82.36*'], ['41.47', '31.26', '35.25', '32.92', '19.87', '28.7', '36.63', '41.56', '36.97', '42.68*'], ['42.9', '21.88', '28.76', '57.12', '25.52', '57.32', '40.32', '54.28', '42.18', '61.72*'], ['56.67', '54.69', '54.85', '58.32', '52.62', '60.89', '54.74', '59.12', '57.11', '62.75*'], ['54.39', '50.42', '52.58', '56.34', '54.8', '60.28', '54.24', '60.15', '53.75', '63.21*']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['HGAT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SVM +TFIDF</th> <th>SVM +LDA</th> <th>CNN -rand</th> <th>CNN -pretrain</th> <th>LSTM -rand</th> <th>LSTM -pretrain</th> <th>PTE</th> <th>TextGCN</th> <th>HAN</th> <th>HGAT</th> </tr> </thead> <tbody> <tr> <td>Dataset || AGNews</td> <td>57.73</td> <td>65.16</td> <td>32.65</td> <td>67.24</td> <td>31.24</td> <td>66.28</td> <td>36</td> <td>67.61</td> <td>62.64</td> <td>72.10*</td> </tr> <tr> <td>Dataset || Snippets</td> <td>63.85</td> <td>63.91</td> <td>48.34</td> <td>77.09</td> <td>26.38</td> <td>75.89</td> <td>63.1</td> <td>77.82</td> <td>58.38</td> <td>82.36*</td> </tr> <tr> <td>Dataset || Ohsumed</td> <td>41.47</td> <td>31.26</td> <td>35.25</td> <td>32.92</td> <td>19.87</td> <td>28.7</td> <td>36.63</td> <td>41.56</td> <td>36.97</td> <td>42.68*</td> </tr> <tr> <td>Dataset || TagMyNews</td> <td>42.9</td> <td>21.88</td> <td>28.76</td> <td>57.12</td> <td>25.52</td> <td>57.32</td> <td>40.32</td> <td>54.28</td> <td>42.18</td> <td>61.72*</td> </tr> <tr> <td>Dataset || MR</td> <td>56.67</td> <td>54.69</td> <td>54.85</td> <td>58.32</td> <td>52.62</td> <td>60.89</td> <td>54.74</td> <td>59.12</td> <td>57.11</td> <td>62.75*</td> </tr> <tr> <td>Dataset || Twitter</td> <td>54.39</td> <td>50.42</td> <td>52.58</td> <td>56.34</td> <td>54.8</td> <td>60.28</td> <td>54.24</td> <td>60.15</td> <td>53.75</td> <td>63.21*</td> </tr> </tbody></table>
Table 2
table_2
D19-1488
7
emnlp2019
Table 2 shows the classification accuracy of different methods on 6 benchmark datasets. We can see that our methods significantly outperform all the baselines by a large margin, which shows the effectiveness of our proposed method on semi-supervised short text classification. The traditional SVM methods, based on human-designed features, achieve better performance than the deep models with random initialization, i.e., CNN-rand and LSTM-rand, in most cases. In contrast, CNN-pretrain and LSTM-pretrain, using the pre-trained vectors, achieve significant improvements and outperform SVMs. Our model HGAT consistently outperforms all the state-of-the-art models by a large margin, which shows the effectiveness of our proposed method.
[1, 1, 1, 1, 1]
['Table 2 shows the classification accuracy of different methods on 6 benchmark datasets.', 'We can see that our methods significantly outperform all the baselines by a large margin, which shows the effectiveness of our proposed method on semi-supervised short text classification.', 'The traditional SVM methods, based on human-designed features, achieve better performance than the deep models with random initialization, i.e., CNN-rand and LSTM-rand, in most cases.', 'In contrast, CNN-pretrain and LSTM-pretrain, using the pre-trained vectors, achieve significant improvements and outperform SVMs.', 'Our model HGAT consistently outperforms all the state-of-the-art models by a large margin, which shows the effectiveness of our proposed method.']
[['Dataset'], ['HGAT'], ['SVM +TFIDF', 'SVM +LDA', 'CNN -rand', 'LSTM -rand'], ['CNN -pretrain', 'LSTM -pretrain', 'SVM +TFIDF', 'SVM +LDA'], ['HGAT']]
1
D19-1491table_7
Ranking results on BENCHLS dataset
3
[['BENCHLS', 'full(929)', 'S'], ['BENCHLS', 'full(929)', 'C'], ['BENCHLS', 'full(929)', 'S+C'], ['BENCHLS', 'test(464)', 'S'], ['BENCHLS', 'test(464)', 'C'], ['BENCHLS', 'test(464)', 'S+C'], ['BENCHLS', 'test(464)', 'P&S']]
1
[['n=1'], ['n=2'], ['n=3'], ['MRR']]
[['0.4974', '0.7381', '0.8899', '0.6648'], ['0.3509', '0.5885', '0.7877', '0.5998'], ['0.5602', '0.8064', '0.9428', '0.7219'], ['0.5839', '0.7546', '0.9302', '0.7083'], ['0.4086', '0.7142', '0.895', '0.6563'], ['0.6774', '0.7857', '0.9308', '0.8218'], ['0.4841', '0.5596', '0.7004', '0.6615']]
column
['n=1', 'n=2', 'n=3', 'MRR']
['BENCHLS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>n=1</th> <th>n=2</th> <th>n=3</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>BENCHLS || full(929) || S</td> <td>0.4974</td> <td>0.7381</td> <td>0.8899</td> <td>0.6648</td> </tr> <tr> <td>BENCHLS || full(929) || C</td> <td>0.3509</td> <td>0.5885</td> <td>0.7877</td> <td>0.5998</td> </tr> <tr> <td>BENCHLS || full(929) || S+C</td> <td>0.5602</td> <td>0.8064</td> <td>0.9428</td> <td>0.7219</td> </tr> <tr> <td>BENCHLS || test(464) || S</td> <td>0.5839</td> <td>0.7546</td> <td>0.9302</td> <td>0.7083</td> </tr> <tr> <td>BENCHLS || test(464) || C</td> <td>0.4086</td> <td>0.7142</td> <td>0.895</td> <td>0.6563</td> </tr> <tr> <td>BENCHLS || test(464) || S+C</td> <td>0.6774</td> <td>0.7857</td> <td>0.9308</td> <td>0.8218</td> </tr> <tr> <td>BENCHLS || test(464) || P&amp;S</td> <td>0.4841</td> <td>0.5596</td> <td>0.7004</td> <td>0.6615</td> </tr> </tbody></table>
Table 7
table_7
D19-1491
7
emnlp2019
We report the results on the full BENCHLS dataset in the upper half of Table 7. In the lower half, we compare our results on the test set of 464 instances to those obtained by running the Paetzold and Specia (2016a) system (P&S) on the same test split. Since the P&S system was trained on half of BENCHLS, we cannot run it on the full dataset. Table 7 shows that ranking with S+C works best according to all measures and across both sets.
[1, 2, 2, 1]
['We report the results on the full BENCHLS dataset in the upper half of Table 7.', 'In the lower half, we compare our results on the test set of 464 instances to those running the Paetzold and Specia (2016a) system (P&S) on the same test splits.', 'Since the P&S system was trained on half of BENCHLS we cannot run it on the full dataset.', 'Table 7 shows that ranking with S+C works best according to all measures and across both sets.']
[['BENCHLS'], ['P&S'], ['P&S', 'BENCHLS'], ['S+C']]
1
D19-1496table_3
Performance of SC and DISP on identifying perturbed tokens.
4
[['Dataset', 'SST-2', 'SC', 'Precision'], ['Dataset', 'SST-2', 'SC', 'Recall'], ['Dataset', 'SST-2', 'SC', 'F1'], ['Dataset', 'SST-2', 'DISP', 'Precision'], ['Dataset', 'SST-2', 'DISP', 'Recall'], ['Dataset', 'SST-2', 'DISP', 'F1'], ['Dataset', 'IMDb', 'SC', 'Precision'], ['Dataset', 'IMDb', 'SC', 'Recall'], ['Dataset', 'IMDb', 'SC', 'F1'], ['Dataset', 'IMDb', 'DISP', 'Precision'], ['Dataset', 'IMDb', 'DISP', 'Recall'], ['Dataset', 'IMDb', 'DISP', 'F1']]
2
[['Character-level Attacks', 'Insertion'], ['Character-level Attacks', 'Deletion'], ['Character-level Attacks', 'Swap'], ['Word-level Attacks', 'Random'], ['Word-level Attacks', 'Embed'], ['Overall Attacks', 'Overall Attacks']]
[['0.5087', '0.4703', '0.5044', '0.1612', '0.1484', '0.3586'], ['0.9369', '0.8085', '0.9151', '0.1732', '0.1617', '0.5991'], ['0.6594', '0.5947', '0.6504', '0.1669', '0.1548', '0.4452'], ['0.9725', '0.9065', '0.9552', '0.8407', '0.4828', '0.8315'], ['0.8865', '0.876', '0.868', '0.6504', '0.5515', '0.7665'], ['0.9275', '0.891', '0.9095', '0.7334', '0.5149', '0.7952'], ['0.0429', '0.0369', '0.0406', '0.0084', '0.0064', '0.027'], ['0.9367', '0.8052', '0.8895', '0.179', '0.1352', '0.5891'], ['0.082', '0.0706', '0.0777', '0.0161', '0.0122', '0.0517'], ['0.915', '0.8181', '0.886', '0.5233', '0.2024', '0.669'], ['0.5068', '0.4886', '0.5', '0.3876', '0.2063', '0.4179'], ['0.6523', '0.6118', '0.6392', '0.4454', '0.2044', '0.5106']]
row
['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1']
['DISP', 'SC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Character-level Attacks || Insertion</th> <th>Character-level Attacks || Deletion</th> <th>Character-level Attacks || Swap</th> <th>Word-level Attacks || Random</th> <th>Word-level Attacks || Embed</th> <th>Overall Attacks || Overall Attacks</th> </tr> </thead> <tbody> <tr> <td>Dataset || SST-2 || SC || Precision</td> <td>0.5087</td> <td>0.4703</td> <td>0.5044</td> <td>0.1612</td> <td>0.1484</td> <td>0.3586</td> </tr> <tr> <td>Dataset || SST-2 || SC || Recall</td> <td>0.9369</td> <td>0.8085</td> <td>0.9151</td> <td>0.1732</td> <td>0.1617</td> <td>0.5991</td> </tr> <tr> <td>Dataset || SST-2 || SC || F1</td> <td>0.6594</td> <td>0.5947</td> <td>0.6504</td> <td>0.1669</td> <td>0.1548</td> <td>0.4452</td> </tr> <tr> <td>Dataset || SST-2 || DISP || Precision</td> <td>0.9725</td> <td>0.9065</td> <td>0.9552</td> <td>0.8407</td> <td>0.4828</td> <td>0.8315</td> </tr> <tr> <td>Dataset || SST-2 || DISP || Recall</td> <td>0.8865</td> <td>0.876</td> <td>0.868</td> <td>0.6504</td> <td>0.5515</td> <td>0.7665</td> </tr> <tr> <td>Dataset || SST-2 || DISP || F1</td> <td>0.9275</td> <td>0.891</td> <td>0.9095</td> <td>0.7334</td> <td>0.5149</td> <td>0.7952</td> </tr> <tr> <td>Dataset || IMDb || SC || Precision</td> <td>0.0429</td> <td>0.0369</td> <td>0.0406</td> <td>0.0084</td> <td>0.0064</td> <td>0.027</td> </tr> <tr> <td>Dataset || IMDb || SC || Recall</td> <td>0.9367</td> <td>0.8052</td> <td>0.8895</td> <td>0.179</td> <td>0.1352</td> <td>0.5891</td> </tr> <tr> <td>Dataset || IMDb || SC || F1</td> <td>0.082</td> <td>0.0706</td> <td>0.0777</td> <td>0.0161</td> <td>0.0122</td> <td>0.0517</td> </tr> <tr> <td>Dataset || IMDb || DISP || Precision</td> <td>0.915</td> <td>0.8181</td> <td>0.886</td> <td>0.5233</td> <td>0.2024</td> <td>0.669</td> </tr> <tr> <td>Dataset || IMDb || DISP || Recall</td> <td>0.5068</td> <td>0.4886</td> <td>0.5</td> <td>0.3876</td> <td>0.2063</td> <td>0.4179</td> </tr> <tr> 
<td>Dataset || IMDb || DISP || F1</td> <td>0.6523</td> <td>0.6118</td> <td>0.6392</td> <td>0.4454</td> <td>0.2044</td> <td>0.5106</td> </tr> </tbody></table>
Table 3
table_3
D19-1496
6
emnlp2019
4.2 Experimental Results Performance on identifying perturbed tokens. Table 3 shows the performance of DISP and SC in discriminating perturbations. Compared to SC, DISP has an absolute improvement of 35% and 46% on SST-2 and IMDb in terms of F1 score, respectively. It also proves that the context information is essential when discriminating the perturbations. An interesting observation is that SC has high recall but low precision scores for character-level attacks because it is eager to correct misspellings while most of its corrections are not perturbations. Conversely, DISP has a better balance of recall and precision scores since it is optimized to discriminate the perturbed tokens. For the word-level attacks, SC shows similarly low performance on both random and embed attacks while DISP behaves much better. Moreover, DISP works better on the random attack because the embeddings of the original tokens tend to have noticeably greater Euclidean distances to randomly-picked tokens than the distances to other tokens.
[2, 1, 1, 2, 1, 1, 1, 1]
['4.2 Experimental Results Performance on identifying perturbed tokens.', 'Table 3 shows the performance of DISP and SC in discriminating perturbations.', 'Compared to SC, DISP has an absolute improvement of 35% and 46% on SST-2 and IMDb in terms of F1 score, respectively.', 'It also proves that the context information is essential when discriminating the perturbations.', 'An interesting observation is that SC has high recall but low precision scores for character-level attacks because it is eager to correct misspellings while most of its corrections are not perturbations.', 'Conversely, DISP has a better balance of recall and precision scores since it is optimized to discriminate the perturbed tokens.', 'For the word-level attacks, SC shows similarly low performance on both random and embed attacks while DISP behaves much better.', 'Moreover, DISP works better on the random attack because the embeddings of the original tokens tend to have noticeably greater Euclidean distances to randomly-picked tokens than the distances to other tokens.']
[None, ['DISP', 'SC'], ['SC', 'DISP', 'SST-2', 'IMDb', 'F1'], None, ['SC', 'Recall', 'Precision', 'Character-level Attacks'], ['DISP', 'Recall', 'Precision'], ['SC', 'DISP', 'Random', 'Embed'], ['DISP', 'Random']]
1
D19-1498table_1
Overall, intraand inter-sentence pairs performance comparison with the state-of-the-art on the CDR test set. The methods below the double line take advantage of additional training data and/or incorporate external tools.
2
[['Method', 'Gu et al. (2017)'], ['Method', 'Verga et al. (2018)'], ['Method', 'Nguyen and Verspoor (2018)'], ['Method', 'EoG'], ['Method', 'EoG (Full)'], ['Method', 'EoG (NoInf)'], ['Method', 'EoG (Sent)'], ['Method', 'Zhou et al. (2016)'], ['Method', 'Peng et al. (2016)'], ['Method', 'Li et al. (2016b)'], ['Method', 'Panyam et al. (2018)'], ['Method', 'Zheng et al. (2018)']]
2
[['Overall (%)', 'P'], ['Overall (%)', 'R'], ['Overall (%)', 'F1'], ['Intra (%)', 'P'], ['Intra (%)', 'R'], ['Intra (%)', 'F1'], ['Inter (%)', 'P'], ['Inter (%)', 'R'], ['Inter (%)', 'F1']]
[['55.7', '68.1', '61.3', '59.7', '55.0', '57.2', '51.9', '7.0', '11.7'], ['55.6', '70.8', '62.1', '-', '-', '-', '-', '-', '-'], ['57.0', '68.6', '62.3', '-', '-', '-', '-', '-', '-'], ['62.1', '65.2', '63.6', '64.0', '73.0', '68.2', '56.0', '46.7', '50.9'], ['59.1', '56.2', '57.6', '71.2', '62.3', '66.5', '37.1', '42.0', '39.4'], ['48.2', '50.2', '49.2', '65.8', '55.2', '60.2', '25.4', '38.5', '30.6'], ['56.9', '53.5', '55.2', '56.9', '76.4', '65.2', '-', '-', '-'], ['55.6', '68.4', '61.3', '-', '-', '-', '-', '-', '-'], ['62.1', '64.2', '63.1', '-', '-', '-', '-', '-', '-'], ['60.8', '76.4', '67.7', '67.3', '52.4', '58.9', '-', '-', '-'], ['53.2', '69.7', '60.3', '54.7', '80.6', '65.1', '47.8', '43.8', '45.7'], ['56.2', '67.9', '61.5', '-', '-', '-', '-', '-', '-']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['EoG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Overall (%) || P</th> <th>Overall (%) || R</th> <th>Overall (%) || F1</th> <th>Intra (%) || P</th> <th>Intra (%) || R</th> <th>Intra (%) || F1</th> <th>Inter (%) || P</th> <th>Inter (%) || R</th> <th>Inter (%) || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Gu et al. (2017)</td> <td>55.7</td> <td>68.1</td> <td>61.3</td> <td>59.7</td> <td>55.0</td> <td>57.2</td> <td>51.9</td> <td>7.0</td> <td>11.7</td> </tr> <tr> <td>Method || Verga et al. (2018)</td> <td>55.6</td> <td>70.8</td> <td>62.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Nguyen and Verspoor (2018)</td> <td>57.0</td> <td>68.6</td> <td>62.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || EoG</td> <td>62.1</td> <td>65.2</td> <td>63.6</td> <td>64.0</td> <td>73.0</td> <td>68.2</td> <td>56.0</td> <td>46.7</td> <td>50.9</td> </tr> <tr> <td>Method || EoG (Full)</td> <td>59.1</td> <td>56.2</td> <td>57.6</td> <td>71.2</td> <td>62.3</td> <td>66.5</td> <td>37.1</td> <td>42.0</td> <td>39.4</td> </tr> <tr> <td>Method || EoG (NoInf)</td> <td>48.2</td> <td>50.2</td> <td>49.2</td> <td>65.8</td> <td>55.2</td> <td>60.2</td> <td>25.4</td> <td>38.5</td> <td>30.6</td> </tr> <tr> <td>Method || EoG (Sent)</td> <td>56.9</td> <td>53.5</td> <td>55.2</td> <td>56.9</td> <td>76.4</td> <td>65.2</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Zhou et al. (2016)</td> <td>55.6</td> <td>68.4</td> <td>61.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Peng et al. (2016)</td> <td>62.1</td> <td>64.2</td> <td>63.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Li et al. (2016b)</td> <td>60.8</td> <td>76.4</td> <td>67.7</td> <td>67.3</td> <td>52.4</td> <td>58.9</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Panyam et al. 
(2018)</td> <td>53.2</td> <td>69.7</td> <td>60.3</td> <td>54.7</td> <td>80.6</td> <td>65.1</td> <td>47.8</td> <td>43.8</td> <td>45.7</td> </tr> <tr> <td>Method || Zheng et al. (2018)</td> <td>56.2</td> <td>67.9</td> <td>61.5</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 1
table_1
D19-1498
6
emnlp2019
4 Results Table 1 depicts the performance of our proposed model on the CDR test set, in comparison with the state-of-the-art. We directly compare our model with models that do not incorporate external knowledge. Verga et al. (2018) and Nguyen and Verspoor (2018) consider a single pair per document, while Gu et al. (2017) develop separate models for intra- and inter-sentence pairs. As can be observed, the proposed model outperforms the state-of-the-art on the CDR dataset by 1.3 percentage points of overall performance. We also show the methods that take advantage of syntactic dependency tools. Li et al. (2016b) uses co-training with additional unlabeled training data. Our model performs significantly better on intra- and inter-sentential pairs, even compared to most of the models with external knowledge, except for Li et al. (2016b).
[1, 1, 1, 2, 2, 1]
['4 Results Table 1 depicts the performance of our proposed model on the CDR test set, in comparison with the state-of-the-art.', 'We directly compare our model with models that do not incorporate external knowledge. Verga et al. (2018) and Nguyen and Verspoor (2018) consider a single pair per document, while Gu et al. (2017) develop separate models for intra- and inter-sentence pairs.', 'As can be observed, the proposed model outperforms the state-of-the-art on the CDR dataset by 1.3 percentage points of overall performance.', 'We also show the methods that take advantage of syntactic dependency tools.', 'Li et al. (2016b) uses co-training with additional unlabeled training data.', 'Our model performs significantly better on intra- and inter-sentential pairs, even compared to most of the models with external knowledge, except for Li et al. (2016b).']
[None, ['Verga et al. (2018)', 'Nguyen and Verspoor (2018)', 'Gu et al. (2017)', 'EoG'], ['EoG'], None, ['Li et al. (2016b)'], ['EoG']]
1
D19-1499table_3
Automatic evaluation results on four style transfer tasks. Acc refers to the style accuracy.
2
[['Model', 'S2S'], ['Model', 'SLS'], ['Model', 'DAR'], ['Model', 'CPLS']]
2
[['to Anc.P', 'Acc'], ['to Anc.P', 'BLEU'], ['to Anc.P', 'GLEU'], ['to M.zh', 'Acc'], ['to M.zh', 'BLEU'], ['to M.zh', 'GLEU'], ['to F.en', 'Acc'], ['to F.en', 'BLEU'], ['to F.en', 'GLEU'], ['to Inf.en', 'Acc'], ['to Inf.en', 'BLEU'], ['to Inf.en', 'GLEU']]
[['87.2%', '4.2', '3.24', '74.8%', '3.66', '3.43', '88.9%', '33.9', '14.06', '71.8%', '18.34', '2.99'], ['82.0%', '5.89', '4.49', '81.9%', '3.05', '1.88', '89.5%', '41.41', '16.77', '63.5%', '19.21', '2.55'], ['82.5%', '6.33', '5.21', '80.4%', '4.72', '4.26', '89.2%', '44.72', '18.52', '63.5%', '23.32', '3.26'], ['85.4%', '7.11', '5.52', '84.4%', '4.95', '4.12', '91.3%', '48.6', '19.04', '64.3%', '27.25', '3.61']]
column
['Acc', 'BLEU', 'GLEU', 'Acc', 'BLEU', 'GLEU', 'Acc', 'BLEU', 'GLEU', 'Acc', 'BLEU', 'GLEU']
['CPLS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>to Anc.P || Acc</th> <th>to Anc.P || BLEU</th> <th>to Anc.P || GLEU</th> <th>to M.zh || Acc</th> <th>to M.zh || BLEU</th> <th>to M.zh || GLEU</th> <th>to F.en || Acc</th> <th>to F.en || BLEU</th> <th>to F.en || GLEU</th> <th>to Inf.en || Acc</th> <th>to Inf.en || BLEU</th> <th>to Inf.en || GLEU</th> </tr> </thead> <tbody> <tr> <td>Model || S2S</td> <td>87.2%</td> <td>4.2</td> <td>3.24</td> <td>74.8%</td> <td>3.66</td> <td>3.43</td> <td>88.9%</td> <td>33.9</td> <td>14.06</td> <td>71.8%</td> <td>18.34</td> <td>2.99</td> </tr> <tr> <td>Model || SLS</td> <td>82.0%</td> <td>5.89</td> <td>4.49</td> <td>81.9%</td> <td>3.05</td> <td>1.88</td> <td>89.5%</td> <td>41.41</td> <td>16.77</td> <td>63.5%</td> <td>19.21</td> <td>2.55</td> </tr> <tr> <td>Model || DAR</td> <td>82.5%</td> <td>6.33</td> <td>5.21</td> <td>80.4%</td> <td>4.72</td> <td>4.26</td> <td>89.2%</td> <td>44.72</td> <td>18.52</td> <td>63.5%</td> <td>23.32</td> <td>3.26</td> </tr> <tr> <td>Model || CPLS</td> <td>85.4%</td> <td>7.11</td> <td>5.52</td> <td>84.4%</td> <td>4.95</td> <td>4.12</td> <td>91.3%</td> <td>48.6</td> <td>19.04</td> <td>64.3%</td> <td>27.25</td> <td>3.61</td> </tr> </tbody></table>
Table 3
table_3
D19-1499
8
emnlp2019
9 Results and Analysis Evaluation Results. Table 3 presents the evaluation results of automatic metrics on the models. It can be seen that the BLEU scores and GLEU scores of the semi-supervised models on almost all the datasets are better than those of the baseline S2S model. This result indicates that the model benefits from the nonparallel data in terms of content preservation. One interesting thing is that the overall BLEU scores on the ancient poems and modern Chinese datasets are lower than on other datasets. This result may be explained by the fact that the edit distance between formal and informal texts is smaller than between ancient poems and modern Chinese texts. Therefore, it is more challenging for the model to preserve the content meaning when transferring between ancient poems and modern Chinese text. Among the three semi-supervised models, the CPLS model achieves the greatest improvement, verifying the effectiveness of the projection functions. However, the gain of the CPLS model in the aspect of style accuracy is not that significant. A possible explanation may be the bias of the style classifier. Take the transfer task from ancient poems to modern Chinese text for example. We observe that the classifier tends to classify short sentences into ancient poems as length is an obvious feature. We analyse the sentences generated by the S2S model and by the CPLS model, and the statistics show that the average length of the text generated by the S2S model is shorter, which may lead to the bias of the style classifier. Therefore, we also adopt human evaluation to alleviate this issue.
[2, 1, 1, 2, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2]
['9 Results and Analysis Evaluation Results.', 'Table 3 presents the evaluation results of automatic metrics on the models.', 'It can be seen that the BLEU scores and GLEU scores of the semi-supervised models on almost all the datasets are better than those of the baseline S2S model.', 'This result indicates that the model benefits from the nonparallel data in terms of content preservation.', 'One interesting thing is that the overall BLEU scores on the ancient poems and modern Chinese datasets are lower than on other datasets.', 'This result may be explained by the fact that the edit distance between formal and informal texts is smaller than between ancient poems and modern Chinese texts.', 'Therefore, it is more challenging for the model to preserve the content meaning when transferring between ancient poems and modern Chinese text.', 'Among the three semi-supervised models, the CPLS model achieves the greatest improvement, verifying the effectiveness of the projection functions.', 'However, the gain of the CPLS model in the aspect of style accuracy is not that significant.', 'A possible explanation may be the bias of the style classifier.', 'Take the transfer task from ancient poems to modern Chinese text for example.', 'We observe that the classifier tends to classify short sentences into ancient poems as length is an obvious feature.', 'We analyse the sentences generated by the S2S model and by the CPLS model, and the statistics show that the average length of the text generated by the S2S model is shorter, which may lead to the bias of the style classifier.', 'Therefore, we also adopt human evaluation to alleviate this issue.']
[None, None, ['BLEU', 'GLEU', 'S2S'], None, ['BLEU', 'to Anc.P', 'to M.zh'], ['to Anc.P', 'to M.zh'], ['to Anc.P', 'to M.zh'], ['CPLS'], ['CPLS', 'Acc'], None, None, None, ['S2S', 'CPLS'], None]
1
D19-1499table_4
The human annotation results of the S2S model and CPLS model from three aspects.
3
[['Dataset Model', 'S2S', 'to M.zh'], ['Dataset Model', 'S2S', 'to Anc.P'], ['Dataset Model', 'S2S', 'to Inf.en'], ['Dataset Model', 'S2S', 'to F.en'], ['Dataset Model', 'CPLS', 'to M.zh'], ['Dataset Model', 'CPLS', 'to Anc.P'], ['Dataset Model', 'CPLS', 'to Inf.en'], ['Dataset Model', 'CPLS', 'to F.en']]
1
[['Content'], ['Style'], ['Fluency']]
[['0.1875', '0.5675', '0.3575'], ['0.2275', '0.54', '0.4425'], ['0.3175', '0.46', '0.58'], ['0.3625', '0.5125', '0.6325'], ['0.31', '0.5825', '0.305'], ['0.4375', '0.6875', '0.5475'], ['0.4675', '0.4625', '0.5725'], ['0.46', '0.5675', '0.62']]
column
['content', 'style', 'fluency']
['CPLS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Content</th> <th>Style</th> <th>Fluency</th> </tr> </thead> <tbody> <tr> <td>Dataset Model || S2S || to M.zh</td> <td>0.1875</td> <td>0.5675</td> <td>0.3575</td> </tr> <tr> <td>Dataset Model || S2S || to Anc.P</td> <td>0.2275</td> <td>0.54</td> <td>0.4425</td> </tr> <tr> <td>Dataset Model || S2S || to Inf.en</td> <td>0.3175</td> <td>0.46</td> <td>0.58</td> </tr> <tr> <td>Dataset Model || S2S || to F.en</td> <td>0.3625</td> <td>0.5125</td> <td>0.6325</td> </tr> <tr> <td>Dataset Model || CPLS || to M.zh</td> <td>0.31</td> <td>0.5825</td> <td>0.305</td> </tr> <tr> <td>Dataset Model || CPLS || to Anc.P</td> <td>0.4375</td> <td>0.6875</td> <td>0.5475</td> </tr> <tr> <td>Dataset Model || CPLS || to Inf.en</td> <td>0.4675</td> <td>0.4625</td> <td>0.5725</td> </tr> <tr> <td>Dataset Model || CPLS || to F.en</td> <td>0.46</td> <td>0.5675</td> <td>0.62</td> </tr> </tbody></table>
Table 4
table_4
D19-1499
8
emnlp2019
Table 4 compares the human evaluation results of S2S model and CPLS model on all the datasets, which are calculated by the average score of the human annotations. As shown in the Table 4, the CPLS model outperforms the S2S model in the aspects of the content preservation and style strength, and is on par in terms of fluency.
[1, 1]
['Table 4 compares the human evaluation results of S2S model and CPLS model on all the datasets, which are calculated by the average score of the human annotations.', 'As shown in the Table 4, the CPLS model outperforms the S2S model in the aspects of the content preservation and style strength, and is on par in terms of fluency.']
[['S2S', 'CPLS'], ['CPLS', 'S2S', 'Content', 'Style', 'Fluency']]
1
D19-1505table_4
Performance on Edit Anchoring
1
[['Passive-Aggr'], ['RandForest'], ['Adaboost'], ['Gated RNN'], ['CmntEdit-MT'], ['CmntEdit-EA']]
2
[['Candidates=5', 'Acc'], ['Candidates=5', 'F1'], ['Candidates=10', 'Acc'], ['Candidates=10', 'F1']]
[['0.581', '0.533', '0.716', '0.262'], ['0.639', '0.290', '0.743', '0.112'], ['0.657', '0.398', '0.751', '0.207'], ['0.696', '0.651', '0.665', '0.539'], ['0.635', '0.587', '0.619', '0.468'], ['0.744', '0.687', '0.726', '0.583']]
column
['Acc', 'F1', 'Acc', 'F1']
['CmntEdit-MT', 'CmntEdit-EA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Candidates=5 || Acc</th> <th>Candidates=5 || F1</th> <th>Candidates=10 || Acc</th> <th>Candidates=10 || F1</th> </tr> </thead> <tbody> <tr> <td>Passive-Aggr</td> <td>0.581</td> <td>0.533</td> <td>0.716</td> <td>0.262</td> </tr> <tr> <td>RandForest</td> <td>0.639</td> <td>0.290</td> <td>0.743</td> <td>0.112</td> </tr> <tr> <td>Adaboost</td> <td>0.657</td> <td>0.398</td> <td>0.751</td> <td>0.207</td> </tr> <tr> <td>Gated RNN</td> <td>0.696</td> <td>0.651</td> <td>0.665</td> <td>0.539</td> </tr> <tr> <td>CmntEdit-MT</td> <td>0.635</td> <td>0.587</td> <td>0.619</td> <td>0.468</td> </tr> <tr> <td>CmntEdit-EA</td> <td>0.744</td> <td>0.687</td> <td>0.726</td> <td>0.583</td> </tr> </tbody></table>
Table 4
table_4
D19-1505
7
emnlp2019
5.2.2 Edit Anchoring. Table 4 shows the results for Edit Anchoring. Our method, CmntEdit-EA, outperforms the best baseline method, Gated-RNN, by 5.5% on F1 and 6.9% on accuracy. The improvements over all the baselines are statistically significant at a p-value of 0.01. The baseline classifiers including Passive-Aggressive, Random Forest and Adaboost have high accuracies, but low F1 scores. This is because of the imbalance between positive and negative samples in our data. Specifically, the number of negative samples is 4 times greater than the number of positive samples when the size of the candidate set is 5 - and even greater when it is 10. Therefore, the baseline classifiers tend to naively predict a negative label, which artificially boosts precision to the detriment of recall. In fact, Adaboost actually outperforms our models on accuracy when the candidate set size is 10, but yields a much lower F1 score.
[2, 1, 1, 1, 1, 2, 2, 2, 1]
['5.2.2 Edit Anchoring.', 'Table 4 shows the results for Edit Anchoring.', 'Our method, CmntEdit-EA, outperforms the best baseline method, Gated-RNN, by 5.5% on F1 and 6.9% on accuracy.', 'The improvements over all the baselines are statistically significant at a p-value of 0.01.', 'The baseline classifiers including Passive-Aggressive, Random Forest and Adaboost have high accuracies, but low F1 scores.', 'This is because of the imbalance between positive and negative samples in our data.', 'Specifically, the number of negative samples is 4 times greater than the number of positive samples when the size of the candidate set is 5 - and even greater when it is 10.', 'Therefore, the baseline classifiers tend to naively predict a negative label, which artificially boosts precision to the detriment of recall.', 'In fact, Adaboost actually outperforms our models on accuracy when the candidate set size is 10, but yields a much lower F1 score.']
[None, None, ['CmntEdit-EA', 'Gated RNN', 'F1', 'Acc'], None, ['Passive-Aggr', 'RandForest', 'Adaboost', 'Acc', 'F1'], None, None, None, ['Adaboost', 'Acc', 'CmntEdit-EA', 'CmntEdit-MT', 'F1']]
1
D19-1506table_3
Evaluation Results
1
[['PRADO'], ['PRADO 8-bit Quantized'], ['SGNN (Ravi and Kozareva, 2018)'], ['HN-ATT* (Yang et al., 2016)'], ['HN-MAX* (Yang et al., 2016)'], ['HN-AVE* (Yang et al., 2016)'], ['LSTM-GRNN (Tang et al., 2015)'], ['Conv-GRNN (Tang et al., 2015)'], ['CNN-char (Zhang et al., 2015)'], ['CNN-word (Tang et al., 2015)'], ['CNN-word (Zhang et al., 2015)'], ['Paragraph Vector (Tang et al., 2015)'], ['LSTM (Zhang et al., 2015)'], ['SVM + Bigrams (Tang et al., 2015)'], ['SVM + Unigrams (Tang et al., 2015)'], ['SVM + AverageSG (Tang et al., 2015)'], ['SVM + SSWE (Tang et al., 2015)'], ['BoW TFIDF (Zhang et al., 2015)'], ['ngrams TFIDF (Zhang et al., 2015)']]
2
[['Dataset', 'Yelp'], ['Dataset', 'Amazon'], ['Dataset', 'Yahoo']]
[['64.7', '61.2', '72.3'], ['65.9', '61.9', '72.5'], ['35.4', '39.1', '36.6'], ['-', '63.6', '-'], ['-', '62.9', '-'], ['-', '62.9', '-'], ['67.6', '-', '-'], ['66', '-', '-'], ['62', '59.6', '71.2'], ['61.5', '-', '-'], ['60.5', '57.6', '71.2'], ['60.5', '-', '-'], ['58.2', '59.4', '70.8'], ['62.4', '-', '-'], ['61.1', '-', '-'], ['56.8', '-', '-'], ['55.4', '-', '-'], ['59.9', '55.3', '71'], ['54.8', '52.4', '68.5']]
column
['accuracy', 'accuracy', 'accuracy']
['PRADO', 'PRADO 8-bit Quantized']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset || Yelp</th> <th>Dataset || Amazon</th> <th>Dataset || Yahoo</th> </tr> </thead> <tbody> <tr> <td>PRADO</td> <td>64.7</td> <td>61.2</td> <td>72.3</td> </tr> <tr> <td>PRADO 8-bit Quantized</td> <td>65.9</td> <td>61.9</td> <td>72.5</td> </tr> <tr> <td>SGNN (Ravi and Kozareva, 2018)</td> <td>35.4</td> <td>39.1</td> <td>36.6</td> </tr> <tr> <td>HN-ATT* (Yang et al., 2016)</td> <td>-</td> <td>63.6</td> <td>-</td> </tr> <tr> <td>HN-MAX* (Yang et al., 2016)</td> <td>-</td> <td>62.9</td> <td>-</td> </tr> <tr> <td>HN-AVE* (Yang et al., 2016)</td> <td>-</td> <td>62.9</td> <td>-</td> </tr> <tr> <td>LSTM-GRNN (Tang et al., 2015)</td> <td>67.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Conv-GRNN (Tang et al., 2015)</td> <td>66</td> <td>-</td> <td>-</td> </tr> <tr> <td>CNN-char (Zhang et al., 2015)</td> <td>62</td> <td>59.6</td> <td>71.2</td> </tr> <tr> <td>CNN-word (Tang et al., 2015)</td> <td>61.5</td> <td>-</td> <td>-</td> </tr> <tr> <td>CNN-word (Zhang et al., 2015)</td> <td>60.5</td> <td>57.6</td> <td>71.2</td> </tr> <tr> <td>Paragraph Vector (Tang et al., 2015)</td> <td>60.5</td> <td>-</td> <td>-</td> </tr> <tr> <td>LSTM (Zhang et al., 2015)</td> <td>58.2</td> <td>59.4</td> <td>70.8</td> </tr> <tr> <td>SVM + Bigrams (Tang et al., 2015)</td> <td>62.4</td> <td>-</td> <td>-</td> </tr> <tr> <td>SVM + Unigrams (Tang et al., 2015)</td> <td>61.1</td> <td>-</td> <td>-</td> </tr> <tr> <td>SVM + AverageSG (Tang et al., 2015)</td> <td>56.8</td> <td>-</td> <td>-</td> </tr> <tr> <td>SVM + SSWE (Tang et al., 2015)</td> <td>55.4</td> <td>-</td> <td>-</td> </tr> <tr> <td>BoW TFIDF (Zhang et al., 2015)</td> <td>59.9</td> <td>55.3</td> <td>71</td> </tr> <tr> <td>ngrams TFIDF (Zhang et al., 2015)</td> <td>54.8</td> <td>52.4</td> <td>68.5</td> </tr> </tbody></table>
Table 3
table_3
D19-1506
6
emnlp2019
We train a PRADO model variant with 8-bit quantization as described in (Jacob et al., 2018). This procedure simulates the quantization process during training by nudging the weights and activations towards a grid of discrete levels (2^N levels, where N=8 is the number of bits). We estimate the activation ranges for each training batch and use an exponential moving average to smooth the quantization ranges across training steps. Jacob et al. (2018) noted that by training with quantization, they reached similar accuracy with 8-bit models as with floating-point ones on several image classification and object detection data sets. For text classification, we observed that training with quantization significantly improves accuracy, as shown in Table 3. We believe that this is due to the improved regularization, as quantization has the highest impact on Yelp. This dataset has relatively few training samples per class (see Table 1), which causes the model to overfit the training data, and the regularization provided by the operation that simulates quantization during training helps it generalize better. Furthermore, the model size of 8-bit quantized PRADO models is equal to the number of parameters. Figure 3 shows that PRADO can reach the performance reported in Table 3 with a model size of less than 200 kilobytes. PRADO starts getting competitive results on the same data sets with a tiny model size as low as 25 kilobytes.
[2, 2, 2, 2, 1, 1, 2, 2, 2, 2]
['We train a PRADO model variant with 8-bit quantization as described in (Jacob et al., 2018).', 'This procedure simulates the quantization process during training by nudging the weights and activations towards a grid of discrete levels (2^N levels, where N=8 is the number of bits).', 'We estimate the activation ranges for each training batch and use an exponential moving average to smooth the quantization ranges across training steps.', 'Jacob et al. (2018) noted that by training with quantization, they reached similar accuracy with 8-bit models as with floating-point ones on several image classification and object detection data sets.', 'For text classification, we observed that training with quantization significantly improves accuracy, as shown in Table 3.', 'We believe that this is due to the improved regularization, as quantization has the highest impact on Yelp.', 'This dataset has relatively few training samples per class (see Table 1), which causes the model to overfit the training data, and the regularization provided by the operation that simulates quantization during training helps it generalize better.', 'Furthermore, the model size of 8-bit quantized PRADO models is equal to the number of parameters.', 'Figure 3 shows that PRADO can reach the performance reported in Table 3 with a model size of less than 200 kilobytes.', 'PRADO starts getting competitive results on the same data sets with a tiny model size as low as 25 kilobytes.']
[['PRADO 8-bit Quantized'], None, None, None, ['PRADO 8-bit Quantized'], ['PRADO 8-bit Quantized', 'Yelp'], ['Yelp'], ['PRADO 8-bit Quantized'], ['PRADO'], ['PRADO']]
1
D19-1510table_2
Sentence fusion results on DfWiki.
2
[['Model', 'Transformer (Geva et al., 2019)'], ['Model', 'SEQ2SEQBERT'], ['Model', 'LASERTAGGERAR (no SWAP)'], ['Model', 'LASERTAGGERFF'], ['Model', 'LASERTAGGERAR']]
1
[['Exact'], ['SARI']]
[[' 51.1', ' 84.5'], ['53.6', '85.3'], [' 46.4', ' 80.4'], [' 52.2', ' 84.1'], [' 53.8', ' 85.5']]
column
['Exact', 'SARI']
['LASERTAGGERAR (no SWAP)', 'LASERTAGGERFF', 'LASERTAGGERAR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact</th> <th>SARI</th> </tr> </thead> <tbody> <tr> <td>Model || Transformer (Geva et al., 2019)</td> <td>51.1</td> <td>84.5</td> </tr> <tr> <td>Model || SEQ2SEQBERT</td> <td>53.6</td> <td>85.3</td> </tr> <tr> <td>Model || LASERTAGGERAR (no SWAP)</td> <td>46.4</td> <td>80.4</td> </tr> <tr> <td>Model || LASERTAGGERFF</td> <td>52.2</td> <td>84.1</td> </tr> <tr> <td>Model || LASERTAGGERAR</td> <td>53.8</td> <td>85.5</td> </tr> </tbody></table>
Table 2
table_2
D19-1510
6
emnlp2019
Comparison against Baselines. Table 2 lists the results for the DfWiki dataset. We obtain new SOTA results with LASERTAGGERAR, outperforming the previous SOTA 7-layer Transformer model from Geva et al. (2019) by 2.7% Exact score and 1.0% SARI score. We also find that the pretrained SEQ2SEQBERT model yields nearly as good performance, demonstrating the effectiveness of unsupervised pretraining for generation tasks. The performance of the tagger is impaired significantly when leaving out the SWAP tag due to the model's inability to reconstruct 10.5% of the training set.
[2, 1, 1, 1, 2]
['Comparison against Baselines.', 'Table 2 lists the results for the DfWiki dataset.', 'We obtain new SOTA results with LASERTAGGERAR, outperforming the previous SOTA 7-layer Transformer model from Geva et al. (2019) by 2.7% Exact score and 1.0% SARI score.', 'We also find that the pretrained SEQ2SEQBERT model yields nearly as good performance, demonstrating the effectiveness of unsupervised pretraining for generation tasks.', "The performance of the tagger is impaired significantly when leaving out the SWAP tag due to the model's inability to reconstruct 10.5% of the training set."]
[None, None, ['LASERTAGGERAR', 'Transformer (Geva et al., 2019)', 'Exact', 'SARI'], ['SEQ2SEQBERT'], None]
1
D19-1510table_5
Results on grammatical-error correction. Note that Grundkiewicz et al. (2019) augment the training dataset of 4,384 examples by 100 million synthetic examples and 2 million Wikipedia edits.
2
[['Model', 'Grundkiewicz et al. (2019)'], ['Model', 'SEQ2SEQBERT'], ['Model', 'LASERTAGGER FF'], ['Model', 'LASERTAGGER AR']]
1
[['P'], ['R'], ['F 0.5']]
[['70.19', '47.99', '64.24'], ['6.13', '14.14', '6.91'], ['44.17', '24', '37.82'], ['47.46', '25.58', '40.52']]
column
['P', 'R', 'F 0.5']
['LASERTAGGER FF', 'LASERTAGGER AR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F 0.5</th> </tr> </thead> <tbody> <tr> <td>Model || Grundkiewicz et al. (2019)</td> <td>70.19</td> <td>47.99</td> <td>64.24</td> </tr> <tr> <td>Model || SEQ2SEQBERT</td> <td>6.13</td> <td>14.14</td> <td>6.91</td> </tr> <tr> <td>Model || LASERTAGGER FF</td> <td>44.17</td> <td>24</td> <td>37.82</td> </tr> <tr> <td>Model || LASERTAGGER AR</td> <td>47.46</td> <td>25.58</td> <td>40.52</td> </tr> </tbody></table>
Table 5
table_5
D19-1510
8
emnlp2019
Table 5 compares our taggers against two baselines. Again, the tagging approach clearly outperforms the BERT-based seq2seq model, here by being more than seven times as accurate in the prediction of corrections. This can be attributed to the seq2seq model's much richer generation capacity, which the model cannot properly tune to the task at hand given the small amount of training data. The tagging approach, on the other hand, is naturally suited to this kind of problem.
[1, 1, 1, 1]
['Table 5 compares our taggers against two baselines.', 'Again, the tagging approach clearly outperforms the BERT-based seq2seq model, here by being more than seven times as accurate in the prediction of corrections.', "This can be attributed to the seq2seq model's much richer generation capacity, which the model cannot properly tune to the task at hand given the small amount of training data.", 'The tagging approach, on the other hand, is naturally suited to this kind of problem.']
[None, ['SEQ2SEQBERT', 'LASERTAGGER AR', 'LASERTAGGER FF'], ['SEQ2SEQBERT'], ['LASERTAGGER AR', 'LASERTAGGER FF']]
1
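The P/R/F 0.5 columns in the grammatical-error-correction record follow the standard weighted F-measure, with β = 0.5 weighting precision above recall. A small sketch for sanity-checking the table (the function name is illustrative):

```python
def f_beta(precision, recall, beta=0.5):
    """Weighted F-measure; beta < 1 (here 0.5) weights precision more than recall."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For the LASERTAGGER AR row, `f_beta(47.46, 25.58)` comes out near 40.5, matching the reported 40.52 up to rounding of the underlying precision and recall.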
D19-1512table_5
Model ablation results
4
[['Dataset', 'Tencent', 'Metrics', 'METEOR'], ['Dataset', 'Tencent', 'Metrics', 'W-METEOR'], ['Dataset', 'Tencent', 'Metrics', 'Rouge_L'], ['Dataset', 'Tencent', 'Metrics', 'W-Rouge_L'], ['Dataset', 'Tencent', 'Metrics', 'CIDEr'], ['Dataset', 'Tencent', 'Metrics', 'W-CIDEr'], ['Dataset', 'Tencent', 'Metrics', 'BLEU-1'], ['Dataset', 'Tencent', 'Metrics', 'W-BLEU-1'], ['Dataset', 'Yahoo', 'Metrics', 'METEOR'], ['Dataset', 'Yahoo', 'Metrics', 'Rouge_L'], ['Dataset', 'Yahoo', 'Metrics', 'CIDEr'], ['Dataset', 'Yahoo', 'Metrics', 'BLEU-1']]
1
[['No Reading'], ['No Prediction'], ['No Sampling'], ['Full Model']]
[['0.096', '0.171', '0.171', '0.181'], ['0.072', '0.129', '0.131', '0.138'], ['0.282', '0.307', '0.303', '0.317'], ['0.219', '0.241', '0.239', '0.250'], ['0.012', '0.024', '0.026', '0.029'], ['0.009', '0.019', '0.021', '0.023'], ['0.426', '0.674', '0.667', '0.721'], ['0.388', '0.614', '0.607', '0.656'], ['0.081', '0.092', '0.102', '0.107'], ['0.232', '0.245', '0.244', '0.263'], ['0.017', '0.023', '0.020', '0.024'], ['0.490', '0.531', '0.609', '0.665']]
row
['METEOR', 'W-METEOR', 'Rouge_L', 'W-Rouge_L', 'CIDEr', 'W-CIDEr', 'BLEU-1', 'W-BLEU-1', 'METEOR', 'Rouge_L', 'CIDEr', 'BLEU-1']
['No Reading', 'No Prediction', 'No Sampling', 'Full Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No Reading</th> <th>No Prediction</th> <th>No Sampling</th> <th>Full Model</th> </tr> </thead> <tbody> <tr> <td>Dataset || Tencent || Metrics || METEOR</td> <td>0.096</td> <td>0.171</td> <td>0.171</td> <td>0.181</td> </tr> <tr> <td>Dataset || Tencent || Metrics || W-METEOR</td> <td>0.072</td> <td>0.129</td> <td>0.131</td> <td>0.138</td> </tr> <tr> <td>Dataset || Tencent || Metrics || Rouge_L</td> <td>0.282</td> <td>0.307</td> <td>0.303</td> <td>0.317</td> </tr> <tr> <td>Dataset || Tencent || Metrics || W-Rouge_L</td> <td>0.219</td> <td>0.241</td> <td>0.239</td> <td>0.250</td> </tr> <tr> <td>Dataset || Tencent || Metrics || CIDEr</td> <td>0.012</td> <td>0.024</td> <td>0.026</td> <td>0.029</td> </tr> <tr> <td>Dataset || Tencent || Metrics || W-CIDEr</td> <td>0.009</td> <td>0.019</td> <td>0.021</td> <td>0.023</td> </tr> <tr> <td>Dataset || Tencent || Metrics || BLEU-1</td> <td>0.426</td> <td>0.674</td> <td>0.667</td> <td>0.721</td> </tr> <tr> <td>Dataset || Tencent || Metrics || W-BLEU-1</td> <td>0.388</td> <td>0.614</td> <td>0.607</td> <td>0.656</td> </tr> <tr> <td>Dataset || Yahoo || Metrics || METEOR</td> <td>0.081</td> <td>0.092</td> <td>0.102</td> <td>0.107</td> </tr> <tr> <td>Dataset || Yahoo || Metrics || Rouge_L</td> <td>0.232</td> <td>0.245</td> <td>0.244</td> <td>0.263</td> </tr> <tr> <td>Dataset || Yahoo || Metrics || CIDEr</td> <td>0.017</td> <td>0.023</td> <td>0.020</td> <td>0.024</td> </tr> <tr> <td>Dataset || Yahoo || Metrics || BLEU-1</td> <td>0.490</td> <td>0.531</td> <td>0.609</td> <td>0.665</td> </tr> </tbody></table>
Table 5
table_5
D19-1512
8
emnlp2019
4.5 Discussions. Ablation study: We compare the full model of DeepCom with the following variants: (1) No Reading: the entire reading network is replaced by a TF-IDF based keyword extractor, and top 40 keywords (tuned on validation sets) are fed to the generation network; (2) No Prediction: the prediction layer of the reading network is removed, and thus the entire V is used in the generation network; and (3) No Sampling: we directly use the model pre-trained by maximizing Objective (12). Table 5 reports the results on automatic metrics. We can see that all variants suffer from performance drop and No Reading is the worst among the three variants. Thus, we can conclude that (1) span prediction cannot be simply replaced by TF-IDF based keyword extraction, as the former is based on a deep comprehension of news articles and calibrated in the end-to-end learning process; (2) even with sophisticated representations, one cannot directly feed the entire article to the generation network, as comment generation is vulnerable to the noise in the article; and (3) pre-training is useful, but optimizing the lower bound of the true objective is still beneficial.
[0, 2, 1, 1, 2]
['4.5 Discussions.', ' Ablation study: We compare the full model of DeepCom with the following variants: (1) No Reading: the entire reading network is replaced by a TF-IDF based keyword extractor, and top 40 keywords (tuned on validation sets) are fed to the generation network; (2) No Prediction: the prediction layer of the reading network is removed, and thus the entire V is used in the generation network; and (3) No Sampling: we directly use the model pre-trained by maximizing Objective (12).', 'Table 5 reports the results on automatic metrics.', 'We can see that all variants suffer from performance drop and No Reading is the worst among the three variants.', 'Thus, we can conclude that (1) span prediction cannot be simply replaced by TF-IDF based keyword extraction, as the former is based on a deep comprehension of news articles and calibrated in the end-to-end learning process; (2) even with sophisticated representations, one cannot directly feed the entire article to the generation network, as comment generation is vulnerable to the noise in the article; and (3) pre-training is useful, but optimizing the lower bound of the true objective is still beneficial.']
[None, None, None, ['No Reading'], None]
1
D19-1515table_1
Performance of different phrase grounding methods on Flickr30k Entities (test set). Our CRF models have transition scores conditioned on features of context in between the two phrases (“M” in Table 2). Our methods, unless explicitly specified, use ELMo (Peters et al., 2018) as word embeddings.
5
[['Method', 'Compared Methods', 'Structured Matching (Wang et al. 2016)', 'Vision Backbone', 'Fast R-CNN (Girshick 2015)'], ['Method', 'Compared Methods', 'Phrase-Region CCA (Plummer et al. 2017a)', 'Vision Backbone', 'Fast R-CNN (Girshick 2015)'], ['Method', 'Compared Methods', 'QRC Net (Chen et al. 2017b)', 'Vision Backbone', 'Fast R-CNN (Girshick 2015)'], ['Method', 'Compared Methods', 'BAN (Kim et al. 2018)', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)'], ['Method', 'Compared Methods', 'DDPN (Yu et al. 2018b)', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)'], ['Method', 'Our methods', 'Hard-Label (GloVe (Pennington et al. 2014))', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)'], ['Method', 'Our methods', 'Hard-Label (HL)', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)'], ['Method', 'Our methods', 'Soft-Label (SL)', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)'], ['Method', 'Our methods', 'Hard-Label Chain CRF (HL-CCRF)', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)'], ['Method', 'Our methods', 'Soft-Label Chain CRF (SL-CCRF)', 'Vision Backbone', 'Bottom-Up Attention (Anderson et al. 2018)']]
1
[['Grounding Accuracy (%)']]
[['42.08'], ['55.85'], ['65.14'], ['69.69'], ['73.3'], ['71.88'], ['72.21'], ['74.29'], ['72.26'], ['74.69']]
column
['Grounding Accuracy (%)']
['Our methods']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Grounding Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || Compared Methods || Structured Matching (Wang et al. 2016) || Vision Backbone || Fast R-CNN (Girshick 2015)</td> <td>42.08</td> </tr> <tr> <td>Method || Compared Methods || Phrase-Region CCA (Plummer et al. 2017a) || Vision Backbone || Fast R-CNN (Girshick 2015)</td> <td>55.85</td> </tr> <tr> <td>Method || Compared Methods || QRC Net (Chen et al. 2017b) || Vision Backbone || Fast R-CNN (Girshick 2015)</td> <td>65.14</td> </tr> <tr> <td>Method || Compared Methods || BAN (Kim et al. 2018) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>69.69</td> </tr> <tr> <td>Method || Compared Methods || DDPN (Yu et al. 2018b) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>73.3</td> </tr> <tr> <td>Method || Our methods || Hard-Label (GloVe (Pennington et al. 2014)) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>71.88</td> </tr> <tr> <td>Method || Our methods || Hard-Label (HL) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>72.21</td> </tr> <tr> <td>Method || Our methods || Soft-Label (SL) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>74.29</td> </tr> <tr> <td>Method || Our methods || Hard-Label Chain CRF (HL-CCRF) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>72.26</td> </tr> <tr> <td>Method || Our methods || Soft-Label Chain CRF (SL-CCRF) || Vision Backbone || Bottom-Up Attention (Anderson et al. 2018)</td> <td>74.69</td> </tr> </tbody></table>
Table 1
table_1
D19-1515
8
emnlp2019
Table 1 shows the performance of previous structured prediction models, current state-of-the-art models, our baseline models and the Soft-Label Chain CRF model. For a fair comparison with BAN (Kim et al., 2018), we also report the result of the hard-label baseline with GloVe (Pennington et al., 2014) embeddings, while we obtain a 0.33% higher result with ELMo. Training a non-CRF model on soft-label target distributions improves accuracy by a further 2.08%. On top of that, Soft-Label Chain CRF improves accuracy by another 0.40%, which shows the effectiveness of treating phrase grounding as a sequence labeling task and using CRFs to capture entity dependencies. We also observe that the Hard-Label Chain CRF outperforms the hard-label baseline by a mere margin of 0.05%, so our conjecture is that using chain CRFs works well only with a suitable choice of training regime. Soft-Label Chain CRF gives an overall improvement of 2.48% over the hard-label baseline; it significantly outperforms previous structured prediction models including Structured Matching (Wang et al., 2016), Phrase-Region CCA (Plummer et al., 2017a) and QRC Net (Chen et al., 2017b), and surpasses the state-of-the-art BAN (Kim et al., 2018) and DDPN (Yu et al., 2018b) models by a margin of 5.00% and about 1.4%, respectively.
[1, 2, 1, 1, 1, 1]
['Table 1 shows the performance of previous structured prediction models, current state-of-the-art models, our baseline models and the Soft-Label Chain CRF model.', 'For a fair comparison with BAN (Kim et al., 2018), we also report the result of the hard-label baseline with GloVe (Pennington et al., 2014) embeddings, while we obtain a 0.33% higher result with ELMo.', 'Training a non-CRF model on soft-label target distributions improves accuracy by a further 2.08%.', 'On top of that, Soft-Label Chain CRF improves accuracy by another 0.40%, which shows the effectiveness of treating phrase grounding as a sequence labeling task and using CRFs to capture entity dependencies.', 'We also observe that the Hard-Label Chain CRF outperforms the hard-label baseline by a mere margin of 0.05%, so our conjecture is that using chain CRFs works well only with a suitable choice of training regime.', ' Soft-Label Chain CRF gives an overall improvement of 2.48% over the hard-label baseline; it significantly outperforms previous structured prediction models including Structured Matching (Wang et al., 2016), Phrase-Region CCA (Plummer et al., 2017a) and QRC Net (Chen et al., 2017b), and surpasses the state-of-the-art BAN (Kim et al., 2018) and DDPN (Yu et al., 2018b) models by a margin of 5.00% and about 1.4%, respectively.']
[None, ['BAN (Kim et al. 2018)'], ['Soft-Label (SL)', 'Hard-Label (HL)'], ['Soft-Label Chain CRF (SL-CCRF)', 'Soft-Label (SL)'], ['Hard-Label Chain CRF (HL-CCRF)', 'Hard-Label (HL)'], ['Soft-Label Chain CRF (SL-CCRF)', 'Hard-Label (HL)', 'Structured Matching (Wang et al. 2016)', 'Phrase-Region CCA (Plummer et al. 2017a)', 'QRC Net (Chen et al. 2017b)', 'BAN (Kim et al. 2018)', 'DDPN (Yu et al. 2018b)']]
1
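Grounding accuracy on Flickr30k Entities is conventionally the fraction of phrases whose predicted box matches a gold box with IoU ≥ 0.5; assuming that criterion (the record does not spell it out), a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def grounding_accuracy(preds, golds, thresh=0.5):
    """Fraction of phrases whose predicted box overlaps the gold box enough."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, golds))
    return hits / len(golds)
```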
D19-1521table_7
Performance of BLING-KPE ablations. Italic marks statistically significantly worse performances than Full Model.
1
[['No ELMo'], ['No Transformer'], ['No Position'], ['No Visual'], ['No Pretraining'], ['Full Model']]
3
[['OpenKP', 'Method', 'P@1'], ['OpenKP', 'Method', 'R@1'], ['OpenKP', 'Method', 'P@3'], ['OpenKP', 'Method', 'R@3'], ['OpenKP', 'Method', 'P@5'], ['OpenKP', 'Method', 'R@5'], ['Query Prediction', 'Method', 'P@1'], ['Query Prediction', 'Method', 'R@1'], ['Query Prediction', 'Method', 'P@3'], ['Query Prediction', 'Method', 'R@3'], ['Query Prediction', 'Method', 'P@5'], ['Query Prediction', 'Method', 'R@5']]
[['0.27', '0.145', '0.172', '0.271', '0.132', '0.347', '0.323', '0.274', '0.189', '0.45', '0.136', '0.527'], ['0.389', '0.211', '0.247', '0.385', '0.189', '0.481', '0.489', '0.407', '0.258', '0.618', '0.178', '0.698'], ['0.394', '0.213', '0.247', '0.386', '0.187', '0.475', '0.543', '0.452', '0.281', '0.666', '0.191', '0.742'], ['0.37', '0.201', '0.23', '0.362', '0.176', '0.45', '0.492', '0.409', '0.258', '0.615', '0.178', '0.695'], ['0.369', '0.198', '0.236', '0.367', '0.181', '0.46', '–', '–', '–', '–', '–', '–'], ['0.404', '0.22', '0.248', '0.39', '0.188', '0.481', '0.54', '0.449', '0.275', '0.654', '0.188', '0.729']]
column
['P@1', 'R@1', 'P@3', 'R@3', 'P@5', 'R@5', 'P@1', 'R@1', 'P@3', 'R@3', 'P@5', 'R@5']
['No Visual', 'No Pretraining', 'No ELMo', 'No Transformer', 'No Position']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OpenKP || Method || P@1</th> <th>OpenKP || Method || R@1</th> <th>OpenKP || Method || P@3</th> <th>OpenKP || Method || R@3</th> <th>OpenKP || Method || P@5</th> <th>OpenKP || Method || R@5</th> <th>Query Prediction || Method || P@1</th> <th>Query Prediction || Method || R@1</th> <th>Query Prediction || Method || P@3</th> <th>Query Prediction || Method || R@3</th> <th>Query Prediction || Method || P@5</th> <th>Query Prediction || Method || R@5</th> </tr> </thead> <tbody> <tr> <td>No ELMo</td> <td>0.27</td> <td>0.145</td> <td>0.172</td> <td>0.271</td> <td>0.132</td> <td>0.347</td> <td>0.323</td> <td>0.274</td> <td>0.189</td> <td>0.45</td> <td>0.136</td> <td>0.527</td> </tr> <tr> <td>No Transformer</td> <td>0.389</td> <td>0.211</td> <td>0.247</td> <td>0.385</td> <td>0.189</td> <td>0.481</td> <td>0.489</td> <td>0.407</td> <td>0.258</td> <td>0.618</td> <td>0.178</td> <td>0.698</td> </tr> <tr> <td>No Position</td> <td>0.394</td> <td>0.213</td> <td>0.247</td> <td>0.386</td> <td>0.187</td> <td>0.475</td> <td>0.543</td> <td>0.452</td> <td>0.281</td> <td>0.666</td> <td>0.191</td> <td>0.742</td> </tr> <tr> <td>No Visual</td> <td>0.37</td> <td>0.201</td> <td>0.23</td> <td>0.362</td> <td>0.176</td> <td>0.45</td> <td>0.492</td> <td>0.409</td> <td>0.258</td> <td>0.615</td> <td>0.178</td> <td>0.695</td> </tr> <tr> <td>No Pretraining</td> <td>0.369</td> <td>0.198</td> <td>0.236</td> <td>0.367</td> <td>0.181</td> <td>0.46</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Full Model</td> <td>0.404</td> <td>0.22</td> <td>0.248</td> <td>0.39</td> <td>0.188</td> <td>0.481</td> <td>0.54</td> <td>0.449</td> <td>0.275</td> <td>0.654</td> <td>0.188</td> <td>0.729</td> </tr> </tbody></table>
Table 7
table_7
D19-1521
7
emnlp2019
6.2 Ablation Study. Table 7 shows ablation results on BLING-KPE’s variations. Each variation removes a component and keeps all others unchanged. ELMo Embedding. We first verify the effectiveness of using ELMo embedding by replacing ELMo with the WordPiece token embedding (Wu et al., 2016). The accuracy of this variation is much lower than the accuracy of the full model and others. The result is shown in the first row of Table 7. The context-aware word embedding is a necessary component of BLING-KPE. Network Architecture. The second part of Table 7 studies the contribution of Transformer and position embedding. Transformer contributes significantly to Query Prediction; with a lot of training data, the self-attention layers capture the global contexts between n-grams. But on OpenKP, its effectiveness is mostly observed on the first position. The position embedding barely helps, since real-world web pages are often not one text sequence. Beyond Language Understanding. As shown in the second part of Table 7, both visual features and search pretraining contribute significantly to BLING-KPE’s effectiveness. Without either of them, the accuracy drops significantly. Visual features even help on Query Prediction, though users issued the click queries and clicked on the documents before seeing its full page. The crucial role of ELMo embeddings confirms the benefits of bringing background knowledge and general language understanding, in the format of pre-trained contextual embedding, in keyphrase extraction. The importance of visual features and search weak supervisions confirms the benefits of going beyond language understanding in modeling real-world web documents.
[2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1, 2, 1, 1, 1, 1, 1]
['6.2 Ablation Study.', 'Table 7 shows ablation results on BLING-KPE’s variations.', 'Each variation removes a component and keeps all others unchanged.', 'ELMo Embedding.', 'We first verify the effectiveness of using ELMo embedding by replacing ELMo with the WordPiece token embedding (Wu et al., 2016).', 'The accuracy of this variation is much lower than the accuracy of the full model and others.', 'The result is shown in the first row of Table 7.', 'The context-aware word embedding is a necessary component of BLING-KPE.', 'Network Architecture.', 'The second part of Table 7 studies the contribution of Transformer and position embedding.', 'Transformer contributes significantly to Query Prediction; with a lot of training data, the self-attention layers capture the global contexts between n-grams.', 'But on OpenKP, its effectiveness is mostly observed on the first position.', 'The position embedding barely helps, since real-world web pages are often not one text sequence.', 'Beyond Language Understanding.', 'As shown in the second part of Table 7, both visual features and search pretraining contribute significantly to BLING-KPE’s effectiveness.', 'Without either of them, the accuracy drops significantly.', 'Visual features even help on Query Prediction, though users issued the click queries and clicked on the documents before seeing its full page.', 'The crucial role of ELMo embeddings confirms the benefits of bringing background knowledge and general language understanding, in the format of pre-trained contextual embedding, in keyphrase extraction.', 'The importance of visual features and search weak supervisions confirms the benefits of going beyond language understanding in modeling real-world web documents.']
[None, None, None, None, ['No ELMo'], ['No ELMo'], None, None, None, ['No Transformer', 'No Position'], ['No Transformer'], None, ['No Position'], None, ['No Visual', 'No Pretraining'], ['No Visual', 'No Pretraining'], ['No Visual'], ['No ELMo'], None]
1
D19-1524table_6
The Precision@Top3 and the MAP results for the ranking list predicted by SciResREC.
2
[['Methods', 'RF (BoW+TFIDF)'], ['Methods', 'RF (N-grams+TFIDF)'], ['Methods', 'SciResREC'], ['Methods', '-Function feature'], ['Methods', '-Role 2nd feature'], ['Methods', '-Role 1st feature']]
1
[['Precision@Top3'], ['MAP']]
[['0.438', '0.275'], ['0.449', '0.306'], ['0.489', '0.597'], ['0.471', '0.569'], ['0.420', '0.539'], ['0.399', '0.497']]
column
['Precision@Top3', 'MAP']
['SciResREC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision@Top3</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Methods || RF (BoW+TFIDF)</td> <td>0.438</td> <td>0.275</td> </tr> <tr> <td>Methods || RF (N-grams+TFIDF)</td> <td>0.449</td> <td>0.306</td> </tr> <tr> <td>Methods || SciResREC</td> <td>0.489</td> <td>0.597</td> </tr> <tr> <td>Methods || -Function feature</td> <td>0.471</td> <td>0.569</td> </tr> <tr> <td>Methods || -Role 2nd feature</td> <td>0.420</td> <td>0.539</td> </tr> <tr> <td>Methods || -Role 1st feature</td> <td>0.399</td> <td>0.497</td> </tr> </tbody></table>
Table 6
table_6
D19-1524
8
emnlp2019
As Table 6 shows, our SciResREC framework outperforms the two baselines. An ablation test suggests that each feature component of our model contributes to the final performance, which indicates that the information of role and function is helpful for understanding the scientific resources. And we can observe that the feature of the 2nd-category role label has the largest impact on performance, indicating that capturing fine-grained role types is important for recognizing specific resources.
[1, 2, 1]
['As Table 6 shows, our SciResREC framework outperforms the two baselines.', 'An ablation test suggests that each feature component of our model contributes to the final performance, which indicates that the information of role and function is helpful for understanding the scientific resources.', 'And we can observe that the feature of the 2nd-category role label has the largest impact on performance, indicating that capturing fine-grained role types is important for recognizing specific resources.']
[['SciResREC', 'RF (BoW+TFIDF)', 'RF (N-grams+TFIDF)'], None, ['-Role 2nd feature']]
1
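Precision@Top3 and MAP in the SciResREC record are standard ranking metrics over binary relevance labels. A minimal sketch (MAP is the mean of average precision across queries; function names are illustrative):

```python
def precision_at_k(ranked_relevance, k):
    """Fraction of relevant items among the top-k of a ranked list (0/1 labels)."""
    return sum(ranked_relevance[:k]) / k

def average_precision(ranked_relevance):
    """Mean of the precision values at each rank where a relevant item appears."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0
```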
D19-1526table_2
The performances of different supervised hashing models on three datasets under different lengths of hashing codes.
2
[['Method', 'KSH'], ['Method', 'SHTTM'], ['Method', 'VDSH-S'], ['Method', 'NASH-DN-S'], ['Method', 'GMSH-S'], ['Method', 'BMSH-S']]
3
[['Datasets', 'TMC', '16bit'], ['Datasets', 'TMC', '32bit'], ['Datasets', 'TMC', '64bit'], ['Datasets', 'TMC', '128bit'], ['Datasets', '20Newsgroups', '16bit'], ['Datasets', '20Newsgroups', '32bit'], ['Datasets', '20Newsgroups', '64bit'], ['Datasets', '20Newsgroups', '128bit'], ['Datasets', 'Reuters', '16bit'], ['Datasets', 'Reuters', '32bit'], ['Datasets', 'Reuters', '64bit'], ['Datasets', 'Reuters', '128bit']]
[['0.6842', '0.7047', '0.7175', '0.7243', '0.5559', '0.6103', '0.6488', '0.6638', '0.8376', '0.848', '0.8537', '0.862'], ['0.6571', '0.6485', '0.6893', '0.6474', '0.3235', '0.2357', '0.1411', '0.1299', '0.852', '0.8323', '0.8271', '0.815'], ['0.7887', '0.7883', '0.7967', '0.8018', '0.6791', '0.7564', '0.685', '0.6916', '0.9121', '0.9337', '0.9407', '0.9299'], ['0.7946', '0.7987', '0.8014', '0.8139', '0.6973', '0.8069', '0.8213', '0.784', '0.9327', '0.938', '0.9427', '0.9336'], ['0.7806', '0.7929', '0.8103', '0.8144', '0.6972', '0.7426', '0.7574', '0.769', '0.9144', '0.9175', '0.9414', '0.9522'], ['0.8051', '0.8247', '0.834', '0.831', '0.7316', '0.8144', '0.8216', '0.8183', '0.935', '0.964', '0.9633', '0.959']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['VDSH-S', 'NASH-DN-S', 'GMSH-S', 'BMSH-S']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Datasets || TMC || 16bit</th> <th>Datasets || TMC || 32bit</th> <th>Datasets || TMC || 64bit</th> <th>Datasets || TMC || 128bit</th> <th>Datasets || 20Newsgroups || 16bit</th> <th>Datasets || 20Newsgroups || 32bit</th> <th>Datasets || 20Newsgroups || 64bit</th> <th>Datasets || 20Newsgroups || 128bit</th> <th>Datasets || Reuters || 16bit</th> <th>Datasets || Reuters || 32bit</th> <th>Datasets || Reuters || 64bit</th> <th>Datasets || Reuters || 128bit</th> </tr> </thead> <tbody> <tr> <td>Method || KSH</td> <td>0.6842</td> <td>0.7047</td> <td>0.7175</td> <td>0.7243</td> <td>0.5559</td> <td>0.6103</td> <td>0.6488</td> <td>0.6638</td> <td>0.8376</td> <td>0.848</td> <td>0.8537</td> <td>0.862</td> </tr> <tr> <td>Method || SHTTM</td> <td>0.6571</td> <td>0.6485</td> <td>0.6893</td> <td>0.6474</td> <td>0.3235</td> <td>0.2357</td> <td>0.1411</td> <td>0.1299</td> <td>0.852</td> <td>0.8323</td> <td>0.8271</td> <td>0.815</td> </tr> <tr> <td>Method || VDSH-S</td> <td>0.7887</td> <td>0.7883</td> <td>0.7967</td> <td>0.8018</td> <td>0.6791</td> <td>0.7564</td> <td>0.685</td> <td>0.6916</td> <td>0.9121</td> <td>0.9337</td> <td>0.9407</td> <td>0.9299</td> </tr> <tr> <td>Method || NASH-DN-S</td> <td>0.7946</td> <td>0.7987</td> <td>0.8014</td> <td>0.8139</td> <td>0.6973</td> <td>0.8069</td> <td>0.8213</td> <td>0.784</td> <td>0.9327</td> <td>0.938</td> <td>0.9427</td> <td>0.9336</td> </tr> <tr> <td>Method || GMSH-S</td> <td>0.7806</td> <td>0.7929</td> <td>0.8103</td> <td>0.8144</td> <td>0.6972</td> <td>0.7426</td> <td>0.7574</td> <td>0.769</td> <td>0.9144</td> <td>0.9175</td> <td>0.9414</td> <td>0.9522</td> </tr> <tr> <td>Method || BMSH-S</td> <td>0.8051</td> <td>0.8247</td> <td>0.834</td> <td>0.831</td> <td>0.7316</td> <td>0.8144</td> <td>0.8216</td> <td>0.8183</td> <td>0.935</td> <td>0.964</td> <td>0.9633</td> <td>0.959</td> </tr> </tbody></table>
Table 2
table_2
D19-1526
7
emnlp2019
We evaluate the performance of supervised hashing in this section. Table 2 shows the performances of different supervised hashing models on three datasets under different lengths of hashing codes. We observe that all of the VAE-based generative hashing models (i.e., VDSH, NASH, GMSH and BMSH) exhibit better performance, demonstrating the effectiveness of generative models on the task of semantic hashing. It can also be seen that BMSH-S achieves the best performance, suggesting that the advantages of Bernoulli mixture priors can also be extended to the supervised scenarios.
[2, 1, 1, 1]
['We evaluate the performance of supervised hashing in this section.', 'Table 2 shows the performances of different supervised hashing models on three datasets under different lengths of hashing codes.', 'We observe that all of the VAE-based generative hashing models (i.e., VDSH, NASH, GMSH and BMSH) exhibit better performance, demonstrating the effectiveness of generative models on the task of semantic hashing.', 'It can also be seen that BMSH-S achieves the best performance, suggesting that the advantages of Bernoulli mixture priors can also be extended to the supervised scenarios.']
[None, None, ['VDSH-S', 'NASH-DN-S', 'GMSH-S', 'BMSH-S'], ['BMSH-S']]
1
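The supervised-hashing models above are compared by retrieving documents ranked by Hamming distance between binary codes (the precision numbers are typically computed over the top retrieved documents; the exact cutoff is not stated in this record). A sketch of Hamming-ranking retrieval with hypothetical codes:

```python
def hamming(code_a, code_b):
    """Hamming distance between two equal-length binary codes."""
    return sum(a != b for a, b in zip(code_a, code_b))

def retrieve(query, corpus, k):
    """Indices of the k corpus codes closest to the query in Hamming distance."""
    order = sorted(range(len(corpus)), key=lambda i: hamming(query, corpus[i]))
    return order[:k]
```

Retrieval precision is then the fraction of the returned `k` documents sharing the query's label.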
D19-1530table_2
Word similarity Results
2
[['Method', 'none'], ['Method', 'CDA'], ['Method', 'gCDA'], ['Method', 'nCDA'], ['Method', 'gCDS'], ['Method', 'nCDS'], ['Method', 'WED40'], ['Method', 'WED70'], ['Method', 'nWED70']]
2
[['rs', 'Gigaword'], ['rs', 'Wikipedia']]
[['0.385', '0.368'], ['0.381', '0.363'], ['0.381', '0.363'], ['0.380', '0.365'], ['0.382', '0.366'], ['0.380', '0.362'], ['0.386', '0.371'], ['0.395', '0.375'], ['0.384', '0.367']]
column
['rs', 'rs']
['WED40', 'WED70', 'nWED70']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>rs || Gigaword</th> <th>rs || Wikipedia</th> </tr> </thead> <tbody> <tr> <td>Method || none</td> <td>0.385</td> <td>0.368</td> </tr> <tr> <td>Method || CDA</td> <td>0.381</td> <td>0.363</td> </tr> <tr> <td>Method || gCDA</td> <td>0.381</td> <td>0.363</td> </tr> <tr> <td>Method || nCDA</td> <td>0.380</td> <td>0.365</td> </tr> <tr> <td>Method || gCDS</td> <td>0.382</td> <td>0.366</td> </tr> <tr> <td>Method || nCDS</td> <td>0.380</td> <td>0.362</td> </tr> <tr> <td>Method || WED40</td> <td>0.386</td> <td>0.371</td> </tr> <tr> <td>Method || WED70</td> <td>0.395</td> <td>0.375</td> </tr> <tr> <td>Method || nWED70</td> <td>0.384</td> <td>0.367</td> </tr> </tbody></table>
Table 2
table_2
D19-1530
7
emnlp2019
Word similarity. Table 2 reports the SimLex-999 Spearman rank-order correlation coefficients rs (all are significant, p < 0.01). Surprisingly, the WED40 and 70 methods outperform the unmitigated embedding, although the difference in result is small (0.386 and 0.395 vs. 0.385 on Gigaword, 0.371 and 0.375 vs. 0.368 on Wikipedia). nWED70, on the other hand, performs worse than the unmitigated embedding (0.384 vs. 0.385 on Gigaword, 0.367 vs. 0.368 on Wikipedia). CDA and CDS methods do not match the quality of the unmitigated space, but once again the difference is small. It should be noted that since SimLex-999 was produced by human raters, it will reflect the human biases these methods were designed to remove, so worse performance might result from successful bias mitigation.
[2, 1, 1, 1, 1, 2]
['Word similarity.', 'Table 2 reports the SimLex-999 Spearman rank-order correlation coefficients rs (all are significant, p < 0.01).', 'Surprisingly, the WED40 and 70 methods outperform the unmitigated embedding, although the difference in result is small (0.386 and 0.395 vs. 0.385 on Gigaword, 0.371 and 0.375 vs. 0.368 on Wikipedia).', 'nWED70, on the other hand, performs worse than the unmitigated embedding (0.384 vs. 0.385 on Gigaword, 0.367 vs. 0.368 on Wikipedia).', 'CDA and CDS methods do not match the quality of the unmitigated space, but once again the difference is small.', 'It should be noted that since SimLex-999 was produced by human raters, it will reflect the human biases these methods were designed to remove, so worse performance might result from successful bias mitigation.']
[None, ['rs'], ['WED40', 'WED70', 'none', 'Gigaword', 'Wikipedia'], ['nWED70', 'none', 'Gigaword', 'Wikipedia'], ['CDA', 'gCDA', 'nCDA', 'gCDS', 'nCDS'], None]
1
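The word-similarity record reports SimLex-999 Spearman rank-order correlations (rs). A self-contained sketch of Spearman's rho, computed as the Pearson correlation of average-tie ranks:

```python
def rank(values):
    """Ranks starting at 1; tied values receive the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block (1-based)
        for t in order[i:j + 1]:
            ranks[t] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, any monotone relation between model similarities and human ratings scores a perfect 1.0.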
D19-1533table_2
English all-words task results in F1 measure (%), averaged over three runs. SemEval 2007 Task 17 (SE07) test set is used as the development set. We show the results of nearest neighbor matching (1nn) and linear projection, by simple last layer linear projection, layer weighting (LW), and gated linear units (GLU). Apart from BERT representation of one sentence (1sent), we also show BERT representation of one sentence plus one surrounding sentence to the left and one to the right (1sent+1sur). The best result in each dataset is shown in bold. Statistical significance tests by bootstrap resampling (∗: p < 0.05) compare 1nn (1sent+1sur) with each of Simple (1sent+1sur), LW (1sent+1sur), GLU (1sent+1sur), and GLU+LW (1sent+1sur).
3
[['System', 'Reported in previous papers', 'MFS baseline'], ['System', 'Reported in previous papers', 'IMS (Zhong and Ng, 2010)'], ['System', 'Reported in previous papers', 'IMS+emb (Iacobacci et al., 2016)'], ['System', 'Reported in previous papers', 'SupWSD (Papandrea et al., 2017)'], ['System', 'Reported in previous papers', 'SupWSD+emb (Papandrea et al., 2017)'], ['System', 'Reported in previous papers', 'BiLSTMatt+LEX (Raganato et al., 2017b)'], ['System', 'Reported in previous papers', 'GASext Concat (Luo et al., 2018)'], ['System', 'Reported in previous papers', 'context2vec (Melamud et al., 2016)'], ['System', 'Reported in previous papers', 'ELMo (Peters et al., 2018)'], ['System', 'BERT nearest neighbor (ours)', '1nn (1sent)'], ['System', 'BERT nearest neighbor (ours)', '1nn (1sent+1sur)'], ['System', 'BERT linear projection (ours)', 'Simple (1sent)'], ['System', 'BERT linear projection (ours)', 'Simple (1sent+1sur)'], ['System', 'BERT linear projection (ours)', 'LW (1sent)'], ['System', 'BERT linear projection (ours)', 'LW (1sent+1sur)'], ['System', 'BERT linear projection (ours)', 'GLU (1sent)'], ['System', 'BERT linear projection (ours)', 'GLU (1sent+1sur)'], ['System', 'BERT linear projection (ours)', 'GLU+LW (1sent)'], ['System', 'BERT linear projection (ours)', 'GLU+LW (1sent+1sur)']]
1
[['SE07'], ['SE2'], ['SE3'], ['SE13'], ['SE15'], ['Avg']]
[['54.5', '65.6', '66.0', '63.8', '67.1', '65.6'], ['61.3', '70.9', '69.3', '65.3', '69.5', '68.8'], ['60.9', '71.0', '69.3', '67.3', '71.3', '69.7'], ['60.2', '71.3', '68.8', '65.8', '70.0', '69.0'], ['63.1', '72.7', '70.6', '66.8', '71.8', '70.5'], ['63.7', '72.0', '69.4', '66.4', '72.4', '70.1'], ['–', '72.2', '70.5', '67.2', '72.6', '70.6'], ['61.3', '71.8', '69.1', '65.6', '71.9', '69.6'], ['62.2', '71.6', '69.6', '66.2', '71.3', '69.7'], ['64.0', '73.0', '69.7', '67.8', '73.3', '71.0'], ['63.3', '73.8', '71.6', '69.2', '74.4', '72.3'], ['67.0', '75.0', '71.6', '69.7', '74.4', '72.7'], ['69.3*', '75.9*', '73.4', '70.4*', '75.1', '73.7*'], ['66.7', '75.0', '71.6', '69.9', '74.2', '72.7'], ['69.0*', '76.4*', '74.0*', '70.1*', '75.0', '73.9*'], ['64.9', '74.1', '71.6', '69.8', '74.3', '72.5'], ['68.1*', '75.5*', '73.6*', '71.1*', '76.2*', '74.1*'], ['65.7', '74.0', '70.9', '68.8', '73.6', '71.8'], ['68.5*', '75.5*', '73.4*', '71.0*', '76.2*', '74.0*']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['BERT nearest neighbor (ours)', 'BERT linear projection (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SE07</th> <th>SE2</th> <th>SE3</th> <th>SE13</th> <th>SE15</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>System || Reported in previous papers || MFS baseline</td> <td>54.5</td> <td>65.6</td> <td>66.0</td> <td>63.8</td> <td>67.1</td> <td>65.6</td> </tr> <tr> <td>System || Reported in previous papers || IMS (Zhong and Ng, 2010)</td> <td>61.3</td> <td>70.9</td> <td>69.3</td> <td>65.3</td> <td>69.5</td> <td>68.8</td> </tr> <tr> <td>System || Reported in previous papers || IMS+emb (Iacobacci et al., 2016)</td> <td>60.9</td> <td>71.0</td> <td>69.3</td> <td>67.3</td> <td>71.3</td> <td>69.7</td> </tr> <tr> <td>System || Reported in previous papers || SupWSD (Papandrea et al., 2017)</td> <td>60.2</td> <td>71.3</td> <td>68.8</td> <td>65.8</td> <td>70.0</td> <td>69.0</td> </tr> <tr> <td>System || Reported in previous papers || SupWSD+emb (Papandrea et al., 2017)</td> <td>63.1</td> <td>72.7</td> <td>70.6</td> <td>66.8</td> <td>71.8</td> <td>70.5</td> </tr> <tr> <td>System || Reported in previous papers || BiLSTMatt+LEX (Raganato et al., 2017b)</td> <td>63.7</td> <td>72.0</td> <td>69.4</td> <td>66.4</td> <td>72.4</td> <td>70.1</td> </tr> <tr> <td>System || Reported in previous papers || GASext Concat (Luo et al., 2018)</td> <td>–</td> <td>72.2</td> <td>70.5</td> <td>67.2</td> <td>72.6</td> <td>70.6</td> </tr> <tr> <td>System || Reported in previous papers || context2vec (Melamud et al., 2016)</td> <td>61.3</td> <td>71.8</td> <td>69.1</td> <td>65.6</td> <td>71.9</td> <td>69.6</td> </tr> <tr> <td>System || Reported in previous papers || ELMo (Peters et al., 2018)</td> <td>62.2</td> <td>71.6</td> <td>69.6</td> <td>66.2</td> <td>71.3</td> <td>69.7</td> </tr> <tr> <td>System || BERT nearest neighbor (ours) || 1nn (1sent)</td> <td>64.0</td> <td>73.0</td> <td>69.7</td> <td>67.8</td> <td>73.3</td> <td>71.0</td> </tr> <tr> <td>System || BERT nearest neighbor (ours) || 1nn (1sent+1sur)</td> 
<td>63.3</td> <td>73.8</td> <td>71.6</td> <td>69.2</td> <td>74.4</td> <td>72.3</td> </tr> <tr> <td>System || BERT linear projection (ours) || Simple (1sent)</td> <td>67.0</td> <td>75.0</td> <td>71.6</td> <td>69.7</td> <td>74.4</td> <td>72.7</td> </tr> <tr> <td>System || BERT linear projection (ours) || Simple (1sent+1sur)</td> <td>69.3*</td> <td>75.9*</td> <td>73.4</td> <td>70.4*</td> <td>75.1</td> <td>73.7*</td> </tr> <tr> <td>System || BERT linear projection (ours) || LW (1sent)</td> <td>66.7</td> <td>75.0</td> <td>71.6</td> <td>69.9</td> <td>74.2</td> <td>72.7</td> </tr> <tr> <td>System || BERT linear projection (ours) || LW (1sent+1sur)</td> <td>69.0*</td> <td>76.4*</td> <td>74.0*</td> <td>70.1*</td> <td>75.0</td> <td>73.9*</td> </tr> <tr> <td>System || BERT linear projection (ours) || GLU (1sent)</td> <td>64.9</td> <td>74.1</td> <td>71.6</td> <td>69.8</td> <td>74.3</td> <td>72.5</td> </tr> <tr> <td>System || BERT linear projection (ours) || GLU (1sent+1sur)</td> <td>68.1*</td> <td>75.5*</td> <td>73.6*</td> <td>71.1*</td> <td>76.2*</td> <td>74.1*</td> </tr> <tr> <td>System || BERT linear projection (ours) || GLU+LW (1sent)</td> <td>65.7</td> <td>74.0</td> <td>70.9</td> <td>68.8</td> <td>73.6</td> <td>71.8</td> </tr> <tr> <td>System || BERT linear projection (ours) || GLU+LW (1sent+1sur)</td> <td>68.5*</td> <td>75.5*</td> <td>73.4*</td> <td>71.0*</td> <td>76.2*</td> <td>74.0*</td> </tr> </tbody></table>
Table 2
table_2
D19-1533
6
emnlp2019
Table 2 shows our WSD results in F1 measure. It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT’s pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently. We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved. While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector. It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§4.2.2) and gated linear unit (§4.2.3) gives the best F1 scores on all test sets. All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems.
[1, 1, 2, 1, 1, 2, 1, 1]
['Table 2 shows our WSD results in F1 measure.', 'It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo.', 'This shows the effectiveness of BERT’s pre-trained contextualized word representation.', 'When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently.', 'We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved.', 'While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector.', 'It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§4.2.2) and gated linear unit (§4.2.3) gives the best F1 scores on all test sets.', 'All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems.']
[None, ['BERT nearest neighbor (ours)', 'context2vec (Melamud et al., 2016)', 'ELMo (Peters et al., 2018)'], None, ['BERT nearest neighbor (ours)', '1nn (1sent+1sur)'], ['BERT linear projection (ours)'], None, ['BERT linear projection (ours)', 'LW (1sent)', 'LW (1sent+1sur)', 'GLU+LW (1sent)', 'GLU+LW (1sent+1sur)', 'GLU (1sent)', 'GLU (1sent+1sur)'], ['BERT nearest neighbor (ours)', 'BERT linear projection (ours)']]
1
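The WSD record above compares linear-projection variants that include gated linear units (GLU). A minimal sketch of the GLU gating idea in plain Python follows; the toy weights and the helper are hypothetical and are not the paper's implementation:

```python
import math

def glu(h, W_a, b_a, W_g, b_g):
    """Gated linear unit: (W_a h + b_a) * sigmoid(W_g h + b_g), elementwise.
    h is a vector; W_a/W_g are weight matrices (lists of rows); b_a/b_g biases."""
    def affine(W, v, b):
        return [sum(w * x for w, x in zip(row, v)) + bi for row, bi in zip(W, b)]
    a = affine(W_a, h, b_a)           # linear "content" path
    g = affine(W_g, h, b_g)           # linear "gate" path
    # multiply content by the sigmoid of the gate, elementwise
    return [ai / (1.0 + math.exp(-gi)) for ai, gi in zip(a, g)]
```

With zero gate weights the gate is sigmoid(0) = 0.5, so the output is half the content path; training learns gates that pass or suppress each dimension.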
D19-1535table_1
SymAcc, BLEU and AnsAcc on the FollowUp dataset. Results marked † are from Liu et al. (2019).
2
[['Model', 'SEQ2SEQ (Bahdanau et al., 2015)'], ['Model', 'COPYNET (Gu et al., 2016)'], ['Model', 'COPY+BERT (Devlin et al., 2019)'], ['Model', 'CONCAT'], ['Model', 'E2ECR (Lee et al., 2017)'], ['Model', 'FANDA (Liu et al., 2019)'], ['Model', 'STAR']]
2
[['Dev', 'SymAcc (%)'], ['Dev', 'BLEU (%)'], ['Test', 'SymAcc (%)'], ['Test', 'BLEU (%)'], ['Test', 'AnsAcc (%)']]
[['0.63±0.00', '21.34±1.14', '0.50±0.22', '20.72±1.31', '-'], ['17.50±0.87', '43.36±0.54', '19.30±0.93', '43.34±0.45', '-'], ['18.63±0.61', '45.14±0.68', '22.00±0.45', '44.87±0.52', '-'], ['-', '-', '22.00±-', '52.02±-', '25.24'], ['-', '-', '27.00±-', '52.47±-', '27.18'], ['49.00±1.28', '60.14±0.98', '47.80±1.14', '59.02±0.54', '60.19'], ['55.38±1.21', '67.62±0.65', '54.00±1.09', '67.05±1.05', '65.05']]
column
['SymAcc (%)', 'BLEU (%)', 'SymAcc (%)', 'BLEU (%)', 'AnsAcc (%)']
['STAR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || SymAcc (%)</th> <th>Dev || BLEU (%)</th> <th>Test || SymAcc (%)</th> <th>Test || BLEU (%)</th> <th>Test || AnsAcc (%)</th> </tr> </thead> <tbody> <tr> <td>Model || SEQ2SEQ (Bahdanau et al., 2015)</td> <td>0.63±0.00</td> <td>21.34±1.14</td> <td>0.50±0.22</td> <td>20.72±1.31</td> <td>-</td> </tr> <tr> <td>Model || COPYNET (Gu et al., 2016)</td> <td>17.50±0.87</td> <td>43.36±0.54</td> <td>19.30±0.93</td> <td>43.34±0.45</td> <td>-</td> </tr> <tr> <td>Model || COPY+BERT (Devlin et al., 2019)</td> <td>18.63±0.61</td> <td>45.14±0.68</td> <td>22.00±0.45</td> <td>44.87±0.52</td> <td>-</td> </tr> <tr> <td>Model || CONCAT</td> <td>-</td> <td>-</td> <td>22.00±-</td> <td>52.02±-</td> <td>25.24</td> </tr> <tr> <td>Model || E2ECR (Lee et al., 2017)</td> <td>-</td> <td>-</td> <td>27.00±-</td> <td>52.47±-</td> <td>27.18</td> </tr> <tr> <td>Model || FANDA (Liu et al., 2019)</td> <td>49.00±1.28</td> <td>60.14±0.98</td> <td>47.80±1.14</td> <td>59.02±0.54</td> <td>60.19</td> </tr> <tr> <td>Model || STAR</td> <td>55.38±1.21</td> <td>67.62±0.65</td> <td>54.00±1.09</td> <td>67.05±1.05</td> <td>65.05</td> </tr> </tbody></table>
Table 1
table_1
D19-1535
6
emnlp2019
Answer Level. Table 1 shows AnsAcc results of competitive baselines on the test set. Compared with them, STAR achieves the highest, 65.05%, which demonstrates its superiority. Meanwhile, it verifies the feasibility of follow-up query analysis in cooperating with context-independent semantic parsing. Compared with CONCAT, our approach boosts over 39.81% on COARSE2FINE for the capability of context-dependent semantic parsing. Query Level. Table 1 also shows SymAcc and BLEU of different methods on the dev and test sets. As observed, STAR significantly outperforms all baselines, demonstrating its effectiveness. For example, STAR achieves an absolute improvement of 8.03% BLEU over the state-of-the-art baseline FANDA on testing. Moreover, the rewriting-based baselines, even the simplest CONCAT, perform better than the generation-based ones. It suggests that the idea of rewriting is more reasonable for the task, where precedent and follow-up queries are fully utilized.
[2, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1]
['Answer Level.', 'Table 1 shows AnsAcc results of competitive baselines on the test set.', 'Compared with them, STAR achieves the highest, 65.05%, which demonstrates its superiority.', 'Meanwhile, it verifies the feasibility of follow-up query analysis in cooperating with context-independent semantic parsing.', 'Compared with CONCAT, our approach boosts over 39.81% on COARSE2FINE for the capability of context-dependent semantic parsing.', 'Query Level.', 'Table 1 also shows SymAcc and BLEU of different methods on the dev and test sets.', 'As observed, STAR significantly outperforms all baselines, demonstrating its effectiveness.', 'For example, STAR achieves an absolute improvement of 8.03% BLEU over the state-of-the-art baseline FANDA on testing.', 'Moreover, the rewriting-based baselines, even the simplest CONCAT, perform better than the generation-based ones.', 'It suggests that the idea of rewriting is more reasonable for the task, where precedent and follow-up queries are fully utilized.']
[None, ['AnsAcc (%)'], ['STAR'], None, ['CONCAT'], None, ['SymAcc (%)', 'BLEU (%)'], ['STAR'], ['STAR', 'BLEU (%)', 'Test'], ['CONCAT'], None]
1
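The follow-up query rewriting record above evaluates with BLEU. As an illustrative aside, here is a minimal sentence-level BLEU sketch with uniform n-gram weights and a brevity penalty; real evaluations typically use corpus-level BLEU with smoothing, so this is a simplification, not the paper's scoring script:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    (n = 1..max_n) times a brevity penalty. Tokens are given as lists."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision makes the geometric mean zero
        log_precisions.append(math.log(overlap / total))
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An exact match of length >= max_n scores 1.0; any candidate with no matching 4-gram scores 0.0 under this unsmoothed variant.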
D19-1538table_3
Semantic F1-score on CoNLL-2009 in-domain test set. The first row is the best result of CoNLL-2009 shared task (Hajič et al., 2009). The previously best published results of Catalan and Japanese are from Zhao et al. (2009a), Chinese from Cai et al. (2018), Czech from Marcheggiani et al. (2017), English from Li et al. (2019), German and Spanish from Roth and Lapata (2016).
2
[['Model', 'CoNLL-2009 ST best system'], ['Model', 'Zhao et al. (2009a)'], ['Model', 'Roth and Lapata (2016)'], ['Model', 'Marcheggiani et al. (2017)'], ['Model', 'Li et al. (2019)'], ['Model', 'The best previously published'], ['Model', 'Our baseline']]
1
[['Catalan'], ['Chinese'], ['Czech'], ['English'], ['German'], ['Japanese'], ['Spanish']]
[['80.3', '78.6', '85.4', '85.6', '79.7', '78.2', '80.5'], ['80.3', '77.7', '85.2', '86.2', '76.0', '78.2', '80.5'], ['−', '79.4', '−', '87.7', '80.1', '−', '80.2'], ['−', '81.2', '86.0', '87.7', '−', '−', '80.3'], ['−', '−', '−', '90.4', '−', '−', '−'], ['80.3', '84.3', '86.0', '90.4', '80.1', '78.2', '80.5'], ['84.07', '84.05', '88.35', '89.61', '78.36', '83.08', '83.47']]
column
['F1-score', 'F1-score', 'F1-score', 'F1-score', 'F1-score', 'F1-score', 'F1-score']
['Our baseline']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Catalan</th> <th>Chinese</th> <th>Czech</th> <th>English</th> <th>German</th> <th>Japanese</th> <th>Spanish</th> </tr> </thead> <tbody> <tr> <td>Model || CoNLL-2009 ST best system</td> <td>80.3</td> <td>78.6</td> <td>85.4</td> <td>85.6</td> <td>79.7</td> <td>78.2</td> <td>80.5</td> </tr> <tr> <td>Model || Zhao et al. (2009a)</td> <td>80.3</td> <td>77.7</td> <td>85.2</td> <td>86.2</td> <td>76.0</td> <td>78.2</td> <td>80.5</td> </tr> <tr> <td>Model || Roth and Lapata (2016)</td> <td>−</td> <td>79.4</td> <td>−</td> <td>87.7</td> <td>80.1</td> <td>−</td> <td>80.2</td> </tr> <tr> <td>Model || Marcheggiani et al. (2017)</td> <td>−</td> <td>81.2</td> <td>86.0</td> <td>87.7</td> <td>−</td> <td>−</td> <td>80.3</td> </tr> <tr> <td>Model || Li et al. (2019)</td> <td>−</td> <td>−</td> <td>−</td> <td>90.4</td> <td>−</td> <td>−</td> <td>−</td> </tr> <tr> <td>Model || The best previously published</td> <td>80.3</td> <td>84.3</td> <td>86.0</td> <td>90.4</td> <td>80.1</td> <td>78.2</td> <td>80.5</td> </tr> <tr> <td>Model || Our baseline</td> <td>84.07</td> <td>84.05</td> <td>88.35</td> <td>89.61</td> <td>78.36</td> <td>83.08</td> <td>83.47</td> </tr> </tbody></table>
Table 3
table_3
D19-1538
6
emnlp2019
Table 3 presents all test results on seven languages of CoNLL-2009 datasets. So far, the best previously reported results of Catalan, Japanese and Spanish are still from the CoNLL-2009 shared task. Compared with previous methods, our baseline yields strong performance on all datasets except German. Especially for Catalan, Czech, Japanese and Spanish, our baseline performs better than existing methods with a large margin of 3.5% F1 on average. Nevertheless, applying our argument pruning to the strong syntax-agnostic baseline can still boost the model performance, which demonstrates the effectiveness of the proposed method. On the other hand, it indicates that syntax is generally beneficial to multiple languages, and can enhance the multilingual SRL performance with effective syntactic integration.
[1, 1, 1, 1, 1, 1]
['Table 3 presents all test results on seven languages of CoNLL-2009 datasets.', 'So far, the best previously reported results of Catalan, Japanese and Spanish are still from the CoNLL-2009 shared task.', 'Compared with previous methods, our baseline yields strong performance on all datasets except German.', 'Especially for Catalan, Czech, Japanese and Spanish, our baseline performs better than existing methods with a large margin of 3.5% F1 on average.', 'Nevertheless, applying our argument pruning to the strong syntax-agnostic baseline can still boost the model performance, which demonstrates the effectiveness of the proposed method.', 'On the other hand, it indicates that syntax is generally beneficial to multiple languages, and can enhance the multilingual SRL performance with effective syntactic integration.']
[None, ['Catalan', 'Japanese', 'Spanish'], ['Our baseline', 'German'], ['Catalan', 'Czech', 'Japanese', 'Spanish', 'Our baseline'], None, None]
1
D19-1539table_3
CoNLL-2003 Named Entity Recognition results. Test result was evaluated on parameter set with the best dev F1.
2
[['Model', 'ELMoBASE'], ['Model', 'CNN Large + ELMo'], ['Model', 'CNN Large + fine-tune'], ['Model', 'BERTBASE'], ['Model', 'BERTLARGE']]
1
[['dev F1'], ['test F1']]
[['95.7', '92.2'], ['96.4', '93.2'], ['96.9', '93.5'], ['96.4', '92.4'], ['96.6', '92.8']]
column
['dev F1', 'test F1']
['CNN Large + ELMo', 'CNN Large + fine-tune']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dev F1</th> <th>test F1</th> </tr> </thead> <tbody> <tr> <td>Model || ELMoBASE</td> <td>95.7</td> <td>92.2</td> </tr> <tr> <td>Model || CNN Large + ELMo</td> <td>96.4</td> <td>93.2</td> </tr> <tr> <td>Model || CNN Large + fine-tune</td> <td>96.9</td> <td>93.5</td> </tr> <tr> <td>Model || BERTBASE</td> <td>96.4</td> <td>92.4</td> </tr> <tr> <td>Model || BERTLARGE</td> <td>96.6</td> <td>92.8</td> </tr> </tbody></table>
Table 3
table_3
D19-1539
6
emnlp2019
Table 3 shows the results, with comparison to previously published ELMoBASE results (Peters et al., 2018) and the BERT models. Both of our stacking methods outperform the previous state of the art, but fine tuning gives the biggest gain.
[1, 1]
['Table 3 shows the results, with comparison to previously published ELMoBASE results (Peters et al., 2018) and the BERT models.', 'Both of our stacking methods outperform the previous state of the art, but fine tuning gives the biggest gain.']
[['ELMoBASE', 'BERTBASE', 'BERTLARGE'], ['CNN Large + ELMo', 'CNN Large + fine-tune']]
1
D19-1539table_4
Penn Treebank Constituency Parsing results. Test result was evaluated on parameter set with the best dev F1.
2
[['Model', 'ELMoBASE'], ['Model', 'CNN Large + ELMo'], ['Model', 'CNN Large + fine-tune']]
1
[['dev F1'], ['test F1']]
[['95.2', '95.1'], ['95.1', '95.2'], ['95.5', '95.6']]
column
['dev f1', 'test f1']
['CNN Large + fine-tune']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dev F1</th> <th>test F1</th> </tr> </thead> <tbody> <tr> <td>Model || ELMoBASE</td> <td>95.2</td> <td>95.1</td> </tr> <tr> <td>Model || CNN Large + ELMo</td> <td>95.1</td> <td>95.2</td> </tr> <tr> <td>Model || CNN Large + fine-tune</td> <td>95.5</td> <td>95.6</td> </tr> </tbody></table>
Table 4
table_4
D19-1539
6
emnlp2019
6.2.2 Constituency Parsing. We also report parseval F1 for Penn Treebank constituency parsing. We adopted the current state-of-the-art architecture (Kitaev and Klein, 2018). We again used grid search for learning rates and number of layers in parsing encoder, and used 8E-04 for language model finetuning, 8E-03 for the parsing model parameters, and two layers for encoder. Table 4 shows the results. Here, fine tuning is required to achieve gains over the previous state of the art, which used ELMo embeddings.
[2, 2, 2, 2, 1, 1]
['6.2.2 Constituency Parsing.', 'We also report parseval F1 for Penn Treebank constituency parsing.', 'We adopted the current state-of-the-art architecture (Kitaev and Klein, 2018).', 'We again used grid search for learning rates and number of layers in parsing encoder, and used 8E-04 for language model finetuning, 8E-03 for the parsing model parameters, and two layers for encoder.', 'Table 4 shows the results.', 'Here, fine tuning is required to achieve gains over the previous state of the art, which used ELMo embeddings.']
[None, ['dev F1', 'test F1'], None, None, None, ['CNN Large + fine-tune', 'CNN Large + ELMo']]
1
D19-1539table_5
Different loss functions on the development sets of GLUE (cf. Table 2). Results are based on the CNN base model (Table 1)
1
[['cloze'], ['bilm'], ['cloze + bilm']]
1
[['CoLA (mcc)'], ['SST-2 (acc)'], ['MRPC (F1)'], ['STS-B (scc)'], ['QQP (F1)'], ['MNLI-m (acc)'], ['QNLI (acc)'], ['RTE (acc)'], ['Avg']]
[['55.1', '92.9', '88.3', '88.3', '87.2', '82.3', '86.5', '66.4', '80.9'], ['50', '92.4', '86.6', '87.1', '86.1', '81.7', '84', '66.4', '79.3'], ['52.6', '93.2', '88.9', '87.9', '87.2', '82.1', '86.1', '65.5', '80.4']]
column
['loss', 'loss', 'loss', 'loss', 'loss', 'loss', 'loss', 'loss', 'loss']
['cloze', 'bilm', 'cloze + bilm']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoLA (mcc)</th> <th>SST-2 (acc)</th> <th>MRPC (F1)</th> <th>STS-B (scc)</th> <th>QQP (F1)</th> <th>MNLI-m (acc)</th> <th>QNLI (acc)</th> <th>RTE (acc)</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>cloze</td> <td>55.1</td> <td>92.9</td> <td>88.3</td> <td>88.3</td> <td>87.2</td> <td>82.3</td> <td>86.5</td> <td>66.4</td> <td>80.9</td> </tr> <tr> <td>bilm</td> <td>50</td> <td>92.4</td> <td>86.6</td> <td>87.1</td> <td>86.1</td> <td>81.7</td> <td>84</td> <td>66.4</td> <td>79.3</td> </tr> <tr> <td>cloze + bilm</td> <td>52.6</td> <td>93.2</td> <td>88.9</td> <td>87.9</td> <td>87.2</td> <td>82.1</td> <td>86.1</td> <td>65.5</td> <td>80.4</td> </tr> </tbody></table>
Table 5
table_5
D19-1539
7
emnlp2019
Table 5 shows that the cloze loss performs significantly better than the bilm loss and that combining the two loss types does not improve over the cloze loss by itself. We conjecture that individual left and right context prediction tasks are too different from center word prediction and that their learning signals are not complementary enough.
[1, 2]
['Table 5 shows that the cloze loss performs significantly better than the bilm loss and that combining the two loss types does not improve over the cloze loss by itself.', 'We conjecture that individual left and right context prediction tasks are too different from center word prediction and that their learning signals are not complementary enough.']
[['cloze', 'bilm', 'cloze + bilm'], None]
1
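The GLUE record above reports CoLA in Matthews correlation (mcc) alongside accuracy and Spearman correlation (scc). For reference, a small illustrative implementation of the Matthews correlation coefficient from binary confusion counts (a sketch, not tied to the dataset's evaluation tooling):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Conventionally defined as 0 when any marginal count is zero."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance-level) to +1 (perfect prediction), which is why it is preferred over accuracy on the heavily class-imbalanced CoLA task.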
D19-1540table_2
Results on TrecQA, TwitterURL, and Quora. The best scores except for BERT are bolded. In these experiments, all our approaches use the deep encoder in Sec. 2.1. RM and SM denote that only relevance and semantic matching signals are used, respectively. HCAN denotes the complete HCAN model.
3
[['Model', 'Baseline', 'InferSent'], ['Model', 'Baseline', 'DecAtt'], ['Model', 'Baseline', 'ESIMseq'], ['Model', 'Baseline', 'ESIMtree'], ['Model', 'Baseline', 'ESIMseq+tree'], ['Model', 'Baseline', 'PWIM'], ['Model', 'State-of-the-Art Models', 'Rao et al. (2016)'], ['Model', 'State-of-the-Art Models', 'Gong et al. (2018)'], ['Model', 'State-of-the-Art Models', 'BERT'], ['Model', 'Our Approach', 'RM'], ['Model', 'Our Approach', 'SM'], ['Model', 'Our Approach', 'HCAN']]
2
[['TrecQA', 'MAP'], ['TrecQA', 'MRR'], ['TwitterURL', 'macro-F1'], ['Quora', 'Acc']]
[['0.521', '0.559', '0.797', '0.866'], ['0.660', '0.712', '0.785', '0.845'], ['0.771', '0.795', '0.822', '0.850'], ['0.698', '0.734', '-', '0.755'], ['0.749', '0.768', '-', '0.854'], ['0.739', '0.795', '0.809', '0.834'], ['0.780', '0.834', '-', '-'], ['-', '-', '-', '0.891'], ['0.838', '0.887', '0.852', '0.892'], ['0.756', '0.812', '0.790', '0.842'], ['0.663', '0.725', '0.708', '0.817'], ['0.774', '0.843', '0.817', '0.853']]
column
['MAP', 'MRR', 'macro-F1', 'Acc']
['Our Approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TrecQA || MAP</th> <th>TrecQA || MRR</th> <th>TwitterURL || macro-F1</th> <th>Quora || Acc</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline || InferSent</td> <td>0.521</td> <td>0.559</td> <td>0.797</td> <td>0.866</td> </tr> <tr> <td>Model || Baseline || DecAtt</td> <td>0.660</td> <td>0.712</td> <td>0.785</td> <td>0.845</td> </tr> <tr> <td>Model || Baseline || ESIMseq</td> <td>0.771</td> <td>0.795</td> <td>0.822</td> <td>0.850</td> </tr> <tr> <td>Model || Baseline || ESIMtree</td> <td>0.698</td> <td>0.734</td> <td>-</td> <td>0.755</td> </tr> <tr> <td>Model || Baseline || ESIMseq+tree</td> <td>0.749</td> <td>0.768</td> <td>-</td> <td>0.854</td> </tr> <tr> <td>Model || Baseline || PWIM</td> <td>0.739</td> <td>0.795</td> <td>0.809</td> <td>0.834</td> </tr> <tr> <td>Model || State-of-the-Art Models || Rao et al. (2016)</td> <td>0.780</td> <td>0.834</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || State-of-the-Art Models || Gong et al. (2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>0.891</td> </tr> <tr> <td>Model || State-of-the-Art Models || BERT</td> <td>0.838</td> <td>0.887</td> <td>0.852</td> <td>0.892</td> </tr> <tr> <td>Model || Our Approach || RM</td> <td>0.756</td> <td>0.812</td> <td>0.790</td> <td>0.842</td> </tr> <tr> <td>Model || Our Approach || SM</td> <td>0.663</td> <td>0.725</td> <td>0.708</td> <td>0.817</td> </tr> <tr> <td>Model || Our Approach || HCAN</td> <td>0.774</td> <td>0.843</td> <td>0.817</td> <td>0.853</td> </tr> </tbody></table>
Table 2
table_2
D19-1540
6
emnlp2019
4 Results. Our main results on the TrecQA, TwitterURL, and Quora datasets are shown in Table 2 and results on TREC Microblog 2013–2014 are shown in Table 3. The best numbers for each dataset (besides BERT) are bolded. We compare to three variants of our HCAN model: (1) only relevance matching signals (RM), (2) only semantic matching signals (SM), and (3) the complete model (HCAN). In these experiments, we use the deep encoder. From Table 2, we can see that on all three datasets, relevance matching (RM) achieves significantly higher effectiveness than semantic matching (SM). It beats other competitive baselines (InferSent, DecAtt and ESIM) by a large margin on the TrecQA dataset, and is still comparable to those baselines on TwitterURL and Quora. This finding suggests that soft term matching signals alone are fairly effective for many textual similarity modeling tasks. However, SM performs much worse on TrecQA and TwitterURL, while the gap between SM and RM is reduced on Quora. By combining SM and RM signals, we observe consistent effectiveness gains in HCAN across all three datasets, establishing new state-of-the-art (non-BERT) results on TrecQA.
[2, 1, 2, 2, 2, 1, 1, 1, 1, 1]
['4 Results.', 'Our main results on the TrecQA, TwitterURL, and Quora datasets are shown in Table 2 and results on TREC Microblog 2013–2014 are shown in Table 3.', 'The best numbers for each dataset (besides BERT) are bolded.', 'We compare to three variants of our HCAN model: (1) only relevance matching signals (RM), (2) only semantic matching signals (SM), and (3) the complete model (HCAN).', 'In these experiments, we use the deep encoder.', 'From Table 2, we can see that on all three datasets, relevance matching (RM) achieves significantly higher effectiveness than semantic matching (SM).', 'It beats other competitive baselines (InferSent, DecAtt and ESIM) by a large margin on the TrecQA dataset, and is still comparable to those baselines on TwitterURL and Quora.', 'This finding suggests that soft term matching signals alone are fairly effective for many textual similarity modeling tasks.', 'However, SM performs much worse on TrecQA and TwitterURL, while the gap between SM and RM is reduced on Quora.', 'By combining SM and RM signals, we observe consistent effectiveness gains in HCAN across all three datasets, establishing new state-of-the-art (non-BERT) results on TrecQA.']
[None, None, None, None, None, ['TrecQA', 'TwitterURL', 'Quora', 'RM', 'SM'], ['Our Approach', 'InferSent', 'DecAtt', 'TwitterURL', 'Quora'], None, ['SM', 'TrecQA', 'TwitterURL', 'RM', 'Quora'], ['SM', 'RM', 'HCAN', 'TrecQA', 'TwitterURL', 'Quora']]
1
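The answer-selection record above reports MAP and MRR on TrecQA. As an illustrative aside, MRR averages the reciprocal rank of the first relevant result per query; the sketch below assumes binary relevance labels in ranked order and is not the paper's evaluation script:

```python
def mean_reciprocal_rank(rankings):
    """rankings: list of queries, each a list of 0/1 relevance labels in
    ranked order. MRR averages 1/(rank of first relevant result),
    contributing 0 for queries with no relevant result."""
    total = 0.0
    for labels in rankings:
        for pos, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / pos
                break
    return total / len(rankings)
```

For example, first-relevant ranks of 2, 1, and 3 over three queries give MRR = (1/2 + 1 + 1/3) / 3 = 11/18.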
D19-1541table_1
Experimental results of syntax-aware methods we compare on CPB1.0 dataset.
2
[['Methods', 'Baseline'], ['Methods', 'Baseline + Dep (Tree-GRU)'], ['Methods', 'Baseline + Dep (FIR)'], ['Methods', 'Baseline + Dep (HPS)'], ['Methods', 'Baseline + Dep (IIR)']]
2
[['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']]
[['81.52', '82.17', '81.85', '80.95', '80.01', '80.48'], ['82.35', '80.24', '81.28', '82.1', '78.11', '80.06'], ['83.56', '83.05', '83.3', '83.38', '81.93', '82.65'], ['82.58', '84.15', '83.36', '83.22', '83.81', '83.51'], ['83.12', '83.66', '83.39', '84.49', '83.34', '83.91']]
column
['P', 'R', 'F1', 'P', 'R', 'F1']
['Baseline + Dep (IIR)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Baseline</td> <td>81.52</td> <td>82.17</td> <td>81.85</td> <td>80.95</td> <td>80.01</td> <td>80.48</td> </tr> <tr> <td>Methods || Baseline + Dep (Tree-GRU)</td> <td>82.35</td> <td>80.24</td> <td>81.28</td> <td>82.1</td> <td>78.11</td> <td>80.06</td> </tr> <tr> <td>Methods || Baseline + Dep (FIR)</td> <td>83.56</td> <td>83.05</td> <td>83.3</td> <td>83.38</td> <td>81.93</td> <td>82.65</td> </tr> <tr> <td>Methods || Baseline + Dep (HPS)</td> <td>82.58</td> <td>84.15</td> <td>83.36</td> <td>83.22</td> <td>83.81</td> <td>83.51</td> </tr> <tr> <td>Methods || Baseline + Dep (IIR)</td> <td>83.12</td> <td>83.66</td> <td>83.39</td> <td>84.49</td> <td>83.34</td> <td>83.91</td> </tr> </tbody></table>
Table 1
table_1
D19-1541
6
emnlp2019
4.3 Main Results. Results of Syntax-aware Methods. Table 1 shows the results of these syntax-aware methods on the CPB1.0 dataset. First, the first line shows the results of our baseline model, which only employs the word embeddings and char representations as the inputs of the basic SRL model. Second, the Tree-GRU method only achieves 80.06 F1 score on the test data, which did not even catch up with the baseline model. We think this is caused by the relatively low accuracy in Chinese dependency parsing. Third, the FIR approach outperforms the baseline by 2.17 F1 score on the test data, demonstrating the effectiveness of introducing fixed implicit syntactic representations. Fourth, the HPS strategy achieves more significant performance by 83.51 F1 score. Finally, our proposed framework achieves the best performance of 83.91 F1 score among these methods, outperforming the baseline by 3.43 F1 score. All the improvements are statistically significant (p < 0.0001). From these experimental results, we can conclude that: 1) the quality of syntax has a crucial impact on the methods which depend on the systematic dependency trees, like Tree-GRU, 2) the implicit syntactic features have the potential to improve the down-stream NLP tasks, and 3) learning the syntactic features with the main task performs better than extracting them from a fixed dependency parser.
[2, 2, 1, 1, 1, 2, 1, 1, 1, 2, 2]
['4.3 Main Results.', 'Results of Syntax-aware Methods.', 'Table 1 shows the results of these syntax-aware methods on the CPB1.0 dataset.', 'First, the first line shows the results of our baseline model, which only employs the word embeddings and char representations as the inputs of the basic SRL model.', 'Second, the Tree-GRU method only achieves 80.06 F1 score on the test data, which did not even catch up with the baseline model.', 'We think this is caused by the relatively low accuracy in Chinese dependency parsing.', 'Third, the FIR approach outperforms the baseline by 2.17 F1 score on the test data, demonstrating the effectiveness of introducing fixed implicit syntactic representations.', 'Fourth, the HPS strategy achieves more significant performance by 83.51 F1 score.', 'Finally, our proposed framework achieves the best performance of 83.91 F1 score among these methods, outperforming the baseline by 3.43 F1 score.', 'All the improvements are statistically significant (p < 0.0001).', 'From these experimental results, we can conclude that: 1) the quality of syntax has a crucial impact on the methods which depend on the systematic dependency trees, like Tree-GRU, 2) the implicit syntactic features have the potential to improve the down-stream NLP tasks, and 3) learning the syntactic features with the main task performs better than extracting them from a fixed dependency parser.']
[None, None, None, ['Baseline'], ['Baseline + Dep (Tree-GRU)', 'Baseline', 'F1', 'Test'], None, ['Baseline + Dep (FIR)', 'Baseline', 'F1', 'Test'], ['Baseline + Dep (HPS)', 'F1', 'Test'], ['Baseline + Dep (IIR)', 'F1', 'Test', 'Baseline'], None, None]
1
D19-1541table_2
Results and comparison with previous works on CPB1.0 test set.
3
[['Methods', 'Previous Works', 'Sun et al. (2009)'], ['Methods', 'Previous Works', 'Wang et al. (2015b)'], ['Methods', 'Previous Works', 'Sha et al. (2016)'], ['Methods', 'Previous Works', 'Xia et al. (2017)'], ['Methods', 'Ours', 'Baseline'], ['Methods', 'Ours', 'Baseline + Dep (HPS)'], ['Methods', 'Ours', 'Baseline + Dep (IIR)'], ['Methods', 'Ours', 'Baseline + BERT'], ['Methods', 'Ours', 'Baseline + BERT + Dep (HPS)'], ['Methods', 'Ours', 'Baseline + BERT + Dep (IIR)']]
1
[['F1']]
[['74.12'], ['77.59'], ['77.69'], ['79.67'], ['80.48'], ['83.51'], ['83.91'], ['86.62'], ['87.03'], ['87.54']]
column
['F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Previous Works || Sun et al. (2009)</td> <td>74.12</td> </tr> <tr> <td>Methods || Previous Works || Wang et al. (2015b)</td> <td>77.59</td> </tr> <tr> <td>Methods || Previous Works || Sha et al. (2016)</td> <td>77.69</td> </tr> <tr> <td>Methods || Previous Works || Xia et al. (2017)</td> <td>79.67</td> </tr> <tr> <td>Methods || Ours || Baseline</td> <td>80.48</td> </tr> <tr> <td>Methods || Ours || Baseline + Dep (HPS)</td> <td>83.51</td> </tr> <tr> <td>Methods || Ours || Baseline + Dep (IIR)</td> <td>83.91</td> </tr> <tr> <td>Methods || Ours || Baseline + BERT</td> <td>86.62</td> </tr> <tr> <td>Methods || Ours || Baseline + BERT + Dep (HPS)</td> <td>87.03</td> </tr> <tr> <td>Methods || Ours || Baseline + BERT + Dep (IIR)</td> <td>87.54</td> </tr> </tbody></table>
Table 2
table_2
D19-1541
6
emnlp2019
Results on CPB1.0. Table 2 shows the results of our baseline model and proposed framework using external dependency trees on CPB1.0, as well as the corresponding results when adding BERT representations. It is clear that adding dependency trees into the baseline SRL model effectively improves the performance (p < 0.0001), regardless of whether BERT representations are employed. In particular, our proposed framework (IIR) consistently outperforms the hard parameter sharing strategy. So we only report the results of our proposed framework in later experiments. Our final results outperform the best previous model (Xia et al., 2017) by 7.87 and 4.24 F1 scores, with and without BERT representations, respectively.
[2, 1, 1, 1, 2, 1]
['Results on CPB1.0.', 'Table 2 shows the results of our baseline model and proposed framework using external dependency trees on CPB1.0, as well as the corresponding results when adding BERT representations.', 'It is clear that adding dependency trees into the baseline SRL model effectively improves the performance (p < 0.0001), regardless of whether BERT representations are employed.', 'In particular, our proposed framework (IIR) consistently outperforms the hard parameter sharing strategy.', 'So we only report the results of our proposed framework in later experiments.', 'Our final results outperform the best previous model (Xia et al., 2017) by 7.87 and 4.24 F1 scores, with and without BERT representations, respectively.']
[None, None, ['Baseline + Dep (HPS)'], ['Baseline + Dep (IIR)', 'Baseline + BERT + Dep (IIR)', 'Baseline + Dep (HPS)', 'Baseline + BERT + Dep (HPS)'], None, ['Ours', 'Xia et al. (2017)']]
1
D19-1541table_4
Results and comparison with previous works on CoNLL-2009 Chinese test set.
3
[['Methods', 'Previous Works', 'Roth and Lapata (2016)'], ['Methods', 'Previous Works', 'Marcheggiani et al. (2017)'], ['Methods', 'Previous Works', 'He et al. (2018b)'], ['Methods', 'Previous Works', 'Cai et al. (2018)'], ['Methods', 'Ours', 'Baseline'], ['Methods', 'Ours', 'Baseline + Dep (IIR)'], ['Methods', 'Ours', 'Baseline + BERT'], ['Methods', 'Ours', 'Baseline + BERT + Dep (IIR)']]
1
[['P'], ['R'], ['F1']]
[['83.2', '75.9', '79.4'], ['84.6', '80.4', '82.5'], ['84.2', '81.5', '82.8'], ['84.7', '84.0', '84.3'], ['83.7', '84.8', '84.2'], ['84.6', '85.7', '85.1'], ['87.8', '89.2', '88.5'], ['88.0', '89.1', '88.5']]
column
['P', 'R', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Previous Works || Roth and Lapata (2016)</td> <td>83.2</td> <td>75.9</td> <td>79.4</td> </tr> <tr> <td>Methods || Previous Works || Marcheggiani et al. (2017)</td> <td>84.6</td> <td>80.4</td> <td>82.5</td> </tr> <tr> <td>Methods || Previous Works || He et al. (2018b)</td> <td>84.2</td> <td>81.5</td> <td>82.8</td> </tr> <tr> <td>Methods || Previous Works || Cai et al. (2018)</td> <td>84.7</td> <td>84.0</td> <td>84.3</td> </tr> <tr> <td>Methods || Ours || Baseline</td> <td>83.7</td> <td>84.8</td> <td>84.2</td> </tr> <tr> <td>Methods || Ours || Baseline + Dep (IIR)</td> <td>84.6</td> <td>85.7</td> <td>85.1</td> </tr> <tr> <td>Methods || Ours || Baseline + BERT</td> <td>87.8</td> <td>89.2</td> <td>88.5</td> </tr> <tr> <td>Methods || Ours || Baseline + BERT + Dep (IIR)</td> <td>88.0</td> <td>89.1</td> <td>88.5</td> </tr> </tbody></table>
Table 4
table_4
D19-1541
7
emnlp2019
Results on CoNLL-2009. Table 4 shows the results of our framework and a comparison with previous works on the CoNLL-2009 Chinese test data. Our baseline achieves nearly the same performance as Cai et al. (2018), which is an end-to-end neural model that consists of a BiLSTM encoder and a biaffine scorer. Our proposed framework outperforms the best reported result (Cai et al., 2018) by 0.8 F1 score and brings a significant improvement (p < 0.0001) of 0.9 F1 score over our baseline model. Our experimental result is boosted to 88.5 F1 score when the framework is enhanced with BERT representations. However, compared with the results in the settings without BERT, the improvement of the proposed framework is fairly small (88.53 - 88.47 = 0.06 F1 score, p > 0.1), which we will discuss in Section 5.3.
[2, 1, 1, 1, 1, 1]
['Results on CoNLL-2009.', 'Table 4 shows the results of our framework and a comparison with previous works on the CoNLL-2009 Chinese test data.', 'Our baseline achieves nearly the same performance as Cai et al. (2018), which is an end-to-end neural model that consists of a BiLSTM encoder and a biaffine scorer.', 'Our proposed framework outperforms the best reported result (Cai et al., 2018) by 0.8 F1 score and brings a significant improvement (p < 0.0001) of 0.9 F1 score over our baseline model.', 'Our experimental result is boosted to 88.5 F1 score when the framework is enhanced with BERT representations.', 'However, compared with the results in the settings without BERT, the improvement of the proposed framework is fairly small (88.53 - 88.47 = 0.06 F1 score, p > 0.1), which we will discuss in Section 5.3.']
[None, ['Ours', 'Previous Works'], ['Baseline', 'Cai et al. (2018)'], ['F1', 'Cai et al. (2018)', 'Baseline + Dep (IIR)'], ['F1', 'Baseline + BERT + Dep (IIR)'], ['F1', 'P', 'Baseline + BERT', 'Baseline + BERT + Dep (IIR)']]
1
D19-1542table_3
GLUE test results scored by the GLUE evaluation server. The best scores are represented in bold and scores higher than those of BERT-base are underlined.
2
[['Model', 'BERT-base'], ['Model', 'BERT-large'], ['Model', 'Transfer Fine-Tuning']]
3
[['Task', 'Semantic Equivalence', 'MRPC'], ['Task', 'Semantic Equivalence', 'STS-B'], ['Task', 'Semantic Equivalence', 'QQP'], ['Task', 'NLI', 'MNLI (m/mm)'], ['Task', 'NLI', 'RTE'], ['Task', 'NLI', 'QNLI'], ['Task', 'Single-Sent.', 'SST'], ['Task', 'Single-Sent.', 'CoLA']]
[['88.3', '84.7', '71.2', '84.3/83.0', '59.8', '89.1', '93.3', '52.7'], ['88.6', '86.0', '72.1', '86.2/85.5', '65.5', '92.7', '94.1', '55.7'], ['89.2', '87.4', '71.2', '83.9/83.1', '64.8', '89.3', '93.1', '47.2']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Transfer Fine-Tuning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Task || Semantic Equivalence || MRPC</th> <th>Task || Semantic Equivalence || STS-B</th> <th>Task || Semantic Equivalence || QQP</th> <th>Task || NLI || MNLI (m/mm)</th> <th>Task || NLI || RTE</th> <th>Task || NLI || QNLI</th> <th>Task || Single-Sent. || SST</th> <th>Task || Single-Sent. || CoLA</th> </tr> </thead> <tbody> <tr> <td>Model || BERT-base</td> <td>88.3</td> <td>84.7</td> <td>71.2</td> <td>84.3/83.0</td> <td>59.8</td> <td>89.1</td> <td>93.3</td> <td>52.7</td> </tr> <tr> <td>Model || BERT-large</td> <td>88.6</td> <td>86.0</td> <td>72.1</td> <td>86.2/85.5</td> <td>65.5</td> <td>92.7</td> <td>94.1</td> <td>55.7</td> </tr> <tr> <td>Model || Transfer Fine-Tuning</td> <td>89.2</td> <td>87.4</td> <td>71.2</td> <td>83.9/83.1</td> <td>64.8</td> <td>89.3</td> <td>93.1</td> <td>47.2</td> </tr> </tbody></table>
Table 3
table_3
D19-1542
7
emnlp2019
6.1 Effect on Semantic Equivalence Assessment Tasks. Table 3 shows fine-tuning results on GLUE; our model, denoted as Transfer Fine-Tuning, is compared against BERT-base and BERT-large. The first set of columns shows the results of semantic equivalence assessment tasks. Our model outperformed BERT-base on MRPC (+0.9 points) and STS-B (+2.7 points). Furthermore, it outperformed even BERT-large by 0.6 points on MRPC and by 1.4 points on STS-B, despite BERT-large having 3.1 times more parameters than our model. Devlin et al. (2019) described that the next-sentence prediction task in BERT’s pre-training aims to train a model that understands sentence relations. Herein, we argue that such relations are effective at generating representations broadly transferable to various NLP tasks, but are too generic to generate representations for semantic equivalence assessment tasks. Our method allows the model to learn semantic relations between sentences and phrases that are directly useful for this class of tasks. These results support hypothesis H1, indicating that our approach is more effective than blindly enlarging the model size. A smaller model size is desirable for practical applications. We have also applied our method to the BERT-large model, but its performance was not improved enough to warrant the larger model size. Further investigation regarding pre-trained model sizes is our future work.
[2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2]
['6.1 Effect on Semantic Equivalence Assessment Tasks.', 'Table 3 shows fine-tuning results on GLUE; our model, denoted as Transfer Fine-Tuning, is compared against BERT-base and BERT-large.', 'The first set of columns shows the results of semantic equivalence assessment tasks.', 'Our model outperformed BERT-base on MRPC (+0.9 points) and STS-B (+2.7 points).', 'Furthermore, it outperformed even BERT-large by 0.6 points on MRPC and by 1.4 points on STS-B, despite BERT-large having 3.1 times more parameters than our model.', 'Devlin et al. (2019) described that the next-sentence prediction task in BERT’s pre-training aims to train a model that understands sentence relations.', 'Herein, we argue that such relations are effective at generating representations broadly transferable to various NLP tasks, but are too generic to generate representations for semantic equivalence assessment tasks.', 'Our method allows the model to learn semantic relations between sentences and phrases that are directly useful for this class of tasks.', 'These results support hypothesis H1, indicating that our approach is more effective than blindly enlarging the model size.', 'A smaller model size is desirable for practical applications.', 'We have also applied our method to the BERT-large model, but its performance was not improved enough to warrant the larger model size.', 'Further investigation regarding pre-trained model sizes is our future work.']
[None, ['Transfer Fine-Tuning', 'BERT-base', 'BERT-large'], None, ['Transfer Fine-Tuning', 'BERT-base', 'MRPC', 'STS-B'], ['Transfer Fine-Tuning', 'BERT-large', 'MRPC', 'STS-B'], None, None, ['Transfer Fine-Tuning'], None, None, ['Transfer Fine-Tuning', 'BERT-large'], None]
1