table_id_paper: string, length 15–15
caption: string, length 14–1.88k
row_header_level: int32, range 1–9
row_headers: large_string, length 15–1.75k
column_header_level: int32, range 1–6
column_headers: large_string, length 7–1.01k
contents: large_string, length 18–2.36k
metrics_loc: string, 2 classes
metrics_type: large_string, length 5–532
target_entity: large_string, length 2–330
table_html_clean: large_string, length 274–7.88k
table_name: string, 9 classes
table_id: string, 9 classes
paper_id: string, length 8–8
page_no: int32, range 1–13
dir: string, 8 classes
description: large_string, length 103–3.8k
class_sentence: string, length 3–120
sentences: large_string, length 110–3.92k
header_mention: string, length 12–1.8k
valid: int32, range 0–1
D18-1486table_2
Single-task results. Row “Orphan category” denotes a variant of CapsNet-2 without orphan category
1
[['LSTM'], ['BiLSTM'], ['LR-LSTM'], ['VD-CNN'], ['DCNN'], ['CNN-MC'], ['CapsNet-1'], ['CapsNet-2'], ['- Orphan']]
2
[['Dataset', 'MR'], ['Dataset', 'SST-1'], ['Dataset', 'SST-2'], ['Dataset', 'Subj'], ['Dataset', 'TREC'], ['Dataset', 'AG’s']]
[['75.9', '45.9', '80.6', '89.3', '86.8', '86.1'], ['79.3', '46.2', '83.2', '90.5', '89.6', '88.2'], ['81.5', '48.2', '87.5', '89.9', '-', '-'], ['-', '-', '-', '-', '-', '91.3'], ['-', '48.5', '86.8', '-', '93.0', '-'], ['81.1', '47.4', '88.1', '93.2', '92.2', '-'], ['81.5', '48.1', '86.4', '93.3', '91.8', '91.1'], ['82.4', '48.7', '87.8', '93.6', '92.9', '92.3'], ['81.9', '48.3', '87.2', '93.4', '92.6', '91.7']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['CapsNet-1', 'CapsNet-2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset || MR</th> <th>Dataset || SST-1</th> <th>Dataset || SST-2</th> <th>Dataset || Subj</th> <th>Dataset || TREC</th> <th>Dataset || AG’s</th> </tr> </thead> <tbody> <tr> <td>LSTM</td> <td>75.9</td> <td>45.9</td> <td>80.6</td> <td>89.3</td> <td>86.8</td> <td>86.1</td> </tr> <tr> <td>BiLSTM</td> <td>79.3</td> <td>46.2</td> <td>83.2</td> <td>90.5</td> <td>89.6</td> <td>88.2</td> </tr> <tr> <td>LR-LSTM</td> <td>81.5</td> <td>48.2</td> <td>87.5</td> <td>89.9</td> <td>-</td> <td>-</td> </tr> <tr> <td>VD-CNN</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>91.3</td> </tr> <tr> <td>DCNN</td> <td>-</td> <td>48.5</td> <td>86.8</td> <td>-</td> <td>93.0</td> <td>-</td> </tr> <tr> <td>CNN-MC</td> <td>81.1</td> <td>47.4</td> <td>88.1</td> <td>93.2</td> <td>92.2</td> <td>-</td> </tr> <tr> <td>CapsNet-1</td> <td>81.5</td> <td>48.1</td> <td>86.4</td> <td>93.3</td> <td>91.8</td> <td>91.1</td> </tr> <tr> <td>CapsNet-2</td> <td>82.4</td> <td>48.7</td> <td>87.8</td> <td>93.6</td> <td>92.9</td> <td>92.3</td> </tr> <tr> <td>- Orphan</td> <td>81.9</td> <td>48.3</td> <td>87.2</td> <td>93.4</td> <td>92.6</td> <td>91.7</td> </tr> </tbody></table>
Table 2
table_2
D18-1486
6
emnlp2018
4.3 Single-Task Learning Results. We first test our approach on six datasets for text classification under the single-task scheme. As Table 2 shows, our single-task network enhanced by capsules is already a strong model. CapsNet-1, which has one kernel size, obtains the best accuracy on 2 out of 6 datasets and achieves competitive results on the others. CapsNet-2 with multiple kernel sizes further improves the performance and obtains the best accuracy on 4 datasets. This shows our capsule networks are effective for text. In particular, our capsule network outperforms conventional CNNs such as DCNN, CNN-MC and VD-CNN by a large margin (1.1%, 0.7% and 1.0% on average, respectively), which shows the advantages of capsule networks over conventional CNNs in clustering features and leveraging position information. Ablation Study on Orphan Category. The orphan category in the class capsule layer helps collect noise capsules that contain ‘background’ information such as stop words, punctuation or unrelated words. We conduct an ablation experiment on the orphan category, and the result (Table 2) shows that the network with the orphan category performs better than the one without it by 0.4%. This demonstrates the effectiveness of the orphan category.
[2, 2, 1, 1, 1, 2, 1, 2, 2, 1, 2]
['4.3 Single-Task Learning Results.', 'We first test our approach on six datasets for text classification under the single-task scheme.', 'As Table 2 shows, our single-task network enhanced by capsules is already a strong model.', 'CapsNet-1, which has one kernel size, obtains the best accuracy on 2 out of 6 datasets and achieves competitive results on the others.', 'CapsNet-2 with multiple kernel sizes further improves the performance and obtains the best accuracy on 4 datasets.', 'This shows our capsule networks are effective for text.', 'In particular, our capsule network outperforms conventional CNNs such as DCNN, CNN-MC and VD-CNN by a large margin (1.1%, 0.7% and 1.0% on average, respectively), which shows the advantages of capsule networks over conventional CNNs in clustering features and leveraging position information.', 'Ablation Study on Orphan Category.', 'The orphan category in the class capsule layer helps collect noise capsules that contain ‘background’ information such as stop words, punctuation or unrelated words.', 'We conduct an ablation experiment on the orphan category, and the result (Table 2) shows that the network with the orphan category performs better than the one without it by 0.4%.', 'This demonstrates the effectiveness of the orphan category.']
[None, ['CapsNet-1', 'CapsNet-2', 'MR', 'SST-1', 'SST-2', 'Subj', 'TREC', 'AG’s'], ['CapsNet-1', 'CapsNet-2'], ['CapsNet-1', 'MR', 'SST-1', 'SST-2', 'Subj', 'TREC', 'AG’s'], ['CapsNet-2', 'MR', 'SST-1', 'Subj', 'AG’s'], ['CapsNet-1', 'CapsNet-2'], ['CapsNet-2', 'VD-CNN', 'DCNN', 'CNN-MC'], None, None, ['CapsNet-2', '- Orphan'], ['- Orphan']]
1
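The 0.4% ablation gap cited in this record's description can be re-derived from its `contents` grid. A minimal sanity-check sketch in plain Python (row positions assumed from the `row_headers` field; the values are copied verbatim from the record):

```python
# Per-dataset accuracies from the record's `contents` field:
# row 7 is CapsNet-2 (full model), row 8 is "- Orphan" (ablation).
capsnet2 = [82.4, 48.7, 87.8, 93.6, 92.9, 92.3]
no_orphan = [81.9, 48.3, 87.2, 93.4, 92.6, 91.7]

# Average improvement of the full model over the orphan-category ablation.
diffs = [a - b for a, b in zip(capsnet2, no_orphan)]
avg_gain = sum(diffs) / len(diffs)
print(round(avg_gain, 1))  # 0.4, matching the margin reported in the description
```

The per-dataset gains range from 0.2 to 0.6 points, so the 0.4% figure is an average, not a uniform improvement.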
D18-1490table_5
Comparison of the ACNN model to the state-of-the-art methods on the Switchboard test set. The other models listed have used richer inputs and/or rely on the output of other systems, as well as pattern match features, as indicated by the following symbols: ⋄ dependency parser, † hand-crafted constraints/rules, ⋆ prosodic cues, ◦ tree adjoining grammar transducer, △ refined/external language models and ⊗ partial words. P = precision, R = recall and F = f-score.
2
[['model', 'Yoshikawa et al.(2016)'], ['model', 'Georgila et al. (2010)'], ['model', 'Tran et al. (2018)'], ['model', 'Kahn et al. (2005)'], ['model', 'Johnson et al. (2004)'], ['model', 'Georgila (2009)'], ['model', 'Johnson et al. (2004)'], ['model', 'Rasooli et al. (2013)'], ['model', 'Zwarts et al. (2011)'], ['model', 'Qian et al. (2013)'], ['model', 'Honnibal et al. (2014)'], ['model', 'ACNN'], ['model', 'Ferguson et al. (2015)'], ['model', 'Zayats et al. (2016)'], ['model', 'Jamshid Lou et al. (2017)']]
1
[['P'], ['R'], ['F']]
[['67.9', '57.9', '62.5'], ['77.4', '64.6', '70.4'], ['-', '-', '77.5'], ['-', '-', '78.2'], ['82.0', '77.8', '79.7'], ['-', '-', '80.1'], ['-', '-', '81.0'], ['85.1', '77.9', '81.4'], ['-', '-', '83.8'], ['-', '-', '84.1'], ['-', '-', '84.1'], ['89.5', '80.0', '84.5'], ['90.0', '81.2', '85.4'], ['91.8', '80.6', '85.9'], ['-', '-', '86.8']]
column
['P', 'R', 'F']
['ACNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>model || Yoshikawa et al.(2016)</td> <td>67.9</td> <td>57.9</td> <td>62.5</td> </tr> <tr> <td>model || Georgila et al. (2010)</td> <td>77.4</td> <td>64.6</td> <td>70.4</td> </tr> <tr> <td>model || Tran et al. (2018)</td> <td>-</td> <td>-</td> <td>77.5</td> </tr> <tr> <td>model || Kahn et al. (2005)</td> <td>-</td> <td>-</td> <td>78.2</td> </tr> <tr> <td>model || Johnson et al. (2004)</td> <td>82.0</td> <td>77.8</td> <td>79.7</td> </tr> <tr> <td>model || Georgila (2009)</td> <td>-</td> <td>-</td> <td>80.1</td> </tr> <tr> <td>model || Johnson et al. (2004)</td> <td>-</td> <td>-</td> <td>81.0</td> </tr> <tr> <td>model || Rasooli et al. (2013)</td> <td>85.1</td> <td>77.9</td> <td>81.4</td> </tr> <tr> <td>model || Zwarts et al. (2011)</td> <td>-</td> <td>-</td> <td>83.8</td> </tr> <tr> <td>model || Qian et al. (2013)</td> <td>-</td> <td>-</td> <td>84.1</td> </tr> <tr> <td>model || Honnibal et al. (2014)</td> <td>-</td> <td>-</td> <td>84.1</td> </tr> <tr> <td>model || ACNN</td> <td>89.5</td> <td>80.0</td> <td>84.5</td> </tr> <tr> <td>model || Ferguson et al. (2015)</td> <td>90.0</td> <td>81.2</td> <td>85.4</td> </tr> <tr> <td>model || Zayats et al. (2016)</td> <td>91.8</td> <td>80.6</td> <td>85.9</td> </tr> <tr> <td>model || Jamshid Lou et al. (2017)</td> <td>-</td> <td>-</td> <td>86.8</td> </tr> </tbody></table>
Table 5
table_5
D18-1490
7
emnlp2018
Finally, we compare the ACNN model to state-of-the-art methods from the literature, evaluated on the Switchboard test set. Table 5 shows that the ACNN model is competitive with recent models from the literature. The three models that score more highly than the ACNN all rely on handcrafted features, additional information sources such as partial-word features (which would not be available in a realistic ASR application), or external resources such as dependency parsers and language models. The ACNN, on the other hand, only uses whole-word inputs and learns the “rough copy” dependencies between words without requiring any manual feature engineering.
[2, 1, 2, 2]
['Finally, we compare the ACNN model to state-of-the-art methods from the literature, evaluated on the Switchboard test set.', 'Table 5 shows that the ACNN model is competitive with recent models from the literature.', 'The three models that score more highly than the ACNN all rely on handcrafted features, additional information sources such as partial-word features (which would not be available in a realistic ASR application), or external resources such as dependency parsers and language models.', 'The ACNN, on the other hand, only uses whole-word inputs and learns the “rough copy” dependencies between words without requiring any manual feature engineering.']
[['ACNN'], ['ACNN'], ['Ferguson et al. (2015)', 'Zayats et al. (2016)', 'Jamshid Lou et al. (2017)'], ['ACNN']]
1
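The claim that exactly three systems outrank the ACNN can be checked directly against the F column of this record's `contents` (values copied from the record; row order follows the `row_headers` field):

```python
# F-scores in row order; the ACNN row has F = 84.5.
f_scores = [62.5, 70.4, 77.5, 78.2, 79.7, 80.1, 81.0, 81.4,
            83.8, 84.1, 84.1, 84.5, 85.4, 85.9, 86.8]
acnn_f = 84.5

# Systems with a strictly higher F than the ACNN.
better = [f for f in f_scores if f > acnn_f]
print(len(better))  # 3, matching "the three models that score more highly"
```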
D18-1494table_7
Word similarity results on ISEAR.
1
[['W2V'], ['siW2V'], ['SSPMI'], ['SLTM']]
1
[['MEN'], ['SimLex'], ['Rare']]
[['0.002', '-0.008', '-0.119'], ['0.002', '0.017', '0.062'], ['0.023', '0.028', '-0.004'], ['0.169', '0.037', '0.089']]
column
['similarity', 'similarity', 'similarity']
['SLTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MEN</th> <th>SimLex</th> <th>Rare</th> </tr> </thead> <tbody> <tr> <td>W2V</td> <td>0.002</td> <td>-0.008</td> <td>-0.119</td> </tr> <tr> <td>siW2V</td> <td>0.002</td> <td>0.017</td> <td>0.062</td> </tr> <tr> <td>SSPMI</td> <td>0.023</td> <td>0.028</td> <td>-0.004</td> </tr> <tr> <td>SLTM</td> <td>0.169</td> <td>0.037</td> <td>0.089</td> </tr> </tbody></table>
Table 7
table_7
D18-1494
8
emnlp2018
We train W2V, siW2V, and SSPMI on each corpus, setting the context window size to 5. Furthermore, the dimension of the word embeddings generated by all models is set to 50, following Lai et al. (2016). The word similarity values on ISEAR and YouTube are shown in Table 7 and Table 8 respectively, where the best results are highlighted in boldface. We can observe that SLTM outperforms the baselines in all cases. The results indicate that word embeddings learned from global label-specific topic information are better than those learned from local context information, without any external corpora.
[2, 2, 1, 1, 1]
['We train W2V, siW2V, and SSPMI on each corpus, setting the context window size to 5.', 'Furthermore, the dimension of the word embeddings generated by all models is set to 50, following Lai et al. (2016).', 'The word similarity values on ISEAR and YouTube are shown in Table 7 and Table 8 respectively, where the best results are highlighted in boldface.', 'We can observe that SLTM outperforms the baselines in all cases.', 'The results indicate that word embeddings learned from global label-specific topic information are better than those learned from local context information, without any external corpora.']
[['W2V', 'siW2V', 'SSPMI'], ['W2V', 'siW2V', 'SSPMI'], None, ['SLTM'], ['SLTM']]
1
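"SLTM outperforms baselines for all cases" is a column-wise claim over this record's `contents`, and is easy to verify mechanically (values copied from the record; model names taken from `row_headers`):

```python
# Similarity correlations per benchmark column (MEN, SimLex, Rare).
rows = {
    "W2V":   [0.002, -0.008, -0.119],
    "siW2V": [0.002, 0.017, 0.062],
    "SSPMI": [0.023, 0.028, -0.004],
    "SLTM":  [0.169, 0.037, 0.089],
}
# SLTM should hold the maximum in every column.
wins = all(rows["SLTM"][j] == max(r[j] for r in rows.values()) for j in range(3))
print(wins)  # True
```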
D18-1495table_3
Results for different sampling numbers in different setting for the two datasets. Score denotes the topic coherence score.
4
[['Datasets(# Topics)', '20 News (k=50)', '# Samples', '1'], ['Datasets(# Topics)', '20 News (k=50)', '# Samples', '3'], ['Datasets(# Topics)', '20 News (k=50)', '# Samples', '10'], ['Datasets(# Topics)', '20 News (k=100)', '# Samples', '1'], ['Datasets(# Topics)', '20 News (k=100)', '# Samples', '3'], ['Datasets(# Topics)', '20 News (k=100)', '# Samples', '10'], ['Datasets(# Topics)', 'All news (k=50)', '# Samples', '1'], ['Datasets(# Topics)', 'All news (k=50)', '# Samples', '3'], ['Datasets(# Topics)', 'All news (k=50)', '# Samples', '5'], ['Datasets(# Topics)', 'All news (k=100)', '# Samples', '1'], ['Datasets(# Topics)', 'All news (k=100)', '# Samples', '3'], ['Datasets(# Topics)', 'All news (k=100)', '# Samples', '5']]
1
[['Score']]
[['0.24'], ['0.28'], ['0.25'], ['0.21'], ['0.26'], ['0.25'], ['0.27'], ['0.22'], ['0.20'], ['0.26'], ['0.20'], ['0.17']]
column
['Score']
['# Samples']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>Datasets(# Topics) || 20 News (k=50) || # Samples || 1</td> <td>0.24</td> </tr> <tr> <td>Datasets(# Topics) || 20 News (k=50) || # Samples || 3</td> <td>0.28</td> </tr> <tr> <td>Datasets(# Topics) || 20 News (k=50) || # Samples || 10</td> <td>0.25</td> </tr> <tr> <td>Datasets(# Topics) || 20 News (k=100) || # Samples || 1</td> <td>0.21</td> </tr> <tr> <td>Datasets(# Topics) || 20 News (k=100) || # Samples || 3</td> <td>0.26</td> </tr> <tr> <td>Datasets(# Topics) || 20 News (k=100) || # Samples || 10</td> <td>0.25</td> </tr> <tr> <td>Datasets(# Topics) || All news (k=50) || # Samples || 1</td> <td>0.27</td> </tr> <tr> <td>Datasets(# Topics) || All news (k=50) || # Samples || 3</td> <td>0.22</td> </tr> <tr> <td>Datasets(# Topics) || All news (k=50) || # Samples || 5</td> <td>0.20</td> </tr> <tr> <td>Datasets(# Topics) || All news (k=100) || # Samples || 1</td> <td>0.26</td> </tr> <tr> <td>Datasets(# Topics) || All news (k=100) || # Samples || 3</td> <td>0.20</td> </tr> <tr> <td>Datasets(# Topics) || All news (k=100) || # Samples || 5</td> <td>0.17</td> </tr> </tbody></table>
Table 3
table_3
D18-1495
6
emnlp2018
Effect of the mini-corpus. We study the effect of our sampling strategy, which is discussed in Section 3.3. Table 3 shows the performance of our model with different sample sizes for a mini-corpus. For the 20 Newsgroups dataset, the best performance is achieved when the sample size is 3. When we do not use our sampling strategy (mini-corpus size of 1), the performance drops by a large margin. From Table 1, we see that the average document length in 20 Newsgroups is relatively short (88, compared with 302 in All News). Therefore, the 20 Newsgroups dataset may suffer from the sparsity problem. The experiment shows that our sampling strategy can help to overcome this problem. When the sample size increases further, the performance drops again. The biterm graph with a large sample size may suffer from the same problem as the original BTM (insufficient topic representation). Compared to the 20 Newsgroups dataset, documents in the All News dataset are longer and carry more topic information, so the best performance is achieved without sampling. We find that when the sample size is larger than an optimal value, the topic coherence starts to drop.
[2, 2, 1, 1, 1, 2, 2, 2, 1, 2, 1, 2]
['Effect of the mini-corpus.', 'We study the effect of our sampling strategy, which is discussed in Section 3.3.', 'Table 3 shows the performance of our model with different sample sizes for a mini-corpus.', 'For the 20 Newsgroups dataset, the best performance is achieved when the sample size is 3.', 'When we do not use our sampling strategy (mini-corpus size of 1), the performance drops by a large margin.', 'From Table 1, we see that the average document length in 20 Newsgroups is relatively short (88, compared with 302 in All News).', 'Therefore, the 20 Newsgroups dataset may suffer from the sparsity problem.', 'The experiment shows that our sampling strategy can help to overcome this problem.', 'When the sample size increases further, the performance drops again.', 'The biterm graph with a large sample size may suffer from the same problem as the original BTM (insufficient topic representation).', 'Compared to the 20 Newsgroups dataset, documents in the All News dataset are longer and carry more topic information, so the best performance is achieved without sampling.', 'We find that when the sample size is larger than an optimal value, the topic coherence starts to drop.']
[None, None, ['20 News (k=50)', '20 News (k=100)', 'All news (k=50)', 'All news (k=100)', '# Samples'], ['20 News (k=50)', '20 News (k=100)', '# Samples', '3'], ['# Samples', '1', '3'], ['20 News (k=50)', '20 News (k=100)'], ['20 News (k=50)', '20 News (k=100)'], None, ['# Samples', 'Score'], None, ['20 News (k=50)', '20 News (k=100)', 'All news (k=50)', 'All news (k=100)', '# Samples', '1'], None]
1
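The per-dataset sweet spots discussed in the description (3 samples for 20 Newsgroups, no sampling for All News) follow from an argmax over this record's `contents`. A small sketch (scores copied from the record; the dict keys are my own shorthand for the row headers):

```python
# Topic coherence score keyed by (dataset, #topics) -> {sample_size: score}.
scores = {
    ("20News", 50): {1: 0.24, 3: 0.28, 10: 0.25},
    ("20News", 100): {1: 0.21, 3: 0.26, 10: 0.25},
    ("AllNews", 50): {1: 0.27, 3: 0.22, 5: 0.20},
    ("AllNews", 100): {1: 0.26, 3: 0.20, 5: 0.17},
}
# Best sample size per configuration.
best = {cfg: max(s, key=s.get) for cfg, s in scores.items()}
print(best)
# 20 News peaks at 3 samples; All News peaks at 1 (i.e. no sampling),
# matching the sparsity argument in the description.
```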
D18-1497table_6
Cross AUC results for different representations on the BeerAdvocate data. Row: Embedding used. Column: Aspect evaluated against.
1
[['Look'], ['Aroma'], ['Palate'], ['Taste']]
1
[['Look'], ['Aroma'], ['Palate'], ['Taste']]
[['0.92', '0.89', '0.88', '0.87'], ['0.90', '0.93', '0.91', '0.92'], ['0.89', '0.92', '0.94', '0.95'], ['0.90', '0.94', '0.95', '0.96']]
column
['AUC', 'AUC', 'AUC', 'AUC']
['Look', 'Aroma', 'Palate', 'Taste']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Look</th> <th>Aroma</th> <th>Palate</th> <th>Taste</th> </tr> </thead> <tbody> <tr> <td>Look</td> <td>0.92</td> <td>0.89</td> <td>0.88</td> <td>0.87</td> </tr> <tr> <td>Aroma</td> <td>0.90</td> <td>0.93</td> <td>0.91</td> <td>0.92</td> </tr> <tr> <td>Palate</td> <td>0.89</td> <td>0.92</td> <td>0.94</td> <td>0.95</td> </tr> <tr> <td>Taste</td> <td>0.90</td> <td>0.94</td> <td>0.95</td> <td>0.96</td> </tr> </tbody></table>
Table 6
table_6
D18-1497
6
emnlp2018
In Table 6 we present cross AUC evaluations. Rows correspond to the embedding used and columns to the aspect evaluated against. As expected, aspect-embeddings perform better w.r.t. the aspects for which they code, suggesting some disentanglement. However, the reduction in performance when using one aspect representation to discriminate w.r.t. others is not as pronounced as above. This is because aspect ratings are highly correlated: if taste is positive, aroma is very likely to be as well. Effectively, here sentiment entangles all of these aspects.
[1, 1, 1, 1, 2, 2]
['In Table 6 we present cross AUC evaluations.', 'Rows correspond to the embedding used and columns to the aspect evaluated against.', 'As expected, aspect-embeddings perform better w.r.t. the aspects for which they code, suggesting some disentanglement.', 'However, the reduction in performance when using one aspect representation to discriminate w.r.t. others is not as pronounced as above.', 'This is because aspect ratings are highly correlated: if taste is positive, aroma is very likely to be as well.', 'Effectively, here sentiment entangles all of these aspects.']
[None, ['Look', 'Aroma', 'Palate', 'Taste'], None, None, ['Aroma', 'Taste'], ['Look', 'Aroma', 'Palate', 'Taste']]
1
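The "some disentanglement, but heavy entanglement through sentiment" reading of Table 6 can be quantified by comparing the diagonal (on-aspect) AUC with the off-diagonal (cross-aspect) mean. A sketch using the values copied from the record's `contents`:

```python
# Cross AUC matrix: rows = embedding used, columns = aspect evaluated against.
auc = [
    [0.92, 0.89, 0.88, 0.87],  # Look embedding
    [0.90, 0.93, 0.91, 0.92],  # Aroma embedding
    [0.89, 0.92, 0.94, 0.95],  # Palate embedding
    [0.90, 0.94, 0.95, 0.96],  # Taste embedding
]
n = len(auc)
diag = sum(auc[i][i] for i in range(n)) / n
off = sum(auc[i][j] for i in range(n) for j in range(n) if i != j) / (n * n - n)
print(round(diag, 4), round(off, 4))  # on-aspect AUC is higher on average
```

Note the gap is small (roughly 0.94 vs 0.91), which is exactly the "not as pronounced" reduction the description attributes to correlated aspect ratings.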
D18-1498table_4
POS tagging results on SANCL data. Source domains include Web, Emails, Twitter. † indicates the unified multi-source model trained without Twitter, thus can be considered as the oracle performance (upper-bound) of uni-MS.
2
[['TARGET', 'Answers'], ['TARGET', 'Reviews'], ['TARGET', 'Newsgroup'], ['TARGET', 'Average']]
2
[['NON-ADVERSARIAL', 'best-SS'], ['NON-ADVERSARIAL', 'uni-MS'], ['NON-ADVERSARIAL', 'uni-MS†'], ['NON-ADVERSARIAL', 'MoE'], ['ADVERSARIAL', 'best-SS-A'], ['ADVERSARIAL', 'uni-MS-A'], ['ADVERSARIAL', 'uni-MS-A†'], ['ADVERSARIAL', 'MoE-A']]
[['88.16', '88.89', '89.88', '90.26', '88.47', '89.04', '89.99', '89.80'], ['87.15', '87.45', '88.91', '89.37', '87.26', '87.90', '88.94', '89.40'], ['89.14', '89.95', '90.70', '91.03', '89.54', '90.20', '90.70', '91.13'], ['88.15', '88.76', '89.83', '90.22', '88.42', '89.05', '89.88', '90.11']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['MoE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NON-ADVERSARIAL || best-SS</th> <th>NON-ADVERSARIAL || uni-MS</th> <th>NON-ADVERSARIAL || uni-MS†</th> <th>NON-ADVERSARIAL || MoE</th> <th>ADVERSARIAL || best-SS-A</th> <th>ADVERSARIAL || uni-MS-A</th> <th>ADVERSARIAL || uni-MS-A†</th> <th>ADVERSARIAL || MoE-A</th> </tr> </thead> <tbody> <tr> <td>TARGET || Answers</td> <td>88.16</td> <td>88.89</td> <td>89.88</td> <td>90.26</td> <td>88.47</td> <td>89.04</td> <td>89.99</td> <td>89.80</td> </tr> <tr> <td>TARGET || Reviews</td> <td>87.15</td> <td>87.45</td> <td>88.91</td> <td>89.37</td> <td>87.26</td> <td>87.90</td> <td>88.94</td> <td>89.40</td> </tr> <tr> <td>TARGET || Newsgroup</td> <td>89.14</td> <td>89.95</td> <td>90.70</td> <td>91.03</td> <td>89.54</td> <td>90.20</td> <td>90.70</td> <td>91.13</td> </tr> <tr> <td>TARGET || Average</td> <td>88.15</td> <td>88.76</td> <td>89.83</td> <td>90.22</td> <td>88.42</td> <td>89.05</td> <td>89.88</td> <td>90.11</td> </tr> </tbody></table>
Table 4
table_4
D18-1498
9
emnlp2018
5.2 Part-of-Speech Tagging. Table 4 summarizes our results on POS tagging. Again, our approach consistently achieves the best performance across different settings and tasks. Adding Twitter as a source leads to a drop in performance for the unified model, as a result of negative transfer. Our method, however, robustly handles negative transfer and manages to even benefit from this additional source.
[2, 1, 1, 1, 2]
['5.2 Part-of-Speech Tagging.', 'Table 4 summarizes our results on POS tagging.', 'Again, our approach consistently achieves the best performance across different settings and tasks.', 'Adding Twitter as a source leads to a drop in performance for the unified model, as a result of negative transfer.', 'Our method, however, robustly handles negative transfer and manages to even benefit from this additional source.']
[None, None, ['MoE'], ['uni-MS', 'uni-MS†'], ['MoE']]
1
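The negative-transfer claim in the description is a two-number comparison over the TARGET = Average row of this record's `contents`: adding Twitter hurts the unified model (uni-MS vs its oracle uni-MS†), while MoE still benefits. A sketch with the values copied from the record:

```python
# Average POS accuracies: uni-MS is trained with Twitter as a source,
# uni-MS† (the oracle upper bound) is trained without it.
uni_ms, uni_ms_oracle, moe = 88.76, 89.83, 90.22

# Drop caused by adding Twitter to the unified model (negative transfer) ...
drop = uni_ms_oracle - uni_ms
# ... while MoE, trained with all sources, still beats the oracle unified model.
print(round(drop, 2), moe > uni_ms_oracle)
```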
D18-1508table_1
Experimental results of the two baselines, as well as single and label-wise attention modifications to the “vanilla” 2-BiLSTM model.
4
[['Lab', '20*', 'Syst', 'FastText'], ['Lab', '20*', 'Syst', '2-BiLSTM'], ['Lab', '20*', 'Syst', '2-BiLSTMa'], ['Lab', '20*', 'Syst', '2-BiLSTMl'], ['Lab', '50', 'Syst', 'FastText'], ['Lab', '50', 'Syst', '2-BiLSTM'], ['Lab', '50', 'Syst', '2-BiLSTMa'], ['Lab', '50', 'Syst', '2-BiLSTMl'], ['Lab', '100', 'Syst', 'FastText'], ['Lab', '100', 'Syst', '2-BiLSTM'], ['Lab', '100', 'Syst', '2-BiLSTMa'], ['Lab', '100', 'Syst', '2-BiLSTMl'], ['Lab', '200', 'Syst', 'FastText'], ['Lab', '200', 'Syst', '2-BiLSTM'], ['Lab', '200', 'Syst', '2-BiLSTMa'], ['Lab', '200', 'Syst', '2-BiLSTMl']]
1
[['F1'], ['A@1'], ['A@5'], ['CE']]
[['30.97', '42.57', '72.45', '4.56'], ['33.52', '45.76', '75.54', '3.88'], ['34.11', '46.11', '75.68', '3.86'], ['33.51', '45.94', '76.02', '3.82'], ['18.04', '22.33', '48.13', '14.27'], ['19.07', '25.35', '53.38', '9.37'], ['19.83', '25.52', '53.51', '9.35'], ['20.08', '25.64', '53.77', '9.26'], ['16.25', '20.29', '42.65', '26.04'], ['17.44', '23.01', '47.46', '15.24'], ['17.56', '22.77', '46.93', '15.51'], ['17.92', '22.80', '47.41', '15.17'], ['13.31', '18.80', '38.99', '51.06'], ['16.16', '21.05', '42.64', '24.68'], ['16.30', '21.13', '42.50', '24.60'], ['16.91', '21.39', '43.35', '23.73']]
column
['F1', 'A@1', 'A@5', 'CE']
['2-BiLSTMl']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>A@1</th> <th>A@5</th> <th>CE</th> </tr> </thead> <tbody> <tr> <td>Lab || 20* || Syst || FastText</td> <td>30.97</td> <td>42.57</td> <td>72.45</td> <td>4.56</td> </tr> <tr> <td>Lab || 20* || Syst || 2-BiLSTM</td> <td>33.52</td> <td>45.76</td> <td>75.54</td> <td>3.88</td> </tr> <tr> <td>Lab || 20* || Syst || 2-BiLSTMa</td> <td>34.11</td> <td>46.11</td> <td>75.68</td> <td>3.86</td> </tr> <tr> <td>Lab || 20* || Syst || 2-BiLSTMl</td> <td>33.51</td> <td>45.94</td> <td>76.02</td> <td>3.82</td> </tr> <tr> <td>Lab || 50 || Syst || FastText</td> <td>18.04</td> <td>22.33</td> <td>48.13</td> <td>14.27</td> </tr> <tr> <td>Lab || 50 || Syst || 2-BiLSTM</td> <td>19.07</td> <td>25.35</td> <td>53.38</td> <td>9.37</td> </tr> <tr> <td>Lab || 50 || Syst || 2-BiLSTMa</td> <td>19.83</td> <td>25.52</td> <td>53.51</td> <td>9.35</td> </tr> <tr> <td>Lab || 50 || Syst || 2-BiLSTMl</td> <td>20.08</td> <td>25.64</td> <td>53.77</td> <td>9.26</td> </tr> <tr> <td>Lab || 100 || Syst || FastText</td> <td>16.25</td> <td>20.29</td> <td>42.65</td> <td>26.04</td> </tr> <tr> <td>Lab || 100 || Syst || 2-BiLSTM</td> <td>17.44</td> <td>23.01</td> <td>47.46</td> <td>15.24</td> </tr> <tr> <td>Lab || 100 || Syst || 2-BiLSTMa</td> <td>17.56</td> <td>22.77</td> <td>46.93</td> <td>15.51</td> </tr> <tr> <td>Lab || 100 || Syst || 2-BiLSTMl</td> <td>17.92</td> <td>22.80</td> <td>47.41</td> <td>15.17</td> </tr> <tr> <td>Lab || 200 || Syst || FastText</td> <td>13.31</td> <td>18.80</td> <td>38.99</td> <td>51.06</td> </tr> <tr> <td>Lab || 200 || Syst || 2-BiLSTM</td> <td>16.16</td> <td>21.05</td> <td>42.64</td> <td>24.68</td> </tr> <tr> <td>Lab || 200 || Syst || 2-BiLSTMa</td> <td>16.30</td> <td>21.13</td> <td>42.50</td> <td>24.60</td> </tr> <tr> <td>Lab || 200 || Syst || 2-BiLSTMl</td> <td>16.91</td> <td>21.39</td> <td>43.35</td> <td>23.73</td> </tr> </tbody></table>
Table 1
table_1
D18-1508
3
emnlp2018
Results. Table 1 shows the results of our model and the baselines on the emoji prediction task for the different evaluation splits. The evaluation metrics used are: F1, Accuracy@k (A@k, where k ∈ {1, 5}), and Coverage Error (CE) (Tsoumakas et al., 2009). We note that the latter metric is not normally used in emoji prediction settings. However, with many emojis being “near synonyms” (in the sense of being often used almost interchangeably), it seems natural to evaluate the performance of an emoji prediction system in terms of how far we would need to go through the predicted emojis to recover the true label. The results show that our proposed 2-BiLSTMl method outperforms all baselines for F1 in three out of four settings, and for CE in all of them. In the following section we shed light on the reasons behind this performance and try to understand how these predictions were made.
[2, 1, 1, 2, 2, 1, 2]
['Results.', 'Table 1 shows the results of our model and the baselines on the emoji prediction task for the different evaluation splits.', 'The evaluation metrics used are: F1, Accuracy@k (A@k, where k ∈ {1, 5}), and Coverage Error (CE) (Tsoumakas et al., 2009).', 'We note that the latter metric is not normally used in emoji prediction settings.', 'However, with many emojis being “near synonyms” (in the sense of being often used almost interchangeably), it seems natural to evaluate the performance of an emoji prediction system in terms of how far we would need to go through the predicted emojis to recover the true label.', 'The results show that our proposed 2-BiLSTMl method outperforms all baselines for F1 in three out of four settings, and for CE in all of them.', 'In the following section we shed light on the reasons behind this performance and try to understand how these predictions were made.']
[None, ['FastText', '2-BiLSTM', '2-BiLSTMa', '2-BiLSTMl', 'F1', 'A@1', 'A@5', 'CE'], ['F1', 'A@1', 'A@5', 'CE'], None, None, ['2-BiLSTMl', 'Lab', '50', '100', '200', 'F1', 'CE'], ['2-BiLSTMl']]
1
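The description motivates Coverage Error as "how far we would need to go through the predicted emojis to recover the true label". A minimal single-label sketch of that idea (the multi-label metric in Tsoumakas et al. averages the worst rank over all true labels per example; this toy helper and its inputs are illustrative, not the paper's implementation):

```python
def coverage_error(scores, true_idx):
    """Return the 1-based rank of the true label when predictions are
    sorted by score, i.e. how far down the ranked emoji list we must go."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked.index(true_idx) + 1

# Toy scores over 4 emoji classes; the true class (index 2) ranks second.
print(coverage_error([0.5, 0.1, 0.3, 0.1], true_idx=2))  # 2
```

Lower is better: a perfect system always ranks the true emoji first, giving a CE of 1; this is why the CE column in Table 1 grows so quickly as the label set expands from 20 to 200.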
D18-1510table_1
Results on NIST Chinese-to-English Translation Task. AVG = average BLEU scores for test sets. The bold number indicates the highest score in the column.
2
[['System', 'BaseNMT'], ['System', 'MRT'], ['System', 'RF'], ['System', 'P-BLEU'], ['System', 'P-GLEU'], ['System', 'P-P2']]
1
[['Dev(MT02)'], ['MT03'], ['MT04'], ['MT05'], ['MT06'], ['AVG']]
[['36.72', '33.95', '37.44', '33.96', '33.09', '34.61'], ['37.17', '34.89', '37.90', '34.62', '33.78', '35.30'], ['37.13', '34.66', '37.69', '34.55', '33.74', '35.16'], ['37.26', '34.54', '38.05', '34.30', '34.11', '35.25'], ['37.44', '34.67', '38.11', '34.24', '34.58', '35.40'], ['38.03', '35.45', '39.30', '35.10', '34.59', '36.11']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['P-BLEU', 'P-GLEU', 'P-P2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev(MT02)</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>System || BaseNMT</td> <td>36.72</td> <td>33.95</td> <td>37.44</td> <td>33.96</td> <td>33.09</td> <td>34.61</td> </tr> <tr> <td>System || MRT</td> <td>37.17</td> <td>34.89</td> <td>37.90</td> <td>34.62</td> <td>33.78</td> <td>35.30</td> </tr> <tr> <td>System || RF</td> <td>37.13</td> <td>34.66</td> <td>37.69</td> <td>34.55</td> <td>33.74</td> <td>35.16</td> </tr> <tr> <td>System || P-BLEU</td> <td>37.26</td> <td>34.54</td> <td>38.05</td> <td>34.30</td> <td>34.11</td> <td>35.25</td> </tr> <tr> <td>System || P-GLEU</td> <td>37.44</td> <td>34.67</td> <td>38.11</td> <td>34.24</td> <td>34.58</td> <td>35.40</td> </tr> <tr> <td>System || P-P2</td> <td>38.03</td> <td>35.45</td> <td>39.30</td> <td>35.10</td> <td>34.59</td> <td>36.11</td> </tr> </tbody></table>
Table 1
table_1
D18-1510
4
emnlp2018
Performance. Table 1 shows the translation performance on the test sets measured in BLEU. Simply training the NMT model with the probabilistic 2-gram precision achieves an improvement of 1.5 BLEU points, significantly outperforming the reinforcement-based algorithms. We also test the precision of other n-grams and their combinations, but do not observe significant improvements over P-P2. Note that our method only changes the loss function, without any modification to the model structure or training data.
[2, 1, 1, 1, 2]
['Performance.', 'Table 1 shows the translation performance on the test sets measured in BLEU.', 'Simply training the NMT model with the probabilistic 2-gram precision achieves an improvement of 1.5 BLEU points, significantly outperforming the reinforcement-based algorithms.', 'We also test the precision of other n-grams and their combinations, but do not observe significant improvements over P-P2.', 'Note that our method only changes the loss function, without any modification to the model structure or training data.']
[None, None, ['P-P2', 'BaseNMT', 'RF'], ['P-BLEU', 'P-GLEU', 'P-P2'], None]
1
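The "improvement of 1.5 BLEU points" in this record's description corresponds to the AVG column of its `contents` (P-P2 vs BaseNMT; values copied from the record):

```python
# Average test-set BLEU from the AVG column.
base_nmt, p_p2 = 34.61, 36.11
gain = p_p2 - base_nmt
print(round(gain, 1))  # 1.5, the gap quoted in the description
```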
D18-1516table_3
Results (filtered setting) of the temporal knowledge graph completion experiments for the data sets YAGO15K and WIKIDATA. The best results are written bold.
1
[['TTRANSE'], ['TRANSE'], ['DISTMULT'], ['TA-TRANSE'], ['TA-DISTMULT']]
2
[['YAGO15K', 'MRR'], ['YAGO15K', 'MR'], ['YAGO15K', 'Hits@10'], ['YAGO15K', 'Hits@1'], ['WIKIDATA', 'MRR'], ['WIKIDATA', 'MR'], ['WIKIDATA', 'Hits@10'], ['WIKIDATA', 'Hits@1']]
[['32.1', '578', '51.0', '23.0', '48.8', '80', '80.6', '33.9'], ['29.6', '614', '46.8', '22.8', '31.6', '50', '65.9', '18.1'], ['27.5', '578', '43.8', '21.5', '31.6', '77', '66.1', '18.1'], ['32.1', '564', '51.2', '23.1', '48.4', '79', '80.7', '32.9'], ['29.1', '551', '47.6', '21.6', '70.0', '198', '78.5', '65.2']]
column
['MRR', 'MR', 'Hits@10', 'Hits@1', 'MRR', 'MR', 'Hits@10', 'Hits@1']
['TA-TRANSE', 'TA-DISTMULT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YAGO15K || MRR</th> <th>YAGO15K || MR</th> <th>YAGO15K || Hits@10</th> <th>YAGO15K || Hits@1</th> <th>WIKIDATA || MRR</th> <th>WIKIDATA || MR</th> <th>WIKIDATA || Hits@10</th> <th>WIKIDATA || Hits@1</th> </tr> </thead> <tbody> <tr> <td>TTRANSE</td> <td>32.1</td> <td>578</td> <td>51.0</td> <td>23.0</td> <td>48.8</td> <td>80</td> <td>80.6</td> <td>33.9</td> </tr> <tr> <td>TRANSE</td> <td>29.6</td> <td>614</td> <td>46.8</td> <td>22.8</td> <td>31.6</td> <td>50</td> <td>65.9</td> <td>18.1</td> </tr> <tr> <td>DISTMULT</td> <td>27.5</td> <td>578</td> <td>43.8</td> <td>21.5</td> <td>31.6</td> <td>77</td> <td>66.1</td> <td>18.1</td> </tr> <tr> <td>TA-TRANSE</td> <td>32.1</td> <td>564</td> <td>51.2</td> <td>23.1</td> <td>48.4</td> <td>79</td> <td>80.7</td> <td>32.9</td> </tr> <tr> <td>TA-DISTMULT</td> <td>29.1</td> <td>551</td> <td>47.6</td> <td>21.6</td> <td>70.0</td> <td>198</td> <td>78.5</td> <td>65.2</td> </tr> </tbody></table>
Table 3
table_3
D18-1516
4
emnlp2018
4.3 Results. Tables 3 and 4 list the results for the KG completion tasks. TA-TRANSE and TA-DISTMULT systematically improve TRANSE and DISTMULT in MRR, hits@10 and hits@1 in almost all cases. Mean rank is a metric that is very susceptible to outliers and hence these improvements are not consistent. TTRANSE learns independent representations for each timestamp contained in the training set. At test time, timestamps unseen during training are represented by null vectors. This explains why TTRANSE is only competitive in YAGO15K, wherein the number of distinct timestamps is very small (see #Distinct TS in Table 2) and thus enough training examples exist to learn robust timestamp embeddings. TTRANSE’s performance is similar to that of TA-TRANSE, our time-aware version of TRANSE, in WIKIDATA. Similarly, TTRANSE can learn robust timestamp representations because of the small number of distinct timestamps of this data set.
[2, 1, 1, 2, 2, 2, 1, 1, 2]
['4.3 Results.', 'Tables 3 and 4 list the results for the KG completion tasks.', 'TA-TRANSE and TA-DISTMULT systematically improve TRANSE and DISTMULT in MRR, hits@10 and hits@1 in almost all cases.', 'Mean rank is a metric that is very susceptible to outliers and hence these improvements are not consistent.', 'TTRANSE learns independent representations for each timestamp contained in the training set.', 'At test time, timestamps unseen during training are represented by null vectors.', 'This explains why TTRANSE is only competitive in YAGO15K, wherein the number of distinct timestamps is very small (see #Distinct TS in Table 2) and thus enough training examples exist to learn robust timestamp embeddings.', 'TTRANSE’s performance is similar to that of TA-TRANSE, our time-aware version of TRANSE, in WIKIDATA.', 'Similarly, TTRANSE can learn robust timestamp representations because of the small number of distinct timestamps of this data set.']
[None, None, ['TA-TRANSE', 'TA-DISTMULT', 'TRANSE', 'DISTMULT', 'MRR', 'Hits@10', 'Hits@1'], ['MR'], ['TTRANSE'], ['TTRANSE'], ['TTRANSE', 'YAGO15K'], ['TTRANSE', 'TA-TRANSE', 'WIKIDATA'], ['TTRANSE']]
1
D18-1525table_1
Results of the proposed autoencoder models on transfer learning tasks. All the models are trained on the Yelp reviews dataset with the use of fastText pre-trained word embeddings.
2
[['Model', 'Cross-entropy (vanilla AE)'], ['Model', 'Soft label N = 3'], ['Model', 'Soft label N = 5'], ['Model', 'Soft label N = 10'], ['Model', 'Weighted similarity'], ['Model', 'Weighted cross-entropy']]
2
[['MSRP', 'F1'], ['MSRP', 'Acc'], ['SNLI', 'Acc'], ['SICK-E', 'Acc']]
[['79.0', '66.9', '44.8', '56.8'], ['77.6', '67.1', '57.8', '71.8'], ['79.1', '67.3', '57.2', '71.6'], ['77.9', '66.5', '57.9', '72.4'], ['77.5', '65.6', '69.1', '56.6'], ['79.4', '68.2', '57.2', '70.2']]
column
['F1', 'Acc', 'Acc', 'Acc']
['Soft label N = 3', 'Soft label N = 5', 'Soft label N = 10', 'Weighted similarity', 'Weighted cross-entropy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSRP || F1</th> <th>MSRP || Acc</th> <th>SNLI || Acc</th> <th>SICK-E || Acc</th> </tr> </thead> <tbody> <tr> <td>Model || Cross-entropy (vanilla AE)</td> <td>79.0</td> <td>66.9</td> <td>44.8</td> <td>56.8</td> </tr> <tr> <td>Model || Soft label N = 3</td> <td>77.6</td> <td>67.1</td> <td>57.8</td> <td>71.8</td> </tr> <tr> <td>Model || Soft label N = 5</td> <td>79.1</td> <td>67.3</td> <td>57.2</td> <td>71.6</td> </tr> <tr> <td>Model || Soft label N = 10</td> <td>77.9</td> <td>66.5</td> <td>57.9</td> <td>72.4</td> </tr> <tr> <td>Model || Weighted similarity</td> <td>77.5</td> <td>65.6</td> <td>69.1</td> <td>56.6</td> </tr> <tr> <td>Model || Weighted cross-entropy</td> <td>79.4</td> <td>68.2</td> <td>57.2</td> <td>70.2</td> </tr> </tbody></table>
Table 1
table_1
D18-1525
3
emnlp2018
4 Discussion. We find that almost all of the proposed loss functions outperform the vanilla autoencoder trained with cross-entropy on all three tasks (see Table 1). The only exception is the weighted similarity loss function. Compared to the logarithm-based losses, this loss applies softer penalties when the groundtruth tokens are predicted to have lower probabilities. We conclude that the non-linearity introduced by a logarithm function contributes to more efficient training. Among the models we tested, the best scores were achieved by the weighted cross-entropy loss for MSRP (68.2%), the weighted similarity loss for SNLI (69.1%) and by the soft label loss for SICK-E (72.4%). We observe that for the paraphrase task, all the soft label losses behaved similarly, while for the inference/entailment, increasing the number of neighbors improved performance.
[2, 1, 1, 2, 2, 1, 2]
['4 Discussion.', 'We find that almost all of the proposed loss functions outperform the vanilla autoencoder trained with cross-entropy on all three tasks (see Table 1).', 'The only exception is the weighted similarity loss function.', 'Compared to the logarithm-based losses, this loss applies softer penalties when the groundtruth tokens are predicted to have lower probabilities.', 'We conclude that the non-linearity introduced by a logarithm function contributes to more efficient training.', 'Among the models we tested, the best scores were achieved by the weighted cross-entropy loss for MSRP (68.2%), the weighted similarity loss for SNLI (69.1%) and by the soft label loss for SICK-E (72.4%).', 'We observe that for the paraphrase task, all the soft label losses behaved similarly, while for the inference/entailment, increasing the number of neighbors improved performance.']
[None, ['Cross-entropy (vanilla AE)', 'Soft label N = 3', 'Soft label N = 5', 'Soft label N = 10', 'Weighted cross-entropy', 'Acc'], ['Acc', 'Weighted similarity'], None, None, ['Acc', 'Weighted cross-entropy', 'Weighted similarity', 'MSRP', 'SNLI', 'SICK-E', 'Soft label N = 10'], ['Soft label N = 10']]
1
D18-1529table_3
Performance of recent neural network based models without using pretrained embeddings. Our model’s wins are statistically significantly better than prior work (p < 0.05 bootstrap resampling), except on PKU.
1
[['Liu et al. (2016)'], ['Zhou et al. (2017)'], ['Cai et al. (2017)'], ['Wang and Xu (2017)'], ['Ours']]
1
[['AS'], ['CITYU'], ['CTB6'], ['CTB7'], ['MSR'], ['PKU'], ['UD']]
[['-', '-', '94.6', '-', '94.8', '94.9', '-'], ['-', '-', '94.9', '-', '97.2', '95.0', '-'], ['95.2', '95.4', '-', '-', '97.0', '95.4', '-'], ['-', '-', '-', '-', '96.7', '94.7', '-'], ['95.5', '95.7', '95.5', '95.6', '97.5', '95.4', '94.6']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AS</th> <th>CITYU</th> <th>CTB6</th> <th>CTB7</th> <th>MSR</th> <th>PKU</th> <th>UD</th> </tr> </thead> <tbody> <tr> <td>Liu et al. (2016)</td> <td>-</td> <td>-</td> <td>94.6</td> <td>-</td> <td>94.8</td> <td>94.9</td> <td>-</td> </tr> <tr> <td>Zhou et al. (2017)</td> <td>-</td> <td>-</td> <td>94.9</td> <td>-</td> <td>97.2</td> <td>95.0</td> <td>-</td> </tr> <tr> <td>Cai et al. (2017)</td> <td>95.2</td> <td>95.4</td> <td>-</td> <td>-</td> <td>97.0</td> <td>95.4</td> <td>-</td> </tr> <tr> <td>Wang and Xu (2017)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>96.7</td> <td>94.7</td> <td>-</td> </tr> <tr> <td>Ours</td> <td>95.5</td> <td>95.7</td> <td>95.5</td> <td>95.6</td> <td>97.5</td> <td>95.4</td> <td>94.6</td> </tr> </tbody></table>
Table 3
table_3
D18-1529
3
emnlp2018
Table 3 contains results achieved without using any pretrained embeddings. Our model achieves the best results among NN models on 6/7 datasets. In addition, while the majority of datasets work best if the pretrained embedding matrix is treated as constant, the MSR dataset is an outlier: fine-tuning embeddings yields a very large improvement. We observe that the likely cause is a low OOV rate in the MSR evaluation set compared to other datasets.
[1, 1, 2, 2]
['Table 3 contains results achieved without using any pretrained embeddings.', 'Our model achieves the best results among NN models on 6/7 datasets.', 'In addition, while the majority of datasets work best if the pretrained embedding matrix is treated as constant, the MSR dataset is an outlier: fine-tuning embeddings yields a very large improvement.', 'We observe that the likely cause is a low OOV rate in the MSR evaluation set compared to other datasets.']
[None, ['Ours', 'AS', 'CITYU', 'CTB6', 'CTB7', 'MSR', 'UD'], ['AS', 'CITYU', 'CTB6', 'CTB7', 'PKU', 'UD'], ['MSR']]
1
D18-1529table_6
Ablation results on development data. Top row: absolute performance of our system. Other rows: difference relative to the top row.
2
[['System', 'This work'], ['System', '-LSTM dropout'], ['System', '-stacked bi-LSTM'], ['System', '-pretrain']]
1
[['AS'], ['CITYU'], ['CTB6'], ['CTB7'], ['MSR'], ['PKU'], ['UD'], ['Average']]
[['98.03', '98.22', '97.06', '97.07', '98.48', '97.95', '97.00', '97.69'], ['+0.03', '-0.33', '-0.31', '-0.24', '+0.04', '-0.29', '-0.76', '-0.35'], ['-0.13', '-0.20', '-0.15', '-0.14', '-0.17', '-0.17', '-0.39', '-0.27'], ['-0.13', '-0.23', '-0.94', '-0.74', '-0.45', '-0.27', '-2.73', '-0.78']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AS</th> <th>CITYU</th> <th>CTB6</th> <th>CTB7</th> <th>MSR</th> <th>PKU</th> <th>UD</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>System || This work</td> <td>98.03</td> <td>98.22</td> <td>97.06</td> <td>97.07</td> <td>98.48</td> <td>97.95</td> <td>97.00</td> <td>97.69</td> </tr> <tr> <td>System || -LSTM dropout</td> <td>+0.03</td> <td>-0.33</td> <td>-0.31</td> <td>-0.24</td> <td>+0.04</td> <td>-0.29</td> <td>-0.76</td> <td>-0.35</td> </tr> <tr> <td>System || -stacked bi-LSTM</td> <td>-0.13</td> <td>-0.20</td> <td>-0.15</td> <td>-0.14</td> <td>-0.17</td> <td>-0.17</td> <td>-0.39</td> <td>-0.27</td> </tr> <tr> <td>System || -pretrain</td> <td>-0.13</td> <td>-0.23</td> <td>-0.94</td> <td>-0.74</td> <td>-0.45</td> <td>-0.27</td> <td>-2.73</td> <td>-0.78</td> </tr> </tbody></table>
Table 6
table_6
D18-1529
5
emnlp2018
3.2 Ablation Experiments. To see which decisions had the greatest impact on the result, we performed ablation experiments on the holdout sets of the different corpora. Starting with our proposed system, we remove one decision, perform hyperparameter tuning, and see the change in performance. The results are summarized in Table 6. Negative numbers in Table 6 correspond to decreases in performance for the ablated system. Note that although each of the components helps performance on average, there are cases where we observe no impact. For example, using recurrent dropout on AS and MSR barely affects accuracy.
[2, 2, 2, 1, 1, 1, 1]
['3.2 Ablation Experiments.', 'To see which decisions had the greatest impact on the result, we performed ablation experiments on the holdout sets of the different corpora.', 'Starting with our proposed system, we remove one decision, perform hyperparameter tuning, and see the change in performance.', 'The results are summarized in Table 6.', 'Negative numbers in Table 6 correspond to decreases in performance for the ablated system.', 'Note that although each of the components helps performance on average, there are cases where we observe no impact.', 'For example, using recurrent dropout on AS and MSR barely affects accuracy.']
[None, None, ['This work'], None, None, ['This work', '-LSTM dropout', '-stacked bi-LSTM', '-pretrain'], ['AS', 'MSR', '-LSTM dropout']]
1
D18-1531table_2
Results of SLM-4 incorporating ad hoc guidelines, where † represents using an additional 1024 segmented sentences as training data and * represents using rule-based post-processing
1
[['SLM-4'], ['SLM-4*'], ['SLM-4†'], ['SLM-4†*']]
2
[['F1 score', 'PKU'], ['F1 score', 'MSR'], ['F1 score', 'AS'], ['F1 score', 'CityU']]
[['79.2', '79.0', '79.8', '79.7'], ['81.9', '83.0', '81.0', '81.4'], ['87.5', '84.3', '84.2', '86.0'], ['87.3', '84.8', '83.9', '85.8']]
column
['F1 score', 'F1 score', 'F1 score', 'F1 score']
['SLM-4', 'SLM-4*', 'SLM-4†', 'SLM-4†*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 score || PKU</th> <th>F1 score || MSR</th> <th>F1 score || AS</th> <th>F1 score || CityU</th> </tr> </thead> <tbody> <tr> <td>SLM-4</td> <td>79.2</td> <td>79.0</td> <td>79.8</td> <td>79.7</td> </tr> <tr> <td>SLM-4*</td> <td>81.9</td> <td>83.0</td> <td>81.0</td> <td>81.4</td> </tr> <tr> <td>SLM-4†</td> <td>87.5</td> <td>84.3</td> <td>84.2</td> <td>86.0</td> </tr> <tr> <td>SLM-4†*</td> <td>87.3</td> <td>84.8</td> <td>83.9</td> <td>85.8</td> </tr> </tbody></table>
Table 2
table_2
D18-1531
4
emnlp2018
Table 2 shows the results. We can find from the table that only 1024 guideline sentences can improve the performance of “SLM-4” significantly. While rule-based post-processing is very effective, “SLM-4†” can outperform “SLM-4*” on all four datasets. Moreover, performance drops when applying the rule-based post-processing to “SLM-4†” on three datasets. These results indicate that SLMs can learn the empirical rules for word segmentation given only a small amount of training data. And these guideline data can improve the performance of SLMs naturally, superior to using explicit rules.
[1, 1, 1, 1, 2, 2]
['Table 2 shows the results.', 'We can find from the table that only 1024 guideline sentences can improve the performance of “SLM-4” significantly.', 'While rule-based post-processing is very effective, “SLM-4†” can outperform “SLM-4*” on all four datasets.', 'Moreover, performance drops when applying the rule-based post-processing to “SLM-4†” on three datasets.', 'These results indicate that SLMs can learn the empirical rules for word segmentation given only a small amount of training data.', 'And these guideline data can improve the performance of SLMs naturally, superior to using explicit rules.']
[None, ['SLM-4†', 'SLM-4'], ['SLM-4*', 'SLM-4†', 'PKU', 'MSR', 'AS', 'CityU'], ['SLM-4†', 'SLM-4†*', 'PKU', 'AS', 'CityU'], None, ['SLM-4']]
1
D18-1538table_1
Comparison of baseline models (B) with the models trained with joint objective (J).
2
[['Model/Legend', 'B100'], ['Model/Legend', 'B10'], ['Model/Legend', 'B1'], ['Model/Legend', 'J100'], ['Model/Legend', 'J10'], ['Model/Legend', 'J1']]
1
[['Test F1'], ['Average disagreement rate (%)']]
[['84.40', '14.69'], ['78.56', '17.01'], ['67.28', '21.17'], ['84.75 (+0.35)', '14.48 (1.43%)'], ['79.09 (+0.53)', '16.25 (4.47%)'], ['68.02 (+0.74)', '20.49 (3.21%)']]
column
['Test F1', 'Average disagreement rate (%)']
['J100', 'J10', 'J1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test F1</th> <th>Average disagreement rate (%)</th> </tr> </thead> <tbody> <tr> <td>Model/Legend || B100</td> <td>84.40</td> <td>14.69</td> </tr> <tr> <td>Model/Legend || B10</td> <td>78.56</td> <td>17.01</td> </tr> <tr> <td>Model/Legend || B1</td> <td>67.28</td> <td>21.17</td> </tr> <tr> <td>Model/Legend || J100</td> <td>84.75 (+0.35)</td> <td>14.48 (1.43%)</td> </tr> <tr> <td>Model/Legend || J10</td> <td>79.09 (+0.53)</td> <td>16.25 (4.47%)</td> </tr> <tr> <td>Model/Legend || J1</td> <td>68.02 (+0.74)</td> <td>20.49 (3.21%)</td> </tr> </tbody></table>
Table 1
table_1
D18-1538
3
emnlp2018
Does training with joint objective help?. We trained 3 models with random 1%, 10% and the whole 100% of the training set with the joint objective (α1 = α2 = 0.5). For comparison, we trained 3 SOTA models with the same training sets. All models were trained for at most 150 epochs with a patience of 20 epochs. Table 1 reports the results of this experiment. We see that models trained with the joint objective (JX) improve over baseline models (BX), both in terms of F1 and average disagreement rate. These improvements provide evidence for answering (Q1-3) favorably. Further, gains are larger in low-resource scenarios because training models jointly to satisfy syntactic constraints helps them generalize better when trained with limited SRL corpora.
[2, 2, 2, 2, 1, 1, 1, 2]
['Does training with joint objective help?.', 'We trained 3 models with random 1%, 10% and the whole 100% of the training set with the joint objective (α1 = α2 = 0.5).', 'For comparison, we trained 3 SOTA models with the same training sets.', 'All models were trained for at most 150 epochs with a patience of 20 epochs.', 'Table 1 reports the results of this experiment.', 'We see that models trained with the joint objective (JX) improve over baseline models (BX), both in terms of F1 and average disagreement rate.', 'These improvements provide evidence for answering (Q1-3) favorably.', 'Further, gains are larger in low-resource scenarios because training models jointly to satisfy syntactic constraints helps them generalize better when trained with limited SRL corpora.']
[None, ['J100', 'J10', 'J1'], ['B100', 'B10', 'B1'], ['B100', 'B10', 'B1', 'J100', 'J10', 'J1'], None, ['B100', 'B10', 'B1', 'J100', 'J10', 'J1', 'Test F1', 'Average disagreement rate (%)'], ['J100', 'J10', 'J1'], ['J100', 'J10', 'J1']]
1
D18-1544table_2
Language modeling performance (perplexity) on the WSJ test set, broken down by training data used and by whether early stopping is done using the parsing objective (UP) or the language modeling objective (LM).
9
[['Model', 'a', 'PRPN-LM', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'LM', 'Vocab Size', '10k'], ['Model', 'b', 'PRPN-LM', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'UP', 'Vocab Size', '10k'], ['Model', 'c', 'PRPN-UP', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'LM', 'Vocab Size', '10k'], ['Model', 'd', 'PRPN-UP', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'LM', 'Vocab Size', '15.8k'], ['Model', 'e', 'PRPN-UP', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'UP', 'Vocab Size', '15.8k'], ['Model', 'f', 'PRPN-UP', 'Training Data', 'AllNLI Train', 'Stopping Criterion', 'LM', 'Vocab Size', '76k'], ['Model', 'g', 'PRPN-UP', 'Training Data', 'AllNLI Train', 'Stopping Criterion', 'UP', 'Vocab Size', '76k']]
1
[['PPL Median']]
[['61.4'], ['81.6'], ['92.8'], ['112.1'], ['112.8'], ['797.5'], ['848.9']]
column
['PPL Median']
['PRPN-LM', 'PRPN-UP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL Median</th> </tr> </thead> <tbody> <tr> <td>Model || a || PRPN-LM || Training Data || WSJ Train || Stopping Criterion || LM || Vocab Size || 10k</td> <td>61.4</td> </tr> <tr> <td>Model || b || PRPN-LM || Training Data || WSJ Train || Stopping Criterion || UP || Vocab Size || 10k</td> <td>81.6</td> </tr> <tr> <td>Model || c || PRPN-UP || Training Data || WSJ Train || Stopping Criterion || LM || Vocab Size || 10k</td> <td>92.8</td> </tr> <tr> <td>Model || d || PRPN-UP || Training Data || WSJ Train || Stopping Criterion || LM || Vocab Size || 15.8k</td> <td>112.1</td> </tr> <tr> <td>Model || e || PRPN-UP || Training Data || WSJ Train || Stopping Criterion || UP || Vocab Size || 15.8k</td> <td>112.8</td> </tr> <tr> <td>Model || f || PRPN-UP || Training Data || AllNLI Train || Stopping Criterion || LM || Vocab Size || 76k</td> <td>797.5</td> </tr> <tr> <td>Model || g || PRPN-UP || Training Data || AllNLI Train || Stopping Criterion || UP || Vocab Size || 76k</td> <td>848.9</td> </tr> </tbody></table>
Table 2
table_2
D18-1544
4
emnlp2018
3 Experimental Results. Table 2 shows our results for language modeling. PRPN-UP, configured as-is with parsing criterion and language modeling criterion, performs dramatically worse than the standard PRPN-LM (a vs. d and e). However, this is not a fair comparison as the larger vocabulary gives PRPN-UP a harder task to solve. Adjusting the vocabulary of PRPN-UP down to 10k to make a fairer comparison possible, the PPL of PRPN-UP improves significantly (c vs. d), but not enough to match PRPN-LM (a vs. c). We also observe that early stopping on parsing leads to incomplete training and a substantial increase in perplexity (a vs. b and d vs. e). The models stop training at around the 13th epoch when we early-stop on the parsing objective, while they stop training around the 65th epoch when we early-stop on the language modeling objective. Both PRPN models trained on AllNLI do even worse (f and g), though the mismatch in vocabulary and domain may explain this effect. In addition, since it takes much longer to train PRPN on the larger AllNLI dataset, we train PRPN on AllNLI for only 15 epochs while we train the PRPN on WSJ for 100 epochs. Although the parsing objective converges within 15 epochs, we notice that language modeling perplexity is still improving. We expect that the perplexity of the PRPN models trained on AllNLI could be lower if we increase the number of training epochs.
[2, 1, 1, 2, 1, 1, 2, 1, 2, 2, 2]
['3 Experimental Results.', 'Table 2 shows our results for language modeling.', 'PRPN-UP, configured as-is with parsing criterion and language modeling criterion, performs dramatically worse than the standard PRPN-LM (a vs. d and e).', 'However, this is not a fair comparison as the larger vocabulary gives PRPN-UP a harder task to solve.', 'Adjusting the vocabulary of PRPN-UP down to 10k to make a fairer comparison possible, the PPL of PRPN-UP improves significantly (c vs. d), but not enough to match PRPN-LM (a vs. c).', 'We also observe that early stopping on parsing leads to incomplete training and a substantial increase in perplexity (a vs. b and d vs. e).', 'The models stop training at around the 13th epoch when we early-stop on the parsing objective, while they stop training around the 65th epoch when we early-stop on the language modeling objective.', 'Both PRPN models trained on AllNLI do even worse (f and g), though the mismatch in vocabulary and domain may explain this effect.', 'In addition, since it takes much longer to train PRPN on the larger AllNLI dataset, we train PRPN on AllNLI for only 15 epochs while we train the PRPN on WSJ for 100 epochs.', 'Although the parsing objective converges within 15 epochs, we notice that language modeling perplexity is still improving.', 'We expect that the perplexity of the PRPN models trained on AllNLI could be lower if we increase the number of training epochs.']
[None, ['PRPN-LM', 'PRPN-UP'], ['PRPN-UP', 'a', 'd', 'e', 'PPL Median'], ['PRPN-UP'], ['a', 'c', 'd', 'PRPN-LM', 'PRPN-UP', 'Vocab Size', '10k'], ['a', 'b', 'd', 'e', 'Stopping Criterion', 'LM', 'UP'], ['Stopping Criterion', 'LM', 'UP'], ['Training Data', 'AllNLI Train', 'f', 'g'], ['WSJ Train', 'AllNLI Train'], ['PRPN-UP'], ['PRPN-LM', 'PRPN-UP', 'AllNLI Train']]
1
D18-1544table_3
Unlabeled parsing F1 on the MultiNLI development set for models trained on AllNLI. F1 wrt. shows F1 with respect to strictly right- and left-branching (LB/RB) trees and with respect to the Stanford Parser (SP) trees supplied with the corpus; the evaluations of SPINN, RL-SPINN, and ST-Gumbel are from Williams et al. (2018a). SPINN is a supervised parsing model, and the others are latent tree models. Median F1 of each model trained with 5 different random seeds is reported.
4
[['Model', '300D SPINN', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stopping Criterion', 'NLI'], ['Model', '300D SPINN-NC', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stopping Criterion', 'NLI'], ['Model', '300D ST-Gumbel', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stopping Criterion', 'NLI'], ['Model', '300D RL-SPINN', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stopping Criterion', 'NLI'], ['Model', 'PRPN-LM', 'Stopping Criterion', 'LM'], ['Model', 'PRPN-UP', 'Stopping Criterion', 'UP'], ['Model', 'PRPN-UP', 'Stopping Criterion', 'LM'], ['Model', 'Random Trees', 'Stopping Criterion', '-'], ['Model', 'Balanced Trees', 'Stopping Criterion', '-']]
2
[['F1 wrt.', 'LB'], ['F1 wrt.', 'RB'], ['F1 wrt.', 'SP'], ['F1 wrt.', 'Depth']]
[['19.3', '36.9', '70.2', '6.2'], ['21.2', '39.0', '63.5', '6.4'], ['19.2', '36.2', '70.5', '6.1'], ['20.6', '38.9', '64.1', '6.3'], ['32.6', '37.5', '23.7', '4.1'], ['30.8', '35.6', '27.5', '4.6'], ['95.0', '13.5', '18.8', '8.6'], ['99.1', '10.7', '18.1', '8.6'], ['25.6', '26.9', '45.7', '4.9'], ['19.4', '41.0', '46.3', '4.9'], ['19.9', '37.4', '48.6', '4.9'], ['27.9', '28.0', '27.0', '4.4'], ['21.7', '36.8', '21.3', '3.9']]
column
['F1 wrt.', 'F1 wrt.', 'F1 wrt.', 'F1 wrt.']
['PRPN-LM', 'PRPN-UP', 'PRPN-UP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 wrt. || LB</th> <th>F1 wrt. || RB</th> <th>F1 wrt. || SP</th> <th>F1 wrt. || Depth</th> </tr> </thead> <tbody> <tr> <td>Model || 300D SPINN || Stopping Criterion || NLI</td> <td>19.3</td> <td>36.9</td> <td>70.2</td> <td>6.2</td> </tr> <tr> <td>Model || w/o Leaf GRU || Stopping Criterion || NLI</td> <td>21.2</td> <td>39.0</td> <td>63.5</td> <td>6.4</td> </tr> <tr> <td>Model || 300D SPINN-NC || Stopping Criterion || NLI</td> <td>19.2</td> <td>36.2</td> <td>70.5</td> <td>6.1</td> </tr> <tr> <td>Model || w/o Leaf GRU || Stopping Criterion || NLI</td> <td>20.6</td> <td>38.9</td> <td>64.1</td> <td>6.3</td> </tr> <tr> <td>Model || 300D ST-Gumbel || Stopping Criterion || NLI</td> <td>32.6</td> <td>37.5</td> <td>23.7</td> <td>4.1</td> </tr> <tr> <td>Model || w/o Leaf GRU || Stopping Criterion || NLI</td> <td>30.8</td> <td>35.6</td> <td>27.5</td> <td>4.6</td> </tr> <tr> <td>Model || 300D RL-SPINN || Stopping Criterion || NLI</td> <td>95.0</td> <td>13.5</td> <td>18.8</td> <td>8.6</td> </tr> <tr> <td>Model || w/o Leaf GRU || Stopping Criterion || NLI</td> <td>99.1</td> <td>10.7</td> <td>18.1</td> <td>8.6</td> </tr> <tr> <td>Model || PRPN-LM || Stopping Criterion || LM</td> <td>25.6</td> <td>26.9</td> <td>45.7</td> <td>4.9</td> </tr> <tr> <td>Model || PRPN-UP || Stopping Criterion || UP</td> <td>19.4</td> <td>41.0</td> <td>46.3</td> <td>4.9</td> </tr> <tr> <td>Model || PRPN-UP || Stopping Criterion || LM</td> <td>19.9</td> <td>37.4</td> <td>48.6</td> <td>4.9</td> </tr> <tr> <td>Model || Random Trees || Stopping Criterion || -</td> <td>27.9</td> <td>28.0</td> <td>27.0</td> <td>4.4</td> </tr> <tr> <td>Model || Balanced Trees || Stopping Criterion || -</td> <td>21.7</td> <td>36.8</td> <td>21.3</td> <td>3.9</td> </tr> </tbody></table>
Table 3
table_3
D18-1544
4
emnlp2018
In addition, Table 3 shows that the PRPN-UP models achieve median parsing F1 scores of 46.3 and 48.6 respectively on the MultiNLI dev set while PRPN-LM achieves a median F1 of 45.7; setting the state of the art in parsing performance on this dataset among latent tree models by a large margin. We conclude that PRPN does acquire some substantial knowledge of syntax, and that this knowledge agrees with Penn Treebank (PTB) grammar significantly better than chance.
[1, 2]
['In addition, Table 3 shows that the PRPN-UP models achieve median parsing F1 scores of 46.3 and 48.6 respectively on the MultiNLI dev set while PRPN-LM achieves a median F1 of 45.7; setting the state of the art in parsing performance on this dataset among latent tree models by a large margin.', 'We conclude that PRPN does acquire some substantial knowledge of syntax, and that this knowledge agrees with Penn Treebank (PTB) grammar significantly better than chance.']
[['PRPN-LM', 'PRPN-UP', 'F1 wrt.', 'SP'], ['PRPN-LM', 'PRPN-UP']]
1
D18-1547table_4
Performance comparison of two different model architectures using a corpus-based evaluation.
1
[['Inform (%)'], ['Success (%)'], ['BLEU']]
2
[['Cam676', 'w/o attention'], ['Cam676', 'w/ attention'], ['MultiWOZ', 'w/o attention'], ['MultiWOZ', 'w/ attention']]
[['99.17', '99.58', '71.29', '71.33'], ['75.08', '73.75', '60.29', '60.96'], ['0.219', '0.204', '0.188', '0.189']]
row
['Inform (%)', 'Success (%)', 'BLEU']
['MultiWOZ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Cam676 || w/o attention</th> <th>Cam676 || w/ attention</th> <th>MultiWOZ || w/o attention</th> <th>MultiWOZ || w/ attention</th> </tr> </thead> <tbody> <tr> <td>Inform (%)</td> <td>99.17</td> <td>99.58</td> <td>71.29</td> <td>71.33</td> </tr> <tr> <td>Success (%)</td> <td>75.08</td> <td>73.75</td> <td>60.29</td> <td>60.96</td> </tr> <tr> <td>BLEU</td> <td>0.219</td> <td>0.204</td> <td>0.188</td> <td>0.189</td> </tr> </tbody></table>
Table 4
table_4
D18-1547
9
emnlp2018
We trained the same neural architecture (taking into account the different number of domains) on both the MultiWOZ and Cam676 datasets. The best results on the Cam676 corpus were obtained with a bidirectional GRU cell. In the case of the MultiWOZ dataset, the LSTM cell serving as a decoder and an encoder achieved the highest score with the global type of attention (Bahdanau et al., 2014). Table 4 presents the results of various model architectures and shows several challenges. As expected, the model achieves an almost perfect score on the Inform metric on the Cam676 dataset, taking advantage of an oracle belief state signal. However, even with perfect dialogue state tracking of the user intent, the baseline models obtain an almost 30% lower score on the Inform metric on the new corpus. The addition of attention improves the score on the Success metric on the new dataset by less than 1%. Nevertheless, as expected, the best model on MultiWOZ still falls behind by a large margin in comparison to the results on the Cam676 corpus, taking into account both the Inform and Success metrics. As most dialogues span at least two domains, the model has to be much more effective in order to execute a successful dialogue. Moreover, the BLEU score on MultiWOZ is lower than the one reported on the Cam676 dataset.
[2, 2, 2, 1, 1, 1, 1, 1, 2, 1]
['We trained the same neural architecture (taking into account the different number of domains) on both the MultiWOZ and Cam676 datasets.', 'The best results on the Cam676 corpus were obtained with a bidirectional GRU cell.', 'In the case of the MultiWOZ dataset, the LSTM cell serving as a decoder and an encoder achieved the highest score with the global type of attention (Bahdanau et al., 2014).', 'Table 4 presents the results of various model architectures and shows several challenges.', 'As expected, the model achieves an almost perfect score on the Inform metric on the Cam676 dataset, taking advantage of an oracle belief state signal.', 'However, even with perfect dialogue state tracking of the user intent, the baseline models obtain an almost 30% lower score on the Inform metric on the new corpus.', 'The addition of attention improves the score on the Success metric on the new dataset by less than 1%.', 'Nevertheless, as expected, the best model on MultiWOZ still falls behind by a large margin in comparison to the results on the Cam676 corpus, taking into account both the Inform and Success metrics.', 'As most dialogues span at least two domains, the model has to be much more effective in order to execute a successful dialogue.', 'Moreover, the BLEU score on MultiWOZ is lower than the one reported on the Cam676 dataset.']
[['Cam676', 'MultiWOZ'], ['Cam676'], ['MultiWOZ'], None, ['Cam676', 'Inform (%)'], ['Inform (%)'], ['w/o attention', 'w/ attention', 'Success (%)'], ['Cam676', 'MultiWOZ', 'Inform (%)', 'Success (%)'], None, ['Cam676', 'MultiWOZ', 'BLEU']]
1
D19-1001table_1
Results on the SHARC test set, averaged over 3 independent runs for GPT2 and BISON, reporting micro accuracy and macro accuracy in terms of the classification task and BLEU-1 and BLEU-4 on instances for which a clarification question was generated. E&D uses no language model pre-training.
2
[['Model', 'E&D'], ['Model', 'E&D+B'], ['Model', 'GPT2'], ['Model', 'BISON']]
1
[['Micro Acc.'], ['Macro Acc.'], ['B-1'], ['B-4']]
[['31.9', '38.9', '17.1', '1.9'], ['54.7', '60.4', '24.3', '4.3'], ['60.4', '65.1', '53.7', '33.9'], ['64.9', '68.8', '61.8', '46.2']]
column
['Micro Acc.', 'Macro Acc.', 'B-1', 'B-4']
['BISON']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro Acc.</th> <th>Macro Acc.</th> <th>B-1</th> <th>B-4</th> </tr> </thead> <tbody> <tr> <td>Model || E&amp;D</td> <td>31.9</td> <td>38.9</td> <td>17.1</td> <td>1.9</td> </tr> <tr> <td>Model || E&amp;D+B</td> <td>54.7</td> <td>60.4</td> <td>24.3</td> <td>4.3</td> </tr> <tr> <td>Model || GPT2</td> <td>60.4</td> <td>65.1</td> <td>53.7</td> <td>33.9</td> </tr> <tr> <td>Model || BISON</td> <td>64.9</td> <td>68.8</td> <td>61.8</td> <td>46.2</td> </tr> </tbody></table>
Table 1
table_1
D19-1001
5
emnlp2019
We submitted the best BISON model out of the three random runs of Table 1 to be evaluated on the hidden test set and report results in comparison to the best model on the leaderboard, E3 (Zhong and Zettlemoyer, 2019), in Table 2. BISON outperforms E3 by 5.6 BLEU-4 points, while it is only slightly worse than E3 in terms of accuracy.
[1, 1]
['We submitted the best BISON model out of the three random runs of Table 1 to be evaluated on the hidden test set and report results in comparison to the best model on the leaderboard, E3 (Zhong and Zettlemoyer, 2019), in Table 2.', 'BISON outperforms E3 by 5.6 BLEU-4 points, while it is only slightly worse than E3 in terms of accuracy.']
[['BISON'], ['BISON']]
1
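The records above pair nested `row_headers`/`column_headers` with a flat `contents` grid, and the `table_html_clean` field flattens multi-level header paths with a ` || ` separator (e.g. `Model || BISON`). A minimal sketch of that reconstruction, using values copied from the D19-1001 record; the cell-lookup helper is our own illustration, not part of the dataset tooling:

```python
# Hypothetical helper: rebuild the flattened labels used in `table_html_clean`
# from the `row_headers`, `column_headers`, and `contents` fields.
row_headers = [['Model', 'E&D'], ['Model', 'E&D+B'], ['Model', 'GPT2'], ['Model', 'BISON']]
column_headers = [['Micro Acc.'], ['Macro Acc.'], ['B-1'], ['B-4']]
contents = [['31.9', '38.9', '17.1', '1.9'],
            ['54.7', '60.4', '24.3', '4.3'],
            ['60.4', '65.1', '53.7', '33.9'],
            ['64.9', '68.8', '61.8', '46.2']]

def flatten(header_path):
    # Multi-level header paths are joined with ' || ', as in the HTML field.
    return ' || '.join(header_path)

# One entry per table cell, keyed by (flattened row label, flattened column label).
cells = {
    (flatten(r), flatten(c)): value
    for r, row in zip(row_headers, contents)
    for c, value in zip(column_headers, row)
}

print(cells[('Model || BISON', 'B-4')])  # -> 46.2
```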
D19-1005table_1
Comparison of masked LM perplexity, Wikidata probing MRR, and number of parameters (in millions) in the masked LM (word piece embeddings, transformer layers, and output layers), KAR, and entity embeddings for BERT and KnowBert. The table also includes the total time to run one forward and backward pass (in seconds) on a TITAN Xp GPU (12 GB RAM) for a batch of 32 sentence pairs with total length 80 word pieces. Due to memory constraints, the BERTLARGE batch is accumulated over two smaller batches.
2
[['System', 'BERTBASE'], ['System', 'BERTLARGE'], ['System', 'KnowBert-Wiki'], ['System', 'KnowBert-WordNet'], ['System', 'KnowBert-W+W']]
2
[['PPL', '-'], ['MRR', 'Wikidata'], ['# params', 'masked LM'], ['# params', 'KAR'], ['# params', 'entity embed.'], ['time', 'Fwd. / Bwd.']]
[['5.5', '0.09', '110', '0', '0', '0.25'], ['4.5', '0.11', '336', '0', '0', '0.75'], ['4.3', '0.26', '110', '2.4', '141', '0.27'], ['4.1', '0.22', '110', '4.9', '265', '0.31'], ['3.5', '0.31', '110', '7.3', '406', '0.33']]
column
['PPL', 'MRR', '# params', '# params', '# params', 'time']
['KnowBert-Wiki', 'KnowBert-WordNet', 'KnowBert-W+W', 'PPL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL || -</th> <th>MRR || Wikidata</th> <th># params || masked LM</th> <th># params || KAR</th> <th># params || entity embed.</th> <th>time || Fwd. / Bwd.</th> </tr> </thead> <tbody> <tr> <td>System || BERTBASE</td> <td>5.5</td> <td>0.09</td> <td>110</td> <td>0</td> <td>0</td> <td>0.25</td> </tr> <tr> <td>System || BERTLARGE</td> <td>4.5</td> <td>0.11</td> <td>336</td> <td>0</td> <td>0</td> <td>0.75</td> </tr> <tr> <td>System || KnowBert-Wiki</td> <td>4.3</td> <td>0.26</td> <td>110</td> <td>2.4</td> <td>141</td> <td>0.27</td> </tr> <tr> <td>System || KnowBert-WordNet</td> <td>4.1</td> <td>0.22</td> <td>110</td> <td>4.9</td> <td>265</td> <td>0.31</td> </tr> <tr> <td>System || KnowBert-W+W</td> <td>3.5</td> <td>0.31</td> <td>110</td> <td>7.3</td> <td>406</td> <td>0.33</td> </tr> </tbody></table>
Table 1
table_1
D19-1005
6
emnlp2019
Perplexity. Table 1 compares masked LM perplexity for KnowBert with BERTBASE and BERTLARGE. To rule out minor differences due to our data preparation, the BERT models are finetuned on our training data before being evaluated. Overall, KnowBert improves the masked LM perplexity, with all KnowBert models outperforming BERTLARGE, despite being derived from BERTBASE.
[2, 1, 2, 1]
['Perplexity.', 'Table 1 compares masked LM perplexity for KnowBert with BERTBASE and BERTLARGE.', 'To rule out minor differences due to our data preparation, the BERT models are finetuned on our training data before being evaluated.', 'Overall, KnowBert improves the masked LM perplexity, with all KnowBert models outperforming BERTLARGE, despite being derived from BERTBASE.']
[None, ['PPL', 'KnowBert-Wiki', 'KnowBert-WordNet', 'KnowBert-W+W', 'BERTBASE', 'BERTLARGE'], None, ['KnowBert-Wiki', 'KnowBert-WordNet', 'KnowBert-W+W', 'masked LM', 'BERTLARGE', 'BERTBASE']]
1
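The `PPL` column in the D19-1005 record above is masked-LM perplexity; under the standard definition this is the exponential of the average negative log-likelihood over the masked tokens. A small illustrative sketch (the helper name and toy probabilities are ours, not from the KnowBert code):

```python
import math

def perplexity(token_log_probs):
    # exp of the mean negative log-likelihood over masked tokens.
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# If every masked token gets probability 0.25, perplexity is 4.
print(round(perplexity([math.log(0.25)] * 4), 6))
```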
D19-1009table_4
WSDGα results on the SICK dataset.
1
[['sense'], ['word']]
1
[['Pearson'], ['Spearman'], ['MSE']]
[['46.5', '43.9', '7.9'], ['39.8', '39.9', '8.6']]
column
['Pearson', 'Spearman', 'MSE']
['sense']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pearson</th> <th>Spearman</th> <th>MSE</th> </tr> </thead> <tbody> <tr> <td>sense</td> <td>46.5</td> <td>43.9</td> <td>7.9</td> </tr> <tr> <td>word</td> <td>39.8</td> <td>39.9</td> <td>8.6</td> </tr> </tbody></table>
Table 4
table_4
D19-1009
10
emnlp2019
Sentence similarity. We used the SICK dataset (Marelli et al., 2014) for this task. It consists of 9841 sentence pairs that had been annotated with relatedness scores on a 5-point rating scale. We used the test split of this dataset that contains 4906 sentence pairs. The aim of this experiment was to test if disambiguated sense vectors can provide a better representation of sentences than word vectors. We used a simple method to test the two representations: it consisted of representing a sentence as the sum of the disambiguated sense vectors in one case and as the sum of word vectors in the other case. Once the sentence representations had been obtained for both methods, the cosine similarity was used to measure their relatedness. The results of this experiment are reported in Table 4 as Pearson and Spearman correlation and Mean Squared Error (MSE). We used the α configuration of our model with Chen2014 to represent senses and BERT-l-u-4 to represent words. As we can see, the simplicity of the method leads to low performances for both representations, but sense vectors correlate better than word vectors.
[2, 2, 2, 2, 2, 2, 2, 1, 1, 1]
['Sentence similarity.', 'We used the SICK dataset (Marelli et al., 2014) for this task.', 'It consists of 9841 sentence pairs that had been annotated with relatedness scores on a 5-point rating scale.', 'We used the test split of this dataset that contains 4906 sentence pairs.', 'The aim of this experiment was to test if disambiguated sense vectors can provide a better representation of sentences than word vectors.', 'We used a simple method to test the two representations: it consisted of representing a sentence as the sum of the disambiguated sense vectors in one case and as the sum of word vectors in the other case.', 'Once the sentence representations had been obtained for both methods, the cosine similarity was used to measure their relatedness.', 'The results of this experiment are reported in Table 4 as Pearson and Spearman correlation and Mean Squared Error (MSE).', 'We used the α configuration of our model with Chen2014 to represent senses and BERT-l-u-4 to represent words.', 'As we can see, the simplicity of the method leads to low performances for both representations, but sense vectors correlate better than word vectors.']
[None, None, None, None, None, None, None, ['Pearson', 'Spearman', 'MSE'], None, ['sense', 'word']]
1
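The D19-1009 record above describes the method concretely: a sentence is the sum of its token (sense or word) vectors, pairs are scored with cosine similarity, and scores are compared to gold ratings via Pearson correlation. A self-contained sketch of that pipeline; the tiny vectors are invented for the example:

```python
import math

def sentence_vector(token_vectors):
    # Sum token (sense or word) vectors component-wise.
    return [sum(dims) for dims in zip(*token_vectors)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

s1 = sentence_vector([[1.0, 0.0], [0.5, 0.5]])  # -> [1.5, 0.5]
s2 = sentence_vector([[0.9, 0.1], [0.4, 0.6]])  # -> [1.3, 0.7]
print(round(cosine(s1, s2), 3))
```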
D19-1010table_3
Performance of different dialog agents on the multi-domain dialog corpus by interacting with the agenda-based user simulator. All the results except “dialog turns” are shown in percentage terms. Real human-human performance computed from the test set (i.e. the last row) serves as the upper bounds.
2
[['Method', 'GP-MBCM'], ['Method', 'ACER'], ['Method', 'PPO'], ['Method', 'ALDM'], ['Method', 'GDPL-sess'], ['Method', 'GDPL-discr'], ['Method', 'GDPL'], ['Method', 'Human']]
2
[['Agenda', 'Turns'], ['Agenda', 'Inform'], ['Agenda', 'Match'], ['Agenda', 'Success']]
[['2.99', '19.04', '44.29', '28.9'], ['10.49', '77.98', '62.83', '50.8'], ['9.83', '83.34', '69.09', '59.1'], ['12.47', '81.20', '62.60', '61.2'], ['7.49', '88.39', '77.56', '76.4'], ['7.86', '93.21', '80.43', '80.5'], ['7.64', '94.97', '83.90', '86.5'], ['7.37', '66.89', '95.29', '75.0']]
column
['F1', 'F1', 'F1', 'F1']
['GDPL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Agenda || Turns</th> <th>Agenda || Inform</th> <th>Agenda || Match</th> <th>Agenda || Success</th> </tr> </thead> <tbody> <tr> <td>Method || GP-MBCM</td> <td>2.99</td> <td>19.04</td> <td>44.29</td> <td>28.9</td> </tr> <tr> <td>Method || ACER</td> <td>10.49</td> <td>77.98</td> <td>62.83</td> <td>50.8</td> </tr> <tr> <td>Method || PPO</td> <td>9.83</td> <td>83.34</td> <td>69.09</td> <td>59.1</td> </tr> <tr> <td>Method || ALDM</td> <td>12.47</td> <td>81.20</td> <td>62.60</td> <td>61.2</td> </tr> <tr> <td>Method || GDPL-sess</td> <td>7.49</td> <td>88.39</td> <td>77.56</td> <td>76.4</td> </tr> <tr> <td>Method || GDPL-discr</td> <td>7.86</td> <td>93.21</td> <td>80.43</td> <td>80.5</td> </tr> <tr> <td>Method || GDPL</td> <td>7.64</td> <td>94.97</td> <td>83.90</td> <td>86.5</td> </tr> <tr> <td>Method || Human</td> <td>7.37</td> <td>66.89</td> <td>95.29</td> <td>75.0</td> </tr> </tbody></table>
Table 3
table_3
D19-1010
6
emnlp2019
The performance of each approach that interacts with the agenda-based user simulator is shown in Table 3. GDPL achieves extremely high performance in the task success on account of the substantial improvement in inform F1 and match rate over the baselines. Since the reward estimator of GDPL evaluates state-action pairs, it can always guide the dialog policy during the conversation, thus leading the dialog policy to a successful strategy, which also indirectly demonstrates that the reward estimator has learned a reasonable reward at each dialog turn. Surprisingly, GDPL even outperforms humans in completing the task, and its average dialog turns are close to those of humans, though GDPL is inferior in terms of match rate. Humans almost manage to make a reservation in each session, which contributes to high task success. However, it is also interesting to find that humans have low inform F1, and that may explain why the task is not always completed successfully. Actually, there is high recall (86.75%) but low precision (54.43%) in human dialogs when answering the requested information. This is possibly because during data collection human users forget to ask for all required information of the task, as reported in (Su et al., 2016).
[1, 1, 2, 1, 1, 1, 2]
['The performance of each approach that interacts with the agenda-based user simulator is shown in Table 3.', 'GDPL achieves extremely high performance in the task success on account of the substantial improvement in inform F1 and match rate over the baselines.', 'Since the reward estimator of GDPL evaluates state-action pairs, it can always guide the dialog policy during the conversation, thus leading the dialog policy to a successful strategy, which also indirectly demonstrates that the reward estimator has learned a reasonable reward at each dialog turn.', 'Surprisingly, GDPL even outperforms humans in completing the task, and its average dialog turns are close to those of humans, though GDPL is inferior in terms of match rate. Humans almost manage to make a reservation in each session, which contributes to high task success.', 'However, it is also interesting to find that humans have low inform F1, and that may explain why the task is not always completed successfully.', 'Actually, there is high recall (86.75%) but low precision (54.43%) in human dialogs when answering the requested information.', 'This is possibly because during data collection human users forget to ask for all required information of the task, as reported in (Su et al., 2016).']
[None, ['GDPL', 'Success'], ['GDPL'], ['GDPL', 'Human', 'Match'], ['Human'], ['Human'], None]
1
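The description in the D19-1010 record reports human inform recall of 86.75% and precision of 54.43%; taking their harmonic mean reproduces the 66.89 Inform figure in the Human row of the table above. A quick consistency check, written as our own sketch rather than code from the paper:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall (inform F1).
    return 2 * precision * recall / (precision + recall)

# Human inform P/R from the record's description reproduce the table value.
print(round(f1(54.43, 86.75), 2))  # -> 66.89
```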
D19-1020table_3
Main results on PGR test set. † denotes previous numbers rounded into 3 significant digits. * and ** indicate significance over DEPTREE at p < 0.05 and p < 0.01 with 1000 bootstrap tests.
2
[['Model', 'BO-LSTM (Lamurias et al., 2019)†'], ['Model', 'BioBERT (Lee et al., 2019)†'], ['Model', 'TEXTONLY'], ['Model', 'DEPTREE'], ['Model', 'KBESTEISNERPS'], ['Model', 'EDGEWISEPS']]
1
[['F1 score']]
[['52.3'], ['67.2'], ['76.0'], ['78.9'], ['83.6*'], ['85.7**']]
column
['F1 Score']
['EDGEWISEPS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 score</th> </tr> </thead> <tbody> <tr> <td>Model || BO-LSTM (Lamurias et al., 2019)†</td> <td>52.3</td> </tr> <tr> <td>Model || BioBERT (Lee et al., 2019)†</td> <td>67.2</td> </tr> <tr> <td>Model || TEXTONLY</td> <td>76.0</td> </tr> <tr> <td>Model || DEPTREE</td> <td>78.9</td> </tr> <tr> <td>Model || KBESTEISNERPS</td> <td>83.6*</td> </tr> <tr> <td>Model || EDGEWISEPS</td> <td>85.7**</td> </tr> </tbody></table>
Table 3
table_3
D19-1020
8
emnlp2019
7.8 Main results on PGR. Table 3 shows the comparison with previous work on the PGR test set, where our models are significantly better than the existing models. This is likely because the previous models do not utilize all the information from inputs: BO-LSTM only takes the words (without arc labels) along the shortest dependency path between the target mentions; the pretrained weights of BioBERT are kept constant during training for relation extraction.
[2, 1, 2]
['7.8 Main results on PGR.', 'Table 3 shows the comparison with previous work on the PGR test set, where our models are significantly better than the existing models.', 'This is likely because the previous models do not utilize all the information from inputs: BO-LSTM only takes the words (without arc labels) along the shortest dependency path between the target mentions; the pretrained weights of BioBERT are kept constant during training for relation extraction.']
[None, ['EDGEWISEPS'], ['BO-LSTM (Lamurias et al., 2019)†', 'BioBERT (Lee et al., 2019)†']]
1
D19-1021table_1
Precision, recall and F1 results (%) for different models. The first two models are baselines. The next five models are different variants of our model.
2
[['Approach', 'VAE'], ['Approach', 'RW-HAC'], ['Approach', 'SN-HAC'], ['Approach', 'SN-L'], ['Approach', 'SN-L+V'], ['Approach', 'SN-L+C'], ['Approach', 'SN-L+CV1']]
1
[['P'], ['R'], ['F1'], ['P'], ['R'], ['F1']]
[['17.9', '69.7', '28.5', '17.9', '69.7', '28.5'], ['31.8', '46', '37.6', '31.8', '46.0', '37.6'], ['36.2', '53.3', '43.1', '34.5', '53.3', '41.5'], ['36.5', '69.2', '47.8', '34.6', '59.8', '43.9'], ['46.1', '77.3', '57.8', '40.7', '52.4', '45.8'], ['47.1', '78.1', '58.8', '42.3', '66.0', '51.5'], ['48.9', '77.5', '59.9', '40.8', '74.0', '52.6']]
column
['P', 'R', 'F1', 'P', 'R', 'F1']
['SN-HAC', 'SN-L', 'SN-L+V', 'SN-L+C', 'SN-L+CV1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || VAE</td> <td>17.9</td> <td>69.7</td> <td>28.5</td> <td>17.9</td> <td>69.7</td> <td>28.5</td> </tr> <tr> <td>Approach || RW-HAC</td> <td>31.8</td> <td>46</td> <td>37.6</td> <td>31.8</td> <td>46.0</td> <td>37.6</td> </tr> <tr> <td>Approach || SN-HAC</td> <td>36.2</td> <td>53.3</td> <td>43.1</td> <td>34.5</td> <td>53.3</td> <td>41.5</td> </tr> <tr> <td>Approach || SN-L</td> <td>36.5</td> <td>69.2</td> <td>47.8</td> <td>34.6</td> <td>59.8</td> <td>43.9</td> </tr> <tr> <td>Approach || SN-L+V</td> <td>46.1</td> <td>77.3</td> <td>57.8</td> <td>40.7</td> <td>52.4</td> <td>45.8</td> </tr> <tr> <td>Approach || SN-L+C</td> <td>47.1</td> <td>78.1</td> <td>58.8</td> <td>42.3</td> <td>66.0</td> <td>51.5</td> </tr> <tr> <td>Approach || SN-L+CV1</td> <td>48.9</td> <td>77.5</td> <td>59.9</td> <td>40.8</td> <td>74.0</td> <td>52.6</td> </tr> </tbody></table>
Table 1
table_1
D19-1021
7
emnlp2019
Experimental Result Analysis. Table 1 shows the experimental results, from which we can observe that: (1) RSN models outperform all baseline models on precision, recall, and F1-score, among which Weakly-supervised RSN (SN-L+CV) achieves state-of-the-art performances. This indicates that RSN is capable of understanding new relations’ semantic meanings within sentences. (2) Supervised and distantly-supervised relational representations improve clustering performances. Compared with RW-HAC, SN-HAC achieves better clustering results because of its supervised relational representation and similarity metric. Specifically, unsupervised baselines mainly use sparse one-hot features. RW-HAC uses word embeddings, but integrates them in a rule-based way. In contrast, RSN uses distributed feature representations, and can optimize the information integration process according to supervision. (3) Louvain outperforms HAC for clustering with RSN, comparing SN-HAC with SN-L. One explanation is that our model does not put additional constraints on the prior distribution of relational vectors, and therefore the relation clusters might have odd shapes in violation of HAC’s assumption. Moreover, when representations are not distinguishable enough, forcing HAC to find fine-grained clusters may harm recall while contributing minimally to precision. In practice, we do observe that the number of relations SN-L extracts is constantly less than the true number 16.
[2, 1, 1, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2]
['Experimental Result Analysis.', 'Table 1 shows the experimental results, from which we can observe that:', '(1) RSN models outperform all baseline models on precision, recall, and F1-score, among which Weakly-supervised RSN (SN-L+CV) achieves state-of-the-art performances.', 'This indicates that RSN is capable of understanding new relations’ semantic meanings within sentences.', '(2) Supervised and distantly-supervised relational representations improve clustering performances.', 'Compared with RW-HAC, SN-HAC achieves better clustering results because of its supervised relational representation and similarity metric.', 'Specifically, unsupervised baselines mainly use sparse one-hot features.', 'RW-HAC uses word embeddings, but integrates them in a rule-based way.', 'In contrast, RSN uses distributed feature representations, and can optimize the information integration process according to supervision.', '(3) Louvain outperforms HAC for clustering with RSN, comparing SN-HAC with SN-L.', 'One explanation is that our model does not put additional constraints on the prior distribution of relational vectors, and therefore the relation clusters might have odd shapes in violation of HAC’s assumption.', 'Moreover, when representations are not distinguishable enough, forcing HAC to find fine-grained clusters may harm recall while contributing minimally to precision.', 'In practice, we do observe that the number of relations SN-L extracts is constantly less than the true number 16.']
[None, None, ['SN-HAC', 'SN-L', 'SN-L+V', 'SN-L+C', 'SN-L+CV1', 'P', 'R', 'F1'], None, None, ['RW-HAC', 'SN-HAC'], None, ['RW-HAC'], None, ['SN-HAC', 'SN-L'], None, ['SN-HAC'], ['SN-L']]
1
D19-1022table_1
Micro-averaged precision (P), recall (R) and F1 score on TACRED dataset. †, ‡ and †† mark the results reported in (Zhang et al., 2017), (Zhang et al., 2018) and (Bilan and Roth, 2018) respectively. ∗ marks statistically significant improvements over Self-attn with p < 0.01 under one-tailed t-test.
2
[['Model', 'CNN'], ['Model', 'CNN-PE'], ['Model', 'GCN'], ['Model', 'LSTM'], ['Model', 'PA-LSTM'], ['Model', 'C-GCN'], ['Model', 'Self-attn'], ['Model', 'Knwl-attn'], ['Model', 'Knwl+Self (MCA)'], ['Model', 'Knwl+Self (SI)'], ['Model', ' Know+Self (KISA)']]
1
[['P'], ['R'], ['F1']]
[['72.1', '50.3', '59.2'], ['68.2', '55.4', '61.1'], ['69.8', '59', '64'], ['61.4', '61.7', '61.5'], ['65.7', '64.5', '65.1'], ['69.9', '63.3', '66.4'], ['64.6', '68.6', '66.5'], ['70', '63.1', '66.4'], ['68.4', '66.1', '67.3*'], ['67.1', '68.4', '67.8*'], ['69.4', '66', '67.7*']]
column
['P', 'R', 'F1']
['Knwl-attn']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>72.1</td> <td>50.3</td> <td>59.2</td> </tr> <tr> <td>Model || CNN-PE</td> <td>68.2</td> <td>55.4</td> <td>61.1</td> </tr> <tr> <td>Model || GCN</td> <td>69.8</td> <td>59</td> <td>64</td> </tr> <tr> <td>Model || LSTM</td> <td>61.4</td> <td>61.7</td> <td>61.5</td> </tr> <tr> <td>Model || PA-LSTM</td> <td>65.7</td> <td>64.5</td> <td>65.1</td> </tr> <tr> <td>Model || C-GCN</td> <td>69.9</td> <td>63.3</td> <td>66.4</td> </tr> <tr> <td>Model || Self-attn</td> <td>64.6</td> <td>68.6</td> <td>66.5</td> </tr> <tr> <td>Model || Knwl-attn</td> <td>70</td> <td>63.1</td> <td>66.4</td> </tr> <tr> <td>Model || Knwl+Self (MCA)</td> <td>68.4</td> <td>66.1</td> <td>67.3*</td> </tr> <tr> <td>Model || Knwl+Self (SI)</td> <td>67.1</td> <td>68.4</td> <td>67.8*</td> </tr> <tr> <td>Model || Know+Self (KISA)</td> <td>69.4</td> <td>66</td> <td>67.7*</td> </tr> </tbody></table>
Table 1
table_1
D19-1022
7
emnlp2019
5.3 Results and Analysis. 5.3.1 Results on TACRED dataset. Table 1 shows the results of baselines as well as our proposed models on the TACRED dataset. It is observed that our proposed knowledge-attention encoder outperforms all CNN-based and RNN-based models by at least 1.3 F1. Meanwhile, it achieves comparable results with the C-GCN and self-attention encoders, which are the current state-of-the-art single-model systems.
[2, 2, 1, 1, 1]
['5.3 Results and Analysis.', '5.3.1 Results on TACRED dataset.', 'Table 1 shows the results of baselines as well as our proposed models on the TACRED dataset.', 'It is observed that our proposed knowledge-attention encoder outperforms all CNN-based and RNN-based models by at least 1.3 F1.', 'Meanwhile, it achieves comparable results with the C-GCN and self-attention encoders, which are the current state-of-the-art single-model systems.']
[None, None, None, ['Knwl-attn', 'CNN', 'LSTM', 'F1'], ['C-GCN', 'Self-attn']]
1
D19-1025table_2
Performance (%) on low-resource languages.
1
[['CNN-CRFs'], ['BiLSTM-CRFs'], ['Trans-CRFs'], ['BiLSTM-PCRFs'], ['Ours']]
2
[['CY', 'P'], ['CY', 'R'], ['CY', 'F1'], ['BN', 'P'], ['BN', 'R'], ['BN', 'F1'], ['YO', 'P'], ['YO', 'R'], ['YO', 'F1'], ['MN', 'P'], ['MN', 'R'], ['MN', 'F1'], ['ARZ', 'P'], ['ARZ', 'R'], ['ARZ', 'F1']]
[['84.4', '76.2', '80.1', '92', '89.1', '90.5', '80.9', '68.9', '74.4', '87.3', '85.5', '86.3', '88.6', '86.7', '87.6'], ['86', '77.8', '81.6', '93.3', '91.5', '92.3', '74.1', '68.9', '71.3', '89', '85.5', '87.1', '89.5', '88.5', '89'], ['83.7', '73.2', '78.1', '93', '85.9', '89.3', '80.2', '60.5', '69', '88', '80', '83.8', '88.9', '83.2', '85.9'], ['85.2', '79.6', '82.3', '91.2', '92.7', '91.9', '68.1', '70.2', '69.1', '82.5', '91.2', '86.6', '84', '90.7', '87.1'], ['82.8', '82.5', '82.6', '93.4', '93.5', '93.4', '73.5', '76.8', '75.1', '86.9', '93.6', '90.1', '87.7', '91.5', '89.5']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CY || P</th> <th>CY || R</th> <th>CY || F1</th> <th>BN || P</th> <th>BN || R</th> <th>BN || F1</th> <th>YO || P</th> <th>YO || R</th> <th>YO || F1</th> <th>MN || P</th> <th>MN || R</th> <th>MN || F1</th> <th>ARZ || P</th> <th>ARZ || R</th> <th>ARZ || F1</th> </tr> </thead> <tbody> <tr> <td>CNN-CRFs</td> <td>84.4</td> <td>76.2</td> <td>80.1</td> <td>92</td> <td>89.1</td> <td>90.5</td> <td>80.9</td> <td>68.9</td> <td>74.4</td> <td>87.3</td> <td>85.5</td> <td>86.3</td> <td>88.6</td> <td>86.7</td> <td>87.6</td> </tr> <tr> <td>BiLSTM-CRFs</td> <td>86</td> <td>77.8</td> <td>81.6</td> <td>93.3</td> <td>91.5</td> <td>92.3</td> <td>74.1</td> <td>68.9</td> <td>71.3</td> <td>89</td> <td>85.5</td> <td>87.1</td> <td>89.5</td> <td>88.5</td> <td>89</td> </tr> <tr> <td>Trans-CRFs</td> <td>83.7</td> <td>73.2</td> <td>78.1</td> <td>93</td> <td>85.9</td> <td>89.3</td> <td>80.2</td> <td>60.5</td> <td>69</td> <td>88</td> <td>80</td> <td>83.8</td> <td>88.9</td> <td>83.2</td> <td>85.9</td> </tr> <tr> <td>BiLSTM-PCRFs</td> <td>85.2</td> <td>79.6</td> <td>82.3</td> <td>91.2</td> <td>92.7</td> <td>91.9</td> <td>68.1</td> <td>70.2</td> <td>69.1</td> <td>82.5</td> <td>91.2</td> <td>86.6</td> <td>84</td> <td>90.7</td> <td>87.1</td> </tr> <tr> <td>Ours</td> <td>82.8</td> <td>82.5</td> <td>82.6</td> <td>93.4</td> <td>93.5</td> <td>93.4</td> <td>73.5</td> <td>76.8</td> <td>75.1</td> <td>86.9</td> <td>93.6</td> <td>90.1</td> <td>87.7</td> <td>91.5</td> <td>89.5</td> </tr> </tbody></table>
Table 2
table_2
D19-1025
7
emnlp2019
6.2 Results on Low-Resource Languages . Table 2 shows the overall performance of our proposed model as well as the baseline methods (P and R denote Precision and Recall). We can see:. Our method consistently outperforms all baselines in five languages w.r.t F1, mainly because we greatly improve recall (2.7% to 9.34% on average) by taking best advantage of WL data and being robust to noise via two modules. As for the precision, partial-CRFs perform poorly compared with CRFs due to the uncertainty of unlabeled words, while our method alleviates this issue by introducing linguistic features in non-entity sampling. An exception occurs in CY, because it has the most training data, which may bring more accurate information than sampling. Actually, we can tune hyper-parameter non-entity ratio ? to improve precision, more studies can be found in Section 6.5. Besides, the sampling technique can utilize more prior features if available, we leave it in future.
[2, 1, 2, 1, 1, 2, 2, 2]
['6.2 Results on Low-Resource Languages .', 'Table 2 shows the overall performance of our proposed model as well as the baseline methods (P and R denote Precision and Recall).', 'We can see:.', 'Our method consistently outperforms all baselines in five languages w.r.t F1, mainly because we greatly improve recall (2.7% to 9.34% on average) by taking best advantage of WL data and being robust to noise via two modules.', 'As for the precision, partial-CRFs perform poorly compared with CRFs due to the uncertainty of unlabeled words, while our method alleviates this issue by introducing linguistic features in non-entity sampling.', 'An exception occurs in CY, because it has the most training data, which may bring more accurate information than sampling.', 'Actually, we can tune hyper-parameter non-entity ratio ? to improve precision, more studies can be found in Section 6.5.', 'Besides, the sampling technique can utilize more prior features if available, we leave it in future.']
[None, ['Ours', 'P', 'R'], None, ['Ours', 'F1'], ['BiLSTM-PCRFs', 'CNN-CRFs', 'BiLSTM-CRFs', 'Trans-CRFs'], ['CY'], None, None]
1
D19-1026table_3
Performance Comparison on Cross-domain Datasets using F1 score (%). The best results are in bold. Note that our own results all retain two decimal places. Other results with uncertain amount of decimal places are directly retrieved from their original paper.
2
[['System', 'AIDA (Hoffart et al., 2011)'], ['System', 'GLOW (Ratinov et al., 2011)'], ['System', 'RI (Cheng and Roth, 2013)'], ['System', 'WNED (Guo and Barbosa, 2016)'], ['System', 'Deep-ED (Ganea and Hofmann, 2017)'], ['System', 'Ment-Norm (Le and Titov, 2018)'], ['System', 'Prior (p(e|m)) (Ganea and Hofmann, 2017)'], ['System', 'Berkeley-CNN (Section 2.2)'], ['System', 'Berkeley-CNN + DCA-SL'], ['System', 'Berkeley-CNN + DCA-RL'], ['System', 'ETHZ-Attn (Section 2.2)'], ['System', 'ETHZ-Attn + DCA-SL'], ['System', 'ETHZ-Attn + DCA-RL']]
1
[['MSBNC'], ['AQUAINT'], ['ACE2004'], ['CWEB'], ['WIKI']]
[['79', '56', '80', '58.6', '63'], ['75', '83', '82', '56.2', '67.2'], ['90', '90', '86', '67.5', '73.4'], ['92', '87', '88', '77', '84.5'], ['93.7', '88.5', '88.5', '77.9', '77.5'], ['93.9', '88.3', '89.9', '77.5', '78.0'], ['89.3', '83.2', '84.4', '69.8', '64.2'], ['89.05', '80.55', '87.32', '67.97', '60.27'], ['93.38 ± 0.2', '85.63 ± 0.3', '88.73 ± 0.3', '71.01 ± 0.1', '72.55 ± 0.2'], ['93.65 ± 0.2', '88.53 ± 0.3', '89.73 ± 0.4', '72.66 ± 0.4', '73.98 ± 0.2'], ['91.97', '84.06', '86.92', '70.07', '74.37'], ['94.57 ± 0.2', '87.38 ± 0.5', '89.44 ± 0.4', '73.47 ± 0.1', '78.16 ± 0.1'], ['93.80 ± 0.0', '88.25 ± 0.4', '90.14 ± 0.0', '75.59 ± 0.3', '78.84 ± 0.2']]
column
['F1', 'F1', 'F1', 'F1', 'F1']
['Berkeley-CNN + DCA-SL', 'Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-SL', 'ETHZ-Attn + DCA-RL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSBNC</th> <th>AQUAINT</th> <th>ACE2004</th> <th>CWEB</th> <th>WIKI</th> </tr> </thead> <tbody> <tr> <td>System || AIDA (Hoffart et al., 2011)</td> <td>79</td> <td>56</td> <td>80</td> <td>58.6</td> <td>63</td> </tr> <tr> <td>System || GLOW (Ratinov et al., 2011)</td> <td>75</td> <td>83</td> <td>82</td> <td>56.2</td> <td>67.2</td> </tr> <tr> <td>System || RI (Cheng and Roth, 2013)</td> <td>90</td> <td>90</td> <td>86</td> <td>67.5</td> <td>73.4</td> </tr> <tr> <td>System || WNED (Guo and Barbosa, 2016)</td> <td>92</td> <td>87</td> <td>88</td> <td>77</td> <td>84.5</td> </tr> <tr> <td>System || Deep-ED (Ganea and Hofmann, 2017)</td> <td>93.7</td> <td>88.5</td> <td>88.5</td> <td>77.9</td> <td>77.5</td> </tr> <tr> <td>System || Ment-Norm (Le and Titov, 2018)</td> <td>93.9</td> <td>88.3</td> <td>89.9</td> <td>77.5</td> <td>78.0</td> </tr> <tr> <td>System || Prior (p(e|m)) (Ganea and Hofmann, 2017)</td> <td>89.3</td> <td>83.2</td> <td>84.4</td> <td>69.8</td> <td>64.2</td> </tr> <tr> <td>System || Berkeley-CNN (Section 2.2)</td> <td>89.05</td> <td>80.55</td> <td>87.32</td> <td>67.97</td> <td>60.27</td> </tr> <tr> <td>System || Berkeley-CNN + DCA-SL</td> <td>93.38 ± 0.2</td> <td>85.63 ± 0.3</td> <td>88.73 ± 0.3</td> <td>71.01 ± 0.1</td> <td>72.55 ± 0.2</td> </tr> <tr> <td>System || Berkeley-CNN + DCA-RL</td> <td>93.65 ± 0.2</td> <td>88.53 ± 0.3</td> <td>89.73 ± 0.4</td> <td>72.66 ± 0.4</td> <td>73.98 ± 0.2</td> </tr> <tr> <td>System || ETHZ-Attn (Section 2.2)</td> <td>91.97</td> <td>84.06</td> <td>86.92</td> <td>70.07</td> <td>74.37</td> </tr> <tr> <td>System || ETHZ-Attn + DCA-SL</td> <td>94.57 ± 0.2</td> <td>87.38 ± 0.5</td> <td>89.44 ± 0.4</td> <td>73.47 ± 0.1</td> <td>78.16 ± 0.1</td> </tr> <tr> <td>System || ETHZ-Attn + DCA-RL</td> <td>93.80 ± 0.0</td> <td>88.25 ± 0.4</td> <td>90.14 ± 0.0</td> <td>75.59 ± 0.3</td> <td>78.84 ± 0.2</td> </tr> </tbody></table>
Table 3
table_3
D19-1026
6
emnlp2019
Table 3 shows the results on the five cross-domain datasets. As shown, none of the existing methods can consistently win on all datasets. DCA-based models achieve state-of-the-art performance on the MSBNC and the ACE2004 datasets. On the remaining datasets, DCA-RL achieves comparable performance with other complex global models. In addition, RL-based models show on average a 1.1% improvement in F1 score over the SL-based models across all the cross-domain datasets. At the same time, DCA-based methods are much more efficient, both in time complexity and in resource requirements. Detailed efficiency analysis will be presented in the following sections.
[1, 1, 1, 1, 1, 2, 2]
['Table 3 shows the results on the five cross-domain datasets.', 'As shown, none of the existing methods can consistently win on all datasets.', 'DCA-based models achieve state-of-the-art performance on the MSBNC and the ACE2004 datasets.', 'On the remaining datasets, DCA-RL achieves comparable performance with other complex global models.', 'In addition, RL-based models show on average a 1.1% improvement in F1 score over the SL-based models across all the cross-domain datasets.', 'At the same time, DCA-based methods are much more efficient, both in time complexity and in resource requirements.', 'Detailed efficiency analysis will be presented in the following sections.']
[None, None, ['Berkeley-CNN + DCA-SL', 'Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-SL', 'ETHZ-Attn + DCA-RL'], ['Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-RL'], ['Berkeley-CNN + DCA-RL', 'Berkeley-CNN + DCA-SL'], ['Berkeley-CNN + DCA-SL', 'Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-SL', 'ETHZ-Attn + DCA-RL'], None]
1
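The description above states that RL-based models gain 1.1% F1 on average over the SL-based ones across the cross-domain datasets; that figure can be reproduced directly from the table cells. A minimal sketch (mean scores copied from the table with the ± terms dropped; the variable names are mine, not from the paper):

```python
# Scores from Table 3 of D19-1026, ordered MSBNC, AQUAINT, ACE2004, CWEB, WIKI.
berkeley_sl = [93.38, 85.63, 88.73, 71.01, 72.55]
berkeley_rl = [93.65, 88.53, 89.73, 72.66, 73.98]
ethz_sl = [94.57, 87.38, 89.44, 73.47, 78.16]
ethz_rl = [93.80, 88.25, 90.14, 75.59, 78.84]

# RL-over-SL gain for each local model on each dataset, then averaged.
deltas = [r - s
          for sl, rl in [(berkeley_sl, berkeley_rl), (ethz_sl, ethz_rl)]
          for s, r in zip(sl, rl)]
avg_gain = sum(deltas) / len(deltas)
print(round(avg_gain, 2))  # close to the 1.1 reported in the description
```

The average works out to about 1.09, matching the "on average 1.1%" claim once rounded.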
D19-1026table_4
Ablation Study on Neighbor Entities. We compare the performance of DCA with or without neighbor entities (i.e., 2-hop vs. 1-hop).
2
[['System', 'ETHZ-Attn (Section 2.2)'], ['System', 'ETHZ-Attn + 1-hop DCA'], ['System', 'ETHZ-Attn + 2-hop DCA']]
2
[['In-KB acc. (%)', 'SL'], ['In-KB acc. (%)', ' RL']]
[[' 90.88', ' -'], [' 93.69', ' 93.20'], [' 94.47', ' 93.76']]
column
['In-KB acc. (%)', 'In-KB acc. (%)']
['ETHZ-Attn + 1-hop DCA', 'ETHZ-Attn + 2-hop DCA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>In-KB acc. (%) || SL</th> <th>In-KB acc. (%) || RL</th> </tr> </thead> <tbody> <tr> <td>System || ETHZ-Attn (Section 2.2)</td> <td>90.88</td> <td>-</td> </tr> <tr> <td>System || ETHZ-Attn + 1-hop DCA</td> <td>93.69</td> <td>93.20</td> </tr> <tr> <td>System || ETHZ-Attn + 2-hop DCA</td> <td>94.47</td> <td>93.76</td> </tr> </tbody></table>
Table 4
table_4
D19-1026
6
emnlp2019
2. Effect of neighbor entities. In contrast to traditional global models, we include both previously linked entities and their close neighbors as global signal. Table 4 shows the effectiveness of this strategy. We observe that incorporating these neighbors significantly improves the performance (compared to 1-hop) by introducing more related information. Our analysis shows that the average relative improvements of 0.72% and 3.56% of 2-hop DCA-(SL/RL) over 1-hop DCA-(SL/RL) and baseline-SL (without DCA), respectively, are statistically significant (P-value < 0.005). This is consistent with our design of DCA.
[2, 2, 1, 1, 1, 2]
['2. Effect of neighbor entities.', 'In contrast to traditional global models, we include both previously linked entities and their close neighbors as global signal.', 'Table 4 shows the effectiveness of this strategy.', 'We observe that incorporating these neighbors significantly improves the performance (compared to 1-hop) by introducing more related information.', 'Our analysis shows that the average relative improvements of 0.72% and 3.56% of 2-hop DCA-(SL/RL) over 1-hop DCA-(SL/RL) and baseline-SL (without DCA), respectively, are statistically significant (P-value < 0.005).', 'This is consistent with our design of DCA.']
[None, None, None, ['ETHZ-Attn + 1-hop DCA'], ['ETHZ-Attn + 1-hop DCA', 'ETHZ-Attn + 2-hop DCA'], None]
1
D19-1028table_2
Overall results for entity set expansion on Google Web 1T, where Ours full is the full version of our method, Ours-MCTS is our method with the MCTS disabled, and Ours-PMSN is our method but replacing the PMSN with fixed word embeddings. * indicates COB using the human feedback for seed entity selection.
2
[['Method', 'POS'], ['Method', 'MEB'], ['Method', 'COB*'], ['Method', 'Ours full'], ['Method', 'Ours -MCTS'], ['Method', 'Ours -PMSN']]
1
[['P@10'], ['P@20'], ['P@50'], ['P@100'], ['P@200'], ['MAP']]
[['0.84', '0.74', '0.55', '0.41', '0.34', '0.42'], ['0.83', '0.79', '0.68', '0.58', '0.51', '-'], ['0.97', '0.96', '0.9', '0.79', '0.66', '0.85'], ['0.97', '0.96', '0.92', '0.82', '0.69', '0.87'], ['0.85', '0.81', '0.73', '0.63', '0.52', '0.75'], ['0.63', '0.6', '0.56', '0.48', '0.42', '0.61']]
column
['P@10', 'P@20', 'P@50', 'P@100', 'P@200', 'MAP']
['Ours full']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@10</th> <th>P@20</th> <th>P@50</th> <th>P@100</th> <th>P@200</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Method || POS</td> <td>0.84</td> <td>0.74</td> <td>0.55</td> <td>0.41</td> <td>0.34</td> <td>0.42</td> </tr> <tr> <td>Method || MEB</td> <td>0.83</td> <td>0.79</td> <td>0.68</td> <td>0.58</td> <td>0.51</td> <td>-</td> </tr> <tr> <td>Method || COB*</td> <td>0.97</td> <td>0.96</td> <td>0.9</td> <td>0.79</td> <td>0.66</td> <td>0.85</td> </tr> <tr> <td>Method || Ours full</td> <td>0.97</td> <td>0.96</td> <td>0.92</td> <td>0.82</td> <td>0.69</td> <td>0.87</td> </tr> <tr> <td>Method || Ours -MCTS</td> <td>0.85</td> <td>0.81</td> <td>0.73</td> <td>0.63</td> <td>0.52</td> <td>0.75</td> </tr> <tr> <td>Method || Ours -PMSN</td> <td>0.63</td> <td>0.6</td> <td>0.56</td> <td>0.48</td> <td>0.42</td> <td>0.61</td> </tr> </tbody></table>
Table 2
table_2
D19-1028
7
emnlp2019
5.2 Experimental Results . Comparison with three baseline methods on Google Web 1T. Table 2 shows the performance of different bootstrapping methods on Google Web 1T. We can see that our full model outperforms the three baseline methods: compared with POS, our method achieves 41% improvement in P@100, 35% improvement in P@200 and 45% improvement in MAP; compared with MEB, our method achieves 24% improvement in P@100 and 18% improvement in P@200; compared with COB, our method achieves 3% improvement in both P@100 and P@200 metrics, and 2% improvement in MAP. The above findings indicate that our method can extract more correct entities with higher ranking scores than the baselines.
[2, 2, 1, 1, 1]
['5.2 Experimental Results .', 'Comparison with three baseline methods on Google Web 1T.', 'Table 2 shows the performance of different bootstrapping methods on Google Web 1T.', 'We can see that our full model outperforms the three baseline methods: compared with POS, our method achieves 41% improvement in P@100, 35% improvement in P@200 and 45% improvement in MAP; compared with MEB, our method achieves 24% improvement in P@100 and 18% improvement in P@200; compared with COB, our method achieves 3% improvement in both P@100 and P@200 metrics, and 2% improvement in MAP.', 'The above findings indicate that our method can extract more correct entities with higher ranking scores than the baselines.']
[None, None, None, ['Ours full', 'MAP', 'MEB', 'COB*', 'P@100', 'P@200'], ['Ours full']]
1
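The P@N and MAP columns in the table above follow the standard ranked-retrieval definitions. A small self-contained sketch of one common variant (AP averaged over the ranks that hold relevant items; the function names and the toy relevance list are mine, not from the paper, and MAP normalization conventions vary across papers):

```python
def precision_at_k(relevance, k):
    """P@k: fraction of the top-k ranked items that are relevant (1) vs. not (0)."""
    return sum(relevance[:k]) / k

def average_precision(relevance):
    """AP: mean of P@i taken at every rank i holding a relevant item."""
    hits, precisions = 0, []
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0

# Toy run: relevant items sit at ranks 1, 3 and 4.
ranked = [1, 0, 1, 1, 0]
print(precision_at_k(ranked, 2))            # 0.5
print(round(average_precision(ranked), 4))  # 0.8056
```

MAP in the table is then the mean of AP over the expansion runs, and P@N over the top-N extracted entities.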
D19-1030table_7
Event Argument Role Labeling results (F1 %) on Chinese and Arabic using English as training data (with system generated entity mentions)
2
[['Target Language', 'Chinese'], ['Target Language', 'Arabic']]
1
[['F1 Score']]
[['56.9'], ['60.1']]
column
['F1 Score']
['Chinese', 'Arabic']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 Score</th> </tr> </thead> <tbody> <tr> <td>Target Language || Chinese</td> <td>56.9</td> </tr> <tr> <td>Target Language || Arabic</td> <td>60.1</td> </tr> </tbody></table>
Table 7
table_7
D19-1030
6
emnlp2019
Table 7 shows the results of event argument role labeling on Chinese and Arabic entity mentions automatically extracted by Stanford CoreNLP instead of manually annotated mentions. The system-extracted entity mentions introduce noise and thus decrease the performance of the model, but the overall results are still promising.
[1, 1]
['Table 7 shows the results of event argument role labeling on Chinese and Arabic entity mentions automatically extracted by Stanford CoreNLP instead of manually annotated mentions.', 'The system-extracted entity mentions introduce noise and thus decrease the performance of the model, but the overall results are still promising.']
[['Chinese', 'Arabic'], None]
1
D19-1034table_4
Our results on five categories compared to Ju et al. (2018) and Sohrab and Miwa (2018) on GENIA test set.
2
[['Category', 'DNA'], ['Category', 'RNA'], ['Category', 'protein'], ['Category', 'cell line'], ['Category', 'cell type'], ['Category', 'overall']]
2
[['Ours', 'P (%)'], ['Ours', 'R (%)'], ['Ours', 'F (%)'], ['Ju', 'F (%)'], ['Soh', 'F (%)']]
[['73.6', '67.8', '70.6', '70.1', '67.8'], ['82.2', '80.7', '81.5', '80.8', '75.9'], ['76.7', '76', '76.4', '72.7', '72.9'], ['77.8', '65.8', '71.3', '66.9', '63.6'], ['73.9', '71.2', '72.5', '71.3', '69.8'], ['75.8', '73.6', '74.7', '71.1', '70.7']]
column
['P (%)', 'R (%)', 'F (%)', 'F (%)', 'F (%)']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ours || P (%)</th> <th>Ours || R (%)</th> <th>Ours || F (%)</th> <th>Ju || F (%)</th> <th>Soh || F (%)</th> </tr> </thead> <tbody> <tr> <td>Category || DNA</td> <td>73.6</td> <td>67.8</td> <td>70.6</td> <td>70.1</td> <td>67.8</td> </tr> <tr> <td>Category || RNA</td> <td>82.2</td> <td>80.7</td> <td>81.5</td> <td>80.8</td> <td>75.9</td> </tr> <tr> <td>Category || protein</td> <td>76.7</td> <td>76</td> <td>76.4</td> <td>72.7</td> <td>72.9</td> </tr> <tr> <td>Category || cell line</td> <td>77.8</td> <td>65.8</td> <td>71.3</td> <td>66.9</td> <td>63.6</td> </tr> <tr> <td>Category || cell type</td> <td>73.9</td> <td>71.2</td> <td>72.5</td> <td>71.3</td> <td>69.8</td> </tr> <tr> <td>Category || overall</td> <td>75.8</td> <td>73.6</td> <td>74.7</td> <td>71.1</td> <td>70.7</td> </tr> </tbody></table>
Table 4
table_4
D19-1034
7
emnlp2019
Table 4 describes the performance of our model on the five categories on the test dataset. Our model outperforms the models described in Ju et al. (2018) and Sohrab and Miwa (2018) in F-score on all categories.
[1, 1]
['Table 4 describes the performance of our model on the five categories on the test dataset.', 'Our model outperforms the models described in Ju et al. (2018) and Sohrab and Miwa (2018) in F-score on all categories.']
[None, ['Ours', 'Ju', 'Soh', 'F (%)']]
1
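The F column in the table above is the harmonic mean of the P and R columns; for instance, the overall row of the "Ours" block (P = 75.8, R = 73.6) gives F ≈ 74.7 as reported. A quick check (individual category rows can drift by ±0.1 because the printed P/R are themselves rounded):

```python
def f1(p, r):
    """Harmonic mean of precision and recall, both in percent."""
    return 2 * p * r / (p + r)

# Overall row of the "Ours" columns: P = 75.8, R = 73.6 -> F = 74.7 in the table.
print(round(f1(75.8, 73.6), 1))  # 74.7
```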
D19-1034table_5
Performance of Boundary Detection on GENIA test set.
2
[['Model', 'Sohrab and Miwa (2018)'], ['Model', 'Ju et al. (2018)'], ['Model', 'Our model(softmax)']]
2
[['Boundary Detection', 'P (%)'], ['Boundary Detection', 'R (%)'], ['Boundary Detection', 'F (%)']]
[['76.6', '69.2', '72.7'], ['79.9', '67.08', '73.4'], ['79.7', '76.9', '78.3']]
column
['P (%)', 'R (%)', 'F (%)']
['Our model(softmax)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Boundary Detection || P (%)</th> <th>Boundary Detection || R (%)</th> <th>Boundary Detection || F (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Sohrab and Miwa (2018)</td> <td>76.6</td> <td>69.2</td> <td>72.7</td> </tr> <tr> <td>Model || Ju et al. (2018)</td> <td>79.9</td> <td>67.08</td> <td>73.4</td> </tr> <tr> <td>Model || Our model(softmax)</td> <td>79.7</td> <td>76.9</td> <td>78.3</td> </tr> </tbody></table>
Table 5
table_5
D19-1034
7
emnlp2019
5.2 Performance of Boundary Detection . We conduct experiments on boundary detection to illustrate that our model extracts entity boundaries more precisely compared to Sohrab and Miwa (2018) and Ju et al. (2018). Table 5 shows the results of boundary detection on the GENIA test dataset. Our model locates entities more accurately, with a higher recall value (76.9%) than the compared methods. This explains why our model outperforms other state-of-the-art methods in recall value. We exploit boundary information explicitly and consider the dependencies of boundaries and entity categorical labels with a multitask loss. In the method of Sohrab and Miwa (2018), by contrast, candidate entity regions are classified individually.
[2, 2, 1, 1, 1, 2, 2]
['5.2 Performance of Boundary Detection .', 'We conduct experiments on boundary detection to illustrate that our model extracts entity boundaries more precisely compared to Sohrab and Miwa (2018) and Ju et al. (2018).', 'Table 5 shows the results of boundary detection on the GENIA test dataset.', 'Our model locates entities more accurately, with a higher recall value (76.9%) than the compared methods.', 'This explains why our model outperforms other state-of-the-art methods in recall value.', 'We exploit boundary information explicitly and consider the dependencies of boundaries and entity categorical labels with a multitask loss.', 'In the method of Sohrab and Miwa (2018), by contrast, candidate entity regions are classified individually.']
[None, None, ['Boundary Detection'], ['Our model(softmax)', 'Ju et al. (2018)', 'Sohrab and Miwa (2018)'], ['Our model(softmax)'], None, None]
1
D19-1034table_7
Performance Comparison of our pipeline model and multitask model on GENIA development set and test set.
2
[['Model', 'Pipeline'], ['Model', 'Multitask']]
2
[['Development Set', 'P (%)'], ['Development Set', 'R (%)'], ['Development Set', 'F (%)'], ['Test Set', 'P (%)'], ['Test Set', 'R (%)'], ['Test Set', 'F (%)']]
[['74.5', '74.8', '74.6', '75.4', '72.2', '73.8'], ['74.5', '75.6', '75', '75.9', '73.4', '74.7']]
column
['P (%)', 'R (%)', 'F (%)', 'P (%)', 'R (%)', 'F (%)']
['Multitask']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Development Set || P (%)</th> <th>Development Set || R (%)</th> <th>Development Set || F (%)</th> <th>Test Set || P (%)</th> <th>Test Set || R (%)</th> <th>Test Set || F (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Pipeline</td> <td>74.5</td> <td>74.8</td> <td>74.6</td> <td>75.4</td> <td>72.2</td> <td>73.8</td> </tr> <tr> <td>Model || Multitask</td> <td>74.5</td> <td>75.6</td> <td>75</td> <td>75.9</td> <td>73.4</td> <td>74.7</td> </tr> </tbody></table>
Table 7
table_7
D19-1034
8
emnlp2019
5.4 Performance of Multitask Learning . Table 7 shows the performance of our pipeline model and multitask model on the GENIA development set and test set. For the pipeline model, we train the boundary detection module and the entity categorical label prediction module separately. Our multitask model achieves a higher F-score on both the development set and the test set.
[2, 1, 2, 1]
['5.4 Performance of Multitask Learning .', 'Table 7 shows the performance of our pipeline model and multitask model on the GENIA development set and test set.', 'For the pipeline model, we train the boundary detection module and the entity categorical label prediction module separately.', 'Our multitask model achieves a higher F-score on both the development set and the test set.']
[None, ['Pipeline', 'Multitask', 'Development Set', 'Test Set'], ['Pipeline'], ['Multitask', 'F (%)', 'Development Set', 'Test Set']]
1
D19-1037table_3
P@N results for models with internal CNNs self-attention and curriculum learning
3
[['P@N (%)', 'CNN-based Models', 'CNN+ONE'], ['P@N (%)', 'CNN-based Models', 'ResCNN-9'], ['P@N (%)', 'CNN-based Models', 'CNN+ONE+SelfAtt'], ['P@N (%)', 'CNN-based Models', 'CNN+ATT'], ['P@N (%)', 'CNN-based Models', 'CNN+ATT+SelfAtt'], ['P@N (%)', 'PCNN-based Models', 'PCNN+ONE'], ['P@N (%)', 'PCNN-based Models', 'PCNN+ONE+SelAtt'], ['P@N (%)', 'PCNN-based Models', '[NetMax+SelfAtt]+CCL-CT'], ['P@N (%)', 'PCNN-based Models', 'PCNN+ATT'], ['P@N (%)', 'PCNN-based Models', 'PCNN+ATT+SelfAtt'], ['P@N (%)', 'PCNN-based Models', '[NetAtt+SelfAtt]+CCL-CT']]
1
[['100'], ['200'], ['300'], ['Mean']]
[['67.3', '64.7', '58.1', '63.4'], ['79', '69', '61', '69.7'], ['81.1', '75.1', '70.4', '75.5'], ['76.2', '68.6', '59.8', '68.2'], ['81.1', '74.1', '72.4', '75.9'], ['72.3', '69.7', '64.1', '68.7'], ['84.1', '75.1', '69.1', '76.1'], ['85.1', '78.6', '74.4', '79.4'], ['76.2', '73.1', '67.4', '72.2'], ['81.1', '71.6', '70.4', '74.4'], ['82.2', '79.1', '73.1', '78.1']]
row
['P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)']
['[NetMax+SelfAtt]+CCL-CT', '[NetAtt+SelfAtt]+CCL-CT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>100</th> <th>200</th> <th>300</th> <th>Mean</th> </tr> </thead> <tbody> <tr> <td>P@N (%) || CNN-based Models || CNN+ONE</td> <td>67.3</td> <td>64.7</td> <td>58.1</td> <td>63.4</td> </tr> <tr> <td>P@N (%) || CNN-based Models || ResCNN-9</td> <td>79</td> <td>69</td> <td>61</td> <td>69.7</td> </tr> <tr> <td>P@N (%) || CNN-based Models || CNN+ONE+SelfAtt</td> <td>81.1</td> <td>75.1</td> <td>70.4</td> <td>75.5</td> </tr> <tr> <td>P@N (%) || CNN-based Models || CNN+ATT</td> <td>76.2</td> <td>68.6</td> <td>59.8</td> <td>68.2</td> </tr> <tr> <td>P@N (%) || CNN-based Models || CNN+ATT+SelfAtt</td> <td>81.1</td> <td>74.1</td> <td>72.4</td> <td>75.9</td> </tr> <tr> <td>P@N (%) || PCNN-based Models || PCNN+ONE</td> <td>72.3</td> <td>69.7</td> <td>64.1</td> <td>68.7</td> </tr> <tr> <td>P@N (%) || PCNN-based Models || PCNN+ONE+SelAtt</td> <td>84.1</td> <td>75.1</td> <td>69.1</td> <td>76.1</td> </tr> <tr> <td>P@N (%) || PCNN-based Models || [NetMax+SelfAtt]+CCL-CT</td> <td>85.1</td> <td>78.6</td> <td>74.4</td> <td>79.4</td> </tr> <tr> <td>P@N (%) || PCNN-based Models || PCNN+ATT</td> <td>76.2</td> <td>73.1</td> <td>67.4</td> <td>72.2</td> </tr> <tr> <td>P@N (%) || PCNN-based Models || PCNN+ATT+SelfAtt</td> <td>81.1</td> <td>71.6</td> <td>70.4</td> <td>74.4</td> </tr> <tr> <td>P@N (%) || PCNN-based Models || [NetAtt+SelfAtt]+CCL-CT</td> <td>82.2</td> <td>79.1</td> <td>73.1</td> <td>78.1</td> </tr> </tbody></table>
Table 3
table_3
D19-1037
8
emnlp2019
From Figures 5(a) and 5(b), we can see that the CCL based models have further improvements in terms of PR-curves compared with PCNN+ATT/ONE+SelfAtt. The P@N results in Table 3 indicate that CCL further improves the model's performance when compared to PCNN+ATT/ONE+SelfAtt as well.
[0, 1]
['From Figures 5(a) and 5(b), we can see that the CCL based models have further improvements in terms of PR-curves compared with PCNN+ATT/ONE+SelfAtt.', "The P@N results in Table 3 indicate that CCL further improves the model's performance when compared to PCNN+ATT/ONE+SelfAtt as well."]
[None, ['PCNN+ATT', 'PCNN+ONE+SelAtt', '[NetMax+SelfAtt]+CCL-CT', '[NetAtt+SelfAtt]+CCL-CT']]
1
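In the P@N table above, the "Mean" column is simply the arithmetic mean of the P@100, P@200 and P@300 entries, which can be confirmed row by row. A sketch over a few rows copied from the table (the 0.05 tolerance covers the one-decimal rounding of the printed cells):

```python
rows = {  # (P@100, P@200, P@300, reported Mean), copied from the table
    "CNN+ONE": (67.3, 64.7, 58.1, 63.4),
    "PCNN+ATT": (76.2, 73.1, 67.4, 72.2),
    "[NetMax+SelfAtt]+CCL-CT": (85.1, 78.6, 74.4, 79.4),
    "[NetAtt+SelfAtt]+CCL-CT": (82.2, 79.1, 73.1, 78.1),
}
for name, (p100, p200, p300, mean) in rows.items():
    assert abs((p100 + p200 + p300) / 3 - mean) < 0.05, name
print("all Mean cells check out")
```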
D19-1040table_3
Performances of entity representations on EntEval tasks. Best performing model in each task is boldfaced. CAP: coreference arc prediction, CERP: contexualized entity relationship prediction, EFP: entity factuality prediction, ET: entity typing, ESR: entity similarity and relatedness, ERT: entity relationship typing, NED: named entity disambiguation. EntELMo baseline is trained on the same dataset as EntELMo but not using the hyperlink-based training. EntELMo w/ letn is trained with a modified version of lctx, where we only decode entity mentions instead of the whole context.
1
[['GloVe'], ['BERT Base'], ['BERT Large'], ['ELMo'], ['EntELMo baseline'], ['EntELMo'], ['EntELMo w/o lctx'], ['EntELMo w/ letn']]
1
[['CAP'], ['CERP'], ['EFP'], ['ET'], ['ESR'], ['ERT'], ['NED'], ['Average']]
[['71.9', '52.6', '67', '10.3', '50.9', '40.8', '41.2', '47.8'], ['80.6', '65.6', '74.8', '32', '28.8', '42.2', '50.6', '53.5'], ['79.1', '66.9', '76.7', '32.3', '32.6', '48.8', '54.3', '55.8'], ['80.2', '61.2', '75.8', '35.6', '60.3', '46.8', '51.6', '58.8'], ['78', '59.6', '71.5', '31.3', '61.6', '46.5', '48.5', '56.7'], ['76.9', '59.9', '72.4', '32.2', '59.7', '45.7', '49', '56.5'], ['73.5', '59.4', '71.1', '33.2', '53.3', '44.6', '48.9', '54.9'], ['76.2', '60.4', '70.9', '33.6', '49', '42.9', '49.3', '54.6']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['BERT Base', 'BERT Large', 'ELMo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CAP</th> <th>CERP</th> <th>EFP</th> <th>ET</th> <th>ESR</th> <th>ERT</th> <th>NED</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>GloVe</td> <td>71.9</td> <td>52.6</td> <td>67</td> <td>10.3</td> <td>50.9</td> <td>40.8</td> <td>41.2</td> <td>47.8</td> </tr> <tr> <td>BERT Base</td> <td>80.6</td> <td>65.6</td> <td>74.8</td> <td>32</td> <td>28.8</td> <td>42.2</td> <td>50.6</td> <td>53.5</td> </tr> <tr> <td>BERT Large</td> <td>79.1</td> <td>66.9</td> <td>76.7</td> <td>32.3</td> <td>32.6</td> <td>48.8</td> <td>54.3</td> <td>55.8</td> </tr> <tr> <td>ELMo</td> <td>80.2</td> <td>61.2</td> <td>75.8</td> <td>35.6</td> <td>60.3</td> <td>46.8</td> <td>51.6</td> <td>58.8</td> </tr> <tr> <td>EntELMo baseline</td> <td>78</td> <td>59.6</td> <td>71.5</td> <td>31.3</td> <td>61.6</td> <td>46.5</td> <td>48.5</td> <td>56.7</td> </tr> <tr> <td>EntELMo</td> <td>76.9</td> <td>59.9</td> <td>72.4</td> <td>32.2</td> <td>59.7</td> <td>45.7</td> <td>49</td> <td>56.5</td> </tr> <tr> <td>EntELMo w/o lctx</td> <td>73.5</td> <td>59.4</td> <td>71.1</td> <td>33.2</td> <td>53.3</td> <td>44.6</td> <td>48.9</td> <td>54.9</td> </tr> <tr> <td>EntELMo w/ letn</td> <td>76.2</td> <td>60.4</td> <td>70.9</td> <td>33.6</td> <td>49</td> <td>42.9</td> <td>49.3</td> <td>54.6</td> </tr> </tbody></table>
Table 3
table_3
D19-1040
8
emnlp2019
5.2 Results . Table 3 shows the performance of our models on the EntEval tasks. Our findings are detailed below: . Pretrained CWRs (ELMo, BERT) perform the best on EntEval overall, indicating that they capture knowledge about entities in contextual mentions or as entity descriptions. BERT performs poorly on entity similarity and relatedness tasks. Since this task is zero-shot, it validates the recommended setting of finetuning BERT (Devlin et al., 2018) on downstream tasks, while the embedding of the [CLS] token does not necessarily capture the semantics of the entity. BERT Large is better than BERT Base on average, showing large improvements in ERT and NED. To perform well at ERT, a model must either glean particular relationships from pairs of lengthy entity descriptions or else leverage knowledge from pretraining about the entities considered. Relatedly, performance on NED is expected to increase with both the ability to extract knowledge from descriptions and by starting with increased knowledge from pretraining. The Large model appears to be handling these capabilities better than the Base model. EntELMo improves over the EntELMo baseline (trained without the hyperlinking loss) on some tasks but suffers on others. The hyperlink-based training helps on CERP, EFP, ET, and NED. Since the hyperlink loss is closely associated with the NED problem, it is unsurprising that NED performance is improved. Overall, we believe that hyperlink-based training benefits contextualized entity representations but does not benefit descriptive entity representations (see, for example, the drop of nearly 2 points on ESR, which is based solely on descriptive representations). This pattern may be due to the difficulty of using descriptive entity representations to reconstruct the context in which they appear.
[2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 1, 2]
['5.2 Results .', 'Table 3 shows the performance of our models on the EntEval tasks.', 'Our findings are detailed below: .', 'Pretrained CWRs (ELMo, BERT) perform the best on EntEval overall, indicating that they capture knowledge about entities in contextual mentions or as entity descriptions.', 'BERT performs poorly on entity similarity and relatedness tasks.', 'Since this task is zero-shot, it validates the recommended setting of finetuning BERT (Devlin et al., 2018) on downstream tasks, while the embedding of the [CLS] token does not necessarily capture the semantics of the entity.', 'BERT Large is better than BERT Base on average, showing large improvements in ERT and NED.', 'To perform well at ERT, a model must either glean particular relationships from pairs of lengthy entity descriptions or else leverage knowledge from pretraining about the entities considered.', 'Relatedly, performance on NED is expected to increase with both the ability to extract knowledge from descriptions and by starting with increased knowledge from pretraining.', 'The Large model appears to be handling these capabilities better than the Base model.', 'EntELMo improves over the EntELMo baseline (trained without the hyperlinking loss) on some tasks but suffers on others. The hyperlink-based training helps on CERP, EFP, ET, and NED.', 'Since the hyperlink loss is closely associated with the NED problem, it is unsurprising that NED performance is improved. Overall, we believe that hyperlink-based training benefits contextualized entity representations but does not benefit descriptive entity representations (see, for example, the drop of nearly 2 points on ESR, which is based solely on descriptive representations).', 'This pattern may be due to the difficulty of using descriptive entity representations to reconstruct the context in which they appear.']
[None, None, None, ['ELMo', 'BERT Base', 'BERT Large'], ['BERT Base', 'BERT Large'], None, ['BERT Large', 'BERT Base', 'ERT', 'NED'], ['ERT'], ['NED'], ['BERT Large', 'BERT Base'], ['CERP', 'EFP', 'ET', 'NED', 'EntELMo', 'EntELMo baseline'], ['NED', 'ESR'], None]
1
D19-1041table_4
Model performance breakdown for TB-Dense. “-” indicates no predictions were made for that particular label, probably due to the small size of the training sample. BEFORE (B), AFTER (A), INCLUDES (I), IS INCLUDED (II), SIMULTANEOUS (S), VAGUE (V)
1
[['B'], ['A'], ['I'], ['II'], ['S'], ['V'], ['Avg']]
2
[['CAEVO', 'P'], ['CAEVO', 'R'], ['CAEVO', 'F1'], ['Pipeline Joint', 'P'], ['Pipeline Joint', 'R'], ['Pipeline Joint', 'F1'], ['Structure Joint', 'P'], ['Structure Joint', 'R'], ['Structure Joint', 'F1']]
[['41.4', '19.5', '26.5', '59', '46.9', '52.3', '59.8', '46.9', '52.6'], ['42.1', '17.5', '24.7', '69.3', '45.3', '54.8', '71.9', '46.7', '56.6'], ['50', '3.6', '6.7', '-', '-', '-', '-', '-', '-'], ['38.5', '9.4', '15.2', '-', '-', '-', '-', '-', '-'], ['14.3', '4.5', '6.9', '-', '-', '-', '-', '-', '-'], ['44.9', '59.4', '51.1', '45.1', '55', '49.5', '45.9', '55.8', '50.4'], ['43.8', '35.7', '39.4', '51.5', '45.9', '48.5', '52.6', '46.5', '49.4']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['B', 'A', 'I', 'II', 'S', 'V']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CAEVO || P</th> <th>CAEVO || R</th> <th>CAEVO || F1</th> <th>Pipeline Joint || P</th> <th>Pipeline Joint || R</th> <th>Pipeline Joint || F1</th> <th>Structure Joint || P</th> <th>Structure Joint || R</th> <th>Structure Joint || F1</th> </tr> </thead> <tbody> <tr> <td>B</td> <td>41.4</td> <td>19.5</td> <td>26.5</td> <td>59</td> <td>46.9</td> <td>52.3</td> <td>59.8</td> <td>46.9</td> <td>52.6</td> </tr> <tr> <td>A</td> <td>42.1</td> <td>17.5</td> <td>24.7</td> <td>69.3</td> <td>45.3</td> <td>54.8</td> <td>71.9</td> <td>46.7</td> <td>56.6</td> </tr> <tr> <td>I</td> <td>50</td> <td>3.6</td> <td>6.7</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>II</td> <td>38.5</td> <td>9.4</td> <td>15.2</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>S</td> <td>14.3</td> <td>4.5</td> <td>6.9</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>V</td> <td>44.9</td> <td>59.4</td> <td>51.1</td> <td>45.1</td> <td>55</td> <td>49.5</td> <td>45.9</td> <td>55.8</td> <td>50.4</td> </tr> <tr> <td>Avg</td> <td>43.8</td> <td>35.7</td> <td>39.4</td> <td>51.5</td> <td>45.9</td> <td>48.5</td> <td>52.6</td> <td>46.5</td> <td>49.4</td> </tr> </tbody></table>
Table 4
table_4
D19-1041
8
emnlp2019
In Table 4 we further show the breakdown performances for each positive relation on TB-Dense. The breakdown on MATRES is shown in Table 10 in the appendix. BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense. We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread-out performance, whereas our neural network-based models are more likely to have concentrated predictions due to the imbalance of the training sample across different label classes.
[1, 2, 1, 1]
['In Table 4 we further show the breakdown performances for each positive relation on TB-Dense.', 'The breakdown on MATRES is shown in Table 10 in the appendix.', 'BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense.', 'We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread-out performance, whereas our neural network-based models are more likely to have concentrated predictions due to the imbalance of the training sample across different label classes.']
[None, None, ['B', 'A', 'V'], ['CAEVO']]
1
D19-1043table_1
Experimental results of our model compared with other models. Performance is measured in accuracy (%). Models are divided into 3 categories. The first part is baseline methods including SVM and Naive Bayes and their variations. The second part contains models about recurrent neural networks. The third part contains models about convolutional neural networks.
2
[['Method', 'SVM [Socher et al. 2013]'], ['Method', 'NB [Socher et al. 2013]'], ['Method', 'NBSVM-bi [Wang and Manning 2012b]'], ['Method', 'Standard-LSTM'], ['Method', 'bi-LSTM'], ['Method', 'RCNN [Lai et al. 2015]'], ['Method', 'SNN [Zhao et al. 2018]'], ['Method', 'CNN-non-static [Kim 2014]'], ['Method', 'VD-CNN [Schwenk et al. 2017]'], ['Method', 'CL-CNN [Zhang et al. 2015a]'], ['Method', 'Capsule-B [Yang et al. 2018]'], ['Method', 'HCapsNet']]
1
[['SST-2'], ['SST-5'], ['MR'], ['Subj'], ['TREC'], ['AG news']]
[['79.4', '40.7', '-', '-', '-', '-'], ['81.8', '41', '-', '-', '-', '-'], ['-', '-', '79.4', '93.2', '-', '-'], ['80.6', '45.3', '75.9', '89.3', '86.8', '86.1'], ['83.2', '46.7', '79.3', '90.5', '89.6', '88.2'], ['-', '47.21', '-', '-', '-', ''], ['-', '50.4', '82.1', '93.9', '96', '-'], ['87.2', '48', '81.5', '93.4', '93.6', '92.3'], ['-', '-', '-', '88.2', '85.4', '91.3'], ['-', '-', '-', '88.4', '85.7', '92.3'], ['86.8', '-', '82.3', '93.8', '93.2', '92.6'], ['88.7', '50.8', '83.5', '94.2', '94.2', '93.5']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['HCapsNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-2</th> <th>SST-5</th> <th>MR</th> <th>Subj</th> <th>TREC</th> <th>AG news</th> </tr> </thead> <tbody> <tr> <td>Method || SVM [Socher et al. 2013]</td> <td>79.4</td> <td>40.7</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || NB [Socher et al. 2013]</td> <td>81.8</td> <td>41</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || NBSVM-bi [Wang and Manning 2012b]</td> <td>-</td> <td>-</td> <td>79.4</td> <td>93.2</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Standard-LSTM</td> <td>80.6</td> <td>45.3</td> <td>75.9</td> <td>89.3</td> <td>86.8</td> <td>86.1</td> </tr> <tr> <td>Method || bi-LSTM</td> <td>83.2</td> <td>46.7</td> <td>79.3</td> <td>90.5</td> <td>89.6</td> <td>88.2</td> </tr> <tr> <td>Method || RCNN [Lai et al. 2015]</td> <td>-</td> <td>47.21</td> <td>-</td> <td>-</td> <td>-</td> <td></td> </tr> <tr> <td>Method || SNN [Zhao et al. 2018]</td> <td>-</td> <td>50.4</td> <td>82.1</td> <td>93.9</td> <td>96</td> <td>-</td> </tr> <tr> <td>Method || CNN-non-static [Kim 2014]</td> <td>87.2</td> <td>48</td> <td>81.5</td> <td>93.4</td> <td>93.6</td> <td>92.3</td> </tr> <tr> <td>Method || VD-CNN [Schwenk et al. 2017]</td> <td>-</td> <td>-</td> <td>-</td> <td>88.2</td> <td>85.4</td> <td>91.3</td> </tr> <tr> <td>Method || CL-CNN [Zhang et al. 2015a]</td> <td>-</td> <td>-</td> <td>-</td> <td>88.4</td> <td>85.7</td> <td>92.3</td> </tr> <tr> <td>Method || Capsule-B [Yang et al. 2018]</td> <td>86.8</td> <td>-</td> <td>82.3</td> <td>93.8</td> <td>93.2</td> <td>92.6</td> </tr> <tr> <td>Method || HCapsNet</td> <td>88.7</td> <td>50.8</td> <td>83.5</td> <td>94.2</td> <td>94.2</td> <td>93.5</td> </tr> </tbody></table>
Table 1
table_1
D19-1043
6
emnlp2019
4.3 Results and Discussions . Table 1 reports the results of our model on different datasets, comparing with widely used text classification methods and state-of-the-art approaches. We make the following observations. Our HCapsNet achieves the best results on 5 out of 6 datasets, which verifies the effectiveness of our model. In particular, HCapsNet outperforms the vanilla capsule network Capsule-B [Yang et al., 2018], which only utilizes the dynamic routing mechanism without hyperplane projection, by a remarkable margin.
[2, 1, 1, 1, 1]
['4.3 Results and Discussions .', 'Table 1 reports the results of our model on different datasets, comparing with widely used text classification methods and state-of-the-art approaches.', 'We make the following observations.', 'Our HCapsNet achieves the best results on 5 out of 6 datasets, which verifies the effectiveness of our model.', 'In particular, HCapsNet outperforms the vanilla capsule network Capsule-B [Yang et al., 2018], which only utilizes the dynamic routing mechanism without hyperplane projection, by a remarkable margin.']
[None, ['HCapsNet'], None, ['HCapsNet'], ['HCapsNet', 'Capsule-B [Yang et al. 2018]']]
1
D19-1056table_5
Cross-lingual (XL) system results using BLEU score on individual languages inside the Dev set. We compute BLEU on labeled sequences (F-Seq), and separately for words and only labels. We also show scores when pre-filtering on F-Seq with BLEU ≥ 10.
2
[['Model [Filter]', 'XL-GloVe [All]'], ['Model [Filter]', 'XL-BERT [All]'], ['Model [Filter]', 'XL-GloVe [greater than equal 10]'], ['Model [Filter]', 'XL-BERT [greater than equal 10]']]
2
[['German', 'F-Seq'], ['German', 'Word'], ['German', 'Label'], ['French', 'F-Seq'], ['French', 'Word'], ['French', 'Label']]
[['18.86', '17.17', '25.52', '28.99', '17.36', '32.76'], ['27.22', '27.36', '29.59', '33.59', '22.48', '37.17'], ['30.58', '36.71', '51.68', '38.99', '43.79', '61.73'], ['36.95', '41.36', '55.73', '42.66', '46.52', '65.32']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['XL-GloVe [greater than equal 10]', 'XL-BERT [greater than equal 10]']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>German || F-Seq</th> <th>German || Word</th> <th>German || Label</th> <th>French || F-Seq</th> <th>French || Word</th> <th>French || Label</th> </tr> </thead> <tbody> <tr> <td>Model [Filter] || XL-GloVe [All]</td> <td>18.86</td> <td>17.17</td> <td>25.52</td> <td>28.99</td> <td>17.36</td> <td>32.76</td> </tr> <tr> <td>Model [Filter] || XL-BERT [All]</td> <td>27.22</td> <td>27.36</td> <td>29.59</td> <td>33.59</td> <td>22.48</td> <td>37.17</td> </tr> <tr> <td>Model [Filter] || XL-GloVe [greater than equal 10]</td> <td>30.58</td> <td>36.71</td> <td>51.68</td> <td>38.99</td> <td>43.79</td> <td>61.73</td> </tr> <tr> <td>Model [Filter] || XL-BERT [greater than equal 10]</td> <td>36.95</td> <td>41.36</td> <td>55.73</td> <td>42.66</td> <td>46.52</td> <td>65.32</td> </tr> </tbody></table>
Table 5
table_5
D19-1056
7
emnlp2019
The bottom part of Table 5 shows the scores when restricting the evaluation to sentences with score greater than equal 10. We observed that this threshold is a good trade-off in both the amount of kept sentences (above the threshold) and average BLEU score increase (presumably sentence quality).
[1, 1]
['The bottom part of Table 5 shows the scores when restricting the evaluation to sentences with score greater than equal 10.', 'We observed that this threshold is a good trade-off in both the amount of kept sentences (above the threshold) and average BLEU score increase (presumably sentence quality).']
[None, ['XL-GloVe [greater than equal 10]', 'XL-BERT [greater than equal 10]']]
1
D19-1057table_5
SRL results with different incorporation methods of the syntactic information on the Chinese dev set. Experiments are conducted on the BIAFFINE parsing results.
2
[['INPUT', 'DEP'], ['INPUT', 'DEP&REL'], ['INPUT', 'DEP&RELPATH'], ['INPUT', 'DEPPATH&RELPATH'], ['LISA', 'DEP'], ['LISA', 'DEP&REL'], ['LISA', 'DEP&RELPATH'], ['LISA', 'DEPPATH&RELPATH9'], ['RELAWE', 'DEP'], ['RELAWE', 'DEP&REL'], ['RELAWE', 'DEP&RELPATH'], ['RELAWE', 'DEPPATH&RELPATH']]
1
[['P'], ['R'], ['F1']]
[['83.89', '83.61', '83.75'], ['86.21', '85', '85.6'], ['86.01', '85.38', '85.69'], ['85.84', '85.54', '85.69'], ['84.68', '85.38', '85.03'], ['85.56', '85.89', '85.73'], ['85.84', '85.64', '85.74'], ['-', '-', '-'], ['84.33', '84.47', '84.4'], ['86.04', '85.43', '85.73'], ['86.21', '85.01', '85.6'], ['86.4', '85.52', '85.96']]
column
['P', 'R', 'F1']
['INPUT', 'LISA', 'RELAWE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>INPUT || DEP</td> <td>83.89</td> <td>83.61</td> <td>83.75</td> </tr> <tr> <td>INPUT || DEP&amp;REL</td> <td>86.21</td> <td>85</td> <td>85.6</td> </tr> <tr> <td>INPUT || DEP&amp;RELPATH</td> <td>86.01</td> <td>85.38</td> <td>85.69</td> </tr> <tr> <td>INPUT || DEPPATH&amp;RELPATH</td> <td>85.84</td> <td>85.54</td> <td>85.69</td> </tr> <tr> <td>LISA || DEP</td> <td>84.68</td> <td>85.38</td> <td>85.03</td> </tr> <tr> <td>LISA || DEP&amp;REL</td> <td>85.56</td> <td>85.89</td> <td>85.73</td> </tr> <tr> <td>LISA || DEP&amp;RELPATH</td> <td>85.84</td> <td>85.64</td> <td>85.74</td> </tr> <tr> <td>LISA || DEPPATH&amp;RELPATH9</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>RELAWE || DEP</td> <td>84.33</td> <td>84.47</td> <td>84.4</td> </tr> <tr> <td>RELAWE || DEP&amp;REL</td> <td>86.04</td> <td>85.43</td> <td>85.73</td> </tr> <tr> <td>RELAWE || DEP&amp;RELPATH</td> <td>86.21</td> <td>85.01</td> <td>85.6</td> </tr> <tr> <td>RELAWE || DEPPATH&amp;RELPATH</td> <td>86.4</td> <td>85.52</td> <td>85.96</td> </tr> </tbody></table>
Table 5
table_5
D19-1057
7
emnlp2019
Firstly, results in Table 5 show that with little dependency information (DEP), LISA performs better, while when incorporating richer syntactic knowledge (DEP&REL or DEP&RELPATH), the three methods achieve similar performance. Overall, RELAWE achieves the best results given enough syntactic knowledge.
[1, 1]
['Firstly, results in Table 5 show that with little dependency information (DEP), LISA performs better, while when incorporating richer syntactic knowledge (DEP&REL or DEP&RELPATH), the three methods achieve similar performance.', 'Overall, RELAWE achieves the best results given enough syntactic knowledge.']
[['DEP', 'LISA', 'DEP&REL', 'DEP&RELPATH', 'INPUT', 'RELAWE'], ['RELAWE']]
1
D19-1057table_7
SRL results on the Chinese test set. We choose the best settings for each configuration of our model.
3
[['Chinese', 'NONE', 'Metric'], ['Chinese', 'Closed', 'CoNLL09 SRL Only'], ['Chinese', 'Closed', 'INPUT(DEPPATH&RELPATH)'], ['Chinese', 'Closed', 'LISA(DEP&RELPATH)'], ['Chinese', 'Closed', 'RELAWE(DEPPATH&RELPATH)'], ['Chinese', 'Open', 'Marcheggiani and Titov (2017)'], ['Chinese', 'Open', 'Cai et al. (2018)'], ['Chinese', 'Open', 'INPUT(DEPPATH&RELPATH) + BERT'], ['Chinese', 'Open', 'LISA(DEP&RELPATH) + BERT'], ['Chinese', 'Open', 'RELAWE(DEPPATH&RELPATH) + BERT'], ['Chinese', 'GOLD', 'Metric']]
1
[['P'], ['R'], ['F1']]
[['81.99', '80.65', '81.31'], ['-', '-', '78.6'], ['84.19', '83.65', '83.92'], ['83.84', '83.54', '83.69'], ['84.77', '83.68', '84.22'], ['-', '-', '82.5'], ['84.7', '84', '84.3'], ['86.89', '87.75', '87.32'], ['86.45', '87.9', '87.17'], ['86.73', '87.98', '87.35'], ['91.93', '92.36', '92.14']]
column
['P', 'R', 'F1']
['Open', 'Closed', 'GOLD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Chinese || NONE || Metric</td> <td>81.99</td> <td>80.65</td> <td>81.31</td> </tr> <tr> <td>Chinese || Closed || CoNLL09 SRL Only</td> <td>-</td> <td>-</td> <td>78.6</td> </tr> <tr> <td>Chinese || Closed || INPUT(DEPPATH&amp;RELPATH)</td> <td>84.19</td> <td>83.65</td> <td>83.92</td> </tr> <tr> <td>Chinese || Closed || LISA(DEP&amp;RELPATH)</td> <td>83.84</td> <td>83.54</td> <td>83.69</td> </tr> <tr> <td>Chinese || Closed || RELAWE(DEPPATH&amp;RELPATH)</td> <td>84.77</td> <td>83.68</td> <td>84.22</td> </tr> <tr> <td>Chinese || Open || Marcheggiani and Titov (2017)</td> <td>-</td> <td>-</td> <td>82.5</td> </tr> <tr> <td>Chinese || Open || Cai et al. (2018)</td> <td>84.7</td> <td>84</td> <td>84.3</td> </tr> <tr> <td>Chinese || Open || INPUT(DEPPATH&amp;RELPATH) + BERT</td> <td>86.89</td> <td>87.75</td> <td>87.32</td> </tr> <tr> <td>Chinese || Open || LISA(DEP&amp;RELPATH) + BERT</td> <td>86.45</td> <td>87.9</td> <td>87.17</td> </tr> <tr> <td>Chinese || Open || RELAWE(DEPPATH&amp;RELPATH) + BERT</td> <td>86.73</td> <td>87.98</td> <td>87.35</td> </tr> <tr> <td>Chinese || GOLD || Metric</td> <td>91.93</td> <td>92.36</td> <td>92.14</td> </tr> </tbody></table>
Table 7
table_7
D19-1057
8
emnlp2019
Table 7 shows that our OPEN model achieves more than 3 points of F1-score over the state-of-the-art, and RELAWE with DEPPATH&RELPATH achieves the best results in both CLOSED and OPEN settings. Notice that our best CLOSED model can almost perform as well as the state-of-the-art model, while the latter utilizes pretrained word embeddings. Besides, the performance gap between the three models under the OPEN setting is very small. It indicates that the representation ability of BERT is so powerful that it may contain rich syntactic information. At last, the GOLD result is much higher than that of the other models, indicating that there is still large space for improvement on this task.
[1, 2, 1, 2, 1]
['Table 7 shows that our OPEN model achieves more than 3 points of F1-score over the state-of-the-art, and RELAWE with DEPPATH&RELPATH achieves the best results in both CLOSED and OPEN settings.', 'Notice that our best CLOSED model can almost perform as well as the state-of-the-art model, while the latter utilizes pretrained word embeddings.', 'Besides, the performance gap between the three models under the OPEN setting is very small.', 'It indicates that the representation ability of BERT is so powerful that it may contain rich syntactic information.', 'At last, the GOLD result is much higher than that of the other models, indicating that there is still large space for improvement on this task.']
[['Open', 'Closed', 'RELAWE(DEPPATH&RELPATH) + BERT'], ['RELAWE(DEPPATH&RELPATH) + BERT', 'Closed'], ['Open'], None, ['GOLD']]
1
D19-1061table_8
Results for predicting temporal anchors with the neural network (only text, and text and image).
2
[['NN, only text', 'yes'], ['NN, only text', 'no'], ['NN, only text', 'Macro Avg.'], ['NN, text + img', 'yes'], ['NN, text + img', 'no'], ['NN, text + img', 'Macro Avg.']]
2
[['Before', 'P'], ['Before', 'R'], ['Before', 'F1'], ['During', 'P'], ['During', 'R'], ['During', 'F1'], ['After', 'P'], ['After', 'R'], ['After', 'F1']]
[['0.74', '0.96', '0.83', '0.92', '0.98', '0.95', '0.82', '0.88', '0.85'], ['0.35', '0.07', '0.11', '0', '0', '0', '0.29', '0.21', '0.24'], ['0.55', '0.52', '0.47', '0.46', '0.49', '0.48', '0.56', '0.55', '0.55'], ['0.7', '0.78', '0.74', '0.88', '0.97', '0.92', '0.84', '0.89', '0.87'], ['0.48', '0.38', '0.43', '0.25', '0.08', '0.12', '0.53', '0.41', '0.46'], ['0.59', '0.58', '0.59', '0.57', '0.53', '0.52', '0.69', '0.65', '0.67']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['NN, only text', 'NN, text + img']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Before || P</th> <th>Before || R</th> <th>Before || F1</th> <th>During || P</th> <th>During || R</th> <th>During || F1</th> <th>After || P</th> <th>After || R</th> <th>After || F1</th> </tr> </thead> <tbody> <tr> <td>NN, only text || yes</td> <td>0.74</td> <td>0.96</td> <td>0.83</td> <td>0.92</td> <td>0.98</td> <td>0.95</td> <td>0.82</td> <td>0.88</td> <td>0.85</td> </tr> <tr> <td>NN, only text || no</td> <td>0.35</td> <td>0.07</td> <td>0.11</td> <td>0</td> <td>0</td> <td>0</td> <td>0.29</td> <td>0.21</td> <td>0.24</td> </tr> <tr> <td>NN, only text || Macro Avg.</td> <td>0.55</td> <td>0.52</td> <td>0.47</td> <td>0.46</td> <td>0.49</td> <td>0.48</td> <td>0.56</td> <td>0.55</td> <td>0.55</td> </tr> <tr> <td>NN, text + img || yes</td> <td>0.7</td> <td>0.78</td> <td>0.74</td> <td>0.88</td> <td>0.97</td> <td>0.92</td> <td>0.84</td> <td>0.89</td> <td>0.87</td> </tr> <tr> <td>NN, text + img || no</td> <td>0.48</td> <td>0.38</td> <td>0.43</td> <td>0.25</td> <td>0.08</td> <td>0.12</td> <td>0.53</td> <td>0.41</td> <td>0.46</td> </tr> <tr> <td>NN, text + img || Macro Avg.</td> <td>0.59</td> <td>0.58</td> <td>0.59</td> <td>0.57</td> <td>0.53</td> <td>0.52</td> <td>0.69</td> <td>0.65</td> <td>0.67</td> </tr> </tbody></table>
Table 8
table_8
D19-1061
8
emnlp2019
Regarding interest in the possessee, all models but the majority baseline (including logistic regression) obtain similar F1s (0.58–0.59). While there is certainly room for improvement, the current results lead to the conclusion that a few keywords are sufficient to obtain 0.58 F1: neither images nor word embeddings bring improvements. Temporal Anchors. Table 8 presents results obtained with the neural network when predicting temporal anchors. The image components are beneficial with all anchors, especially before (F1: 0.47 vs. 0.59, +25%) and after (0.55 vs. 0.67, +22%), and to a lesser degree during (0.48 vs. 0.52, +8%). F1 scores are higher for the yes label than the no label across all temporal anchors.
[0, 0, 2, 1, 1, 1]
['Regarding interest in the possessee, all models but the majority baseline (including logistic regression) obtain similar F1s (0.58–0.59).', 'While there is certainly room for improvement, the current results lead to the conclusion that a few keywords are sufficient to obtain 0.58 F1: neither images nor word embeddings bring improvements.', 'Temporal Anchors.', 'Table 8 presents results obtained with the neural network when predicting temporal anchors.', 'The image components are beneficial with all anchors, especially before (F1: 0.47 vs. 0.59, +25%) and after (0.55 vs. 0.67, +22%), and to a lesser degree during (0.48 vs. 0.52, +8%).', 'F1 scores are higher for the yes label than the no label across all temporal anchors.']
[None, None, None, None, ['NN, only text', 'NN, text + img', 'Before', 'After', 'During'], ['F1', 'yes', 'no']]
1
D19-1063table_6
Results on TEST UNSEENALL of our model, trained with and without curiosity-encouraging loss, and an LSTM-based encoder-decoder model (both models have about 15M parameters). “Navigation mistake repeat” is the fraction of time steps on which the agent repeats a non-optimal navigation action at a previously visited location while executing the same task. “Help-request repeat” is the fraction of help requests made at a previously visited location while executing the same task.
2
[['Model', 'LSTM-ENCDEC'], ['Model', 'Our model (alpha = 0)'], ['Model', 'Our model (alpha = 1)']]
1
[['SR (%)'], ['Nav. mistake repeat (%)'], ['Help-request repeat (%)']]
[['19.25', '31.09', '49.37'], ['43.12', '25', '40.17'], ['47.45', '17.85', '21.1']]
column
['SR (%)', 'Nav. mistake repeat (%)', 'Help-request repeat (%)']
['Our model (alpha = 1)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SR (%)</th> <th>Nav. mistake repeat (%)</th> <th>Help-request repeat (%)</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM-ENCDEC</td> <td>19.25</td> <td>31.09</td> <td>49.37</td> </tr> <tr> <td>Model || Our model (alpha = 0)</td> <td>43.12</td> <td>25</td> <td>40.17</td> </tr> <tr> <td>Model || Our model (alpha = 1)</td> <td>47.45</td> <td>17.85</td> <td>21.1</td> </tr> </tbody></table>
Table 6
table_6
D19-1063
9
emnlp2019
Does the proposed imitation learning algorithm achieve its goals?. The curiosity-encouraging training objective is proposed to prevent the agent from making the same mistakes in previously encountered situations. Table 6 shows that training with the curiosity-encouraging objective reduces the chance of the agent looping and making the same decisions repeatedly. As a result, its success rate is greatly boosted (+4.33% on TEST UNSEENALL) over training without the curiosity-encouraging objective.
[2, 2, 1, 1]
['Does the proposed imitation learning algorithm achieve its goals?.', 'The curiosity-encouraging training objective is proposed to prevent the agent from making the same mistakes in previously encountered situations.', 'Table 6 shows that training with the curiosity-encouraging objective reduces the chance of the agent looping and making the same decisions repeatedly.', 'As a result, its success rate is greatly boosted (+4.33% on TEST UNSEENALL) over training without the curiosity-encouraging objective.']
[None, None, ['Our model (alpha = 1)', 'Nav. mistake repeat (%)', 'Help-request repeat (%)'], ['Our model (alpha = 1)', 'Our model (alpha = 0)']]
1
D19-1068table_2
Experimental results in exploring different lexical mapping methods.
2
[['Method', 'Embedding_Proj'], ['Method', 'CL Trans (1 cand.)'], ['Method', 'CL Trans (2 cand.)'], ['Method', 'CL Trans (3 cand.)'], ['Method', 'CL Trans (4 cand.)'], ['Method', 'CL Trans (5 cand.)']]
1
[['Pre.'], ['Rec.'], ['F1']]
[['26', '20', '22.6'], ['31.2', '21.4', '25.4'], ['31.7', '22.3', '26.2'], ['32', '23.4', '27'], ['30.7', '23.6', '26.7'], ['30.2', '23.6', '26.5']]
column
['Pre.', 'Rec.', 'F1']
['CL Trans (1 cand.)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pre.</th> <th>Rec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Embedding_Proj</td> <td>26</td> <td>20</td> <td>22.6</td> </tr> <tr> <td>Method || CL Trans (1 cand.)</td> <td>31.2</td> <td>21.4</td> <td>25.4</td> </tr> <tr> <td>Method || CL Trans (2 cand.)</td> <td>31.7</td> <td>22.3</td> <td>26.2</td> </tr> <tr> <td>Method || CL Trans (3 cand.)</td> <td>32</td> <td>23.4</td> <td>27</td> </tr> <tr> <td>Method || CL Trans (4 cand.)</td> <td>30.7</td> <td>23.6</td> <td>26.7</td> </tr> <tr> <td>Method || CL Trans (5 cand.)</td> <td>30.2</td> <td>23.6</td> <td>26.5</td> </tr> </tbody></table>
Table 2
table_2
D19-1068
8
emnlp2019
Exploring Lexical Mapping Method. To explore our lexical mapping method, we compare the performance of several variant systems retrieving a different number of candidates (ranging from 1 to 5) and the embedding-projection method (Embedding proj). Note that the system retrieving only one candidate actually takes the nearest Chinese neighbor as the word translation. The lexical mapping in it is still context-independent. Table 2 summarizes the results. From the results, we observe that even though both CL Trans (1 cand.) and Embedding proj are context-independent mapping methods, the former outperforms the latter by a margin (+3.2% on F1). This implies that the embedding-projection method might suffer from the misalignment in the shared embedding space, and enforcing a word-to-word alignment (as in CL Trans (1 cand.)) could alleviate this problem to some extent. Retrieving more translation candidates could consistently improve Recall. But when too many candidates (e.g., 5) are added, the Precision drops, which harms the overall F1 measure.
[2, 1, 2, 2, 1, 1, 2, 2, 1]
['Exploring Lexical Mapping Method.', 'To explore our lexical mapping method, we compare the performance of several variant systems retrieving a different number of candidates (ranging from 1 to 5) and the embedding-projection method (Embedding proj).', 'Note that the system retrieving only one candidate actually takes the nearest Chinese neighbor as the word translation.', 'The lexical mapping in it is still context-independent.', 'Table 2 summarizes the results.', 'From the results, we observe that even though both CL Trans (1 cand.) and Embedding proj are context-independent mapping methods, the former outperforms the latter by a margin (+3.2% on F1).', 'This implies that the embedding-projection method might suffer from the misalignment in the shared embedding space, and enforcing a word-to-word alignment (as in CL Trans (1 cand.)) could alleviate this problem to some extent.', 'Retrieving more translation candidates could consistently improve Recall.', 'But when too many candidates (e.g., 5) are added, the Precision drops, which harms the overall F1 measure.']
[None, ['CL Trans (1 cand.)', 'CL Trans (2 cand.)', 'CL Trans (3 cand.)', 'CL Trans (4 cand.)', 'CL Trans (5 cand.)'], None, None, None, ['CL Trans (1 cand.)', 'F1'], ['CL Trans (1 cand.)'], ['Rec.'], ['CL Trans (5 cand.)', 'Pre.', 'F1']]
1
D19-1070table_2
Performance of the rule-based baselines and the post conditioned models on the ingredient detection task of the RECIPES dataset. These models all underperform First Occ.
3
[['Model', 'Performance Benchmarks', 'Majority'], ['Model', 'Performance Benchmarks', 'Exact Match'], ['Model', 'Performance Benchmarks', 'First Occ'], ['Model', 'Models', 'GPTattn'], ['Model', 'Models', 'GPTindep'], ['Model', 'Models', 'ELMotoken'], ['Model', 'Models', 'ELMosent']]
1
[['P'], ['R'], ['F1'], ['Acc'], ['UR'], ['CR']]
[['-', '-', '-', '57.27', '-', '-'], ['84.94', '20.25', '32.7', '64.39', '73.42', '4.02'], ['65.23', '87.17', '74.6', '74.65', '84.88', '87.79'], ['63.94', '71.72', '67.6', '70.63', '54.3', '77.04'], ['67.05', '69.07', '68.04', '72.28', '47.09', '75.79'], ['64.96', '76.64', '70.32', '72.35', '69.14', '78.94'], ['69.09', '72.88', '70.9', '74.48', '57.05', '77.71']]
column
['P', 'R', 'F1', 'Acc', 'UR', 'CR']
None
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>Acc</th> <th>UR</th> <th>CR</th> </tr> </thead> <tbody> <tr> <td>Model || Performance Benchmarks || Majority</td> <td>-</td> <td>-</td> <td>-</td> <td>57.27</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Performance Benchmarks || Exact Match</td> <td>84.94</td> <td>20.25</td> <td>32.7</td> <td>64.39</td> <td>73.42</td> <td>4.02</td> </tr> <tr> <td>Model || Performance Benchmarks || First Occ</td> <td>65.23</td> <td>87.17</td> <td>74.6</td> <td>74.65</td> <td>84.88</td> <td>87.79</td> </tr> <tr> <td>Model || Models || GPTattn</td> <td>63.94</td> <td>71.72</td> <td>67.6</td> <td>70.63</td> <td>54.3</td> <td>77.04</td> </tr> <tr> <td>Model || Models || GPTindep</td> <td>67.05</td> <td>69.07</td> <td>68.04</td> <td>72.28</td> <td>47.09</td> <td>75.79</td> </tr> <tr> <td>Model || Models || ELMotoken</td> <td>64.96</td> <td>76.64</td> <td>70.32</td> <td>72.35</td> <td>69.14</td> <td>78.94</td> </tr> <tr> <td>Model || Models || ELMosent</td> <td>69.09</td> <td>72.88</td> <td>70.9</td> <td>74.48</td> <td>57.05</td> <td>77.71</td> </tr> </tbody></table>
Table 2
table_2
D19-1070
4
emnlp2019
Table 2 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about each ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which measure per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that the Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively.
[1, 1, 1]
['Table 2 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance.', "Using the ground truth about each ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which measure per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined).", 'Note that the Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively.']
[None, ['UR', 'CR'], ['Exact Match', 'First Occ', 'P', 'R']]
1
D19-1070table_9
Model’s performance degradation with input ablations. We see that the model’s major source of performance is from verbs than compared to other ingredient’s explicit mentions.
2
[['Input', 'Complete Process'], ['Input', 'w/o Other Ingredients'], ['Input', 'w/o Verbs'], ['Input', 'w/o Verbs & Other Ingredients']]
1
[['Accuracy']]
[['84.59'], ['82.71'], ['79.08'], ['77.79']]
column
['Accuracy']
['Input']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Input || Complete Process</td> <td>84.59</td> </tr> <tr> <td>Input || w/o Other Ingredients</td> <td>82.71</td> </tr> <tr> <td>Input || w/o Verbs</td> <td>79.08</td> </tr> <tr> <td>Input || w/o Verbs &amp; Other Ingredients</td> <td>77.79</td> </tr> </tbody></table>
Table 9
table_9
D19-1070
9
emnlp2019
Table 9 presents these ablation studies. We only observe a minor performance drop from 84.59 to 82.71 (accuracy) when other ingredients are removed entirely. Removing verbs drops the performance to 79.08, and further omitting both leads to 77.79. This shows the model's dependence on verb semantics over tracking the other ingredients.
[1, 1, 1, 2]
['Table 9 presents these ablation studies.', 'We only observe a minor performance drop from 84.59 to 82.71 (accuracy) when other ingredients are removed entirely.', 'Removing verbs drops the performance to 79.08, and further omitting both leads to 77.79.', "This shows the model's dependence on verb semantics over tracking the other ingredients."]
[None, ['Accuracy', 'w/o Other Ingredients'], ['Accuracy', 'w/o Verbs', 'w/o Verbs & Other Ingredients'], None]
1
D19-1076table_1
The comparison between the proposed methods LLMap and RGP, and the MUSE supervised method. The values are average precision over 10 random 90-10 splits of the dictionaries, statistically significant results between LLMap and MUSE are shown in bold and between LLMap and RGP are underlined.
2
[['Language', 'Czech (CS)'], ['Language', 'Norwegian (NO)'], ['Language', 'Dutch (NL)'], ['Language', 'Chinese (ZH)'], ['Language', 'Korean (KO)'], ['Language', 'Japanese (JA)'], ['Language', 'Croatian (HR)'], ['Language', 'Indonesian (ID)'], ['Language', 'Farsi (FA)'], ['Language', 'Bulgarian (BG)'], ['Language', 'Spanish (ES)'], ['Language', 'Tamil (TA)'], ['Language', 'Hindi (HI)'], ['Language', 'Bengali (BN)'], ['Language', 'Average Improvement (LLMap-MUSE)']]
2
[['P@1', 'LLMap'], ['P@1', 'MUSE'], ['P@1', 'RGP'], ['P@5', 'LLMap'], ['P@5', 'MUSE'], ['P@5', 'RGP'], ['P@10', 'LLMap'], ['P@10', 'MUSE'], ['P@10', 'RGP']]
[['28.29', '28.37', '28.37', '56.92', '55.65', '55.88', '65.99', '64.72', '64.94'], ['32.9', '31.62', '31.63', '58.74', '56.13', '56.53', '66.23', '63.57', '64.01'], ['42.3', '41.06', '41.18', '67.13', '65.43', '65.7', '73.73', '72.03', '72.49'], ['17.4', '14.19', '19.51', '43.19', '35.6', '44.05', '52.11', '44.62', '52.68'], ['17.12', '17.02', '16.81', '35.8', '34.41', '34.23', '44.05', '42.69', '42.12'], ['9.46', '2.45', '9.97', '20.3', '6.97', '20.72', '25.83', '9.85', '26.66'], ['19.29', '18.71', '18.87', '46.4', '43.55', '44.25', '56.36', '53.38', '54.12'], ['31.9', '30.37', '30.45', '54.63', '51.96', '52.4', '62.06', '59.24', '59.67'], ['17.48', '16.69', '17.12', '35.36', '33.14', '33.7', '42.7', '40.24', '40.75'], ['21.64', '23.4', '23.02', '56.19', '55.22', '55.35', '67.12', '65.95', '66.11'], ['42.36', '42.01', '42.08', '70.75', '69.94', '70.12', '77.32', '76.41', '76.73'], ['9.79', '9.76', '10.07', '24.3', '22.34', '23.08', '31.38', '28.78', '29.8'], ['16.83', '16.26', '16.79', '36.51', '32.88', '34.69', '43.91', '39.92', '41.85'], ['15.51', '15.3', '15.95', '39.67', '35.55', '37.06', '48.43', '43.94', '45.51'], ['1.07', '1.07', '1.07', '3.36', '3.36', '3.36', '3.7', '3.7', '3.7']]
column
['P@1', 'P@1', 'P@1', 'P@5', 'P@5', 'P@5', 'P@10', 'P@10', 'P@10']
['LLMap']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@1 || LLMap</th> <th>P@1 || MUSE</th> <th>P@1 || RGP</th> <th>P@5 || LLMap</th> <th>P@5 || MUSE</th> <th>P@5 || RGP</th> <th>P@10 || LLMap</th> <th>P@10 || MUSE</th> <th>P@10 || RGP</th> </tr> </thead> <tbody> <tr> <td>Language || Czech (CS)</td> <td>28.29</td> <td>28.37</td> <td>28.37</td> <td>56.92</td> <td>55.65</td> <td>55.88</td> <td>65.99</td> <td>64.72</td> <td>64.94</td> </tr> <tr> <td>Language || Norwegian (NO)</td> <td>32.9</td> <td>31.62</td> <td>31.63</td> <td>58.74</td> <td>56.13</td> <td>56.53</td> <td>66.23</td> <td>63.57</td> <td>64.01</td> </tr> <tr> <td>Language || Dutch (NL)</td> <td>42.3</td> <td>41.06</td> <td>41.18</td> <td>67.13</td> <td>65.43</td> <td>65.7</td> <td>73.73</td> <td>72.03</td> <td>72.49</td> </tr> <tr> <td>Language || Chinese (ZH)</td> <td>17.4</td> <td>14.19</td> <td>19.51</td> <td>43.19</td> <td>35.6</td> <td>44.05</td> <td>52.11</td> <td>44.62</td> <td>52.68</td> </tr> <tr> <td>Language || Korean (KO)</td> <td>17.12</td> <td>17.02</td> <td>16.81</td> <td>35.8</td> <td>34.41</td> <td>34.23</td> <td>44.05</td> <td>42.69</td> <td>42.12</td> </tr> <tr> <td>Language || Japanese (JA)</td> <td>9.46</td> <td>2.45</td> <td>9.97</td> <td>20.3</td> <td>6.97</td> <td>20.72</td> <td>25.83</td> <td>9.85</td> <td>26.66</td> </tr> <tr> <td>Language || Croatian (HR)</td> <td>19.29</td> <td>18.71</td> <td>18.87</td> <td>46.4</td> <td>43.55</td> <td>44.25</td> <td>56.36</td> <td>53.38</td> <td>54.12</td> </tr> <tr> <td>Language || Indonesian (ID)</td> <td>31.9</td> <td>30.37</td> <td>30.45</td> <td>54.63</td> <td>51.96</td> <td>52.4</td> <td>62.06</td> <td>59.24</td> <td>59.67</td> </tr> <tr> <td>Language || Farsi (FA)</td> <td>17.48</td> <td>16.69</td> <td>17.12</td> <td>35.36</td> <td>33.14</td> <td>33.7</td> <td>42.7</td> <td>40.24</td> <td>40.75</td> </tr> <tr> <td>Language || Bulgarian (BG)</td> <td>21.64</td> <td>23.4</td> <td>23.02</td> <td>56.19</td> <td>55.22</td> <td>55.35</td> <td>67.12</td> <td>65.95</td> <td>66.11</td> </tr> <tr> <td>Language || Spanish (ES)</td> <td>42.36</td> <td>42.01</td> <td>42.08</td> <td>70.75</td> <td>69.94</td> <td>70.12</td> <td>77.32</td> <td>76.41</td> <td>76.73</td> </tr> <tr> <td>Language || Tamil (TA)</td> <td>9.79</td> <td>9.76</td> <td>10.07</td> <td>24.3</td> <td>22.34</td> <td>23.08</td> <td>31.38</td> <td>28.78</td> <td>29.8</td> </tr> <tr> <td>Language || Hindi (HI)</td> <td>16.83</td> <td>16.26</td> <td>16.79</td> <td>36.51</td> <td>32.88</td> <td>34.69</td> <td>43.91</td> <td>39.92</td> <td>41.85</td> </tr> <tr> <td>Language || Bengali (BN)</td> <td>15.51</td> <td>15.3</td> <td>15.95</td> <td>39.67</td> <td>35.55</td> <td>37.06</td> <td>48.43</td> <td>43.94</td> <td>45.51</td> </tr> <tr> <td>Language || Average Improvement (LLMap-MUSE)</td> <td>1.07</td> <td>1.07</td> <td>1.07</td> <td>3.36</td> <td>3.36</td> <td>3.36</td> <td>3.7</td> <td>3.7</td> <td>3.7</td> </tr> </tbody></table>
Table 1
table_1
D19-1076
7
emnlp2019
4.3 Results. Table 1 shows the average of precision over the 10 random splits @k = 1, 5 and 10. The bold values are statistically significant results between LLMap and the MUSE supervised method. The RGP column refers to our model without the piecewise mapping, which we discuss later in this section. In all cases, except Czech (CS) and Bulgarian (BG) @1 where MUSE has a slight edge over LLMap, our method achieved higher precision on average over 10-fold cross-validation than the MUSE algorithm. In the majority of the cases the improvements are statistically significant. We can see that the most significant improvements (over 8%) are observed for Japanese (JA) and Chinese (ZH). The other languages mostly see between 1%-3% improvement in precision. The average gain in precision @10 sits at 3.7%.
[2, 1, 2, 1, 1, 1, 1, 1]
['4.3 Results.', 'Table 1 shows the average of precision over the 10 random splits @k = 1, 5 and 10.', 'The bold values are statistically significant results between LLMap and the MUSE supervised method.', 'The RGP column refers to our model without the piecewise mapping, which we discuss later in this section.', 'In all cases, except Czech (CS) and Bulgarian (BG) @1 where MUSE has a slight edge over LLMap, our method achieved higher precision on average over 10-fold cross-validation than the MUSE algorithm.', 'In the majority of the cases the improvements are statistically significant. We can see that the most significant improvements (over 8%) are observed for Japanese (JA) and Chinese (ZH).', 'The other languages mostly see between 1%-3% improvement in precision.', 'The average gain in precision @10 sits at 3.7%.']
[None, ['P@1', 'P@5', 'P@10'], ['LLMap', 'MUSE'], ['RGP'], ['LLMap', 'MUSE', 'Language', 'Czech (CS)', 'Bulgarian (BG)'], ['Japanese (JA)', 'Chinese (ZH)'], ['Language'], ['P@10']]
1
D19-1076table_2
The comparison between the proposed method LLMap and MUSE on pre-split train and test dictionaries for any and all senses recovery by the algorithms for precision@5. Last row shows the average improvement achieved by LLMap over MUSE
2
[['Lang', 'CS'], ['Lang', 'NO'], ['Lang', 'NL'], ['Lang', 'ZH'], ['Lang', 'KO'], ['Lang', 'JA'], ['Lang', 'HR'], ['Lang', 'ID'], ['Lang', 'FA'], ['Lang', 'BG'], ['Lang', 'ES'], ['Lang', 'TA'], ['Lang', 'HI'], ['Lang', 'BN'], ['Lang', 'AVG']]
2
[['Any Sense', 'MUSE'], ['Any Sense', 'LLMap'], ['All Senses', 'MUSE'], ['All Senses', 'LLMap']]
[['76.66', '78.86', '61.32', '62.17'], ['80', '81.85', '65.57', '67.35'], ['88.53', '89.33', '69.28', '70.52'], ['55', '64.77', '41.88', '51.5'], ['51.94', '54.23', '42.81', '44.4'], ['3.97', '17.62', '3.22', '14.51'], ['62.46', '64.7', '52.19', '53.78'], ['81.2', '85.19', '64.87', '68.2'], ['53.73', '56.03', '42.64', '44.29'], ['65.07', '68.13', '59.45', '59.48'], ['92.1', '92.23', '71.46', '72.11'], ['28.53', '29', '25.6', '26.01'], ['51.2', '53.93', '38.22', '44.53'], ['33.8', '36.42', '42.42', '45.1'], ['3.45', '3.45', '3.07', '3.07']]
column
['precision@5', 'precision@5', 'precision@5', 'precision@5']
['MUSE', 'LLMap']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Any Sense || MUSE</th> <th>Any Sense || LLMap</th> <th>All Senses || MUSE</th> <th>All Senses || LLMap</th> </tr> </thead> <tbody> <tr> <td>Lang || CS</td> <td>76.66</td> <td>78.86</td> <td>61.32</td> <td>62.17</td> </tr> <tr> <td>Lang || NO</td> <td>80</td> <td>81.85</td> <td>65.57</td> <td>67.35</td> </tr> <tr> <td>Lang || NL</td> <td>88.53</td> <td>89.33</td> <td>69.28</td> <td>70.52</td> </tr> <tr> <td>Lang || ZH</td> <td>55</td> <td>64.77</td> <td>41.88</td> <td>51.5</td> </tr> <tr> <td>Lang || KO</td> <td>51.94</td> <td>54.23</td> <td>42.81</td> <td>44.4</td> </tr> <tr> <td>Lang || JA</td> <td>3.97</td> <td>17.62</td> <td>3.22</td> <td>14.51</td> </tr> <tr> <td>Lang || HR</td> <td>62.46</td> <td>64.7</td> <td>52.19</td> <td>53.78</td> </tr> <tr> <td>Lang || ID</td> <td>81.2</td> <td>85.19</td> <td>64.87</td> <td>68.2</td> </tr> <tr> <td>Lang || FA</td> <td>53.73</td> <td>56.03</td> <td>42.64</td> <td>44.29</td> </tr> <tr> <td>Lang || BG</td> <td>65.07</td> <td>68.13</td> <td>59.45</td> <td>59.48</td> </tr> <tr> <td>Lang || ES</td> <td>92.1</td> <td>92.23</td> <td>71.46</td> <td>72.11</td> </tr> <tr> <td>Lang || TA</td> <td>28.53</td> <td>29</td> <td>25.6</td> <td>26.01</td> </tr> <tr> <td>Lang || HI</td> <td>51.2</td> <td>53.93</td> <td>38.22</td> <td>44.53</td> </tr> <tr> <td>Lang || BN</td> <td>33.8</td> <td>36.42</td> <td>42.42</td> <td>45.1</td> </tr> <tr> <td>Lang || AVG</td> <td>3.45</td> <td>3.45</td> <td>3.07</td> <td>3.07</td> </tr> </tbody></table>
Table 2
table_2
D19-1076
8
emnlp2019
Table 2 shows the precision@5 for the pre-split dictionaries. In all 14 languages the LLMap outperforms the MUSE algorithm for recovering both all senses and any sense of a word with significant gains in Chinese and Japanese. These results are consistent with the more comprehensive cross-validation settings. Note that the any sense recovery is on average higher than all senses, pointing to the same fact the model is better at creating a better neighborhood around words where at least one sense of a word can be recovered.
[1, 1, 2, 2]
['Table 2 shows the precision@5 for the pre-split dictionaries.', 'In all 14 languages the LLMap outperforms the MUSE algorithm for recovering both all senses and any sense of a word with significant gains in Chinese and Japanese.', 'These results are consistent with the more comprehensive cross-validation settings.', 'Note that the any sense recovery is on average higher than all senses, pointing to the same fact the model is better at creating a better neighborhood around words where at least one sense of a word can be recovered.']
[None, ['LLMap', 'All Senses', 'Any Sense', 'ZH', 'JA'], None, None]
1
D19-1081table_6
Results for DocRepair trained on different amount of data. For ellipsis, we show inflection/VP scores.
1
[['2.5m'], ['5m'], ['30m']]
1
[['BLEU'], ['deixis'], ['lex. c.'], ['ellipsis']]
[['34.15', '89.2', '75.5', '81.8/71.6'], ['34.44', '90.3', '77.7', '83.6/74.0'], ['34.6', '91.8', '80.6', '86.4/75.2']]
column
['BLEU', 'deixis', 'lex. c.', 'ellipsis']
['2.5m', '5m', '30m']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>deixis</th> <th>lex. c.</th> <th>ellipsis</th> </tr> </thead> <tbody> <tr> <td>2.5m</td> <td>34.15</td> <td>89.2</td> <td>75.5</td> <td>81.8/71.6</td> </tr> <tr> <td>5m</td> <td>34.44</td> <td>90.3</td> <td>77.7</td> <td>83.6/74.0</td> </tr> <tr> <td>30m</td> <td>34.6</td> <td>91.8</td> <td>80.6</td> <td>86.4/75.2</td> </tr> </tbody></table>
Table 6
table_6
D19-1081
6
emnlp2019
6.1 The amount of training data . Table 6 provides BLEU and consistency scores for the DocRepair model trained on different amount of data. We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments). Moreover, we notice that deixis scores are less sensitive to the amount of training data than lexical cohesion and ellipsis scores. The reason might be that, as we observed in our previous work (Voita et al., 2019), inconsistencies in translations due to the presence of deictic words and phrases are more frequent in this dataset than other types of inconsistencies. Also, as we show in Section 7, this is the phenomenon the model learns faster in training.
[2, 1, 1, 1, 2, 2]
['6.1 The amount of training data .', 'Table 6 provides BLEU and consistency scores for the DocRepair model trained on different amount of data.', 'We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments).', 'Moreover, we notice that deixis scores are less sensitive to the amount of training data than lexical cohesion and ellipsis scores.', 'The reason might be that, as we observed in our previous work (Voita et al., 2019), inconsistencies in translations due to the presence of deictic words and phrases are more frequent in this dataset than other types of inconsistencies.', 'Also, as we show in Section 7, this is the phenomenon the model learns faster in training.']
[None, ['BLEU'], ['5m', '30m'], ['deixis', 'lex. c.', 'ellipsis'], None, None]
1
D19-1083table_4
Tokenized case-sensitive BLEU (BLEU) and perplexity (PPL) on training (Train) and development (newstest2013, Dev) set. We randomly select 3K sentence pairs as our training data for evaluation. Lower PPL is better.
2
[['ID', '1'], ['ID', '11'], ['ID', '12'], ['ID', '13'], ['ID', '14'], ['ID', '15'], ['ID', '16']]
2
[['BLEU', 'Train'], ['BLEU', 'Dev'], ['PPL', 'Train'], ['PPL', 'Dev']]
[['28.64', '26.16', '5.23', '4.76'], ['29.63', '26.44', '4.48', '4.38'], ['29.75', '26.16', '4.6', '4.49'], ['29.43', '26.51', '5.09', '4.71'], ['30.71', '26.52', '3.96', '4.32'], ['30.89', '26.53', '4.09', '4.41'], ['30.25', '26.56', '4.62', '4.58']]
column
['BLEU', 'BLEU', 'PPL', 'PPL']
['ID']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || Train</th> <th>BLEU || Dev</th> <th>PPL || Train</th> <th>PPL || Dev</th> </tr> </thead> <tbody> <tr> <td>ID || 1</td> <td>28.64</td> <td>26.16</td> <td>5.23</td> <td>4.76</td> </tr> <tr> <td>ID || 11</td> <td>29.63</td> <td>26.44</td> <td>4.48</td> <td>4.38</td> </tr> <tr> <td>ID || 12</td> <td>29.75</td> <td>26.16</td> <td>4.6</td> <td>4.49</td> </tr> <tr> <td>ID || 13</td> <td>29.43</td> <td>26.51</td> <td>5.09</td> <td>4.71</td> </tr> <tr> <td>ID || 14</td> <td>30.71</td> <td>26.52</td> <td>3.96</td> <td>4.32</td> </tr> <tr> <td>ID || 15</td> <td>30.89</td> <td>26.53</td> <td>4.09</td> <td>4.41</td> </tr> <tr> <td>ID || 16</td> <td>30.25</td> <td>26.56</td> <td>4.62</td> <td>4.58</td> </tr> </tbody></table>
Table 4
table_4
D19-1083
7
emnlp2019
Surprisingly, training deep Transformers with both DS-Init and MAtt improves not only running efficiency but also translation quality (by 0.2 BLEU), compared with DS-Init alone. To get an improved understanding, we analyze model performance on both training and development set. Results in Table 4 show that models with DS-Init yield the best perplexity on both training and development set, and those with T2T achieve the best BLEU on the training set. However, DS-Init+MAtt performs best in terms of BLEU on the development set. This indicates that the success of DS-Init+MAtt comes from its better generalization rather than better fitting training data.
[2, 2, 1, 1, 2]
['Surprisingly, training deep Transformers with both DS-Init and MAtt improves not only running efficiency but also translation quality (by 0.2 BLEU), compared with DS-Init alone.', 'To get an improved understanding, we analyze model performance on both training and development set.', 'Results in Table 4 show that models with DS-Init yield the best perplexity on both training and development set, and those with T2T achieve the best BLEU on the training set.', 'However, DS-Init+MAtt performs best in terms of BLEU on the development set.', 'This indicates that the success of DS-Init+MAtt comes from its better generalization rather than better fitting training data.']
[['15', 'BLEU', '13'], None, ['13', 'PPL', 'Train', 'Dev', 'BLEU', '12', '14'], ['15', 'BLEU'], ['15']]
1
D19-1084table_4
F1 results on OntoNotes test for systems trained on data projected via FastAlign and DiscAlign.
4
[['Method', 'Zh Gold', '# train', '36K'], ['Method', 'FastAlign', '# train', '36K'], ['Method', 'FastAlign', '# train', '53K'], ['Method', 'DiscAlign', '# train', '36K'], ['Method', 'DiscAlign', '# train', '53K']]
1
[['P'], ['R'], ['F1']]
[['75.46', '80.55', '77.81'], ['38.99', '36.61', '37.55'], ['39.46', '36.65', '37.77'], ['51.94', '52.37', '51.76'], ['51.92', '51.93', '51.57']]
column
['P', 'R', 'F1']
['DiscAlign', 'FastAlign']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Zh Gold || # train || 36K</td> <td>75.46</td> <td>80.55</td> <td>77.81</td> </tr> <tr> <td>Method || FastAlign || # train || 36K</td> <td>38.99</td> <td>36.61</td> <td>37.55</td> </tr> <tr> <td>Method || FastAlign || # train || 53K</td> <td>39.46</td> <td>36.65</td> <td>37.77</td> </tr> <tr> <td>Method || DiscAlign || # train || 36K</td> <td>51.94</td> <td>52.37</td> <td>51.76</td> </tr> <tr> <td>Method || DiscAlign || # train || 53K</td> <td>51.92</td> <td>51.93</td> <td>51.57</td> </tr> </tbody></table>
Table 4
table_4
D19-1084
7
emnlp2019
5.1 Results & Analysis. Table 4 shows that while NER systems trained on projected data do categorically worse than an NER system trained on gold-standard data, the higher-quality alignments obtained from DiscAlign lead to a major improvement in F1 when compared to FastAlign.
[2, 1]
['5.1 Results & Analysis.', 'Table 4 shows that while NER systems trained on projected data do categorically worse than an NER system trained on gold-standard data, the higher-quality alignments obtained from DiscAlign lead to a major improvement in F1 when compared to FastAlign.']
[None, ['DiscAlign', 'FastAlign']]
1
D19-1085table_3
Translation quality on Japanese–English data. As seen, the proposed models can also significantly improve translation performance, which shares the same trend with that on Chinese–English translation.
2
[['Model', 'Baseline'], ['Model', 'External ZP Prediction'], ['Model', 'Joint Model'], ['Model', '+ Discourse-Level Context']]
1
[['BLEU'], ['delta']]
[['19.94', '–'], ['20.86', '0.92'], ['21.39', '1.45'], ['22', '2.06']]
column
['BLEU', 'delta']
['Joint Model', '+ Discourse-Level Context']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>delta</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>19.94</td> <td>–</td> </tr> <tr> <td>Model || External ZP Prediction</td> <td>20.86</td> <td>0.92</td> </tr> <tr> <td>Model || Joint Model</td> <td>21.39</td> <td>1.45</td> </tr> <tr> <td>Model || + Discourse-Level Context</td> <td>22</td> <td>2.06</td> </tr> </tbody></table>
Table 3
table_3
D19-1085
6
emnlp2019
4.3 Results on Japanese->English Task. Table 3 lists the results. We compare our models and the best external ZP prediction approach. As seen, our models also significantly improve translation performance, demonstrating the effectiveness and universality of the proposed approach. This improvement on Japanese->English translation is lower than that on Chinese->English, showing that ZP prediction and translation are more challenging for Japanese. The reason may be two folds: 1) Japanese language has a larger number of pronoun variations borrowed from archaism, which leads to more difficulties in learning ZPs; 2) Japanese language is subject-objectverb (SOV) while English has subject-verb-object (SVO) structure, and this poses difficulties for ZP annotation via alignment method.
[2, 1, 1, 1, 2, 2]
['4.3 Results on Japanese->English Task.', 'Table 3 lists the results.', 'We compare our models and the best external ZP prediction approach.', 'As seen, our models also significantly improve translation performance, demonstrating the effectiveness and universality of the proposed approach.', 'This improvement on Japanese->English translation is lower than that on Chinese->English, showing that ZP prediction and translation are more challenging for Japanese.', 'The reason may be two folds: 1) Japanese language has a larger number of pronoun variations borrowed from archaism, which leads to more difficulties in learning ZPs; 2) Japanese language is subject-objectverb (SOV) while English has subject-verb-object (SVO) structure, and this poses difficulties for ZP annotation via alignment method.']
[None, None, ['Joint Model', '+ Discourse-Level Context', 'External ZP Prediction'], ['Joint Model', '+ Discourse-Level Context'], None, None]
1
D19-1092table_4
Comparison with previous work (UAS).
3
[['Model', 'TreeBank Transferring', 'This'], ['Model', 'TreeBank Transferring', 'Guo15'], ['Model', 'TreeBank Transferring', 'Guo16'], ['Model', 'TreeBank Transferring', 'TA16'], ['Model', 'Annotation Projection', 'MX14'], ['Model', 'Annotation Projection', 'RC15'], ['Model', 'Annotation Projection', 'LA16'], ['Model', 'TreeBank Transferring + Annotation Projection', 'RC17']]
1
[['DE'], ['ES'], ['FR'], ['IT'], ['PT']]
[['72.78', '81.44', '83.77', '86.13', '84.05'], ['60.35', '71.9', '72.93', '-', '-'], ['65.01', '79', '77.69', '78.49', '81.86'], ['75.27', '76.85', '79.21', '-', '-'], ['74.3', '75.53', '70.14', '77.74', '76.65'], ['79.68', '80.86', '82.72', '83.67', '82.07'], ['75.99', '78.94', '80.8', '79.39', '-'], ['82.1', '82.6', '83.9', '84.4', '84.6']]
column
['uas', 'uas', 'uas', 'uas', 'uas']
['This']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DE</th> <th>ES</th> <th>FR</th> <th>IT</th> <th>PT</th> </tr> </thead> <tbody> <tr> <td>Model || TreeBank Transferring || This</td> <td>72.78</td> <td>81.44</td> <td>83.77</td> <td>86.13</td> <td>84.05</td> </tr> <tr> <td>Model || TreeBank Transferring || Guo15</td> <td>60.35</td> <td>71.9</td> <td>72.93</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || TreeBank Transferring || Guo16</td> <td>65.01</td> <td>79</td> <td>77.69</td> <td>78.49</td> <td>81.86</td> </tr> <tr> <td>Model || TreeBank Transferring || TA16</td> <td>75.27</td> <td>76.85</td> <td>79.21</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Annotation Projection || MX14</td> <td>74.3</td> <td>75.53</td> <td>70.14</td> <td>77.74</td> <td>76.65</td> </tr> <tr> <td>Model || Annotation Projection || RC15</td> <td>79.68</td> <td>80.86</td> <td>82.72</td> <td>83.67</td> <td>82.07</td> </tr> <tr> <td>Model || Annotation Projection || LA16</td> <td>75.99</td> <td>78.94</td> <td>80.8</td> <td>79.39</td> <td>-</td> </tr> <tr> <td>Model || TreeBank Transferring + Annotation Projection || RC17</td> <td>82.1</td> <td>82.6</td> <td>83.9</td> <td>84.4</td> <td>84.6</td> </tr> </tbody></table>
Table 4
table_4
D19-1092
8
emnlp2019
5.5 Comparison with Previous Work . We compare our method with previous work in the literature. Table 4 shows the results, where the UAS values are reported. Our model denoted by This refers to the model of Src + Mix. Note that these models are not directly comparable due to the setting and baseline parser differences. The first block shows several models by directly transferring gold-standard source treebank knowledge into the target side, including the models of Guo15 (Guo et al., 2015), Guo16 (Guo et al., 2016b) and TA16 (Tiedemann and Agić, 2016). Our model gives the best performance with one exception on the German language. One possible reason may be that TA16 has exploited multiple sources of treebanks besides English.
[2, 2, 1, 2, 2, 1, 1, 2]
['5.5 Comparison with Previous Work .', 'We compare our method with previous work in the literature.', 'Table 4 shows the results, where the UAS values are reported.', 'Our model denoted by This refers to the model of Src + Mix.', 'Note that these models are not directly comparable due to the setting and baseline parser differences.', 'The first block shows several models by directly transferring gold-standard source treebank knowledge into the target side, including the models of Guo15 (Guo et al., 2015), Guo16 (Guo et al., 2016b) and TA16 (Tiedemann and Agić, 2016).', 'Our model gives the best performance with one exception on the German language.', 'One possible reason may be that TA16 has exploited multiple sources of treebanks besides English.']
[None, None, None, ['This'], None, ['Guo15', 'Guo16', 'TA16'], ['This', ' DE'], [' DE']]
1
D19-1093table_2
Dependency parsing results on English Penn Treebank v3.0.
3
[['Approach', 'Baselines', 'StackPtr (paper)'], ['Approach', 'Baselines', 'StackPtr (code)'], ['Approach', 'Proposed Model', ' H-PtrNet-PST (Gate)'], ['Approach', 'Proposed Model', ' H-PtrNet-PST (SGate)'], ['Approach', 'Proposed Model', ' H-PtrNet-PS (Gate)']]
1
[['UAS'], ['LAS']]
[[' 96.12±0.03', ' 95.06±0.05'], [' 95.94±0.03', ' 94.91±0.05'], [' 96.03±0.02', ' 94.99±0.02'], [' 96.04±0.05', ' 95.00±0.06'], [' 96.09±0.05', ' 95.03±0.03']]
column
['UAS', 'LAS']
[' H-PtrNet-PST (Gate)', ' H-PtrNet-PST (SGate)', ' H-PtrNet-PS (Gate)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Approach || Baselines || StackPtr (paper)</td> <td>96.12±0.03</td> <td>95.06±0.05</td> </tr> <tr> <td>Approach || Baselines || StackPtr (code)</td> <td>95.94±0.03</td> <td>94.91±0.05</td> </tr> <tr> <td>Approach || Proposed Model || H-PtrNet-PST (Gate)</td> <td>96.03±0.02</td> <td>94.99±0.02</td> </tr> <tr> <td>Approach || Proposed Model || H-PtrNet-PST (SGate)</td> <td>96.04±0.05</td> <td>95.00±0.06</td> </tr> <tr> <td>Approach || Proposed Model || H-PtrNet-PS (Gate)</td> <td>96.09±0.05</td> <td>95.03±0.03</td> </tr> </tbody></table>
Table 2
table_2
D19-1093
7
emnlp2019
Results on English Penn Treebank. Table 2 presents the results on English Penn Treebank. StackPtr (paper) refer to the results reported by Ma et al. (2018), and StackPtr (code) is our run of their code in identical settings as ours. Our model H-PtrNet-PST (Gate) outperforms the baseline by 0.09 and 0.08 in terms of UAS and LAS, respectively. Performance of H-PtrNet-PST (SGate) is close to that of H-PtrNet-PST (Gate), though we see slight improvement. We also test H-PtrNet-PS (Gate), the model with parent and sibling connections only, which further improves the performance to 96.09 and 95.03 in UAS and LAS.
[2, 1, 2, 1, 1, 1]
['Results on English Penn Treebank.', 'Table 2 presents the results on English Penn Treebank.', 'StackPtr (paper) refer to the results reported by Ma et al. (2018), and StackPtr (code) is our run of their code in identical settings as ours.', 'Our model H-PtrNet-PST (Gate) outperforms the baseline by 0.09 and 0.08 in terms of UAS and LAS, respectively.', 'Performance of H-PtrNet-PST (SGate) is close to that of H-PtrNet-PST (Gate), though we see slight improvement.', 'We also test H-PtrNet-PS (Gate), the model with parent and sibling connections only, which further improves the performance to 96.09 and 95.03 in UAS and LAS.']
[None, None, ['StackPtr (paper)', 'StackPtr (code)'], [' H-PtrNet-PST (Gate)', 'UAS', ' LAS'], [' H-PtrNet-PST (SGate)', ' H-PtrNet-PST (Gate)'], [' H-PtrNet-PS (Gate)', 'UAS', ' LAS']]
1
D19-1094table_4
CoNLL-2009 results on Chinese, German, and Spanish (test sets). Differences in F1 between our models and previous systems are statistically significant (p < 0.05) using stratified shuffling (Noreen, 1989).
2
[['Chinese', 'Björkelund et al. (2010)'], ['Chinese', 'Roth and Lapata (2016)'], ['Chinese', 'Marcheggiani and Titov (2017)'], ['Chinese', 'He et al. (2018b)'], ['Chinese', 'Cai et al. (2018)'], ['Chinese', 'Li et al. (2018)'], ['Chinese', 'Ours (supervised training)'], ['Chinese', 'Ours (with CVT)'], ['German', 'Björkelund et al. (2010)'], ['German', 'Roth and Lapata (2016)'], ['German', 'Ours (supervised training)'], ['German', 'Ours (with CVT)'], ['Spanish', 'Björkelund et al. (2010)'], ['Spanish', 'Roth and Lapata (2016)'], ['Spanish', 'Marcheggiani et al. (2017)'], ['Spanish', 'Ours (supervised training)'], ['Spanish', 'Ours(with CVT)']]
1
[['P'], ['R'], ['F 1']]
[['82.4', '75.1', '78.6'], ['83.2', '75.9', '79.4'], ['84.6', '80.4', '82.5'], ['84.2', '81.5', '82.8'], ['84.7', '84', '84.3'], ['84.8', '81.2', '83'], ['84.9', '84.3', '84.6'], ['85.4', '84.6', '85'], ['81.2', '78.3', '79.7'], ['81.8', '78.5', '80.1'], ['84.5', '82.1', '83.3'], ['84.9', '82.7', '83.8'], ['78.9', '74.3', '76.5'], ['83.2', '77.4', '80.2'], ['81.4', '79.3', '80.3'], ['83', '81.3', '82.1'], ['83.6', '82.2', '82.9']]
column
['P', 'R', 'F1']
['Ours (supervised training)', 'Ours (with CVT)', 'Ours(with CVT)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F 1</th> </tr> </thead> <tbody> <tr> <td>Chinese || Björkelund et al. (2010)</td> <td>82.4</td> <td>75.1</td> <td>78.6</td> </tr> <tr> <td>Chinese || Roth and Lapata (2016)</td> <td>83.2</td> <td>75.9</td> <td>79.4</td> </tr> <tr> <td>Chinese || Marcheggiani and Titov (2017)</td> <td>84.6</td> <td>80.4</td> <td>82.5</td> </tr> <tr> <td>Chinese || He et al. (2018b)</td> <td>84.2</td> <td>81.5</td> <td>82.8</td> </tr> <tr> <td>Chinese || Cai et al. (2018)</td> <td>84.7</td> <td>84</td> <td>84.3</td> </tr> <tr> <td>Chinese || Li et al. (2018)</td> <td>84.8</td> <td>81.2</td> <td>83</td> </tr> <tr> <td>Chinese || Ours (supervised training)</td> <td>84.9</td> <td>84.3</td> <td>84.6</td> </tr> <tr> <td>Chinese || Ours (with CVT)</td> <td>85.4</td> <td>84.6</td> <td>85</td> </tr> <tr> <td>German || Björkelund et al. (2010)</td> <td>81.2</td> <td>78.3</td> <td>79.7</td> </tr> <tr> <td>German || Roth and Lapata (2016)</td> <td>81.8</td> <td>78.5</td> <td>80.1</td> </tr> <tr> <td>German || Ours (supervised training)</td> <td>84.5</td> <td>82.1</td> <td>83.3</td> </tr> <tr> <td>German || Ours (with CVT)</td> <td>84.9</td> <td>82.7</td> <td>83.8</td> </tr> <tr> <td>Spanish || Björkelund et al. (2010)</td> <td>78.9</td> <td>74.3</td> <td>76.5</td> </tr> <tr> <td>Spanish || Roth and Lapata (2016)</td> <td>83.2</td> <td>77.4</td> <td>80.2</td> </tr> <tr> <td>Spanish || Marcheggiani et al. (2017)</td> <td>81.4</td> <td>79.3</td> <td>80.3</td> </tr> <tr> <td>Spanish || Ours (supervised training)</td> <td>83</td> <td>81.3</td> <td>82.1</td> </tr> <tr> <td>Spanish || Ours(with CVT)</td> <td>83.6</td> <td>82.2</td> <td>82.9</td> </tr> </tbody></table>
Table 4
table_4
D19-1094
7
emnlp2019
Table 4 presents the results of our experiments (without ELMo) on Chinese, German, and Spanish. Although we have not performed detailed parameter selection in these languages (i.e., we used the same parameters as in English), our model achieves state-of-the-art performance across all three languages.
[1, 1]
['Table 4 presents the results of our experiments (without ELMo) on Chinese, German, and Spanish.', 'Although we have not performed detailed parameter selection in these languages (i.e., we used the same parameters as in English), our model achieves state-of-the-art performance across all three languages.']
[['Chinese', 'German', 'Spanish'], ['Ours (supervised training)', 'Ours (with CVT)', 'Ours(with CVT)', 'Chinese', 'German', 'Spanish']]
1
D19-1102table_3
LAS results on North Sámi development data. mono-base and cross-base are models without data augmentation. % improvements over mono-base shown in parentheses.
2
[['size', 'T100'], ['size', 'T50'], ['size', 'T10']]
2
[['MONOLINGUAL', 'mono-base'], ['MONOLINGUAL', 'Morph'], ['MONOLINGUAL', 'Nonce'], ['CROSS-LINGUAL', 'cross-base'], ['CROSS-LINGUAL', 'Morph'], ['CROSS-LINGUAL', 'Nonce']]
[['53.3', '56.0 (+3.3)', ' 56.3 (+3.0)', '61.3 (+8.0)', '60.9 (+7.6)', '61.7 (+8.4)'], ['42.5', '46.6 (+4.1)', ' 46.5 (+4.0)', '52.0 (+9.5)', '51.7 (+9.2)', '52.0 (+9.5)'], ['18.5', '27.1 (+8.6)', ' 27.8 (+9.3)', '34.7 (+16.2)', '37.3 (+18.8)', '35.4 (+16.9)']]
column
['LAS', 'LAS', 'LAS', 'LAS', 'LAS', 'LAS']
['Morph', 'Nonce']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MONOLINGUAL || mono-base</th> <th>MONOLINGUAL || Morph</th> <th>MONOLINGUAL || Nonce</th> <th>CROSS-LINGUAL || cross-base</th> <th>CROSS-LINGUAL || Morph</th> <th>CROSS-LINGUAL || Nonce</th> </tr> </thead> <tbody> <tr> <td>size || T100</td> <td>53.3</td> <td>56.0 (+3.3)</td> <td>56.3 (+3.0)</td> <td>61.3 (+8.0)</td> <td>60.9 (+7.6)</td> <td>61.7 (+8.4)</td> </tr> <tr> <td>size || T50</td> <td>42.5</td> <td>46.6 (+4.1)</td> <td>46.5 (+4.0)</td> <td>52.0 (+9.5)</td> <td>51.7 (+9.2)</td> <td>52.0 (+9.5)</td> </tr> <tr> <td>size || T10</td> <td>18.5</td> <td>27.1 (+8.6)</td> <td>27.8 (+9.3)</td> <td>34.7 (+16.2)</td> <td>37.3 (+18.8)</td> <td>35.4 (+16.9)</td> </tr> </tbody></table>
Table 3
table_3
D19-1102
5
emnlp2019
We employ two baselines: a monolingual model (§3.1) and a cross-lingual model (§2.3), both without data augmentation. The monolingual model acts as a simple baseline, to resemble a situation when the target treebank does not have any source treebank (i.e., no available treebanks from related languages). The cross-lingual model serves as a strong baseline, simulating a case when there is a source treebank. We compare both baselines to models trained with MORPH and NONCE augmentation methods. Table 3 reports our results. We see that the cross-lingual training (cross-base) performs better than monolingual models even with augmentation. For the T10 setting, cross-base achieves almost twice as much as the monolingual baseline (mono-base). The benefits of data augmentation are less evident in the cross-lingual setting, but in the T10 scenario, data augmentation still clearly helps. Overall, cross-lingual combined with data augmentation yields the best result.
[1, 2, 2, 1, 1, 1, 1, 1, 1]
['We employ two baselines: a monolingual model (§3.1) and a cross-lingual model (§2.3), both without data augmentation.', 'The monolingual model acts as a simple baseline, to resemble a situation when the target treebank does not have any source treebank (i.e., no available treebanks from related languages).', 'The cross-lingual model serves as a strong baseline, simulating a case when there is a source treebank.', 'We compare both baselines to models trained with MORPH and NONCE augmentation methods.', 'Table 3 reports our results.', 'We see that the cross-lingual training (cross-base) performs better than monolingual models even with augmentation.', 'For the T10 setting, cross-base achieves almost twice as much as the monolingual baseline (mono-base).', 'The benefits of data augmentation are less evident in the cross-lingual setting, but in the T10 scenario, data augmentation still clearly helps.', 'Overall, cross-lingual combined with data augmentation yields the best result.']
[['mono-base', 'cross-base'], ['mono-base'], ['cross-base'], ['mono-base', 'cross-base', 'Morph', 'Nonce'], None, ['CROSS-LINGUAL', 'MONOLINGUAL'], ['T10', 'CROSS-LINGUAL', 'cross-base', 'MONOLINGUAL', 'mono-base'], ['T10', 'CROSS-LINGUAL', 'Morph', 'Nonce'], ['CROSS-LINGUAL', 'Morph', 'Nonce']]
1
D19-1102table_7
LAS results on development sets. zero-shot denotes results where we predict using model trained only on the source treebank.
2
[['Language', 'Galician'], ['Language', 'Kazakh'], ['Language', 'Kazakh (translit.)']]
2
[['-', 'zero-shot'], ['CROSS-LINGUAL', ' +fastText'], ['CROSS-LINGUAL', ' +Morph']]
[['51.9', '72.8', '71'], ['12.5', '27.7', '28.4'], ['21.2', '31.1', '36.7']]
column
['LAS', 'LAS', 'LAS']
['CROSS-LINGUAL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || zero-shot</th> <th>CROSS-LINGUAL || +fastText</th> <th>CROSS-LINGUAL || +Morph</th> </tr> </thead> <tbody> <tr> <td>Language || Galician</td> <td>51.9</td> <td>72.8</td> <td>71</td> </tr> <tr> <td>Language || Kazakh</td> <td>12.5</td> <td>27.7</td> <td>28.4</td> </tr> <tr> <td>Language || Kazakh (translit.)</td> <td>21.2</td> <td>31.1</td> <td>36.7</td> </tr> </tbody></table>
Table 7
table_7
D19-1102
7
emnlp2019
5.1 Experimental results . Table 7 reports the LAS performance on the development sets. MORPH augmentation improves performance over the zero-shot baseline and achieves comparable or better LAS with a cross-lingual model trained with pre-trained word embeddings. Next, we look at the effects of transliteration (see Kazakh vs Kazakh (translit.) in Table 7). In the zero-shot experiments, simply mapping both Turkish and Kazakh characters to the Latin alphabet improves accuracy from 12.5 to 21.2 LAS. Cross-lingual training with MORPH further improves performance to 36.7 LAS.
[2, 1, 1, 2, 1, 1]
['5.1 Experimental results .', 'Table 7 reports the LAS performance on the development sets.', 'MORPH augmentation improves performance over the zero-shot baseline and achieves comparable or better LAS with a cross-lingual model trained with pre-trained word embeddings.', 'Next, we look at the effects of transliteration (see Kazakh vs Kazakh (translit.) in Table 7).', 'In the zero-shot experiments, simply mapping both Turkish and Kazakh characters to the Latin alphabet improves accuracy from 12.5 to 21.2 LAS.', 'Cross-lingual training with MORPH further improves performance to 36.7 LAS.']
[None, None, [' +Morph', 'zero-shot', 'CROSS-LINGUAL'], ['Kazakh', 'Kazakh (translit.)'], ['zero-shot'], ['CROSS-LINGUAL', ' +Morph']]
1
D19-1109table_2
Main results for Task 1: Commonsense knowledge base completion (test F1 score) and Task 2: Wikipedia mining (quality scores out of 4). Results are included from the sentence generation methods of simple concatenation, hand-crafted templates, templates plus grammatical transformations, and coherency ranking. DNN, Factorized, and Prototypical models are described in Jastrzebski et al. (2018).
3
[['Model', 'Unsupervised', 'CONCATENATION'], ['Model', 'Unsupervised', 'TEMPLATE'], ['Model', 'Unsupervised', 'TEMPL.+GRAMMAR'], ['Model', 'Unsupervised', 'COHERENCY RANK'], ['Model', 'Supervised', 'DNN'], ['Model', 'Supervised', 'FACTORIZED'], ['Model', 'Supervised', 'PROTOTYPICAL']]
1
[[' Task 1'], [' Task 2']]
[[' 68.8', ' 2.95 ± 0.11'], [' 72.2', ' 2.98 ± 0.11'], [' 74.4', ' 2.56 ± 0.13'], [' 78.8', ' 3.00 ± 0.12'], [' 89.2', ' 2.50'], [' 89.0', ' 2.61'], [' 79.4', ' 2.55']]
column
['F1', 'F1']
['Unsupervised', 'COHERENCY RANK']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Task 1</th> <th>Task 2</th> </tr> </thead> <tbody> <tr> <td>Model || Unsupervised || CONCATENATION</td> <td>68.8</td> <td>2.95 ± 0.11</td> </tr> <tr> <td>Model || Unsupervised || TEMPLATE</td> <td>72.2</td> <td>2.98 ± 0.11</td> </tr> <tr> <td>Model || Unsupervised || TEMPL.+GRAMMAR</td> <td>74.4</td> <td>2.56 ± 0.13</td> </tr> <tr> <td>Model || Unsupervised || COHERENCY RANK</td> <td>78.8</td> <td>3.00 ± 0.12</td> </tr> <tr> <td>Model || Supervised || DNN</td> <td>89.2</td> <td>2.50</td> </tr> <tr> <td>Model || Supervised || FACTORIZED</td> <td>89.0</td> <td>2.61</td> </tr> <tr> <td>Model || Supervised || PROTOTYPICAL</td> <td>79.4</td> <td>2.55</td> </tr> </tbody></table>
Table 2
table_2
D19-1109
5
emnlp2019
Table 2 shows the full results. Our unsupervised approach achieves a test set F1 score of 78.8, comparable to the 79.4 F1 score found by the supervised prototypical approach. The DNN and Factorized models significantly outperformed our approach with F1 scores of 89.2 and 89.0, respectively. Our grid search found an optimal λ value of 1.65 for the Concatenation sentence generation model and 1.55 for the Coherency Ranking model. The Template and Template + Grammar methods found λ values of 1.20 and 0.95, respectively.
[1, 1, 1, 2, 2]
['Table 2 shows the full results.', 'Our unsupervised approach achieves a test set F1 score of 78.8, comparable to the 79.4 F1 score found by the supervised prototypical approach.', 'The DNN and Factorized models significantly outperformed our approach with F1 scores of 89.2 and 89.0, respectively.', 'Our grid search found an optimal λ value of 1.65 for the Concatenation sentence generation model and 1.55 for the Coherency Ranking model.', 'The Template and Template + Grammar methods found λ values of 1.20 and 0.95, respectively.']
[None, ['Unsupervised', 'COHERENCY RANK', 'PROTOTYPICAL'], ['FACTORIZED', 'DNN'], None, None]
1
D19-1112table_1
Results on GLUE test sets. Metrics differ per task (explained in Appendix A) but the best result is highlighted.
2
[['Model', 'BERT'], ['Model', 'MT-DNN'], ['Model', 'MAML'], ['Model', 'FOMAML'], ['Model', 'Reptile']]
2
[['Test Dataset', 'CoLA'], [' Test Dataset', ' MRPC'], [' Test Dataset', ' STS-B'], [' Test Dataset', ' RTE']]
[['52.1', ' 88.9/84.8', ' 87.1/85.8', '66.4'], ['51.7', ' 89.9/86.3', ' 87.6/86.8', '75.4'], ['53.4', ' 89.5/85.8', ' 88.0/87.3', '76.4'], ['51.6', ' 89.9/86.4', ' 88.6/88.0', '74.1'], ['53.2', ' 90.2/86.7', ' 88.7/88.1', '77']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Reptile']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test Dataset || CoLA</th> <th>Test Dataset || MRPC</th> <th>Test Dataset || STS-B</th> <th>Test Dataset || RTE</th> </tr> </thead> <tbody> <tr> <td>Model || BERT</td> <td>52.1</td> <td>88.9/84.8</td> <td>87.1/85.8</td> <td>66.4</td> </tr> <tr> <td>Model || MT-DNN</td> <td>51.7</td> <td>89.9/86.3</td> <td>87.6/86.8</td> <td>75.4</td> </tr> <tr> <td>Model || MAML</td> <td>53.4</td> <td>89.5/85.8</td> <td>88.0/87.3</td> <td>76.4</td> </tr> <tr> <td>Model || FOMAML</td> <td>51.6</td> <td>89.9/86.4</td> <td>88.6/88.0</td> <td>74.1</td> </tr> <tr> <td>Model || Reptile</td> <td>53.2</td> <td>90.2/86.7</td> <td>88.7/88.1</td> <td>77</td> </tr> </tbody></table>
Table 1
table_1
D19-1112
3
emnlp2019
3.1 Results . We first use the three meta-learning algorithms with PPS sampling and present in Table 1 the experimental results on the GLUE test set. Generally, the meta-learning algorithms achieve better performance than the strong baseline models, with Reptile performing the best. Since the MT-DNN also uses PPS sampling, the improvements suggest meta-learning algorithms can indeed learn better representations compared with multi-task learning. Reptile outperforming MAML indicates that Reptile is a more effective and efficient algorithm compared with MAML in our setting.
[0, 1, 1, 1, 1]
['3.1 Results .', 'We first use the three meta-learning algorithms with PPS sampling and present in Table 1 the experimental results on the GLUE test set.', 'Generally, the meta-learning algorithms achieve better performance than the strong baseline models, with Reptile performing the best.', 'Since the MT-DNN also uses PPS sampling, the improvements suggest meta-learning algorithms can indeed learn better representations compared with multi-task learning.', 'Reptile outperforming MAML indicates that Reptile is a more effective and efficient algorithm compared with MAML in our setting.']
[None, None, ['Reptile'], ['MT-DNN'], ['Reptile', 'MAML']]
1
D19-1114table_1
Test results on different datasets.
2
[['Dataset', 'InferSent'], ['Dataset', 'SSE'], ['Dataset', 'DecAtt'], ['Dataset', 'ESIMtree'], ['Dataset', 'ESIMseq'], ['Dataset', 'ESIMseq+tree'], ['Dataset', 'PWIMour'], ['Dataset', 'mPWIMseq'], ['Dataset', 'mPWIMseq+tree']]
2
[['SNLI', 'Acc'], ['Quora', 'Acc'], ['Twitter', 'F1'], ['PIT-2015', 'F1'], ['STS-2014', 'Pearson’s r'], ['WikiQA', 'MAP'], ['WikiQA', 'MRR'], ['TrecQA', 'MAP'], ['TrecQA', 'MRR'], ['SICK', 'Pearson’s r'], ['SICK', 'ρ']]
[['0.846', '0.866', '0.746', '0.451', '0.715', '0.287', '0.287', '0.521', '0.559', '-', '-'], ['0.855', '0.878', '0.65', '0.422', '0.378', '0.624', '0.638', '0.628', '0.670', '-', '-'], ['0.856', '0.845', '0.652', '0.43', '0.317', '0.603', '0.619', '0.660', '0.712', '-', '-'], ['0.864', '0.755', '0.74', '0.447', '0.493', '0.618', '0.633', '0.698', '0.734', '-', '-'], ['0.87', '0.85', '0.748', '0.52', '0.602', '0.652', '0.664', '0.771', '0.795', '-', '-'], ['0.871', '0.854', '0.759', '0.538', '0.589', '0.647', '0.658', '0.749', '0.768', '-', '-'], ['0.822', '0.853', '0.745', '0.602', '0.695', '0.709', '0.723', '0.759', '0.822', '0.871', '0.809'], ['0.851', '0.862', '0.757', '0.612', '0.714', '0.717', '0.728', '0.774', '0.835', '0.878', '0.821'], ['0.855', '0.87', '0.743', '0.623', '0.718', '0.735', '0.751', '0.781', '0.821', '0.887', '0.834']]
column
['Acc', 'Acc', 'F1', 'F1', 'Pearson’s r', 'MAP', 'MRR', 'MAP', 'MRR', 'Pearson’s r', 'ρ']
['mPWIMseq']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNLI || Acc</th> <th>Quora || Acc</th> <th>Twitter || F1</th> <th>PIT-2015 || F1</th> <th>STS-2014 || Pearson’s r</th> <th>WikiQA || MAP</th> <th>WikiQA || MRR</th> <th>TrecQA || MAP</th> <th>TrecQA || MRR</th> <th>SICK || Pearson’s r</th> <th>SICK || ρ</th> </tr> </thead> <tbody> <tr> <td>Dataset || InferSent</td> <td>0.846</td> <td>0.866</td> <td>0.746</td> <td>0.451</td> <td>0.715</td> <td>0.287</td> <td>0.287</td> <td>0.521</td> <td>0.559</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || SSE</td> <td>0.855</td> <td>0.878</td> <td>0.65</td> <td>0.422</td> <td>0.378</td> <td>0.624</td> <td>0.638</td> <td>0.628</td> <td>0.670</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || DecAtt</td> <td>0.856</td> <td>0.845</td> <td>0.652</td> <td>0.43</td> <td>0.317</td> <td>0.603</td> <td>0.619</td> <td>0.660</td> <td>0.712</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || ESIMtree</td> <td>0.864</td> <td>0.755</td> <td>0.74</td> <td>0.447</td> <td>0.493</td> <td>0.618</td> <td>0.633</td> <td>0.698</td> <td>0.734</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || ESIMseq</td> <td>0.87</td> <td>0.85</td> <td>0.748</td> <td>0.52</td> <td>0.602</td> <td>0.652</td> <td>0.664</td> <td>0.771</td> <td>0.795</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || ESIMseq+tree</td> <td>0.871</td> <td>0.854</td> <td>0.759</td> <td>0.538</td> <td>0.589</td> <td>0.647</td> <td>0.658</td> <td>0.749</td> <td>0.768</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || PWIMour</td> <td>0.822</td> <td>0.853</td> <td>0.745</td> <td>0.602</td> <td>0.695</td> <td>0.709</td> <td>0.723</td> <td>0.759</td> <td>0.822</td> <td>0.871</td> <td>0.809</td> </tr> <tr> <td>Dataset || mPWIMseq</td> <td>0.851</td> <td>0.862</td> <td>0.757</td> <td>0.612</td> <td>0.714</td> <td>0.717</td> <td>0.728</td> <td>0.774</td> <td>0.835</td> <td>0.878</td> <td>0.821</td> </tr> <tr> <td>Dataset || mPWIMseq+tree</td> <td>0.855</td> <td>0.87</td> <td>0.743</td> <td>0.623</td> <td>0.718</td> <td>0.735</td> <td>0.751</td> <td>0.781</td> <td>0.821</td> <td>0.887</td> <td>0.834</td> </tr> </tbody></table>
Table 1
table_1
D19-1114
3
emnlp2019
SSE (Nie and Bansal, 2017) is a stacked BiLSTM model with shortcut connections and finetuning of word embeddings. Unlike our setting, where each word is represented by its own hidden state in the final output layer, SSE applies max-pooling over time to the output of the last BiLSTM layer to extract the final sentence feature vector. Based on Table 1, mPWIMseq clearly outperforms SSE on Twitter, PIT-2015, STS-2014, WikiQA, and TrecQA. However, for the SNLI and Quora datasets, SSE slightly exceeds mPWIM by 0.4% and 1.6%, respectively. SNLI and Quora have the largest training data among all the datasets with 550k and 393k training sentence pairs, respectively, which suggests that SSE performs better on larger data beyond a certain threshold. We surmise that as the dataset increases in size, the simplicity of SSE will have more performance advantages.
[2, 2, 1, 1, 2, 2]
['SSE (Nie and Bansal, 2017) is a stacked BiLSTM model with shortcut connections and finetuning of word embeddings.', 'Unlike our setting, where each word is represented by its own hidden state in the final output layer, SSE applies max-pooling over time to the output of the last BiLSTM layer to extract the final sentence feature vector.', 'Based on Table 1, mPWIMseq clearly outperforms SSE on Twitter, PIT-2015, STS-2014, WikiQA, and TrecQA.', 'However, for the SNLI and Quora datasets, SSE slightly exceeds mPWIM by 0.4% and 1.6%, respectively.', 'SNLI and Quora have the largest training data among all the datasets with 550k and 393k training sentence pairs, respectively, which suggests that SSE performs better on larger data beyond a certain threshold.', 'We surmise that as the dataset increases in size, the simplicity of SSE will have more performance advantages.']
[['SSE'], ['SSE'], ['mPWIMseq', 'SSE', 'Twitter', 'PIT-2015', 'STS-2014', 'WikiQA', 'TrecQA'], ['SSE', 'mPWIMseq', 'SNLI', 'Quora'], ['SNLI', 'Quora', 'SSE'], ['SSE']]
1
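The description in the record above notes that SSE extracts its sentence vector by max-pooling over time across the last BiLSTM layer's outputs. A minimal sketch of that pooling step; the shapes and names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def max_pool_over_time(hidden_states):
    """Collapse a (seq_len, hidden_dim) matrix of per-token encoder
    outputs into one sentence vector by taking the elementwise maximum
    across the time (token) axis, as the SSE description above states."""
    return hidden_states.max(axis=0)

# Toy example: 3 tokens, 4-dimensional hidden states.
H = np.array([[ 0.1, 0.9, -0.2,  0.0],
              [ 0.5, 0.2,  0.3, -1.0],
              [-0.4, 0.8,  0.7,  0.2]])
sentence_vec = max_pool_over_time(H)  # -> [0.5, 0.9, 0.7, 0.2]
```

The result is independent of sentence length, which is what lets a fixed-size classifier sit on top of variable-length inputs.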
D19-1122table_7
Accuracy of models trained and evaluated on different parts of the dataset. We report the results on simple, complex, and all sentences.
2
[['Test data', ' AMT training data AMT'], ['Test data', ' AMT training data GCS'], ['Test data', ' AMT training data AMT + GCS'], ['Test data', ' GCS training data AMT'], ['Test data', ' GCS training data GCS'], ['Test data', ' GCS training data AMT + GCS'], ['Test data', ' AMT & GCS training data AMT'], ['Test data', ' AMT & GCS training data GCS'], ['Test data', ' AMT & GCS training data AMT + GCS']]
1
[[' Simple'], [' Complex'], [' All']]
[['0.99', '0.78', '0.89'], ['0.99', '0.76', '0.79'], ['0.99', '0.77', '0.88'], ['0.96', '0.76', '0.86'], ['0.97', '0.94', '0.94'], ['0.96', '0.82', '0.87'], ['0.99', '0.9', '0.95'], ['0.98', '0.94', '0.95'], ['0.99', '0.91', '0.95']]
column
['accuracy', 'accuracy', 'accuracy']
[' AMT & GCS training data AMT + GCS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Simple</th> <th>Complex</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>Test data || AMT training data AMT</td> <td>0.99</td> <td>0.78</td> <td>0.89</td> </tr> <tr> <td>Test data || AMT training data GCS</td> <td>0.99</td> <td>0.76</td> <td>0.79</td> </tr> <tr> <td>Test data || AMT training data AMT + GCS</td> <td>0.99</td> <td>0.77</td> <td>0.88</td> </tr> <tr> <td>Test data || GCS training data AMT</td> <td>0.96</td> <td>0.76</td> <td>0.86</td> </tr> <tr> <td>Test data || GCS training data GCS</td> <td>0.97</td> <td>0.94</td> <td>0.94</td> </tr> <tr> <td>Test data || GCS training data AMT + GCS</td> <td>0.96</td> <td>0.82</td> <td>0.87</td> </tr> <tr> <td>Test data || AMT &amp; GCS training data AMT</td> <td>0.99</td> <td>0.9</td> <td>0.95</td> </tr> <tr> <td>Test data || AMT &amp; GCS training data GCS</td> <td>0.98</td> <td>0.94</td> <td>0.95</td> </tr> <tr> <td>Test data || AMT &amp; GCS training data AMT + GCS</td> <td>0.99</td> <td>0.91</td> <td>0.95</td> </tr> </tbody></table>
Table 7
table_7
D19-1122
5
emnlp2019
Because the two subsections – AMT and GCS – are slightly different despite being obtained using the same questionnaire (see Table 3), we test whether this difference influences the relation extraction models. We evaluate models trained using training data from one source and tested using data from the other source. A robust model should be able to detect and extract the targeted relations even when they appear in sentences of different structure and complexity. This would be reflected in close results on its own (same as training) or the other subset’s test data. Table 7 shows the results in terms of accuracy for the various experimental set-ups. The results reflect the difference between the two subsets: the results on the GCS data fluctuate more (between 0.79 and 0.94 accuracy) when the AMT or the GCS data is used for training, while AMT is rather stable (0.86 – 0.89 accuracy). Using all available training data leads to best results on both test subsets, for both simple and complex sentences.
[2, 2, 2, 2, 1, 1, 1]
['Because the two subsections – AMT and GCS – are slightly different despite being obtained using the same questionnaire (see Table 3), we test whether this difference influences the relation extraction models.', 'We evaluate models trained using training data from one source and tested using data from the other source.', 'A robust model should be able to detect and extract the targeted relations even when they appear in sentences of different structure and complexity.', 'This would be reflected in close results on its own (same as training) or the other subset’s test data.', 'Table 7 shows the results in terms of accuracy for the various experimental set-ups.', 'The results reflect the difference between the two subsets: the results on the GCS data fluctuate more (between 0.79 and 0.94 accuracy) when the AMT or the GCS data is used for training, while AMT is rather stable (0.86 – 0.89 accuracy).', 'Using all available training data leads to best results on both test subsets, for both simple and complex sentences.']
[None, None, None, None, None, [' GCS training data AMT', ' GCS training data GCS', ' GCS training data AMT + GCS', ' AMT training data AMT', ' AMT training data GCS', ' AMT training data AMT + GCS'], [' Simple', ' Complex', ' AMT & GCS training data AMT + GCS']]
1
D19-1126table_2
Performance (F1 Score) on ATIS Dataset
2
[['Approach', 'AttRNN (upper bound)'], ['Approach', 'FT-AttRNN'], ['Approach', 'FT-Lr-AttRNN'], ['Approach', 'FT-Cp-AttRNN'], ['Approach', 't-ProgModel'], ['Approach', 'c-ProgModel']]
1
[[' Batch 0'], ['Batch 1'], ['Batch 2'], ['Batch 3'], ['Batch 4']]
[['92.12', '92.89', '93.04', '93.56', '95.13'], ['', '91.85', '89.98', '91.25', '88.03'], ['', '91.96', '86.46', '88.03', '86.58'], ['92.12', '92.1', '90.06', '91.98', '89.67'], ['', '92.33', '92.43', '92.57', '92.58'], ['', '92.4', '92.64', '92.71', '93.91']]
column
['F1', 'F1', 'F1', 'F1', 'F1']
['t-ProgModel', 'c-ProgModel']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Batch 0</th> <th>Batch 1</th> <th>Batch 2</th> <th>Batch 3</th> <th>Batch 4</th> </tr> </thead> <tbody> <tr> <td>Approach || AttRNN (upper bound)</td> <td>92.12</td> <td>92.89</td> <td>93.04</td> <td>93.56</td> <td>95.13</td> </tr> <tr> <td>Approach || FT-AttRNN</td> <td></td> <td>91.85</td> <td>89.98</td> <td>91.25</td> <td>88.03</td> </tr> <tr> <td>Approach || FT-Lr-AttRNN</td> <td></td> <td>91.96</td> <td>86.46</td> <td>88.03</td> <td>86.58</td> </tr> <tr> <td>Approach || FT-Cp-AttRNN</td> <td>92.12</td> <td>92.1</td> <td>90.06</td> <td>91.98</td> <td>89.67</td> </tr> <tr> <td>Approach || t-ProgModel</td> <td></td> <td>92.33</td> <td>92.43</td> <td>92.57</td> <td>92.58</td> </tr> <tr> <td>Approach || c-ProgModel</td> <td></td> <td>92.4</td> <td>92.64</td> <td>92.71</td> <td>93.91</td> </tr> </tbody></table>
Table 2
table_2
D19-1126
4
emnlp2019
3.2 Main Results. Table 2 shows the F1 score of slot filling performance comparison results on ATIS dataset. The results show that ProgModel consistently outperforms AttRNN, where the improvement gain is up to 4.24% in ATIS. As expected, ProgModel continuously improves performance with more and more new batches of training data, even though it is only trained on new data at each batch. Among all competitors, FT-Cp-AttRNN achieves the closest performance to ProgModel by using much larger model size (shown in Section 3.4). In comparison, both FT-AttRNN and FT-Lr-AttRNN frequently suffer from catastrophic forgetting. The values in pink show that the performance of FT-AttRNN and FT-Cp-AttRNN drops up to 3.82% and 5.38% respectively. As a result, their F1 scores are significantly reduced in the end. At last, we observe that ProgModel is quite close to upper bound performance (Note that this is only for reference rather than comparison since upper bound performance assumes the availability of all training data while ProgModel does not).
[2, 1, 1, 1, 1, 2, 1, 1, 1]
['3.2 Main Results.', 'Table 2 shows the F1 score of slot filling performance comparison results on ATIS dataset.', 'The results show that ProgModel consistently outperforms AttRNN, where the improvement gain is up to 4.24% in ATIS.', 'As expected, ProgModel continuously improves performance with more and more new batches of training data, even though it is only trained on new data at each batch.', 'Among all competitors, FT-Cp-AttRNN achieves the closest performance to ProgModel by using much larger model size (shown in Section 3.4).', 'In comparison, both FT-AttRNN and FT-Lr-AttRNN frequently suffer from catastrophic forgetting.', 'The values in pink show that the performance of FT-AttRNN and FT-Cp-AttRNN drops up to 3.82% and 5.38% respectively.', 'As a result, their F1 scores are significantly reduced in the end.', 'At last, we observe that ProgModel is quite close to upper bound performance (Note that this is only for reference rather than comparison since upper bound performance assumes the availability of all training data while ProgModel does not).']
[None, None, ['t-ProgModel', 'c-ProgModel', 'AttRNN (upper bound)', ' Batch 0'], ['t-ProgModel', 'c-ProgModel', ' Batch 0', 'Batch 1', 'Batch 2', 'Batch 3', 'Batch 4'], ['FT-Cp-AttRNN', 't-ProgModel', 'c-ProgModel'], ['FT-AttRNN', 'FT-Lr-AttRNN'], ['FT-Cp-AttRNN'], ['FT-Cp-AttRNN', ' Batch 0', 'Batch 1', 'Batch 2', 'Batch 3', 'Batch 4'], ['t-ProgModel', 'c-ProgModel', 'AttRNN (upper bound)']]
1
D19-1131table_2
Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods.
2
[['Classifier', 'oos-train FastText'], ['Classifier', 'oos-train SVM'], ['Classifier', 'oos-train CNN'], ['Classifier', 'oos-train DialogFlow'], ['Classifier', 'oos-train Rasa'], ['Classifier', 'oos-train MLP'], ['Classifier', 'oos-train BERT'], ['Classifier', 'oos-threshold SVM'], ['Classifier', 'oos-threshold FastText'], ['Classifier', 'oos-threshold DialogFlow'], ['Classifier', 'oos-threshold Rasa'], ['Classifier', 'oos-threshold CNN'], ['Classifier', 'oos-threshold MLP'], ['Classifier', 'oos-threshold BERT']]
2
[['In-Scope Accuracy', 'Full'], ['In-Scope Accuracy', 'Small'], ['In-Scope Accuracy', ' Imbal'], ['In-Scope Accuracy', ' OOS+'], ['Out-Of-Scope Recall', 'Full'], ['Out-Of-Scope Recall', 'Small'], ['Out-Of-Scope Recall', ' Imbal'], ['Out-Of-Scope Recall', ' OOS+']]
[['89', '84.5', '87.2', '89.2', '9.7', '23.2', '12.2', '32.2'], ['91', '89.6', '89.9', '90.1', '14.5', '18.6', '16', '29.8'], ['91.2', '88.9', '89.1', '91', '18.9', '22.2', '19', '34.2'], ['91.7', '89.4', '90.7', '91.7', '14', '14.1', '15.3', '28.5'], ['91.5', '88.9', '89.2', '90.9', '45.3', '55', '49.6', '66'], ['93.5', '91.5', '92.5', '94.1', '47.4', '52.2', '35.6', '53.9'], ['96.9', '96.4', '96.3', '96.7', '40.3', '40.9', '43.8', '59.2'], ['88.2', '85.6', '86', ' —', '18', '13', '0', ' —'], ['88.6', '84.8', '86.6', ' —', '28.3', '6', '33.2', ' —'], ['90.8', '89.2', '89.2', ' —', '26.7', '20.5', '38.1', ' —'], ['90.9', '89.6', '89.4', ' —', '31.2', '1', '0', ' —'], ['90.9', '88.9', '90', ' —', '30.9', '25.5', '26.9', ' —'], ['93.4', '91.3', '92.5', ' —', '49.1', '32.4', '13.3', ' —'], ['96.2', '96.2', '95.9', ' —', '52.3', '58.9', '52.8', ' —']]
column
['In-Scope Accuracy', 'In-Scope Accuracy', 'In-Scope Accuracy', 'In-Scope Accuracy', 'Out-Of-Scope Recall', 'Out-Of-Scope Recall', 'Out-Of-Scope Recall', 'Out-Of-Scope Recall']
['Classifier']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>In-Scope Accuracy || Full</th> <th>In-Scope Accuracy || Small</th> <th>In-Scope Accuracy || Imbal</th> <th>In-Scope Accuracy || OOS+</th> <th>Out-Of-Scope Recall || Full</th> <th>Out-Of-Scope Recall || Small</th> <th>Out-Of-Scope Recall || Imbal</th> <th>Out-Of-Scope Recall || OOS+</th> </tr> </thead> <tbody> <tr> <td>Classifier || oos-train FastText</td> <td>89</td> <td>84.5</td> <td>87.2</td> <td>89.2</td> <td>9.7</td> <td>23.2</td> <td>12.2</td> <td>32.2</td> </tr> <tr> <td>Classifier || oos-train SVM</td> <td>91</td> <td>89.6</td> <td>89.9</td> <td>90.1</td> <td>14.5</td> <td>18.6</td> <td>16</td> <td>29.8</td> </tr> <tr> <td>Classifier || oos-train CNN</td> <td>91.2</td> <td>88.9</td> <td>89.1</td> <td>91</td> <td>18.9</td> <td>22.2</td> <td>19</td> <td>34.2</td> </tr> <tr> <td>Classifier || oos-train DialogFlow</td> <td>91.7</td> <td>89.4</td> <td>90.7</td> <td>91.7</td> <td>14</td> <td>14.1</td> <td>15.3</td> <td>28.5</td> </tr> <tr> <td>Classifier || oos-train Rasa</td> <td>91.5</td> <td>88.9</td> <td>89.2</td> <td>90.9</td> <td>45.3</td> <td>55</td> <td>49.6</td> <td>66</td> </tr> <tr> <td>Classifier || oos-train MLP</td> <td>93.5</td> <td>91.5</td> <td>92.5</td> <td>94.1</td> <td>47.4</td> <td>52.2</td> <td>35.6</td> <td>53.9</td> </tr> <tr> <td>Classifier || oos-train BERT</td> <td>96.9</td> <td>96.4</td> <td>96.3</td> <td>96.7</td> <td>40.3</td> <td>40.9</td> <td>43.8</td> <td>59.2</td> </tr> <tr> <td>Classifier || oos-threshold SVM</td> <td>88.2</td> <td>85.6</td> <td>86</td> <td>—</td> <td>18</td> <td>13</td> <td>0</td> <td>—</td> </tr> <tr> <td>Classifier || oos-threshold FastText</td> <td>88.6</td> <td>84.8</td> <td>86.6</td> <td>—</td> <td>28.3</td> <td>6</td> <td>33.2</td> <td>—</td> </tr> <tr> <td>Classifier || oos-threshold DialogFlow</td> <td>90.8</td> <td>89.2</td> <td>89.2</td> <td>—</td> <td>26.7</td> <td>20.5</td> <td>38.1</td> <td>—</td> </tr> <tr> <td>Classifier || oos-threshold Rasa</td> <td>90.9</td> <td>89.6</td> <td>89.4</td> <td>—</td> <td>31.2</td> <td>1</td> <td>0</td> <td>—</td> </tr> <tr> <td>Classifier || oos-threshold CNN</td> <td>90.9</td> <td>88.9</td> <td>90</td> <td>—</td> <td>30.9</td> <td>25.5</td> <td>26.9</td> <td>—</td> </tr> <tr> <td>Classifier || oos-threshold MLP</td> <td>93.4</td> <td>91.3</td> <td>92.5</td> <td>—</td> <td>49.1</td> <td>32.4</td> <td>13.3</td> <td>—</td> </tr> <tr> <td>Classifier || oos-threshold BERT</td> <td>96.2</td> <td>96.2</td> <td>95.9</td> <td>—</td> <td>52.3</td> <td>58.9</td> <td>52.8</td> <td>—</td> </tr> </tbody></table>
Table 2
table_2
D19-1131
4
emnlp2019
4 Results . 4.1 Results with oos-train . Table 2 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries. The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions. Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant. This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries. We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy.
[0, 2, 1, 1, 1, 1, 1, 2, 2, 2]
['4 Results .', '4.1 Results with oos-train .', 'Table 2 presents results for all models across the four variations of the dataset.', 'First, BERT is consistently the best approach for in-scope, followed by MLP.', 'Second, out-of-scope query performance is much lower than in-scope across all methods.', 'Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries.', 'The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions.', 'Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant.', 'This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries.', 'We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy.']
[None, None, None, ['oos-train BERT', 'oos-threshold BERT'], ['Out-Of-Scope Recall', 'In-Scope Accuracy'], ['In-Scope Accuracy', 'Small', ' Imbal'], ['Out-Of-Scope Recall', 'Small', ' Imbal'], None, None, None]
1
D19-1131table_3
Results of oos-binary experiments on OOS+, where we compare performance of undersampling (under) and augmentation using sentences from Wikipedia (wiki aug). The wiki aug approach was too large for the DialogFlow and Rasa classifiers.
2
[['Classifier', 'DialogFlow'], ['Classifier', 'Rasa'], ['Classifier', 'FastText'], ['Classifier', 'SVM'], ['Classifier', 'CNN'], ['Classifier', 'MLP'], ['Classifier', 'BERT']]
2
[['In-Scope Accuracy', 'under'], ['In-Scope Accuracy', 'wiki aug'], ['Out-of-Scope Recall', 'under'], ['Out-of-Scope Recall', 'wiki aug']]
[['84.7', ' —', '37.3', ' —'], ['87.5', ' —', '37.7', ' —'], ['88.1', '87', '22.7', '31.4'], ['88.4', '89.3', '32.2', '37.7'], ['89.8', '90.1', '25.6', '39.7'], ['90.1', '92.9', '52.8', '32.4'], ['94.4', '96', '46.5', '40.4']]
column
['In-Scope Accuracy', 'In-Scope Accuracy', 'Out-of-Scope Recall', 'Out-of-Scope Recall']
['Out-of-Scope Recall']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>In-Scope Accuracy || under</th> <th>In-Scope Accuracy || wiki aug</th> <th>Out-of-Scope Recall || under</th> <th>Out-of-Scope Recall || wiki aug</th> </tr> </thead> <tbody> <tr> <td>Classifier || DialogFlow</td> <td>84.7</td> <td>—</td> <td>37.3</td> <td>—</td> </tr> <tr> <td>Classifier || Rasa</td> <td>87.5</td> <td>—</td> <td>37.7</td> <td>—</td> </tr> <tr> <td>Classifier || FastText</td> <td>88.1</td> <td>87</td> <td>22.7</td> <td>31.4</td> </tr> <tr> <td>Classifier || SVM</td> <td>88.4</td> <td>89.3</td> <td>32.2</td> <td>37.7</td> </tr> <tr> <td>Classifier || CNN</td> <td>89.8</td> <td>90.1</td> <td>25.6</td> <td>39.7</td> </tr> <tr> <td>Classifier || MLP</td> <td>90.1</td> <td>92.9</td> <td>52.8</td> <td>32.4</td> </tr> <tr> <td>Classifier || BERT</td> <td>94.4</td> <td>96</td> <td>46.5</td> <td>40.4</td> </tr> </tbody></table>
Table 3
table_3
D19-1131
4
emnlp2019
4.3 Results with oos-binary. Table 3 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table 2. However, out-of-scope recall improves compared to oos-train on Full but not OOS+. Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak.
[2, 1, 1, 1, 1]
['4.3 Results with oos-binary.', 'Table 3 compares classifier performance using the oos-binary scheme.', 'In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table 2.', 'However, out-of-scope recall improves compared to oos-train on Full but not OOS+.', 'Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak.']
[None, None, ['In-Scope Accuracy', 'Out-of-Scope Recall'], ['Out-of-Scope Recall'], ['Out-of-Scope Recall', 'wiki aug']]
1
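The oos-binary record above compares an undersampling condition ("under") against Wikipedia augmentation. A generic sketch of balancing a binary in-scope/out-of-scope training set by undersampling the majority class; the sampling policy and names are illustrative assumptions, not the paper's exact procedure:

```python
import random

def undersample(in_scope, out_of_scope, seed=0):
    """Randomly drop in-scope examples so the binary training set has
    as many in-scope as out-of-scope items (the 'under' condition)."""
    rng = random.Random(seed)
    kept = rng.sample(in_scope, k=len(out_of_scope))
    return kept + out_of_scope

# Toy example: 6 in-scope queries vs. 2 out-of-scope queries.
ins = [f"in{i}" for i in range(6)]
oos = ["oos0", "oos1"]
balanced = undersample(ins, oos)  # 2 in-scope + 2 out-of-scope
```

Discarding in-scope data this way is exactly why in-scope accuracy drops under this condition, which is what the description above reports.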
D19-1132table_1
Activity, Entity F1 results reported by previous work (rows 1-4 from Serban et al. (2017a)), and the All-operations and AutoAugment models.
1
[['LSTM'], ['HRED'], ['VHRED'], ['VHRED (w/ attn.)'], ['All-operations'], ['Input-aware'], ['Input-agnostic']]
1
[['Activity F1'], ['Entity F1']]
[['1.18', '0.87'], ['4.34', '2.22'], ['4.63', '2.53'], ['5.94', '3.52'], ['6.53', '3.79'], ['7.04', '3.9'], ['7.02', '4']]
column
['Activity F1', 'Entity F1']
['All-operations', 'Input-aware', 'Input-agnostic']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Activity F1</th> <th>Entity F1</th> </tr> </thead> <tbody> <tr> <td>LSTM</td> <td>1.18</td> <td>0.87</td> </tr> <tr> <td>HRED</td> <td>4.34</td> <td>2.22</td> </tr> <tr> <td>VHRED</td> <td>4.63</td> <td>2.53</td> </tr> <tr> <td>VHRED (w/ attn.)</td> <td>5.94</td> <td>3.52</td> </tr> <tr> <td>All-operations</td> <td>6.53</td> <td>3.79</td> </tr> <tr> <td>Input-aware</td> <td>7.04</td> <td>3.9</td> </tr> <tr> <td>Input-agnostic</td> <td>7.02</td> <td>4</td> </tr> </tbody></table>
Table 1
table_1
D19-1132
4
emnlp2019
5 Results and Analysis . Automatic Results: Table 1 shows that all data augmentation approaches (last 3 rows) improve statistically significantly (p < 0.01) over the strongest baseline VHRED (w/ attention). Moreover, our input-agnostic AutoAugment is statistically significantly (p < 0.01) better (on Activity and Entity F1) than the strong manual-policy All-operations model, while the input-aware model is statistically significantly (p < 0.01) better on Activity F1.
[2, 1, 1]
['5 Results and Analysis .', 'Automatic Results: Table 1 shows that all data augmentation approaches (last 3 rows) improve statistically significantly (p < 0.01) over the strongest baseline VHRED (w/ attention).', 'Moreover, our input-agnostic AutoAugment is statistically significantly (p < 0.01) better (on Activity and Entity F1) than the strong manual-policy All-operations model, while the input-aware model is statistically significantly (p < 0.01) better on Activity F1.']
[None, ['All-operations', 'Input-aware', 'Input-agnostic', 'VHRED (w/ attn.)'], ['Input-agnostic', 'Activity F1', 'Entity F1']]
1
D19-1140table_2
Classification results (F1) obtained with: i) automatic English translations by three models (Generic, Reinforce, MO-Reinforce), and ii) gold-standard English (English) and untranslated German/Italian (Original) tweets.
1
[['Generic'], ['Reinforce'], ['MO-Reinforce'], ['English'], ['Original']]
2
[['De - En', '5%'], ['De - En', '100%'], [' It - En', '5%'], ['It - En', '100%']]
[['79.7', '83.2', '78.2', '81.6'], ['80.4', '83.7', '77.8', '82.8'], ['80.9', '84.4', '80.3', '84.5'], ['85.1', '85.1', '85.1', '85.1'], ['74.4', '74.4', '75.6', '75.6']]
column
['F1', 'F1', 'F1', 'F1']
['Reinforce', 'MO-Reinforce']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>De - En || 5%</th> <th>De - En || 100%</th> <th>It - En || 5%</th> <th>It - En || 100%</th> </tr> </thead> <tbody> <tr> <td>Generic</td> <td>79.7</td> <td>83.2</td> <td>78.2</td> <td>81.6</td> </tr> <tr> <td>Reinforce</td> <td>80.4</td> <td>83.7</td> <td>77.8</td> <td>82.8</td> </tr> <tr> <td>MO-Reinforce</td> <td>80.9</td> <td>84.4</td> <td>80.3</td> <td>84.5</td> </tr> <tr> <td>English</td> <td>85.1</td> <td>85.1</td> <td>85.1</td> <td>85.1</td> </tr> <tr> <td>Original</td> <td>74.4</td> <td>74.4</td> <td>75.6</td> <td>75.6</td> </tr> </tbody></table>
Table 2
table_2
D19-1140
4
emnlp2019
4 Results and Discussion. Table 2 shows our classification results, presenting the F1 scores obtained by the different MT-based approaches in the two training conditions. When NMT is trained on 100% of the parallel data, for both languages Reinforce produces translations that lead to classification improvements over those produced by the Generic model (+0.5 De-En, +0.8 It-En). Although the scores are considerably better than those obtained by the Original classifiers (+9.3 De-En, +7.2 It-En), the gap with respect to the English classifier is still quite large (-1.4 De-En and -2.3 It-En).
[2, 1, 1, 1]
['4 Results and Discussion.', 'Table 2 shows our classification results, presenting the F1 scores obtained by the different MT-based approaches in the two training conditions.', 'When NMT is trained on 100% of the parallel data, for both languages Reinforce produces translations that lead to classification improvements over those produced by the Generic model (+0.5 De-En, +0.8 It-En).', 'Although the scores are considerably better than those obtained by the Original classifiers (+9.3 De-En, +7.2 It-En), the gap with respect to the English classifier is still quite large (-1.4 De-En and -2.3 It-En).']
[None, None, ['100%', 'Generic', 'Reinforce'], ['Original', 'English']]
1
D19-1151table_2
Models’ performance on all words and OOV words per language. For Vietnamese, Náplava et al. (2018) reports 2.45% for WER on a much larger dataset (∼25M words), which is significantly better than our model.
3
[['System', 'Arabic', 'Pasha et al. (2014)'], ['System', 'Arabic', 'Zalmout and Habash (2017)'], ['System', 'Arabic', 'LSTM'], ['System', 'Arabic', 'TCN'], ['System', 'Arabic', 'BiLSTM'], ['System', 'Arabic', 'A-TCN'], ['System', 'Vietnamese', 'Naplava et al. (2018)'], ['System', 'Vietnamese', 'LSTM'], ['System', 'Vietnamese', 'TCN'], ['System', 'Vietnamese', 'BiLSTM'], ['System', 'Vietnamese', 'A-TCN'], ['System', 'Yoruba', 'Orife (2018)'], ['System', 'Yoruba', 'LSTM'], ['System', 'Yoruba', 'TCN'], ['System', 'Yoruba', 'BiLSTM'], ['System', 'Yoruba', 'A-TCN']]
1
[['DER'], ['WER'], ['OOV']]
[['-', '12.3%', '29.8%'], ['-', '8.3%', '20.2%'], ['19.2%', '51.9%', '86.6%'], ['17.5%', '47.6%', '87.2%'], ['2.8%', '8.2%', '33.6%'], ['3.0%', '10.2%', '36.3%'], ['11.2%', '44.5%', '-'], ['13.3%', '39.5%', '33.1%'], ['11.1%', '32.9%', '32.4%'], ['2.6%', '7.8%', '15.3%'], ['2.5%', '7.7%', '15.3%'], ['-', '4.6%', '-'], ['13.4%', '37.2%', '84.9%'], ['12.7%', '35.5%', '83.8%'], ['3.6%', '12.1%', '69.3%'], ['3.8%', '12.6%', '70.2%']]
column
['accuracy', 'accuracy', 'accuracy']
['BiLSTM', 'A-TCN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DER</th> <th>WER</th> <th>OOV</th> </tr> </thead> <tbody> <tr> <td>System || Arabic || Pasha et al. (2014)</td> <td>-</td> <td>12.3%</td> <td>29.8%</td> </tr> <tr> <td>System || Arabic || Zalmout and Habash (2017)</td> <td>-</td> <td>8.3%</td> <td>20.2%</td> </tr> <tr> <td>System || Arabic || LSTM</td> <td>19.2%</td> <td>51.9%</td> <td>86.6%</td> </tr> <tr> <td>System || Arabic || TCN</td> <td>17.5%</td> <td>47.6%</td> <td>87.2%</td> </tr> <tr> <td>System || Arabic || BiLSTM</td> <td>2.8%</td> <td>8.2%</td> <td>33.6%</td> </tr> <tr> <td>System || Arabic || A-TCN</td> <td>3.0%</td> <td>10.2%</td> <td>36.3%</td> </tr> <tr> <td>System || Vietnamese || Naplava et al. (2018)</td> <td>11.2%</td> <td>44.5%</td> <td>-</td> </tr> <tr> <td>System || Vietnamese || LSTM</td> <td>13.3%</td> <td>39.5%</td> <td>33.1%</td> </tr> <tr> <td>System || Vietnamese || TCN</td> <td>11.1%</td> <td>32.9%</td> <td>32.4%</td> </tr> <tr> <td>System || Vietnamese || BiLSTM</td> <td>2.6%</td> <td>7.8%</td> <td>15.3%</td> </tr> <tr> <td>System || Vietnamese || A-TCN</td> <td>2.5%</td> <td>7.7%</td> <td>15.3%</td> </tr> <tr> <td>System || Yoruba || Orife (2018)</td> <td>-</td> <td>4.6%</td> <td>-</td> </tr> <tr> <td>System || Yoruba || LSTM</td> <td>13.4%</td> <td>37.2%</td> <td>84.9%</td> </tr> <tr> <td>System || Yoruba || TCN</td> <td>12.7%</td> <td>35.5%</td> <td>83.8%</td> </tr> <tr> <td>System || Yoruba || BiLSTM</td> <td>3.6%</td> <td>12.1%</td> <td>69.3%</td> </tr> <tr> <td>System || Yoruba || A-TCN</td> <td>3.8%</td> <td>12.6%</td> <td>70.2%</td> </tr> </tbody></table>
Table 2
table_2
D19-1151
4
emnlp2019
Comparison to Prior Work: Table 2 shows the performance of previous models trained on the same data. For Arabic, both A-TCN and BiLSTM provide significantly better performance than MADAMIRA (Pasha et al., 2014), which is a morphological disambiguation tool for Arabic. The performance of Zalmout and Habash (2017)’s model falls in between BiLSTM and ATCN. As opposed to our character-based models, both previous models use other morphological features along with a language model to rank all possible diacritic choices. We believe that this additional semantic and morphological information help their models perform better on OOV words. For Vietnamese, when we re-train Naplava et al. (2018) model on the same sample discussed in Section 4, both A-TCN and BiLSTM provide significantly better results. Naplava et al. (2018) also use BiLSTM but with different parameter settings and different dataset preparation. For Yoruba, both character-based architectures provide lower performance than Orife (2018)’s model. However, Orife (2018) uses seq2seq modeling which generate diacritized sentences that are not of the same length as the input and can generate words not present in the original sentence (hallucinations). This is unpleasant behaviour for diacritization especially if used in text-to-speech applications.
[1, 1, 1, 2, 2, 1, 2, 1, 2, 2]
['Comparison to Prior Work: Table 2 shows the performance of previous models trained on the same data.', 'For Arabic, both A-TCN and BiLSTM provide significantly better performance than MADAMIRA (Pasha et al., 2014), which is a morphological disambiguation tool for Arabic.', 'The performance of Zalmout and Habash (2017)’s model falls in between BiLSTM and ATCN.', 'As opposed to our character-based models, both previous models use other morphological features along with a language model to rank all possible diacritic choices.', 'We believe that this additional semantic and morphological information help their models perform better on OOV words.', 'For Vietnamese, when we re-train Naplava et al. (2018) model on the same sample discussed in Section 4, both A-TCN and BiLSTM provide significantly better results.', 'Naplava et al. (2018) also use BiLSTM but with different parameter settings and different dataset preparation.', 'For Yoruba, both character-based architectures provide lower performance than Orife (2018)’s model.', 'However, Orife (2018) uses seq2seq modeling which generate diacritized sentences that are not of the same length as the input and can generate words not present in the original sentence (hallucinations).', 'This is unpleasant behaviour for diacritization especially if used in text-to-speech applications.']
[None, ['Arabic', 'A-TCN', 'BiLSTM', 'Pasha et al. (2014)'], ['Zalmout and Habash (2017)', 'BiLSTM', 'A-TCN'], ['Pasha et al. (2014)', 'Zalmout and Habash (2017)'], None, ['Vietnamese', 'Naplava et al. (2018)', 'A-TCN', 'BiLSTM'], None, ['Yoruba', 'BiLSTM', 'A-TCN', 'Orife (2018)'], ['Orife (2018)'], None]
1
D19-1168table_3
Comparison with the Transformer model on German-English document-level translation. And the p-value between Ours and Baseline is less than 0.01.
2
[['Model', 'Baseline'], ['Model', 'Ours']]
1
[['tst13'], ['tst14'], ['Avg']]
[['27.89', '23.75', '25.82'], ['28.58', '24.85', '26.72']]
column
['BLEU', 'BLEU', 'BLEU']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>tst13</th> <th>tst14</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>27.89</td> <td>23.75</td> <td>25.82</td> </tr> <tr> <td>Model || Ours</td> <td>28.58</td> <td>24.85</td> <td>26.72</td> </tr> </tbody></table>
Table 3
table_3
D19-1168
6
emnlp2019
4.2 Different Language Pairs. In this paper, we aim to propose a robust document context extraction model. To achieve this goal, we perform experiments on different language pairs to further illustrate the effectiveness of our proposed HM-GDC model. Table 3 shows the performance of our model on German-English document-level translation and the baseline here refers to the Transformer model. For clarity, we only use the German-English document-level parallel corpus to train these two models without pretraining. From the results, our proposed HMGDC model can help improve the Transformer model on German-English document-level translation by 0.90 BLEU points. The experimental results further validate the robustness of our model in different language pairs.
[2, 2, 2, 1, 2, 1, 2]
['4.2 Different Language Pairs.', 'In this paper, we aim to propose a robust document context extraction model.', 'To achieve this goal, we perform experiments on different language pairs to further illustrate the effectiveness of our proposed HM-GDC model.', 'Table 3 shows the performance of our model on German-English document-level translation and the baseline here refers to the Transformer model.', 'For clarity, we only use the German-English document-level parallel corpus to train these two models without pretraining.', 'From the results, our proposed HMGDC model can help improve the Transformer model on German-English document-level translation by 0.90 BLEU points.', 'The experimental results further validate the robustness of our model in different language pairs.']
[None, None, None, ['Baseline', 'Ours'], None, ['Ours'], None]
1
D19-1170table_1
The performance of MTMSN and other competing approaches on DROP dev and test set.
2
[['Model', 'Heuristic Baseline (Dua et al., 2019)'], ['Model', 'Semantic Role Labeling (Carreras and Marquez, 2004)'], ['Model', 'BiDAF (Seo et al., 2017)'], ['Model', 'QANet+ELMo (Yu et al., 2018)'], ['Model', 'BERTBASE (Devlin et al., 2019)'], ['Model', 'NAQANet (Dua et al., 2019)'], ['Model', 'NABERTBASE'], ['Model', 'NABERTLARGE'], ['Model', 'MTMSNBASE'], ['Model', 'MTMSNLARGE'], ['Model', 'Human Performance (Dua et al., 2019)']]
2
[['Dev', 'EM'], ['Dev', 'F1'], ['Test', 'EM'], ['Test', 'F1']]
[['4.28', '8.07', '4.18', '8.59'], ['11.03', '13.67', '10.87', '13.35'], ['26.06', '28.85', '24.75', '27.49'], ['27.71', '30.33', '27.08', '29.67'], ['30.10', '33.36', '29.45', '32.70'], ['46.20', '49.24', '44.07', '47.01'], ['55.82', '58.75', '-', '-'], ['64.61', '67.35', '-', '-'], ['68.17', '72.81', '-', '-'], ['76.68', '80.54', '75.85', '79.88'], ['-', '-', '92.38', '95.98']]
column
['EM', 'F1', 'EM', 'F1']
['MTMSNLARGE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || EM</th> <th>Dev || F1</th> <th>Test || EM</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Heuristic Baseline (Dua et al., 2019)</td> <td>4.28</td> <td>8.07</td> <td>4.18</td> <td>8.59</td> </tr> <tr> <td>Model || Semantic Role Labeling (Carreras and Marquez, 2004)</td> <td>11.03</td> <td>13.67</td> <td>10.87</td> <td>13.35</td> </tr> <tr> <td>Model || BiDAF (Seo et al., 2017)</td> <td>26.06</td> <td>28.85</td> <td>24.75</td> <td>27.49</td> </tr> <tr> <td>Model || QANet+ELMo (Yu et al., 2018)</td> <td>27.71</td> <td>30.33</td> <td>27.08</td> <td>29.67</td> </tr> <tr> <td>Model || BERTBASE (Devlin et al., 2019)</td> <td>30.10</td> <td>33.36</td> <td>29.45</td> <td>32.70</td> </tr> <tr> <td>Model || NAQANet (Dua et al., 2019)</td> <td>46.20</td> <td>49.24</td> <td>44.07</td> <td>47.01</td> </tr> <tr> <td>Model || NABERTBASE</td> <td>55.82</td> <td>58.75</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || NABERTLARGE</td> <td>64.61</td> <td>67.35</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || MTMSNBASE</td> <td>68.17</td> <td>72.81</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || MTMSNLARGE</td> <td>76.68</td> <td>80.54</td> <td>75.85</td> <td>79.88</td> </tr> <tr> <td>Model || Human Performance (Dua et al., 2019)</td> <td>-</td> <td>-</td> <td>92.38</td> <td>95.98</td> </tr> </tbody></table>
Table 1
table_1
D19-1170
6
emnlp2019
Table 1 shows the performance of our model and other competitive approaches on the development and test sets. MTMSN outperforms all existing approaches by a large margin, and creates new state-of-the-art results by achieving an EM score of 75.85 and a F1 score of 79.88 on the test set. Since our best model utilizes BERTLARGE as encoder, we therefore compare MTMSNLARGE with the NABERTLARGE baseline. As we can see, our model obtains 12.07/13.19 absolute gain of EM/F1 over the baseline, demonstrating the effectiveness of our approach. However, as the human achieves 95.98 F1 on the test set, our results suggest that there is still room for improvement.
[1, 1, 2, 1, 1]
['Table 1 shows the performance of our model and other competitive approaches on the development and test sets.', 'MTMSN outperforms all existing approaches by a large margin, and creates new state-of-the-art results by achieving an EM score of 75.85 and a F1 score of 79.88 on the test set.', 'Since our best model utilizes BERTLARGE as encoder, we therefore compare MTMSNLARGE with the NABERTLARGE baseline.', 'As we can see, our model obtains 12.07/13.19 absolute gain of EM/F1 over the baseline, demonstrating the effectiveness of our approach.', 'However, as the human achieves 95.98 F1 on the test set, our results suggest that there is still room for improvement.']
[None, ['EM', 'F1', 'Test', 'MTMSNLARGE'], ['MTMSNLARGE', 'NABERTLARGE'], ['MTMSNLARGE', 'NABERTLARGE', 'EM', 'F1', 'Test'], ['Human Performance (Dua et al., 2019)', 'Heuristic Baseline (Dua et al., 2019)', 'BERTBASE (Devlin et al., 2019)', 'NAQANet (Dua et al., 2019)', 'F1', 'Test']]
1
D19-1170table_4
Performance breakdown of NABERTLARGE and MTMSNLARGE by gold answer types.
2
[['Type', 'Date'], ['Type', 'Number'], ['Type', 'Single Span'], ['Type', 'Multi Span']]
2
[['Metric', '(%)'], ['NABERT', 'EM'], ['NABERT', 'F1'], ['MTMSN', 'EM'], ['MTMSN', 'F1']]
[['1.6', '55.7', '60.8', '55.7', '69'], ['61.9', '63.8', '64', '80.9', '81.1'], ['31.7', '75.9', '80.6', '77.5', '82.8'], ['4.8', '0', '22.7', '25.1', '62.8']]
column
['(%)', 'EM', 'F1', 'EM', 'F1']
['Multi Span']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || (%)</th> <th>NABERT || EM</th> <th>NABERT || F1</th> <th>MTMSN || EM</th> <th>MTMSN || F1</th> </tr> </thead> <tbody> <tr> <td>Type || Date</td> <td>1.6</td> <td>55.7</td> <td>60.8</td> <td>55.7</td> <td>69</td> </tr> <tr> <td>Type || Number</td> <td>61.9</td> <td>63.8</td> <td>64</td> <td>80.9</td> <td>81.1</td> </tr> <tr> <td>Type || Single Span</td> <td>31.7</td> <td>75.9</td> <td>80.6</td> <td>77.5</td> <td>82.8</td> </tr> <tr> <td>Type || Multi Span</td> <td>4.8</td> <td>0</td> <td>22.7</td> <td>25.1</td> <td>62.8</td> </tr> </tbody></table>
Table 4
table_4
D19-1170
7
emnlp2019
Performance breakdown. We now provide a quantitative analysis by showing performance breakdown on the development set. Table 4 shows that our gains mainly come from the most frequent number type, which requires various types of symbolic, discrete reasoning operations. Moreover, significant improvements are also obtained in the multi-span category, where the F1 score increases by more than 40 points. This result further proves the validity of our multi-span extraction method.
[2, 2, 1, 1, 1]
['Performance breakdown.', 'We now provide a quantitative analysis by showing performance breakdown on the development set.', 'Table 4 shows that our gains mainly come from the most frequent number type, which requires various types of symbolic, discrete reasoning operations.', 'Moreover, significant improvements are also obtained in the multi-span category, where the F1 score increases by more than 40 points.', 'This result further proves the validity of our multi-span extraction method.']
[None, None, ['Number', 'Metric'], ['Multi Span', 'F1'], ['Multi Span']]
1
D19-1171table_5
Answer selection performances (averaged over five datasets) when trained with question-answer pairs vs. WS-TB.
2
[['Model', 'BiLSTM'], ['Model', 'COALA']]
1
[['Supervised'], ['WS-TB'], ['WS-TB (all)']]
[['35.3', '37.5', '42.5'], ['44.7', '45.2', '44.5']]
column
['accuracy', 'accuracy', 'accuracy']
['BiLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Supervised</th> <th>WS-TB</th> <th>WS-TB (all)</th> </tr> </thead> <tbody> <tr> <td>Model || BiLSTM</td> <td>35.3</td> <td>37.5</td> <td>42.5</td> </tr> <tr> <td>Model || COALA</td> <td>44.7</td> <td>45.2</td> <td>44.5</td> </tr> </tbody></table>
Table 5
table_5
D19-1171
8
emnlp2019
The results are given in Table 5 where we report the accuracy (P@1), averaged over the five datasets. Interestingly, we do not observe large differences between supervised training and WS-TB for both models when they use the same number of positive training instances (ranging from 2.8k to 5.8k). Thus, using title-body information instead of question-answer pairs to train models without direct answer supervision is feasible and effective. Further, when we use all available title-body pairs, the BiLSTM model substantially improves by 5pp, which is only slightly worse than COALA (which was designed for smaller training sets). We hypothesize that one reason is that BiLSTM can learn improved representations with the additional data. Further, title-body pairs have a higher overlap than question-answer pairs (see §6) which provides a stronger training signal to the siamese network. These results demonstrate that our work can have broader impact to cQA, e.g., to train models on other tasks beyond duplicate question detection.
[1, 1, 2, 1, 2, 2, 2]
['The results are given in Table 5 where we report the accuracy (P@1), averaged over the five datasets.', 'Interestingly, we do not observe large differences between supervised training and WS-TB for both models when they use the same number of positive training instances (ranging from 2.8k to 5.8k).', 'Thus, using title-body information instead of question-answer pairs to train models without direct answer supervision is feasible and effective.', 'Further, when we use all available title-body pairs, the BiLSTM model substantially improves by 5pp, which is only slightly worse than COALA (which was designed for smaller training sets).', 'We hypothesize that one reason is that BiLSTM can learn improved representations with the additional data.', 'Further, title-body pairs have a higher overlap than question-answer pairs (see §6) which provides a stronger training signal to the siamese network.', 'These results demonstrate that our work can have broader impact to cQA, e.g., to train models on other tasks beyond duplicate question detection.']
[None, ['Supervised', 'WS-TB'], None, ['BiLSTM', 'COALA'], ['BiLSTM'], None, ['BiLSTM']]
1
D19-1172table_8
Results of clarification question generation models.
2
[['Model', 'Seq2Seq'], ['Model', 'Transformer'], ['Model', 'The proposed Model']]
1
[['Single-Turn'], ['Multi-Turn']]
[['18.84', '31.62'], ['20.69', '44.42'], ['24.04', '45.02']]
column
['BLEU', 'BLEU']
['The proposed Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Single-Turn</th> <th>Multi-Turn</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>18.84</td> <td>31.62</td> </tr> <tr> <td>Model || Transformer</td> <td>20.69</td> <td>44.42</td> </tr> <tr> <td>Model || The proposed Model</td> <td>24.04</td> <td>45.02</td> </tr> </tbody></table>
Table 8
table_8
D19-1172
7
emnlp2019
Clarification Question Generation. Table 8 shows that Seq2Seq achieves low BLEU scores, which indicates its tendency to generate irrelevant text. Transformer achieves higher performance than Seq2Seq. Our proposed coarse-to-fine model demonstrates a new state of the art, improving the current highest baseline result by 3.35 and 0.60 BLEU scores, respectively.
[2, 1, 1, 1]
['Clarification Question Generation.', 'Table 8 shows that Seq2Seq achieves low BLEU scores, which indicates its tendency to generate irrelevant text.', 'Transformer achieves higher performance than Seq2Seq.', 'Our proposed coarse-to-fine model demonstrates a new state of the art, improving the current highest baseline result by 3.35 and 0.60 BLEU scores, respectively.']
[None, ['Seq2Seq'], ['Seq2Seq', 'Transformer'], ['The proposed Model']]
1
D19-1184table_2
Performance on MultiWOZ. MGT is compared to a baseline dual encoder, and an ensemble of dual encoders with an identical number of parameters. All bold-face results are statistically significant to p < 0.01.
2
[['Model Name', 'Dual Encoder'], ['Model Name', 'Ensemble (5)'], ['Model Name', 'Multi-Granularity (5)']]
1
[['MRR'], ['Hits@1']]
[['79.55', '66.13%'], ['81.53', '69.47%'], ['82.74', '72.18%']]
column
['MRR', 'Hits@1']
['Multi-Granularity (5)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MRR</th> <th>Hits@1</th> </tr> </thead> <tbody> <tr> <td>Model Name || Dual Encoder</td> <td>79.55</td> <td>66.13%</td> </tr> <tr> <td>Model Name || Ensemble (5)</td> <td>81.53</td> <td>69.47%</td> </tr> <tr> <td>Model Name || Multi-Granularity (5)</td> <td>82.74</td> <td>72.18%</td> </tr> </tbody></table>
Table 2
table_2
D19-1184
6
emnlp2019
The results in Table 2 demonstrate the strong performance gains obtained with MGT. With L = 5 granularities, MGT outperforms a similarly sized ensemble of dual encoders. These results demonstrate that explicitly enforcing the policy that makes models learn multiple granularities of representation improves the representative power and performance on next utterance retrieval.
[1, 1, 2]
['The results in Table 2 demonstrate the strong performance gains obtained with MGT.', 'With L = 5 granularities, MGT outperforms a similarly sized ensemble of dual encoders.', 'These results demonstrate that explicitly enforcing the policy that makes models learn multiple granularities of representation improves the representative power and performance on next utterance retrieval.']
[['Multi-Granularity (5)'], ['Multi-Granularity (5)'], None]
1
D19-1184table_5
Experimental results demonstrating performance on two downstream tasks, without any finetuning of the latent representations. All bold-face results are statistically significant to p < 0.01.
2
[['Model Name', 'Dual Encoder'], ['Model Name', 'Ensemble (5)'], ['Model Name', 'Multi-Granularity (5)'], ['Model Name', 'Fine-tuned']]
2
[['BoW', 'F-1'], ['DA', 'F-1']]
[['60.13', '19.09'], ['64.11', '22.39'], ['67.51', '22.85'], ['90.33', '28.75']]
column
['F-1', 'F-1']
['Multi-Granularity (5)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BoW || F-1</th> <th>DA || F-1</th> </tr> </thead> <tbody> <tr> <td>Model Name || Dual Encoder</td> <td>60.13</td> <td>19.09</td> </tr> <tr> <td>Model Name || Ensemble (5)</td> <td>64.11</td> <td>22.39</td> </tr> <tr> <td>Model Name || Multi-Granularity (5)</td> <td>67.51</td> <td>22.85</td> </tr> <tr> <td>Model Name || Fine-tuned</td> <td>90.33</td> <td>28.75</td> </tr> </tbody></table>
Table 5
table_5
D19-1184
8
emnlp2019
The results shown in Table 5 demonstrate that MGT results in more general representations of language, thereby facilitating better transfer. However, there is room for improvement when comparing to models fine-tuned on the downstream task. This suggests that additional measures can be taken to improve the representative power of these models.
[1, 1, 2]
['The results shown in Table 5 demonstrate that MGT results in more general representations of language, thereby facilitating better transfer.', 'However, there is room for improvement when comparing to models fine-tuned on the downstream task.', 'This suggests that additional measures can be taken to improve the representative power of these models.']
[['Multi-Granularity (5)'], ['Fine-tuned'], None]
1
D19-1184table_6
Experimental results demonstrating performance on the downstream task of dialog act prediction, when the model is fine-tuned on all available data. All bold-face results are statistically significant to p < 0.01.
2
[['Model Name', 'Random Init'], ['Model Name', 'Dual Encoder'], ['Model Name', 'Ensemble (5)'], ['Model Name', 'Multi-Granularity (5)']]
1
[['DA (F-1)']]
[['28.75'], ['32.63'], ['31.71'], ['33.46']]
column
['DA (F-1)']
['Multi-Granularity (5)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DA (F-1)</th> </tr> </thead> <tbody> <tr> <td>Model Name || Random Init</td> <td>28.75</td> </tr> <tr> <td>Model Name || Dual Encoder</td> <td>32.63</td> </tr> <tr> <td>Model Name || Ensemble (5)</td> <td>31.71</td> </tr> <tr> <td>Model Name || Multi-Granularity (5)</td> <td>33.46</td> </tr> </tbody></table>
Table 6
table_6
D19-1184
8
emnlp2019
The results in Table 6 demonstrate that MGT learns general representations which effectively transfer to downstream tasks, especially more difficult tasks such as dialog act prediction. Finetuning the latent representations learned by MGT, results in improved performance on dialog act prediction.
[1, 2]
['The results in Table 6 demonstrate that MGT learns general representations which effectively transfer to downstream tasks, especially more difficult tasks such as dialog act prediction.', 'Finetuning the latent representations learned by MGT, results in improved performance on dialog act prediction.']
[['Multi-Granularity (5)'], ['Multi-Granularity (5)']]
1
D19-1188table_1
Quantitative evaluation results (%).
2
[['Models', 'SEQ2SEQ'], ['Models', 'CVAE'], ['Models', 'LAED'], ['Models', 'TA-SEQ2SEQ'], ['Models', 'DOM-SEQ2SEQ'], ['Models', 'ADAND (with context para.)'], ['Models', 'ADAND (with topic para.)'], ['Models', 'ADAND (with both)']]
2
[['Relevance (%)', 'BLEU'], ['Relevance (%)', 'Average'], ['Relevance (%)', 'Greedy'], ['Relevance (%)', 'Extrema'], [' Informativeness (%)', 'Distinct-1'], [' Informativeness (%)', 'Distinct-2'], [' Informativeness (%)', 'Distinct-3']]
[['0.845', '69.60', '64.94', '45.29', '0.2822', '0.5922', '0.7873'], ['1.546', '71.23', '66.67', '47.14', '0.5465', '1.716', '2.731'], ['0.7545', '69.91', '63.55', '43.12', '0.3890', '0.9165', '1.243'], ['1.465', '72.47', '65.9', '45.19', '0.3593', '0.7994', '1.016'], ['1.189', '74.42', '66.6', '48.47', '0.4977', '1.294', '1.814'], ['1.94', '74.03', '66.76', '49.23', '0.6493', '1.889', '2.745'], ['2.051', '74.17', '66.65', '49.04', '0.5919', '1.699', '2.438'], ['1.90', '75.59', '67.25', '51.17', '0.7092', '2.10', '3.108']]
column
['BLEU', 'Average', 'Greedy', 'Extrema', 'Distinct-1', 'Distinct-2', 'Distinct-3']
['CVAE', 'LAED', 'SEQ2SEQ', 'TA-SEQ2SEQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Relevance (%) || BLEU</th> <th>Relevance (%) || Average</th> <th>Relevance (%) || Greedy</th> <th>Relevance (%) || Extrema</th> <th>Informativeness (%) || Distinct-1</th> <th>Informativeness (%) || Distinct-2</th> <th>Informativeness (%) || Distinct-3</th> </tr> </thead> <tbody> <tr> <td>Models || SEQ2SEQ</td> <td>0.845</td> <td>69.60</td> <td>64.94</td> <td>45.29</td> <td>0.2822</td> <td>0.5922</td> <td>0.7873</td> </tr> <tr> <td>Models || CVAE</td> <td>1.546</td> <td>71.23</td> <td>66.67</td> <td>47.14</td> <td>0.5465</td> <td>1.716</td> <td>2.731</td> </tr> <tr> <td>Models || LAED</td> <td>0.7545</td> <td>69.91</td> <td>63.55</td> <td>43.12</td> <td>0.3890</td> <td>0.9165</td> <td>1.243</td> </tr> <tr> <td>Models || TA-SEQ2SEQ</td> <td>1.465</td> <td>72.47</td> <td>65.9</td> <td>45.19</td> <td>0.3593</td> <td>0.7994</td> <td>1.016</td> </tr> <tr> <td>Models || DOM-SEQ2SEQ</td> <td>1.189</td> <td>74.42</td> <td>66.6</td> <td>48.47</td> <td>0.4977</td> <td>1.294</td> <td>1.814</td> </tr> <tr> <td>Models || ADAND (with context para.)</td> <td>1.94</td> <td>74.03</td> <td>66.76</td> <td>49.23</td> <td>0.6493</td> <td>1.889</td> <td>2.745</td> </tr> <tr> <td>Models || ADAND (with topic para.)</td> <td>2.051</td> <td>74.17</td> <td>66.65</td> <td>49.04</td> <td>0.5919</td> <td>1.699</td> <td>2.438</td> </tr> <tr> <td>Models || ADAND (with both)</td> <td>1.90</td> <td>75.59</td> <td>67.25</td> <td>51.17</td> <td>0.7092</td> <td>2.10</td> <td>3.108</td> </tr> </tbody></table>
Table 1
table_1
D19-1188
6
emnlp2019
4.4 Overall Performance. Table 1 lists the performance of our system and the comparison systems. CVAE and LAED inject SEQ2SEQ with stochastic latent variable, resulting in more informative responses and better performance on Distinct-{1, 2, 3}. TA-SEQ2SEQ incorporates SEQ2SEQ with the outsourcing topic information from LDA. It is not surprising that it performs much better on the response relevance (BLEU, Average, Greedy, Extrema), while its improvements on the informativeness are limited. DOM-SEQ2SEQ builds multiple domain-specific encoder-decoders. It gains improvements on both the relevance metrics and informativeness metrics. In general, with both the context-aware and topic-aware parameterization, our model outperforms all the competitive baselines in terms of the response relevance and informativeness.
[0, 1, 1, 1, 1, 1, 1, 2]
['4.4 Overall Performance.', 'Table 1 lists the performance of our system and the comparison systems.', 'CVAE and LAED inject SEQ2SEQ with stochastic latent variable, resulting in more informative responses and better performance on Distinct-{1, 2, 3}.', 'TA-SEQ2SEQ incorporates SEQ2SEQ with the outsourcing topic information from LDA.', 'It is not surprising that it performs much better on the response relevance (BLEU, Average, Greedy, Extrema), while its improvements on the informativeness are limited.', 'DOM-SEQ2SEQ builds multiple domain-specific encoder-decoders.', 'It gains improvements on both the relevance metrics and informativeness metrics.', 'In general, with both the context-aware and topic-aware parameterization, our model outperforms all the competitive baselines in terms of the response relevance and informativeness.']
[None, None, ['CVAE', 'LAED', 'SEQ2SEQ'], ['TA-SEQ2SEQ', 'SEQ2SEQ'], ['TA-SEQ2SEQ'], ['DOM-SEQ2SEQ'], ['DOM-SEQ2SEQ'], None]
1
D19-1192table_1
The result of rewriting quality.
1
[['Last Utterance'], ['Last Utterance + Context'], ['Last Utterance + Keyword'], ['CRN'], ['CRN + RL']]
1
[['BLEU-4']]
[['34.2'], ['37.1'], ['49.8'], ['50.9'], ['54.2']]
column
['BLEU-4']
['CRN + RL', 'CRN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Last Utterance</td> <td>34.2</td> </tr> <tr> <td>Last Utterance + Context</td> <td>37.1</td> </tr> <tr> <td>Last Utterance + Keyword</td> <td>49.8</td> </tr> <tr> <td>CRN</td> <td>50.9</td> </tr> <tr> <td>CRN + RL</td> <td>54.2</td> </tr> </tbody></table>
Table 1
table_1
D19-1192
7
emnlp2019
Table 1 shows the experiment result, which indicates that our rewriting method outperforms heuristic methods. Moreover, a 54.2 BLEU-4 score means that the rewritten sentences are very similar to the human references. CRN-RL has a higher score than CRN-Pre-train on BLEU4, it proves reinforcement learning promotes our model effectively.
[1, 1, 1]
['Table 1 shows the experiment result, which indicates that our rewriting method outperforms heuristic methods.', 'Moreover, a 54.2 BLEU-4 score means that the rewritten sentences are very similar to the human references.', 'CRN-RL has a higher score than CRN-Pre-train on BLEU4, it proves reinforcement learning promotes our model effectively.']
[['CRN', 'CRN + RL', 'Last Utterance', 'Last Utterance + Keyword', 'Last Utterance + Context'], ['BLEU-4'], ['CRN + RL', 'CRN', 'BLEU-4']]
1
D19-1193table_2
the IMN model and previous methods on PERSONA-CHAT dataset without using personas. All the results except ours are copied from Zhang et al. (2018).
1
[['IR baseline'], ['Starspace'], ['Profile'], ['KV Profile'], ['IMN']]
1
[['hits@1'], ['MRR']]
[['21.4', '-'], ['31.8', '-'], ['31.8', '-'], ['34.9', '-'], ['63.8', '75.8']]
column
['hits@1', 'MRR']
['IMN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hits@1</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>IR baseline</td> <td>21.4</td> <td>-</td> </tr> <tr> <td>Starspace</td> <td>31.8</td> <td>-</td> </tr> <tr> <td>Profile</td> <td>31.8</td> <td>-</td> </tr> <tr> <td>KV Profile</td> <td>34.9</td> <td>-</td> </tr> <tr> <td>IMN</td> <td>63.8</td> <td>75.8</td> </tr> </tbody></table>
Table 2
table_2
D19-1193
8
emnlp2019
6.4 Experimental Results. Table 2 presents the evaluation results of our reproduced IMN model (Gu et al., 2019) and previous methods on PERSONA-CHAT dataset without using personas. It can be seen that the IMN model outperformed other models on this dataset by a margin larger than 28.9% in terms of hits@1. As introduced above, our proposed models for personalized response selection were all built on IMN.
[2, 1, 1, 2]
['6.4 Experimental Results.', 'Table 2 presents the evaluation results of our reproduced IMN model (Gu et al., 2019) and previous methods on PERSONA-CHAT dataset without using personas.', 'It can be seen that the IMN model outperformed other models on this dataset by a margin larger than 28.9% in terms of hits@1.', 'As introduced above, our proposed models for personalized response selection were all built on IMN.']
[None, ['IMN', 'IR baseline', 'Starspace', 'Profile', 'KV Profile'], ['IMN', 'IR baseline', 'Starspace', 'Profile', 'KV Profile', 'hits@1'], None]
1
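Records like the one above store the table as three parallel fields — row_headers, column_headers, and contents — which zip back together into the table; the "28.9%" margin quoted in the description can then be re-derived. A minimal Python sketch with the values copied from this record (the `cell` helper is illustrative, not part of the dataset):

```python
# Rebuild D19-1193 Table 2 from the record's parallel header/content lists
# and re-derive the 28.9-point hits@1 margin quoted in its description.
row_headers = ['IR baseline', 'Starspace', 'Profile', 'KV Profile', 'IMN']
column_headers = ['hits@1', 'MRR']
contents = [['21.4', '-'], ['31.8', '-'], ['31.8', '-'],
            ['34.9', '-'], ['63.8', '75.8']]

# Map each row header to a {column header: cell} dict.
table = {row: dict(zip(column_headers, cells))
         for row, cells in zip(row_headers, contents)}

def cell(row, col):
    """Return a numeric cell, or None for the '-' placeholder."""
    raw = table[row][col]
    return None if raw in ('', '-') else float(raw)

best_baseline = max(cell(r, 'hits@1') for r in row_headers if r != 'IMN')
margin = cell('IMN', 'hits@1') - best_baseline
print(round(margin, 1))  # 28.9
```

For records with column_header_level > 1, each entry in column_headers is itself a list of levels (e.g. ['Self Persona', 'Original', 'hits@1']), but the same zip pattern applies.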
D19-1193table_3
Performance of the proposed and previous methods on the PERSONA-CHAT under various persona configurations. The meanings of “Self Persona”, “Their Persona”, “Original”, and “revised” can be found in Section 6.1. All results except ours are copied from Zhang et al. (2018); Mazar´e et al. (2018). Numbers in parentheses indicate the gains or losses after adding the persona conditions.
1
[['IR baseline'], ['Starspace'], ['Profile'], ['KV Profile'], ['FT-PC'], ['IMNctx'], ['IMNutr'], ['DIM']]
3
[['Self Persona', 'Original', 'hits@1'], ['Self Persona', 'Original', 'MRR'], ['Self Persona', 'Revised', 'hits@1'], ['Self Persona', 'Revised', 'MRR'], ['Their Persona', 'Original', 'hits@1'], ['Their Persona', 'Original', 'MRR'], ['Their Persona', 'Revised', 'hits@1'], ['Their Persona', 'Revised', 'MRR']]
[['41.0 (+19.6)', '-', '20.7 (-0.7)', '-', '18.1 (-3.3)', '-', '18.1 (-3.3)', '-'], ['48.1 (+16.3)', '-', '32.2 (+0.4)', '-', '24.5 (-7.3)', '-', '26.1 (-5.7)', '-'], ['47.3 (+15.5)', '-', '35.4 (+3.6)', '-', '28.3 (-3.5)', '-', '29.4 (-2.4)', '-'], ['51.1 (+16.2)', '-', '35.1 (+0.2)', '-', '29.1 (-5.8)', '-', '28.9 (-6.0)', '-'], ['-', '-', '60.7 (-)', '-', '-', '-', '-', '-'], ['64.3 (+0.5)', '76.2 (+0.4)', '63.8 (+0.0)', '75.8 (+0.0)', '63.7 (-0.1)', '75.8 (+0.0)', '63.5 (-0.3)', '75.7 (-0.1)'], ['66.7 (+2.9)', '78.1 (+2.3)', '64.0 (+0.2)', '76.0 (+0.2)', '63.9 (+0.1)', '75.9 (+0.1)', '63.7 (-0.1)', '75.7 (-0.1)'], ['78.8 (+15.0)', '86.7 (+10.9)', '70.7 (+6.9)', '81.2 (+5.4)', '64.0 (+0.2)', '76.1 (+0.3)', '63.9 (+0.1)', '76.0 (+0.2)']]
column
['hits@1', 'MRR', 'hits@1', 'MRR', 'hits@1', 'MRR', 'hits@1', 'MRR']
['DIM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Self Persona || Original || hits@1</th> <th>Self Persona || Original || MRR</th> <th>Self Persona || Revised || hits@1</th> <th>Self Persona || Revised || MRR</th> <th>Their Persona || Original || hits@1</th> <th>Their Persona || Original || MRR</th> <th>Their Persona || Revised || hits@1</th> <th>Their Persona || Revised || MRR</th> </tr> </thead> <tbody> <tr> <td>IR baseline</td> <td>41.0 (+19.6)</td> <td>-</td> <td>20.7 (-0.7)</td> <td>-</td> <td>18.1 (-3.3)</td> <td>-</td> <td>18.1 (-3.3)</td> <td>-</td> </tr> <tr> <td>Starspace</td> <td>48.1 (+16.3)</td> <td>-</td> <td>32.2 (+0.4)</td> <td>-</td> <td>24.5 (-7.3)</td> <td>-</td> <td>26.1 (-5.7)</td> <td>-</td> </tr> <tr> <td>Profile</td> <td>47.3 (+15.5)</td> <td>-</td> <td>35.4 (+3.6)</td> <td>-</td> <td>28.3 (-3.5)</td> <td>-</td> <td>29.4 (-2.4)</td> <td>-</td> </tr> <tr> <td>KV Profile</td> <td>51.1 (+16.2)</td> <td>-</td> <td>35.1 (+0.2)</td> <td>-</td> <td>29.1 (-5.8)</td> <td>-</td> <td>28.9 (-6.0)</td> <td>-</td> </tr> <tr> <td>FT-PC</td> <td>-</td> <td>-</td> <td>60.7 (-)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>IMNctx</td> <td>64.3 (+0.5)</td> <td>76.2 (+0.4)</td> <td>63.8 (+0.0)</td> <td>75.8 (+0.0)</td> <td>63.7 (-0.1)</td> <td>75.8 (+0.0)</td> <td>63.5 (-0.3)</td> <td>75.7 (-0.1)</td> </tr> <tr> <td>IMNutr</td> <td>66.7 (+2.9)</td> <td>78.1 (+2.3)</td> <td>64.0 (+0.2)</td> <td>76.0 (+0.2)</td> <td>63.9 (+0.1)</td> <td>75.9 (+0.1)</td> <td>63.7 (-0.1)</td> <td>75.7 (-0.1)</td> </tr> <tr> <td>DIM</td> <td>78.8 (+15.0)</td> <td>86.7 (+10.9)</td> <td>70.7 (+6.9)</td> <td>81.2 (+5.4)</td> <td>64.0 (+0.2)</td> <td>76.1 (+0.3)</td> <td>63.9 (+0.1)</td> <td>76.0 (+0.2)</td> </tr> </tbody></table>
Table 3
table_3
D19-1193
8
emnlp2019
Table 3 presents the evaluation results of our proposed and previous methods on PERSONA-CHAT under various persona configurations. The t-test shows that the differences between our proposed models, i.e., IMNutr and DIM, and the baseline model, i.e. IMNctx, were both statistically significant with p-value < 0.01. We can see that the fine-grained persona fusion at the utterance level rendered a hits@1 improvement of 2.4% and an MRR improvement of 1.9% by comparing IMNctx and IMNutr conditioned on original self personas. The DIM model outperformed its baseline IMNctx by a margin of 14.5% in terms of hits@1 and 10.5% in terms of MRR. Compared with the FT-PC model (Mazaré et al., 2018), which was first pretrained using a large-scale corpus and then fine-tuned on the PERSONA-CHAT dataset, the DIM model outperformed it by a margin of 10.0% in terms of hits@1 conditioned on revised self personas. Another advantage of DIM is that it was trained in an end-to-end mode without pretraining and using any external knowledge. Lastly, the DIM model outperformed previous models by margins larger than 27.7% in terms of hits@1 conditioned on original self personas.
[1, 2, 2, 1, 1, 1, 2, 1]
['Table 3 presents the evaluation results of our proposed and previous methods on PERSONA-CHAT under various persona configurations.', 'The t-test shows that the differences between our proposed models, i.e., IMNutr and DIM, and the baseline model, i.e.', 'IMNctx, were both statistically significant with p-value < 0.01.', 'We can see that the fine-grained persona fusion at the utterance level rendered a hits@1 improvement of 2.4% and an MRR improvement of 1.9% by comparing IMNctx and IMNutr conditioned on original self personas.', 'The DIM model outperformed its baseline IMNctx by a margin of 14.5% in terms of hits@1 and 10.5% in terms of MRR.', 'Compared with the FT-PC model (Mazaré et al., 2018), which was first pretrained using a large-scale corpus and then fine-tuned on the PERSONA-CHAT dataset, the DIM model outperformed it by a margin of 10.0% in terms of hits@1 conditioned on revised self personas.', 'Another advantage of DIM is that it was trained in an end-to-end mode without pretraining and using any external knowledge.', 'Lastly, the DIM model outperformed previous models by margins larger than 27.7% in terms of hits@1 conditioned on original self personas.']
[None, None, None, ['IMNctx', 'IMNutr', 'Self Persona', 'Original', 'hits@1'], ['DIM', 'IMNctx', 'hits@1'], ['DIM', 'FT-PC', 'Self Persona', 'Revised', 'hits@1'], None, ['DIM', 'KV Profile', 'Self Persona', 'Original', 'hits@1']]
1
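Cells in the record above pack two numbers into one string, e.g. '41.0 (+19.6)' or '60.7 (-)', so a loader needs to separate the score from its gain/loss annotation. A possible parsing sketch (the `split_score_and_delta` helper is illustrative, not part of the dataset):

```python
import re

# Cells of D19-1193 Table 3 combine a score with its gain/loss versus the
# no-persona setting, e.g. '41.0 (+19.6)'; split them for numeric use.
def split_score_and_delta(cell):
    match = re.fullmatch(r'([\d.]+) \(([+-][\d.]+|-)\)', cell)
    if match is None:
        return None, None          # covers '-' and malformed cells
    score = float(match.group(1))
    delta = None if match.group(2) == '-' else float(match.group(2))
    return score, delta

print(split_score_and_delta('41.0 (+19.6)'))  # (41.0, 19.6)
print(split_score_and_delta('60.7 (-)'))      # (60.7, None)
```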
D19-1194table_6
The results of responses generation with BLEU, perplexity (PPL), distinct scores (1-gram to 4-gram).
2
[['Model', 'Seq2Seq'], ['Model', 'MemNet'], ['Model', 'MemNet + multi'], ['Model', 'TAware'], ['Model', 'TAware + multi'], ['Model', 'KAware'], ['Model', 'Qadpt'], ['Model', 'Qadpt + multi'], ['Model', 'Qadpt + TAware']]
2
[['HGZHZ', 'BLEU'], ['HGZHZ', 'PPL'], ['HGZHZ', 'dist-1'], ['HGZHZ', 'dist-2'], ['HGZHZ', 'dist-3'], ['HGZHZ', 'dist-4'], ['Friends', 'BLEU'], ['Friends', 'PPL'], ['Friends', 'dist-1'], ['Friends', 'dist-2'], ['Friends', 'dist-3'], ['Friends', 'dist-4']]
[['14.20', '94.48', '0.008', '0.039', '0.092', '0.150', '15.46', '73.23', '0.004', '0.016', '0.026', '0.032'], ['15.73', '88.29', '0.012', '0.062', '0.150', '0.240', '14.61', '67.58', '0.005', '0.023', '0.040', '0.049'], ['15.88', '86.76', '0.010', '0.058', '0.138', '0.224', '12.97', '54.67', '0.006', '0.022', '0.032', '0.036'], ['15.97', '81.54', '0.013', '0.068', '0.153', '0.223', '14.78', '60.61', '0.002', '0.007', '0.013', '0.016'], ['13.34', '80.48', '0.022', '0.122', '0.239', '0.304', '15.74', '56.67', '0.003', '0.011', '0.019', '0.023'], ['14.14', '90.11', '0.011', '0.061', '0.135', '0.198', '15.70', '64.70', '0.002', '0.009', '0.017', '0.021'], ['14.52', '88.24', '0.013', '0.081', '0.169', '0.242', '17.01', '68.27', '0.002', '0.008', '0.013', '0.016'], ['15.47', '86.65', '0.021', '0.129', '0.259', '0.342', '14.79', '66.70', '0.005', '0.023', '0.041', '0.051'], ['15.05', '81.75', '0.022', '0.123', '0.246', '0.332', '16.85', '55.46', '0.003', '0.012', '0.020', '0.024']]
column
['BLEU', 'PPL', 'dist-1', 'dist-2', 'dist-3', 'dist-4', 'BLEU', 'PPL', 'dist-1', 'dist-2', 'dist-3', 'dist-4']
['Qadpt', 'Qadpt + multi']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HGZHZ || BLEU</th> <th>HGZHZ || PPL</th> <th>HGZHZ || dist-1</th> <th>HGZHZ || dist-2</th> <th>HGZHZ || dist-3</th> <th>HGZHZ || dist-4</th> <th>Friends || BLEU</th> <th>Friends || PPL</th> <th>Friends || dist-1</th> <th>Friends || dist-2</th> <th>Friends || dist-3</th> <th>Friends || dist-4</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>14.20</td> <td>94.48</td> <td>0.008</td> <td>0.039</td> <td>0.092</td> <td>0.150</td> <td>15.46</td> <td>73.23</td> <td>0.004</td> <td>0.016</td> <td>0.026</td> <td>0.032</td> </tr> <tr> <td>Model || MemNet</td> <td>15.73</td> <td>88.29</td> <td>0.012</td> <td>0.062</td> <td>0.150</td> <td>0.240</td> <td>14.61</td> <td>67.58</td> <td>0.005</td> <td>0.023</td> <td>0.040</td> <td>0.049</td> </tr> <tr> <td>Model || MemNet + multi</td> <td>15.88</td> <td>86.76</td> <td>0.010</td> <td>0.058</td> <td>0.138</td> <td>0.224</td> <td>12.97</td> <td>54.67</td> <td>0.006</td> <td>0.022</td> <td>0.032</td> <td>0.036</td> </tr> <tr> <td>Model || TAware</td> <td>15.97</td> <td>81.54</td> <td>0.013</td> <td>0.068</td> <td>0.153</td> <td>0.223</td> <td>14.78</td> <td>60.61</td> <td>0.002</td> <td>0.007</td> <td>0.013</td> <td>0.016</td> </tr> <tr> <td>Model || TAware + multi</td> <td>13.34</td> <td>80.48</td> <td>0.022</td> <td>0.122</td> <td>0.239</td> <td>0.304</td> <td>15.74</td> <td>56.67</td> <td>0.003</td> <td>0.011</td> <td>0.019</td> <td>0.023</td> </tr> <tr> <td>Model || KAware</td> <td>14.14</td> <td>90.11</td> <td>0.011</td> <td>0.061</td> <td>0.135</td> <td>0.198</td> <td>15.70</td> <td>64.70</td> <td>0.002</td> <td>0.009</td> <td>0.017</td> <td>0.021</td> </tr> <tr> <td>Model || Qadpt</td> <td>14.52</td> <td>88.24</td> <td>0.013</td> <td>0.081</td> <td>0.169</td> <td>0.242</td> <td>17.01</td> <td>68.27</td> <td>0.002</td> <td>0.008</td> <td>0.013</td> <td>0.016</td> </tr> <tr> <td>Model || Qadpt + multi</td> <td>15.47</td> 
<td>86.65</td> <td>0.021</td> <td>0.129</td> <td>0.259</td> <td>0.342</td> <td>14.79</td> <td>66.70</td> <td>0.005</td> <td>0.023</td> <td>0.041</td> <td>0.051</td> </tr> <tr> <td>Model || Qadpt + TAware</td> <td>15.05</td> <td>81.75</td> <td>0.022</td> <td>0.123</td> <td>0.246</td> <td>0.332</td> <td>16.85</td> <td>55.46</td> <td>0.003</td> <td>0.012</td> <td>0.020</td> <td>0.024</td> </tr> </tbody></table>
Table 6
table_6
D19-1194
8
emnlp2019
Table 6 presents the BLEU-2 scores (as recommended in the prior work (Liu et al., 2016)), perplexity (PPL), and distinct scores. The results show that all models have similar levels of BLEU-2 and PPL, while Qadpt+multi has slightly better distinct scores. The results support the same claim as Liu et al. (2016) that BLEU scores are not suitable for dialogue generation.
[1, 1, 2]
['Table 6 presents the BLEU-2 scores (as recommended in the prior work (Liu et al., 2016)), perplexity (PPL), and distinct scores.', 'The results show that all models have similar levels of BLEU-2 and PPL, while Qadpt+multi has slightly better distinct scores.', 'The results support the same claim as Liu et al. (2016) that BLEU scores are not suitable for dialogue generation.']
[['BLEU', 'PPL', 'dist-1', 'dist-2', 'dist-3', 'dist-4'], ['BLEU', 'PPL', 'Qadpt + multi'], ['BLEU']]
1
D19-1197table_2
Automatic evaluation results for the task of sentiment response generation. Numbers in bold mean that the improvement over the best performing baseline is statistically significant (t-test, with p-value < 0.01).
1
[['Seq2Seq'], ['CVAE'], ['ECM'], ['S2S-Temp-MLE'], ['S2S-Temp-None'], ['S2S-Temp-50%'], ['S2S-Temp']]
1
[['BLEU-1'], ['ROUGE-L'], ['AVERAGE'], ['EXTREME'], ['GREEDY']]
[['0.065', '0.118', '0.726', '0.474', '0.582'], ['0.088', '0.081', '0.727', '0.408', '0.563'], ['0.051', '0.102', '0.708', '0.462', '0.559'], ['0.103', '0.124', '0.732', '0.458', '0.593'], ['0.078', '0.089', '0.687', '0.479', '0.501'], ['0.102', '0.121', '0.691', '0.491', '0.586'], ['0.106', '0.130', '0.738', '0.492', '0.603']]
column
['BLEU-1', 'ROUGE-L', 'AVERAGE', 'EXTREME', 'GREEDY']
['S2S-Temp']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>ROUGE-L</th> <th>AVERAGE</th> <th>EXTREME</th> <th>GREEDY</th> </tr> </thead> <tbody> <tr> <td>Seq2Seq</td> <td>0.065</td> <td>0.118</td> <td>0.726</td> <td>0.474</td> <td>0.582</td> </tr> <tr> <td>CVAE</td> <td>0.088</td> <td>0.081</td> <td>0.727</td> <td>0.408</td> <td>0.563</td> </tr> <tr> <td>ECM</td> <td>0.051</td> <td>0.102</td> <td>0.708</td> <td>0.462</td> <td>0.559</td> </tr> <tr> <td>S2S-Temp-MLE</td> <td>0.103</td> <td>0.124</td> <td>0.732</td> <td>0.458</td> <td>0.593</td> </tr> <tr> <td>S2S-Temp-None</td> <td>0.078</td> <td>0.089</td> <td>0.687</td> <td>0.479</td> <td>0.501</td> </tr> <tr> <td>S2S-Temp-50%</td> <td>0.102</td> <td>0.121</td> <td>0.691</td> <td>0.491</td> <td>0.586</td> </tr> <tr> <td>S2S-Temp</td> <td>0.106</td> <td>0.130</td> <td>0.738</td> <td>0.492</td> <td>0.603</td> </tr> </tbody></table>
Table 2
table_2
D19-1197
7
emnlp2019
5.4 Evaluation Results. Table 2 reports the results of automatic evaluation on the sentiment response generation task. We can see that S2S-Temp outperforms all baseline models in terms of all metrics, and the improvements are statistically significant (t-test with p-value < 0.01). The results demonstrate that when only limited pairs are available, S2S-Temp can effectively leverage unpaired data to enhance the quality of response generation. Although lacking a fine-grained check, from the comparison among S2S-Temp-None, S2S-Temp-50%, and S2S-Temp, we can conclude that the performance of S2S-Temp improves with more unpaired data. Moreover, without unpaired data, our model is even worse than CVAE since the structured templates cannot be accurately estimated from so few data, and as long as half of the unpaired data are available, the model outperforms the baseline models on most metrics. The results further verified the important role the unpaired data plays in learning of a response generation model from low resources. S2S-Temp is better than S2S-Temp-MLE, indicating that the adversarial learning approach can indeed enhance the relevance of responses with regard to messages.
[2, 1, 1, 2, 1, 1, 2, 2]
['5.4 Evaluation Results.', 'Table 2 reports the results of automatic evaluation on the sentiment response generation task.', 'We can see that S2S-Temp outperforms all baseline models in terms of all metrics, and the improvements are statistically significant (t-test with p-value < 0.01).', 'The results demonstrate that when only limited pairs are available, S2S-Temp can effectively leverage unpaired data to enhance the quality of response generation.', 'Although lacking a fine-grained check, from the comparison among S2S-Temp-None, S2S-Temp-50%, and S2S-Temp, we can conclude that the performance of S2S-Temp improves with more unpaired data.', 'Moreover, without unpaired data, our model is even worse than CVAE since the structured templates cannot be accurately estimated from so few data, and as long as half of the unpaired data are available, the model outperforms the baseline models on most metrics.', 'The results further verified the important role the unpaired data plays in learning of a response generation model from low resources.', 'S2S-Temp is better than S2S-Temp-MLE, indicating that the adversarial learning approach can indeed enhance the relevance of responses with regard to messages.']
[None, None, ['S2S-Temp'], ['S2S-Temp'], ['S2S-Temp-None', 'S2S-Temp-50%', 'S2S-Temp'], ['S2S-Temp-None', 'CVAE'], None, ['S2S-Temp', 'S2S-Temp-MLE']]
1
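The record's description claims that the target_entity, S2S-Temp, "outperforms all baseline models in terms of all metrics"; since all five columns here are scores where higher is better, the claim can be checked directly against the contents field. A small sketch using the values copied from this record:

```python
# Check that the target_entity of D19-1197 Table 2 ('S2S-Temp') holds the
# best (highest) value in every column; all five metrics reward higher scores.
row_headers = ['Seq2Seq', 'CVAE', 'ECM', 'S2S-Temp-MLE',
               'S2S-Temp-None', 'S2S-Temp-50%', 'S2S-Temp']
contents = [
    [0.065, 0.118, 0.726, 0.474, 0.582],
    [0.088, 0.081, 0.727, 0.408, 0.563],
    [0.051, 0.102, 0.708, 0.462, 0.559],
    [0.103, 0.124, 0.732, 0.458, 0.593],
    [0.078, 0.089, 0.687, 0.479, 0.501],
    [0.102, 0.121, 0.691, 0.491, 0.586],
    [0.106, 0.130, 0.738, 0.492, 0.603],
]
target = contents[row_headers.index('S2S-Temp')]
column_maxima = [max(col) for col in zip(*contents)]
print(target == column_maxima)  # True
```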
D19-1199table_4
Consistency comparison between human inference and model predictions on overlapping rate (%). * denotes p-value < 0.01 in the significance test against all the baselines.
2
[['Model', 'Preceding'], ['Model', 'Subsequent'], ['Model', 'DRNN'], ['Model', 'SIRNN'], ['Model', 'W2W']]
2
[['Len-5', 'p@1'], ['Len-5', 'p@2'], ['Len-5', 'p@3'], ['Len-5', 'Acc.'], ['Len-10', 'p@1'], ['Len-10', 'p@2'], ['Len-10', 'p@3'], ['Len-10', 'Acc.'], ['Len-15', 'p@1'], ['Len-15', 'p@2'], ['Len-15', 'p@3'], ['Len-15', 'Acc.']]
[['63.50', '90.05', '98.83', '40.46', '56.84', '80.15', '91.86', '21.06', '54.97', '77.19', '88.75', '13.08'], ['61.03', '88.86', '98.54', '40.25', '54.57', '73.60', '87.26', '20.26', '53.07', '69.85', '81.93', '12.79'], ['72.75', '93.21', '99.24', '58.18', '65.58', '85.85', '94.92', '34.47', '62.60', '82.68', '92.14', '22.58'], ['75.98', '94.49', '99.39', '62.06', '70.88', '89.14', '96.10', '40.66', '68.13', '85.82', '93.52', '28.05'], ['77.55*', '95.11*', '99.57', '63.81*', '73.52*', '90.33*', '96.64', '44.14*', '73.42*', '89.44*', '95.51*', '34.23*']]
column
['p@1', 'p@2', 'p@3', 'Acc.', 'p@1', 'p@2', 'p@3', 'Acc.', 'p@1', 'p@2', 'p@3', 'Acc.']
['W2W']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Len-5 || p@1</th> <th>Len-5 || p@2</th> <th>Len-5 || p@3</th> <th>Len-5 || Acc.</th> <th>Len-10 || p@1</th> <th>Len-10 || p@2</th> <th>Len-10 || p@3</th> <th>Len-10 || Acc.</th> <th>Len-15 || p@1</th> <th>Len-15 || p@2</th> <th>Len-15 || p@3</th> <th>Len-15 || Acc.</th> </tr> </thead> <tbody> <tr> <td>Model || Preceding</td> <td>63.50</td> <td>90.05</td> <td>98.83</td> <td>40.46</td> <td>56.84</td> <td>80.15</td> <td>91.86</td> <td>21.06</td> <td>54.97</td> <td>77.19</td> <td>88.75</td> <td>13.08</td> </tr> <tr> <td>Model || Subsequent</td> <td>61.03</td> <td>88.86</td> <td>98.54</td> <td>40.25</td> <td>54.57</td> <td>73.60</td> <td>87.26</td> <td>20.26</td> <td>53.07</td> <td>69.85</td> <td>81.93</td> <td>12.79</td> </tr> <tr> <td>Model || DRNN</td> <td>72.75</td> <td>93.21</td> <td>99.24</td> <td>58.18</td> <td>65.58</td> <td>85.85</td> <td>94.92</td> <td>34.47</td> <td>62.60</td> <td>82.68</td> <td>92.14</td> <td>22.58</td> </tr> <tr> <td>Model || SIRNN</td> <td>75.98</td> <td>94.49</td> <td>99.39</td> <td>62.06</td> <td>70.88</td> <td>89.14</td> <td>96.10</td> <td>40.66</td> <td>68.13</td> <td>85.82</td> <td>93.52</td> <td>28.05</td> </tr> <tr> <td>Model || W2W</td> <td>77.55*</td> <td>95.11*</td> <td>99.57</td> <td>63.81*</td> <td>73.52*</td> <td>90.33*</td> <td>96.64</td> <td>44.14*</td> <td>73.42*</td> <td>89.44*</td> <td>95.51*</td> <td>34.23*</td> </tr> </tbody></table>
Table 4
table_4
D19-1199
7
emnlp2019
Table 4 shows the consistency between human inference and model predictions. W2W also outperforms the baselines with a larger margin on longer conversation scenarios, which is consistent with the phenomenon of automatic evaluation. The advantage on unlabeled data of our W2W model demonstrates the superiority for detecting the latent speaker-addressee structure unspecified in the conversation stream, and that it could help find out the relationship between and across users in the session.
[1, 1, 2]
['Table 4 shows the consistency between human inference and model predictions.', 'W2W also outperforms the baselines with a larger margin on longer conversation scenarios, which is consistent with the phenomenon of automatic evaluation.', 'The advantage on unlabeled data of our W2W model demonstrates the superiority for detecting the latent speaker-addressee structure unspecified in the conversation stream, and that it could help find out the relationship between and across users in the session.']
[None, ['W2W'], ['W2W']]
1
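In the record above, the significance marker from the caption survives inside the cell strings as a trailing '*' (e.g. '77.55*'), so a loader has to split the numeric value from the flag. A possible parsing sketch (the `parse_cell` helper is illustrative, not part of the dataset):

```python
import re

# Split cells of D19-1199 Table 4 such as '77.55*' into a numeric value
# and a boolean significance flag ('*' marks p < 0.01 per the caption).
def parse_cell(cell):
    match = re.fullmatch(r'(\d+(?:\.\d+)?)(\*?)', cell)
    if match is None:
        raise ValueError(f'unexpected cell format: {cell!r}')
    return float(match.group(1)), match.group(2) == '*'

w2w_len5 = ['77.55*', '95.11*', '99.57', '63.81*']
parsed = [parse_cell(c) for c in w2w_len5]
print(parsed[0])  # (77.55, True)
print(parsed[2])  # (99.57, False)
```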
D19-1201table_2
The results of both automatic evaluations and human evaluation. Read., Info., P-score refer to readability, informativeness, personalization scores. The kappa value between human annotators is 0.41, which indicates moderate inter-rater agreement.
2
[['Models', 'Seq2Seq'], ['Models', 'Persona'], ['Models', 'Adaptation'], ['Models', 'CVAE'], ['Models', 'RL-Persona'], ['Models', 'DiaWAE-GMD'], ['Models', 'PersonaWAE'], ['Models', 'ground-truth']]
2
[['Embedding Metrics', 'Extrema'], ['Embedding Metrics', 'Greedy'], ['Embedding Metrics', 'Average'], ['BLEU', 'Recall'], ['BLEU', 'Precision'], ['BLEU', 'F1'], ['Human Evaluation', 'Read.'], ['Human Evaluation', 'Info.'], ['Human Evaluation', 'P-score']]
[['0.1640', '0.4098', '0.4911', '0.1646', '0.1646', '0.1646', '2.30', '2.16', '0.49'], ['0.1631', '0.3982', '0.4871', '0.1646', '0.1646', '0.1646', '2.31', '2.15', '0.50'], ['0.1722', '0.4038', '0.5113', '0.1689', '0.1689', '0.1689', '2.29', '1.93', '0.47'], ['0.2643', '0.2911', '0.5759', '0.1931', '0.1636', '0.1771', '2.02', '2.33', '0.45'], ['0.1694', '0.4536', '0.4906', '0.1723', '0.1723', '0.1723', '2.21', '2.22', '0.63'], ['0.4387', '0.4752', '0.7573', '0.3409', '0.1710', '0.2277', '2.31', '2.35', '0.50'], ['0.4542', '0.5914', '0.7585', '0.3365', '0.1806', '0.2350', '2.33', '2.37', '0.66'], ['', '', '', '', '', '', '2.73', '2.66', '0.86']]
column
['Extrema', 'Greedy', 'Average', 'Recall', 'Precision', 'F1', 'Read.', 'Info.', 'P-score']
['PersonaWAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Embedding Metrics || Extrema</th> <th>Embedding Metrics || Greedy</th> <th>Embedding Metrics || Average</th> <th>BLEU || Recall</th> <th>BLEU || Precision</th> <th>BLEU || F1</th> <th>Human Evaluation || Read.</th> <th>Human Evaluation || Info.</th> <th>Human Evaluation || P-score</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>0.1640</td> <td>0.4098</td> <td>0.4911</td> <td>0.1646</td> <td>0.1646</td> <td>0.1646</td> <td>2.30</td> <td>2.16</td> <td>0.49</td> </tr> <tr> <td>Models || Persona</td> <td>0.1631</td> <td>0.3982</td> <td>0.4871</td> <td>0.1646</td> <td>0.1646</td> <td>0.1646</td> <td>2.31</td> <td>2.15</td> <td>0.50</td> </tr> <tr> <td>Models || Adaptation</td> <td>0.1722</td> <td>0.4038</td> <td>0.5113</td> <td>0.1689</td> <td>0.1689</td> <td>0.1689</td> <td>2.29</td> <td>1.93</td> <td>0.47</td> </tr> <tr> <td>Models || CVAE</td> <td>0.2643</td> <td>0.2911</td> <td>0.5759</td> <td>0.1931</td> <td>0.1636</td> <td>0.1771</td> <td>2.02</td> <td>2.33</td> <td>0.45</td> </tr> <tr> <td>Models || RL-Persona</td> <td>0.1694</td> <td>0.4536</td> <td>0.4906</td> <td>0.1723</td> <td>0.1723</td> <td>0.1723</td> <td>2.21</td> <td>2.22</td> <td>0.63</td> </tr> <tr> <td>Models || DiaWAE-GMD</td> <td>0.4387</td> <td>0.4752</td> <td>0.7573</td> <td>0.3409</td> <td>0.1710</td> <td>0.2277</td> <td>2.31</td> <td>2.35</td> <td>0.50</td> </tr> <tr> <td>Models || PersonaWAE</td> <td>0.4542</td> <td>0.5914</td> <td>0.7585</td> <td>0.3365</td> <td>0.1806</td> <td>0.2350</td> <td>2.33</td> <td>2.37</td> <td>0.66</td> </tr> <tr> <td>Models || ground-truth</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>2.73</td> <td>2.66</td> <td>0.86</td> </tr> </tbody></table>
Table 2
table_2
D19-1201
6
emnlp2019
Incorporating personalization in the conditional GMD prior is more effective than combining personalization in the decoder. As shown in Table 2, the Persona model only achieves comparable results with Seq2Seq in terms of BLEU scores and BOW scores. For PersonaWAE and DiaWAE-GMD, incorporating personalization in both the decoder and the latent space yields a performance improvement. For BLEU-Recall, on which PersonaWAE does not outperform DiaWAE-GMD, a possible explanation is that the way PersonaWAE models the personalization information may make its generation more limited.
[2, 1, 1, 1]
['Incorporating personalization in the conditional GMD prior is more effective than combing personalization in decoder.', 'As shown in Table 2, Persona model only achieves comparable results with Seq2Seq in terms of BLEU scores and BOW scores.', 'For PersonaWAE and DiaWAE-GMD, incorporating personalizations in both decoder and the latent space yields a performance improvement.', 'For the BLEU-Recall, which PersonaWAE does not outperform than DiaWAE-GMD, a possible explanation for this might be that PersonaWAE model the personalization information and generation may be more limited.']
[None, ['Persona', 'BLEU', 'Embedding Metrics'], ['PersonaWAE', 'DiaWAE-GMD', 'Embedding Metrics'], ['PersonaWAE', 'DiaWAE-GMD', 'Recall']]
1
D19-1205table_2
Comparison of MRR when using dialogue acts of Context-only, Response-only and Crossway fashion
1
[['Siamese-PDA-ST'], ['Siamese-PDA-MT (Zhao et al., 2017)'], ['Siamese-ADA (Kumar et al., 2018)']]
2
[['DailyDialogue', 'Context-DA (Zhao et al., 2017)'], ['DailyDialogue', 'Response-DA'], ['DailyDialogue', 'Crossway'], ['SwDA', 'Context-DA (Zhao et al., 2017)'], ['SwDA', 'Response-DA'], ['SwDA', 'Crossway']]
[['0.912', '0.900', '0.900', '0.639', '0.649', '0.669'], ['0.921', '0.919', '0.946', '0.698', '0.685', '0.703'], ['0.934', '0.927', '0.956', '0.656', '0.682', '0.719']]
column
['MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR']
['Crossway']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DailyDialogue || Context-DA (Zhao et al., 2017)</th> <th>DailyDialogue || Response-DA</th> <th>DailyDialogue || Crossway</th> <th>SwDA || Context-DA (Zhao et al., 2017)</th> <th>SwDA || Response-DA</th> <th>SwDA || Crossway</th> </tr> </thead> <tbody> <tr> <td>Siamese-PDA-ST</td> <td>0.912</td> <td>0.900</td> <td>0.900</td> <td>0.639</td> <td>0.649</td> <td>0.669</td> </tr> <tr> <td>Siamese-PDA-MT (Zhao et al., 2017)</td> <td>0.921</td> <td>0.919</td> <td>0.946</td> <td>0.698</td> <td>0.685</td> <td>0.703</td> </tr> <tr> <td>Siamese-ADA (Kumar et al., 2018)</td> <td>0.934</td> <td>0.927</td> <td>0.956</td> <td>0.656</td> <td>0.682</td> <td>0.719</td> </tr> </tbody></table>
Table 2
table_2
D19-1205
7
emnlp2019
Crossway vs Response-DA/Context-DA: Although the dialogue acts have been shown to be useful for the response selection task, existing work has only used the dialogue acts of the context. Whereas, in our experiments, we have found that the model that uses the dialogue acts of both context and response outperforms the models that use the dialogue acts of either context or response. To further analyze the results, we perform an ablation study and show the results of using the dialogue acts of context, response and of both. In Table 2, we report the MRR numbers of several models that use the dialogue acts in different settings. More specifically, we show how the following models, i.e., Siamese with actual dialogue acts (Siamese-ADA), Siamese with predicted dialogue acts in a single-task setting (Siamese-PDA-ST) and Siamese with predicted dialogue acts in a multi-task setting (Siamese-PDA-MT), perform when they are given the dialogue acts of only the context (Context-DA), dialogue acts of only the response (Response-DA), and dialogue acts of both (Crossway). Results in Table 2 indicate that the Crossway always outperforms the Context-DA or the Response-DA, for both datasets. For the DailyDialog dataset, Context-DA performs better than Response-DA for all three models, whereas in the SwDA dataset, Response-DA does a relatively better job than Context-DA (two out of three models). Despite their different behavior for different datasets, when we combine Response-DA and Context-DA in a Crossway fashion, it outperforms both, giving the best of both worlds. This performance improvement of the Crossway over Context-DA and Response-DA can also be attributed to the fact that in the Crossway model, four similarities play a role, i.e. context-response, ContextDA-ResponseDA, ContextDA-response and Context-ResponseDA, graphically depicted in Figure 2. 
So, in the case of erroneous prediction of either of context DA or response DA, it shall only corrupt two of the four similarities, still leaving two other similarities that can provide strong clues to the underlying model about the correct response belonging to the context.
[2, 2, 2, 1, 1, 1, 1, 1, 2, 2]
['Crossway vs Response-DA/Context-DA: Although the dialogue acts have been shown to be useful for the response selection task, existing work has only used the dialogue acts of the context.', 'Whereas, in our experiments, we have found that the model that uses the dialogue acts of both context and response outperforms the models that use the dialogue acts of either context or response.', 'To further analyze the results, we perform an ablation study and show the results of using the dialogue acts of context, response and of both.', 'In Table 2, we report the MRR numbers of several models that use the dialogue acts in different settings.', 'More specifically, we show how the following models, i.e., Siamese with actual dialogue acts (Siamese-ADA), Siamese with predicted dialogue acts in a single-task setting (Siamese-PDA-ST) and Siamese with predicted dialogue acts in a multi-task setting (Siamese-PDA-MT), perform when they are given the dialogue acts of only the context (Context-DA), dialogue acts of only the response (Response-DA), and dialogue acts of both (Crossway).', 'Results in Table 2 indicate that the Crossway always outperforms the Context-DA or the Response-DA, for both datasets.', 'For the DailyDialog dataset, Context-DA performs better than Response-DA for all three models, whereas in the SwDA dataset, Response-DA does a relatively better job than Context-DA (two out of three models).', 'Despite their different behavior for different datasets, when we combine Response-DA and Context-DA in a Crossway fashion, it outperforms both, giving the best of both worlds.', 'This performance improvement of the Crossway over Context-DA and Response-DA can also be attributed to the fact that in the Crossway model, four similarities play a role, i.e. 
context-response, ContextDA-ResponseDA, ContextDA-response and Context-ResponseDA, graphically depicted in Figure 2.', 'So, in the case of erroneous prediction of either of context DA or response DA, it shall only corrupt two of the four similarities, still leaving two other similarities that can provide strong clues to the underlying model about the correct response belonging to the context.']
[None, None, None, None, ['Siamese-ADA (Kumar et al., 2018)', 'Siamese-PDA-ST', 'Context-DA (Zhao et al., 2017)', 'Response-DA', 'Crossway'], ['Context-DA (Zhao et al., 2017)', 'Response-DA'], ['Context-DA (Zhao et al., 2017)', 'Response-DA'], ['Context-DA (Zhao et al., 2017)', 'Response-DA', 'Crossway'], None, None]
1
D19-1208table_2
Comparison with the semi-supervised image captioning method, “Self-Retrieval” [Liu et al., 2018]. Our method shows improved performance even without Unlabeled-COCO data (denoted as w/o unlabeled) as well as with Unlabeled-COCO (with unlabeled), although our model is not originally proposed for such a scenario.
1
[['Self-Retrieval [Liu et al., 2018] (w/o unlabeled)'], ['Self-Retrieval [Liu et al., 2018] (with unlabeled)'], ['Baseline (w/o unlabeled)'], ['Ours (w/o unlabeled)'], ['Ours (with unlabeled)']]
1
[['BLEU1'], ['BLEU2'], ['BLEU3'], ['BLEU4'], ['ROUGE-L'], ['SPICE'], ['METEOR'], ['CIDEr']]
[['79.8', '62.3', '47.1', '34.9', '56.6', '20.5', '27.5', '114.6'], ['80.1', '63.1', '48', '35.8', '57', '21', '27.4', '117.1'], ['77.7', '61.6', '46.9', '36.2', '56.8', '20', '26.7', '114.2'], ['80.8', '65.3', '49.9', '37.6', '58.7', '22.7', '28.4', '122.6'], ['81.2', '66', '50.9', '39.1', '59.4', '23.8', '29.4', '125.5']]
column
['BLEU1', 'BLEU2', 'BLEU3', 'BLEU4', 'ROUGE-L', 'SPICE', 'METEOR', 'CIDEr']
['Ours (w/o unlabeled)', 'Ours (with unlabeled)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU1</th> <th>BLEU2</th> <th>BLEU3</th> <th>BLEU4</th> <th>ROUGE-L</th> <th>SPICE</th> <th>METEOR</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Self-Retrieval [Liu et al., 2018] (w/o unlabeled)</td> <td>79.8</td> <td>62.3</td> <td>47.1</td> <td>34.9</td> <td>56.6</td> <td>20.5</td> <td>27.5</td> <td>114.6</td> </tr> <tr> <td>Self-Retrieval [Liu et al., 2018] (with unlabeled)</td> <td>80.1</td> <td>63.1</td> <td>48</td> <td>35.8</td> <td>57</td> <td>21</td> <td>27.4</td> <td>117.1</td> </tr> <tr> <td>Baseline (w/o unlabeled)</td> <td>77.7</td> <td>61.6</td> <td>46.9</td> <td>36.2</td> <td>56.8</td> <td>20</td> <td>26.7</td> <td>114.2</td> </tr> <tr> <td>Ours (w/o unlabeled)</td> <td>80.8</td> <td>65.3</td> <td>49.9</td> <td>37.6</td> <td>58.7</td> <td>22.7</td> <td>28.4</td> <td>122.6</td> </tr> <tr> <td>Ours (with unlabeled)</td> <td>81.2</td> <td>66</td> <td>50.9</td> <td>39.1</td> <td>59.4</td> <td>23.8</td> <td>29.4</td> <td>125.5</td> </tr> </tbody></table>
Table 2
table_2
D19-1208
8
emnlp2019
Table 2 shows the comparison with SelfRetrieval. For a fair comparison with them, we replace the cross entropy loss from our loss with the policy gradient method [Rennie et al., 2017] to directly optimize our model with CIDEr score as in [Liu et al., 2018]. As our baseline model (denoted as Baseline), we train a model only with policy gradient method without the proposed GAN model. When only using the 100% paired MS COCO dataset (denoted as w/o unlabeled), our model already shows improved performance over Self-Retrieval. Moreover, when adding Unlabeled-COCO images (denoted as with unlabeled), our model performs favorably against Self-Retrieval in all the metrics. The results suggest that our method is also advantageous in the semi-supervised setup.
[1, 2, 1, 1, 1, 2]
['Table 2 shows the comparison with SelfRetrieval.', 'For a fair comparison with them, we replace the cross entropy loss from our loss with the policy gradient method [Rennie et al., 2017] to directly optimize our model with CIDEr score as in [Liu et al., 2018].', 'As our baseline model (denoted as Baseline), we train a model only with policy gradient method without the proposed GAN model.', 'When only using the 100% paired MS COCO dataset (denoted as w/o unlabeled), our model already shows improved performance over Self-Retrieval.', 'Moreover, when adding Unlabeled-COCO images (denoted as with unlabeled), our model performs favorably against Self-Retrieval in all the metrics.', 'The results suggest that our method is also advantageous in the semi-supervised setup.']
[None, None, None, ['Ours (w/o unlabeled)'], ['Ours (with unlabeled)'], None]
1
D19-1211table_4
Binary accuracy for different variants of CMFN and training scenarios outlined in Section 5. The best performance is achieved using all three modalities of text (T), visual (V) and acoustic (A).
2
[['Modality', 'C-MFN (P)'], ['Modality', 'C-MFN (C)'], ['Modality', 'C-MFN']]
1
[['T'], ['A+V'], ['T+A'], ['T+V'], ['T+A+V']]
[['62.85', '53.3', '63.28', '63.22', '64.47'], ['57.96', '50.23', '57.78', '57.99', '58.45'], ['64.44', '57.99', '64.47', '64.22', '65.23']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['C-MFN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>T</th> <th>A+V</th> <th>T+A</th> <th>T+V</th> <th>T+A+V</th> </tr> </thead> <tbody> <tr> <td>Modality || C-MFN (P)</td> <td>62.85</td> <td>53.3</td> <td>63.28</td> <td>63.22</td> <td>64.47</td> </tr> <tr> <td>Modality || C-MFN (C)</td> <td>57.96</td> <td>50.23</td> <td>57.78</td> <td>57.99</td> <td>58.45</td> </tr> <tr> <td>Modality || C-MFN</td> <td>64.44</td> <td>57.99</td> <td>64.47</td> <td>64.22</td> <td>65.23</td> </tr> </tbody></table>
Table 4
table_4
D19-1211
8
emnlp2019
6 Results and Discussion. The results of our experiments are presented in Table 4. Results demonstrate that both context and punchline information are important as C-MFN outperforms C-MFN (P) and C-MFN (C) models. Punchline is the most important component for detecting humor as the performance of C-MFN (P) is significantly higher than C-MFN (C). Models that use all modalities (T+A+V) outperform models that use only one or two modalities (T, T+A, T+V, A+V). Between text (T) and nonverbal behaviors (A+V), text shows to be the most important modality. Most of the cases, both modalities of visual and acoustic improve the performance of text alone (T+V, T+A). Based on the above observations, each neural component of the C-MFN model is useful in improving the prediction of humor. The results also indicate that modeling humor from a multimodal perspective yields successful results. Furthermore, both context and punchline are important in understanding humor. The highest accuracy achieved by Random Forest baseline (after hyper parameter tuning & using same folds as C-MFN) was 57.78%, which is higher than random baseline but lower than CMFN (65.23%). In addition, C-MFN achieves higher accuracy than similar notable unimodal previous work (58.9%) where only punchline and textual information were used (Chen and Lee, 2017) . The human performance 5 on the URFUNNY dataset is 82.5%.
[2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2]
['6 Results and Discussion.', 'The results of our experiments are presented in Table 4.', 'Results demonstrate that both context and punchline information are important as C-MFN outperforms C-MFN (P) and C-MFN (C) models.', 'Punchline is the most important component for detecting humor as the performance of C-MFN (P) is significantly higher than C-MFN (C).', 'Models that use all modalities (T+A+V) outperform models that use only one or two modalities (T, T+A, T+V, A+V).', 'Between text (T) and nonverbal behaviors (A+V), text shows to be the most important modality.', 'Most of the cases, both modalities of visual and acoustic improve the performance of text alone (T+V, T+A).', 'Based on the above observations, each neural component of the C-MFN model is useful in improving the prediction of humor.', 'The results also indicate that modeling humor from a multimodal perspective yields successful results.', 'Furthermore, both context and punchline are important in understanding humor.', 'The highest accuracy achieved by Random Forest baseline (after hyper parameter tuning & using same folds as C-MFN) was 57.78%, which is higher than random baseline but lower than CMFN (65.23%).', 'In addition, C-MFN achieves higher accuracy than similar notable unimodal previous work (58.9%) where only punchline and textual information were used (Chen and Lee, 2017) .', 'The human performance 5 on the URFUNNY dataset is 82.5%.']
[None, None, ['C-MFN', 'C-MFN (C)', 'C-MFN (P)'], ['C-MFN (P)', 'C-MFN (C)'], ['T+A+V', 'T', 'T+A', 'T+V', 'A+V'], ['T', 'A+V'], ['T', 'T+V', 'T+A'], ['C-MFN'], None, None, ['C-MFN'], ['C-MFN'], None]
1
D19-1213table_2
Performance compared with state-of-the-art methods on Youtube2Text dataset. The (−) is an unknown metric.
2
[['Model', 'LSTM-I'], ['Model', 'HRNE'], ['Model', 'MA'], ['Model', 'SCN'], ['Model', 'TSA'], ['Model', 'SA-LSTM'], ['Model', 'PickNet'], ['Model', 'ASGN+LNA']]
1
[['B-4'], ['M'], ['R'], ['C']]
[['0.446', '0.297', '-', '-'], ['0.438', '0.331', '-', '-'], ['0.504', '0.318', '-', '0.699'], ['0.511', '0.335', '-', '0.777'], ['0.517', '0.34', '-', '0.749'], ['0.523', '0.341', '0.698', '0.803'], ['0.523', '0.333', '0.696', '0.765'], ['0.547', '0.342', '0.717', '0.813']]
column
['B-4', 'M', 'R', 'C']
['ASGN+LNA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B-4</th> <th>M</th> <th>R</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM-I</td> <td>0.446</td> <td>0.297</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || HRNE</td> <td>0.438</td> <td>0.331</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || MA</td> <td>0.504</td> <td>0.318</td> <td>-</td> <td>0.699</td> </tr> <tr> <td>Model || SCN</td> <td>0.511</td> <td>0.335</td> <td>-</td> <td>0.777</td> </tr> <tr> <td>Model || TSA</td> <td>0.517</td> <td>0.34</td> <td>-</td> <td>0.749</td> </tr> <tr> <td>Model || SA-LSTM</td> <td>0.523</td> <td>0.341</td> <td>0.698</td> <td>0.803</td> </tr> <tr> <td>Model || PickNet</td> <td>0.523</td> <td>0.333</td> <td>0.696</td> <td>0.765</td> </tr> <tr> <td>Model || ASGN+LNA</td> <td>0.547</td> <td>0.342</td> <td>0.717</td> <td>0.813</td> </tr> </tbody></table>
Table 2
table_2
D19-1213
7
emnlp2019
4.5 Quantitative Analysis In Table 2 and Table 3, we compare our ASGN+LNA model with the state-of-the-art models on the Youtube2Text and MSR-VTT datasets. Following the operation of (Gan et al., 2016; Pasunuru and Bansal, 2017), ASGN+LNA is the average ensemble of 5 ASGN+LNA (RL) models trained with different initializations. From the results, our method achieves the competitive performance on the two datasets. Compared with the other interpretable improvement methods (Dong et al., 2017; Wu et al., 2018), interpretability of our neural network is explicitly improved, and the performance of our model is more competitive.
[1, 1, 1, 1]
['4.5 Quantitative Analysis In Table 2 and Table 3, we compare our ASGN+LNA model with the state-of-the-art models on the Youtube2Text and MSR-VTT datasets.', 'Following the operation of (Gan et al., 2016; Pasunuru and Bansal, 2017), ASGN+LNA is the average ensemble of 5 ASGN+LNA (RL) models trained with different initializations.', 'From the results, our method achieves the competitive performance on the two datasets.', 'Compared with the other interpretable improvement methods (Dong et al., 2017; Wu et al., 2018), interpretability of our neural network is explicitly improved, and the performance of our model is more competitive.']
[['ASGN+LNA'], ['ASGN+LNA'], ['ASGN+LNA'], ['ASGN+LNA']]
1
D19-1214table_3
The SLU performance on baseline models compared with our Stack-Propagation model on two datasets.
2
[['Model', 'gate-mechanism'], ['Model', 'pipelined model'], ['Model', 'sentence intent augmented'], ['Model', 'lstm+last-hidden'], ['Model', 'lstm+token-level'], ['Model', 'without self-attention'], ['Model', 'Our model']]
2
[['SNIPS', 'Slot (F1)'], ['SNIPS', 'Intent (Acc)'], ['SNIPS', 'Overall (Acc)'], ['ATIS', 'Slot (F1)'], ['ATIS', 'Intent (Acc)'], ['ATIS', 'Overall (Acc)']]
[['92.2', '97.6', '82.4', '95.3', '96.2', '83.4'], ['90.8', '97.6', '81.8', '95.1', '96.1', '82.3'], ['93.7', '97.5', '86.1', '95.5', '96.7', '85.8'], ['-', '97.1', '-', '-', '95.2', '-'], ['-', '97.5', '-', '-', '96', '-'], ['94.1', '97.8', '86.6', '95.6', '96.6', '86.2'], ['94.2', '98', '86.9', '95.9', '96.9', '86.5']]
column
['Slot (F1)', 'Intent (Acc)', 'Overall (Acc)', 'Slot (F1)', 'Intent (Acc)', 'Overall (Acc)']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNIPS || Slot (F1)</th> <th>SNIPS || Intent (Acc)</th> <th>SNIPS || Overall (Acc)</th> <th>ATIS || Slot (F1)</th> <th>ATIS || Intent (Acc)</th> <th>ATIS || Overall (Acc)</th> </tr> </thead> <tbody> <tr> <td>Model || gate-mechanism</td> <td>92.2</td> <td>97.6</td> <td>82.4</td> <td>95.3</td> <td>96.2</td> <td>83.4</td> </tr> <tr> <td>Model || pipelined model</td> <td>90.8</td> <td>97.6</td> <td>81.8</td> <td>95.1</td> <td>96.1</td> <td>82.3</td> </tr> <tr> <td>Model || sentence intent augmented</td> <td>93.7</td> <td>97.5</td> <td>86.1</td> <td>95.5</td> <td>96.7</td> <td>85.8</td> </tr> <tr> <td>Model || lstm+last-hidden</td> <td>-</td> <td>97.1</td> <td>-</td> <td>-</td> <td>95.2</td> <td>-</td> </tr> <tr> <td>Model || lstm+token-level</td> <td>-</td> <td>97.5</td> <td>-</td> <td>-</td> <td>96</td> <td>-</td> </tr> <tr> <td>Model || without self-attention</td> <td>94.1</td> <td>97.8</td> <td>86.6</td> <td>95.6</td> <td>96.6</td> <td>86.2</td> </tr> <tr> <td>Model || Our model</td> <td>94.2</td> <td>98</td> <td>86.9</td> <td>95.9</td> <td>96.9</td> <td>86.5</td> </tr> </tbody></table>
Table 3
table_3
D19-1214
7
emnlp2019
Table 3 gives the result of the comparison experiment. From the result of gate-mechanism row, we can observe that without the Stack-Propagation learning and simply using the gate-mechanism to incorporate the intent information, the slot filling (F1) performance drops significantly, which demonstrates that directly leverage the intent information with Stack-Propagation can improve the slot filling performance effectively than using the gate mechanism. Besides, we can see that the intent detection (Acc) and overall accuracy (Acc) decrease a lot. We attribute it to the fact that the bad slot filling performance harms the intent detection and the whole sentence semantic performance due to the joint learning scheme. Besides, from the pipeline model row of Table 3, we can see that without shared encoder, the performance on all metrics declines significantly. This shows that Stack-Propagation model can learn the correlation knowledge which may promote each other and ease the error propagation effectively.
[1, 1, 1, 1, 1, 1]
['Table 3 gives the result of the comparison experiment.', 'From the result of gate-mechanism row, we can observe that without the Stack-Propagation learning and simply using the gate-mechanism to incorporate the intent information, the slot filling (F1) performance drops significantly, which demonstrates that directly leverage the intent information with Stack-Propagation can improve the slot filling performance effectively than using the gate mechanism.', 'Besides, we can see that the intent detection (Acc) and overall accuracy (Acc) decrease a lot.', 'We attribute it to the fact that the bad slot filling performance harms the intent detection and the whole sentence semantic performance due to the joint learning scheme.', 'Besides, from the pipeline model row of Table 3, we can see that without shared encoder, the performance on all metrics declines significantly.', 'This shows that Stack-Propagation model can learn the correlation knowledge which may promote each other and ease the error propagation effectively.']
[None, ['gate-mechanism', 'Our model', 'Slot (F1)'], ['Intent (Acc)', 'Overall (Acc)'], ['Slot (F1)', 'Intent (Acc)'], ['pipelined model'], ['Our model']]
1
D19-1217table_1
Experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets.
2
[['Method', 'ESA+'], ['Method', 'STAN+'], ['Method', 'CDMN+'], ['Method', 'LF+'], ['Method', 'HRE+'], ['Method', 'MN+'], ['Method', 'SFQIH+'], ['Method', 'HACRN'], ['Method', 'RICT (ours)']]
2
[['TACoS-MultiLevel', 'BLEU-1'], ['TACoS-MultiLevel', 'BLEU-2'], ['TACoS-MultiLevel', 'ROUGE'], ['TACoS-MultiLevel', 'METEOR'], ['YoutubeClip', 'BLEU-1'], ['YoutubeClip', 'BLEU-2'], ['YoutubeClip', 'ROUGE'], ['YoutubeClip', 'METEOR']]
[['0.356', '0.244', '0.422', '0.109', '0.268', '0.151', '0.276', '0.082'], ['0.408', '0.312', '0.449', '0.133', '0.315', '0.185', '0.306', '0.09'], ['0.429', '0.341', '0.46', '0.142', '0.293', '0.161', '0.311', '0.094'], ['0.404', '0.29', '0.465', '0.135', '0.284', '0.183', '0.307', '0.083'], ['0.438', '0.32', '0.502', '0.153', '0.293', '0.172', '0.308', '0.094'], ['0.43', '0.326', '0.472', '0.149', '0.306', '0.185', '0.29', '0.086'], ['0.438', '0.334', '0.481', '0.153', '0.326', '0.202', '0.319', '0.085'], ['0.451', '0.346', '0.499', '0.161', '0.307', '0.174', '0.331', '0.104'], ['0.464', '0.361', '0.527', '0.178', '0.333', '0.194', '0.332', '0.104']]
column
['BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR', 'BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR']
['RICT (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TACoS-MultiLevel || BLEU-1</th> <th>TACoS-MultiLevel || BLEU-2</th> <th>TACoS-MultiLevel || ROUGE</th> <th>TACoS-MultiLevel || METEOR</th> <th>YoutubeClip || BLEU-1</th> <th>YoutubeClip || BLEU-2</th> <th>YoutubeClip || ROUGE</th> <th>YoutubeClip || METEOR</th> </tr> </thead> <tbody> <tr> <td>Method || ESA+</td> <td>0.356</td> <td>0.244</td> <td>0.422</td> <td>0.109</td> <td>0.268</td> <td>0.151</td> <td>0.276</td> <td>0.082</td> </tr> <tr> <td>Method || STAN+</td> <td>0.408</td> <td>0.312</td> <td>0.449</td> <td>0.133</td> <td>0.315</td> <td>0.185</td> <td>0.306</td> <td>0.09</td> </tr> <tr> <td>Method || CDMN+</td> <td>0.429</td> <td>0.341</td> <td>0.46</td> <td>0.142</td> <td>0.293</td> <td>0.161</td> <td>0.311</td> <td>0.094</td> </tr> <tr> <td>Method || LF+</td> <td>0.404</td> <td>0.29</td> <td>0.465</td> <td>0.135</td> <td>0.284</td> <td>0.183</td> <td>0.307</td> <td>0.083</td> </tr> <tr> <td>Method || HRE+</td> <td>0.438</td> <td>0.32</td> <td>0.502</td> <td>0.153</td> <td>0.293</td> <td>0.172</td> <td>0.308</td> <td>0.094</td> </tr> <tr> <td>Method || MN+</td> <td>0.43</td> <td>0.326</td> <td>0.472</td> <td>0.149</td> <td>0.306</td> <td>0.185</td> <td>0.29</td> <td>0.086</td> </tr> <tr> <td>Method || SFQIH+</td> <td>0.438</td> <td>0.334</td> <td>0.481</td> <td>0.153</td> <td>0.326</td> <td>0.202</td> <td>0.319</td> <td>0.085</td> </tr> <tr> <td>Method || HACRN</td> <td>0.451</td> <td>0.346</td> <td>0.499</td> <td>0.161</td> <td>0.307</td> <td>0.174</td> <td>0.331</td> <td>0.104</td> </tr> <tr> <td>Method || RICT (ours)</td> <td>0.464</td> <td>0.361</td> <td>0.527</td> <td>0.178</td> <td>0.333</td> <td>0.194</td> <td>0.332</td> <td>0.104</td> </tr> </tbody></table>
Table 1
table_1
D19-1217
7
emnlp2019
Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets. Our method (RICT) outperforms all above models in almost all metrics. This fact shows the effectiveness of our overall network architecture. And we find that the image dialog models perform better than video QA models in answer generation.
[1, 1, 1, 1]
['Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets.', 'Our method (RICT) outperforms all above models in almost all metrics.', 'This fact shows the effectiveness of our overall network architecture.', 'And we find that the image dialog models perform better than video QA models in answer generation.']
[None, ['RICT (ours)'], ['RICT (ours)'], ['TACoS-MultiLevel', 'YoutubeClip']]
1
D19-1217table_2
Experimental results of question generation on TACoS-MultiLevel and YoutubeClip datasets.
2
[['Method', 'ESA+'], ['Method', 'STAN+'], ['Method', 'CDMN+'], ['Method', 'LF+'], ['Method', 'HRE+'], ['Method', 'MN+'], ['Method', 'SFQIH+'], ['Method', 'HACRN'], ['Method', 'RICT (ours)']]
2
[['TACoS-MultiLevel', 'BLEU-1'], ['TACoS-MultiLevel', 'BLEU-2'], ['TACoS-MultiLevel', 'ROUGE'], ['TACoS-MultiLevel', 'METEOR'], ['YoutubeClip', 'BLEU-1'], ['YoutubeClip', 'BLEU-2'], ['YoutubeClip', 'ROUGE'], ['YoutubeClip', 'METEOR']]
[['0.693', '0.582', '0.718', '0.341', '0.497', '0.333', '0.565', '0.212'], ['0.706', '0.599', '0.73', '0.354', '0.483', '0.322', '0.559', '0.208'], ['0.707', '0.603', '0.74', '0.357', '0.507', '0.341', '0.567', '0.219'], ['0.704', '0.598', '0.728', '0.349', '0.512', '0.346', '0.574', '0.218'], ['0.694', '0.592', '0.729', '0.348', '0.515', '0.35', '0.571', '0.223'], ['0.698', '0.589', '0.718', '0.345', '0.488', '0.324', '0.556', '0.204'], ['0.694', '0.592', '0.729', '0.349', '0.503', '0.339', '0.563', '0.217'], ['0.715', '0.616', '0.741', '0.358', '0.524', '0.352', '0.577', '0.229'], ['0.733', '0.625', '0.748', '0.367', '0.536', '0.375', '0.593', '0.234']]
column
['BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR', 'BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR']
['RICT (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TACoS-MultiLevel || BLEU-1</th> <th>TACoS-MultiLevel || BLEU-2</th> <th>TACoS-MultiLevel || ROUGE</th> <th>TACoS-MultiLevel || METEOR</th> <th>YoutubeClip || BLEU-1</th> <th>YoutubeClip || BLEU-2</th> <th>YoutubeClip || ROUGE</th> <th>YoutubeClip || METEOR</th> </tr> </thead> <tbody> <tr> <td>Method || ESA+</td> <td>0.693</td> <td>0.582</td> <td>0.718</td> <td>0.341</td> <td>0.497</td> <td>0.333</td> <td>0.565</td> <td>0.212</td> </tr> <tr> <td>Method || STAN+</td> <td>0.706</td> <td>0.599</td> <td>0.73</td> <td>0.354</td> <td>0.483</td> <td>0.322</td> <td>0.559</td> <td>0.208</td> </tr> <tr> <td>Method || CDMN+</td> <td>0.707</td> <td>0.603</td> <td>0.74</td> <td>0.357</td> <td>0.507</td> <td>0.341</td> <td>0.567</td> <td>0.219</td> </tr> <tr> <td>Method || LF+</td> <td>0.704</td> <td>0.598</td> <td>0.728</td> <td>0.349</td> <td>0.512</td> <td>0.346</td> <td>0.574</td> <td>0.218</td> </tr> <tr> <td>Method || HRE+</td> <td>0.694</td> <td>0.592</td> <td>0.729</td> <td>0.348</td> <td>0.515</td> <td>0.35</td> <td>0.571</td> <td>0.223</td> </tr> <tr> <td>Method || MN+</td> <td>0.698</td> <td>0.589</td> <td>0.718</td> <td>0.345</td> <td>0.488</td> <td>0.324</td> <td>0.556</td> <td>0.204</td> </tr> <tr> <td>Method || SFQIH+</td> <td>0.694</td> <td>0.592</td> <td>0.729</td> <td>0.349</td> <td>0.503</td> <td>0.339</td> <td>0.563</td> <td>0.217</td> </tr> <tr> <td>Method || HACRN</td> <td>0.715</td> <td>0.616</td> <td>0.741</td> <td>0.358</td> <td>0.524</td> <td>0.352</td> <td>0.577</td> <td>0.229</td> </tr> <tr> <td>Method || RICT (ours)</td> <td>0.733</td> <td>0.625</td> <td>0.748</td> <td>0.367</td> <td>0.536</td> <td>0.375</td> <td>0.593</td> <td>0.234</td> </tr> </tbody></table>
Table 2
table_2
D19-1217
7
emnlp2019
Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets. Our method (RICT) outperforms all above models in almost all metrics. This fact shows the effectiveness of our overall network architecture. And we find that the image dialog models perform better than video QA models in answer generation, but worse in question generation on both datasets. This might indicate that for these two datasets, the answer generation is more dependent on dialog, and question generation is more dependent on video content.
[1, 1, 1, 1, 2]
['Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets.', 'Our method (RICT) outperforms all above models in almost all metrics.', 'This fact shows the effectiveness of our overall network architecture.', 'And we find that the image dialog models perform better than video QA models in answer generation, but worse in question generation on both datasets.', 'This might indicate that for these two datasets, the answer generation is more dependent on dialog, and question generation is more dependent on video content.']
[None, ['RICT (ours)'], ['RICT (ours)'], ['TACoS-MultiLevel', 'YoutubeClip'], None]
1
D19-1230table_2
Performances of negative focus detection systems on the SEM’12 corpus.
3
[['Existing Methods', 'System', 'CLaC (Rosenberg (2012))'], ['Existing Methods', 'System', 'FOC-DET (Blanco (2013))'], ['Existing Methods', 'System', 'WTGM (Zou (2015))'], ['Our Methods', 'System', 'LSTM'], ['Our Methods', 'System', 'BiLSTM'], ['Our Methods', 'System', 'BiLSTM-CRF'], ['Our Methods', 'System', 'W-Att BiLSTM-CRF'], ['Our Methods', 'System', 'T-Att BiLSTM-CRF'], ['Our Methods', 'System', 'WT-Att BiLSTM-CRF']]
1
[['Acc']]
[['60'], ['65.5'], ['68.4'], ['58.71'], ['60.81'], ['67.28'], ['70.22'], ['70.51'], ['69.8']]
column
['Acc']
['Our Methods']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td>Existing Methods || System || CLaC (Rosenberg (2012))</td> <td>60</td> </tr> <tr> <td>Existing Methods || System || FOC-DET (Blanco (2013))</td> <td>65.5</td> </tr> <tr> <td>Existing Methods || System || WTGM (Zou (2015))</td> <td>68.4</td> </tr> <tr> <td>Our Methods || System || LSTM</td> <td>58.71</td> </tr> <tr> <td>Our Methods || System || BiLSTM</td> <td>60.81</td> </tr> <tr> <td>Our Methods || System || BiLSTM-CRF</td> <td>67.28</td> </tr> <tr> <td>Our Methods || System || W-Att BiLSTM-CRF</td> <td>70.22</td> </tr> <tr> <td>Our Methods || System || T-Att BiLSTM-CRF</td> <td>70.51</td> </tr> <tr> <td>Our Methods || System || WT-Att BiLSTM-CRF</td> <td>69.8</td> </tr> </tbody></table>
Table 2
table_2
D19-1230
6
emnlp2019
4.2 Results. Table 2 shows the performance comparison of various negative focus detection models. We can see that all of contextual attention based models (row 7-9 in Table 2) achieve better performances than existing methods (row 1-3 in Table 2) and models without contextual attention (row 4-6 in Table 2). In addition, both word-level and topic-level contextual attention based models outperform the state-of-the-art system (WTGM in Table 2) with about 2% accuracy gain at least. The results demonstrate the effectiveness of these two types of contextual attention mechanisms. Moreover, to better quantify the contribution of the different attention mechanisms of our approach, we also conduct several attention variants. Comparing the three types of attention mechanisms (row 7-9 in Table 2), the topic-level attention based model achieves the best performance. We also observe that the word-level attention based model also achieves comparative performance with the topic-level one. However, when combining the two types of attention mechanisms, the performance declines. It indicates that both the word-level attention mechanism and the topic-level one can capture the contextual information effectively, but applying such information repeatedly might lead to feature redundancy. In addition to the methods that take advantage of the contextual features in adjacent sentences, we also compare the performances of different frameworks for negative focus detection (row 4-6 in Table 2), which only apply the features in current sentence. The results indicate that the BiLSTM-CRF framework is a better fit for encoding order information and long-range context dependency for such sequence labeling task.
[2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['4.2 Results.', 'Table 2 shows the performance comparison of various negative focus detection models.', 'We can see that all of contextual attention based models (row 7-9 in Table 2) achieve better performances than existing methods (row 1-3 in Table 2) and models without contextual attention (row 4-6 in Table 2).', 'In addition, both word-level and topic-level contextual attention based models outperform the state-of-the-art system (WTGM in Table 2) with about 2% accuracy gain at least.', 'The results demonstrate the effectiveness of these two types of contextual attention mechanisms.', 'Moreover, to better quantify the contribution of the different attention mechanisms of our approach, we also conduct several attention variants.', 'Comparing the three types of attention mechanisms (row 7-9 in Table 2), the topic-level attention based model achieves the best performance.', 'We also observe that the word-level attention based model also achieves comparative performance with the topic-level one.', 'However, when combining the two types of attention mechanisms, the performance declines.', 'It indicates that both the word-level attention mechanism and the topic-level one can capture the contextual information effectively, but applying such information repeatedly might lead to feature redundancy.', 'In addition to the methods that take advantage of the contextual features in adjacent sentences, we also compare the performances of different frameworks for negative focus detection (row 4-6 in Table 2), which only apply the features in current sentence.', 'The results indicate that the BiLSTM-CRF framework is a better fit for encoding order information and long-range context dependency for such sequence labeling task.']
[None, None, ['CLaC (Rosenberg (2012))', 'FOC-DET (Blanco (2013))', 'WTGM (Zou (2015))', 'LSTM', 'BiLSTM', 'BiLSTM-CRF', 'W-Att BiLSTM-CRF', 'T-Att BiLSTM-CRF', 'WT-Att BiLSTM-CRF'], ['WT-Att BiLSTM-CRF', 'WTGM (Zou (2015))'], ['WT-Att BiLSTM-CRF'], None, ['W-Att BiLSTM-CRF', 'T-Att BiLSTM-CRF', 'WT-Att BiLSTM-CRF'], ['W-Att BiLSTM-CRF', 'T-Att BiLSTM-CRF'], ['WT-Att BiLSTM-CRF'], None, ['LSTM', 'BiLSTM', 'BiLSTM-CRF'], ['BiLSTM-CRF']]
1
D19-1230table_7
Performance comparison for different pretrained word embeddings on the T-Att BiLSTM-CRF model.
2
[['Word Embedding', 'Senna'], ['Word Embedding', 'Glove'], ['Word Embedding', 'Word2vec'], ['Word Embedding', 'BERT'], ['Word Embedding', 'ELMo']]
1
[['Dimension'], ['Acc']]
[['50', '70.08'], ['100', '69.66'], ['300', '69.1'], ['768', '70.22'], ['1024', '70.51']]
column
['Dimension', 'Acc']
['ELMo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dimension</th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td>Word Embedding || Senna</td> <td>50</td> <td>70.08</td> </tr> <tr> <td>Word Embedding || Glove</td> <td>100</td> <td>69.66</td> </tr> <tr> <td>Word Embedding || Word2vec</td> <td>300</td> <td>69.1</td> </tr> <tr> <td>Word Embedding || BERT</td> <td>768</td> <td>70.22</td> </tr> <tr> <td>Word Embedding || ELMo</td> <td>1024</td> <td>70.51</td> </tr> </tbody></table>
Table 7
table_7
D19-1230
8
emnlp2019
Impact of Pre-trained Word Embedding. To compare the impacts of different pre-trained word embeddings for negative focus detection task, we attempt to employ other pre-trained word embeddings. Table 7 shows the performances of the T-Att BiLSTM-CRF model with different pre-trained word embeddings, including Senna, Glove, Word2vec, and BERT. We can see that ELMo achieves the best performance, but the performance gaps between different pre-trained word embeddings and different dimensions are not significant.
[2, 2, 1, 1]
['Impact of Pre-trained Word Embedding.', 'To compare the impacts of different pre-trained word embeddings for negative focus detection task, we attempt to employ other pre-trained word embeddings.', 'Table 7 shows the performances of the T-Att BiLSTM-CRF model with different pre-trained word embeddings, including Senna, Glove, Word2vec, and BERT.', 'We can see that ELMo achieves the best performance, but the performance gaps between different pre-trained word embeddings and different dimensions are not significant.']
[None, None, ['Word Embedding', 'Senna', 'Glove', 'Word2vec', 'BERT'], ['ELMo', 'Word Embedding']]
1
D19-1231table_3
Results in accuracy on the Local Discrimination task. (cid:63) is a pre-trained model on the global discrimination task.
4
[['Model', 'Lex. Neural Grid (M&J)*', 'Emb.', 'word2vec'], ['Model', 'Lex. Neural Grid (M&J)', 'Emb.', 'word2vec'], ['Model', 'Dist. sentence (L&H)', 'Emb.', 'word2vec'], ['Model', 'Our Global Model', 'Emb.', 'word2vec'], ['Model', 'Our Local Model', 'Emb.', 'word2vec'], ['Model', 'Our Local Model', 'Emb.', 'ELMo'], ['Model', 'Our Full Model', 'Emb.', 'word2vec'], ['Model', 'Our Full Model', 'Emb.', 'ELMo']]
1
[['Dw=1,2,3'], ['Dw=1'], ['Dw=2'], ['Dw=3']]
[['60.27', '56.11', '60.23', '62.23'], ['55.01', '53.81', '55.37', '56.16'], ['6.76', '4.28', '6.82', '9.25'], ['57.24', '53.35', '56.58', '59.67'], ['73.23', '66.21', '73.16', '77.93'], ['74.12', '65.82', '73.54', '78.16'], ['75.37', '67.29', '75.58', '80.21'], ['77.07', '64.38', '76.12', '81.23']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Our Local Model', 'Our Global Model', 'Our Full Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dw=1,2,3</th> <th>Dw=1</th> <th>Dw=2</th> <th>Dw=3</th> </tr> </thead> <tbody> <tr> <td>Model || Lex. Neural Grid (M&amp;J)* || Emb. || word2vec</td> <td>60.27</td> <td>56.11</td> <td>60.23</td> <td>62.23</td> </tr> <tr> <td>Model || Lex. Neural Grid (M&amp;J) || Emb. || word2vec</td> <td>55.01</td> <td>53.81</td> <td>55.37</td> <td>56.16</td> </tr> <tr> <td>Model || Dist. sentence (L&amp;H) || Emb. || word2vec</td> <td>6.76</td> <td>4.28</td> <td>6.82</td> <td>9.25</td> </tr> <tr> <td>Model || Our Global Model || Emb. || word2vec</td> <td>57.24</td> <td>53.35</td> <td>56.58</td> <td>59.67</td> </tr> <tr> <td>Model || Our Local Model || Emb. || word2vec</td> <td>73.23</td> <td>66.21</td> <td>73.16</td> <td>77.93</td> </tr> <tr> <td>Model || Our Local Model || Emb. || ELMo</td> <td>74.12</td> <td>65.82</td> <td>73.54</td> <td>78.16</td> </tr> <tr> <td>Model || Our Full Model || Emb. || word2vec</td> <td>75.37</td> <td>67.29</td> <td>75.58</td> <td>80.21</td> </tr> <tr> <td>Model || Our Full Model || Emb. || ELMo</td> <td>77.07</td> <td>64.38</td> <td>76.12</td> <td>81.23</td> </tr> </tbody></table>
Table 3
table_3
D19-1231
8
emnlp2019
5.3 Results on Local Discrimination . Table 3 shows the results in accuracy on the “local” discrimination task. From the table, we see that existing models including our global model perform poorly compared to our proposed local models. They are likely to fail in distinguishing the text segments that are locally coherent and penalize them unfairly. One of the possible explanations of this phenomenon can be found in the nature of the global model. These models (except L&H) are designed to make a decision at a global level, thus they are likely to penalize locally coherent segments of a text. This observation is further bolstered by the performance of our local coherence models, which show higher sensitivity in discriminating locally coherent texts and achieve significantly higher accuracy compared to the baseline models and our global model.
[2, 1, 1, 2, 2, 2, 1]
['5.3 Results on Local Discrimination .', 'Table 3 shows the results in accuracy on the “local” discrimination task.', 'From the table, we see that existing models including our global model perform poorly compared to our proposed local models.', 'They are likely to fail in distinguishing the text segments that are locally coherent and penalize them unfairly.', 'One of the possible explanations of this phenomenon can be found in the nature of the global model.', 'These models (except L&H) are designed to make a decision at a global level, thus they are likely to penalize locally coherent segments of a text.', 'This observation is further bolstered by the performance of our local coherence models, which show higher sensitivity in discriminating locally coherent texts and achieve significantly higher accuracy compared to the baseline models and our global model.']
[None, ['Our Local Model'], ['Lex. Neural Grid (M&J)', 'Dist. sentence (L&H)', 'Our Full Model', 'Our Global Model', 'Our Local Model'], None, None, None, ['Our Local Model', 'Our Global Model', 'Lex. Neural Grid (M&J)', 'Dist. sentence (L&H)']]
1
D19-1233table_4
Test set micro-averaged F1 scores on labelled attachment decisions. We report numbers for other parsers from Morey et al. (2017)’s replication study. For each metric, the highest score for all the parsers in the comparison is shown in bold, while the highest score among parsers of that type (neural or feature-based) is in italics.
3
[['Model', 'Feature-based parsers', 'Hayashi et al. (2016)'], ['Model', 'Feature-based parsers', 'Surdeanu et al. (2015)'], ['Model', 'Feature-based parsers', 'Joty et al. (2015)'], ['Model', 'Feature-based parsers', 'Feng and Hirst (2014a)'], ['Model', 'Neural parsers', 'Braud et al. (2016)'], ['Model', 'Neural parsers', 'Li et al. (2016)'], ['Model', 'Neural parsers', 'Braud et al. (2017) (mono)'], ['Model', 'Our work', 'Discriminative Baseline'], ['Model', 'Our work', 'Generative Model'], ['Model', 'Unpublished', 'Ji and Eisenstein (2014) (updated)'], ['Model', 'Additional data', 'Braud et al. (2017) (cross + dev)']]
1
[['S'], ['N'], ['R'], ['F']]
[['65.1', '54.6', '44.7', '44.1'], ['65.3', '54.2', '45.1', '44.2'], ['65.1', '55.5', '45.1', '44.3'], ['68.6', '55.9', '45.8', '44.6'], ['59.5', '47.2', '34.7', '34.3'], ['64.5', '54', '38.1', '36.6'], ['61.9', '53.4', '44.5', '44'], ['65.2', '54.9', '42.8', '42.4'], ['67.1', '57.4', '45.5', '45'], ['64.1', '54.2', '46.8', '46.3'], ['62.7', '54.5', '45.5', '45.1']]
column
['F1', 'F1', 'F1', 'F1']
['Generative Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>N</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Model || Feature-based parsers || Hayashi et al. (2016)</td> <td>65.1</td> <td>54.6</td> <td>44.7</td> <td>44.1</td> </tr> <tr> <td>Model || Feature-based parsers || Surdeanu et al. (2015)</td> <td>65.3</td> <td>54.2</td> <td>45.1</td> <td>44.2</td> </tr> <tr> <td>Model || Feature-based parsers || Joty et al. (2015)</td> <td>65.1</td> <td>55.5</td> <td>45.1</td> <td>44.3</td> </tr> <tr> <td>Model || Feature-based parsers || Feng and Hirst (2014a)</td> <td>68.6</td> <td>55.9</td> <td>45.8</td> <td>44.6</td> </tr> <tr> <td>Model || Neural parsers || Braud et al. (2016)</td> <td>59.5</td> <td>47.2</td> <td>34.7</td> <td>34.3</td> </tr> <tr> <td>Model || Neural parsers || Li et al. (2016)</td> <td>64.5</td> <td>54</td> <td>38.1</td> <td>36.6</td> </tr> <tr> <td>Model || Neural parsers || Braud et al. (2017) (mono)</td> <td>61.9</td> <td>53.4</td> <td>44.5</td> <td>44</td> </tr> <tr> <td>Model || Our work || Discriminative Baseline</td> <td>65.2</td> <td>54.9</td> <td>42.8</td> <td>42.4</td> </tr> <tr> <td>Model || Our work || Generative Model</td> <td>67.1</td> <td>57.4</td> <td>45.5</td> <td>45</td> </tr> <tr> <td>Model || Unpublished || Ji and Eisenstein (2014) (updated)</td> <td>64.1</td> <td>54.2</td> <td>46.8</td> <td>46.3</td> </tr> <tr> <td>Model || Additional data || Braud et al. (2017) (cross + dev)</td> <td>62.7</td> <td>54.5</td> <td>45.5</td> <td>45.1</td> </tr> </tbody></table>
Table 4
table_4
D19-1233
9
emnlp2019
5.4.2 Parsing Performance . Table 4 shows RST-DT test set labelled attachment metrics for various parsers. Our model outperforms all of the published neural models that do not use additional training data in Morey et al. (2017)'s replication study on all of the metrics. On span accuracy (S), we outperform all of the other parsers except for Feng and Hirst (2014a)'s graph CRF model. On spans with nuclearity (N), the equivalent of the unlabelled attachment score for discourse dependencies, we outperform all of the parsers in the study. We perform competitively on spans with relations (R), and we outperform all of the published parsers that do not use additional data on spans with nuclearity and relations (F). Our model also outperforms the discriminative baseline using the same features and implementation on all metrics by between 1.9% and 2.7%.
[2, 1, 1, 1, 1, 1, 1]
['5.4.2 Parsing Performance .', 'Table 4 shows RST-DT test set labelled attachment metrics for various parsers.', "Our model outperforms all of the published neural models that do not use additional training data in Morey et al. (2017)'s replication study on all of the metrics.", "On span accuracy (S), we outperform all of the other parsers except for Feng and Hirst (2014a)'s graph CRF model.", 'On spans with nuclearity (N), the equivalent of the unlabelled attachment score for discourse dependencies, we outperform all of the parsers in the study.', 'We perform competitively on spans with relations (R), and we outperform all of the published parsers that do not use additional data on spans with nuclearity and relations (F).', 'Our model also outperforms the discriminative baseline using the same features and implementation on all metrics by between 1.9% and 2.7%.']
[None, None, ['Generative Model'], ['S', 'Generative Model', 'Feng and Hirst (2014a)'], ['N', 'Generative Model'], ['R', 'Generative Model', 'F'], ['Generative Model']]
1
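The record above reports micro-averaged F1 over labelled attachment decisions. As a reminder of what micro-averaging means here, a minimal sketch (the per-document counts below are hypothetical, not from the paper):

```python
# Micro-averaged F1: pool true positives, false positives, and false
# negatives across all documents first, then compute a single F1 score.
def micro_f1(counts):
    """counts: list of (tp, fp, fn) tuples, one per document."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for two documents: pooled tp=18, fp=7, fn=6.
print(round(micro_f1([(8, 2, 4), (10, 5, 2)]), 3))  # -> 0.735
```

Unlike macro-averaging, this weights each attachment decision equally rather than each document.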
D19-1234table_2
Evaluations of weakly supervised (Snorkel and stand alone GEN) and supervised approaches on STAC data.
2
[['SUPERVISED BASELINES', 'LAST'], ['SUPERVISED BASELINES', 'BiLSTM on Gold labels'], ['SUPERVISED BASELINES', 'BERT on Gold labels'], ['SUPERVISED BASELINES', 'LogReg* on Gold labels'], ['SUPERVISED BASELINES', 'BERT+LogReg* on Gold labels'], ['SNORKEL PIPELINE', 'GEN + Disc (BiLSTM)'], ['SNORKEL PIPELINE', 'GEN + Disc (BERT)'], ['SNORKEL PIPELINE', 'GEN + Disc (LogReg*)'], ['GENERATIVE STAND ALONE', 'GEN'], ['GENERATIVE STAND ALONE', 'GEN + MST-short']]
1
[['Precision'], ['Recall'], ['F1 score'], ['Accuracy']]
[['0.54', '0.55', '0.55', '0.84'], ['0.33', '0.8', '0.47', '0.75'], ['0.56', '0.48', '0.52', '0.88'], ['0.73', '0.52', '0.61', '0.91'], ['0.59', '0.49', '0.53', '0.89'], ['0.28', '0.59', '0.38', '0.74'], ['0.49', '0.4', '0.44', '0.86'], ['0.68', '0.65', '0.67', '0.91'], ['0.69', '0.66', '0.68', '0.92'], ['0.73', '0.71', '0.72', '0.93']]
column
['Precision', 'Recall', 'F1 score', 'Accuracy']
['GEN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1 score</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>SUPERVISED BASELINES || LAST</td> <td>0.54</td> <td>0.55</td> <td>0.55</td> <td>0.84</td> </tr> <tr> <td>SUPERVISED BASELINES || BiLSTM on Gold labels</td> <td>0.33</td> <td>0.8</td> <td>0.47</td> <td>0.75</td> </tr> <tr> <td>SUPERVISED BASELINES || BERT on Gold labels</td> <td>0.56</td> <td>0.48</td> <td>0.52</td> <td>0.88</td> </tr> <tr> <td>SUPERVISED BASELINES || LogReg* on Gold labels</td> <td>0.73</td> <td>0.52</td> <td>0.61</td> <td>0.91</td> </tr> <tr> <td>SUPERVISED BASELINES || BERT+LogReg* on Gold labels</td> <td>0.59</td> <td>0.49</td> <td>0.53</td> <td>0.89</td> </tr> <tr> <td>SNORKEL PIPELINE || GEN + Disc (BiLSTM)</td> <td>0.28</td> <td>0.59</td> <td>0.38</td> <td>0.74</td> </tr> <tr> <td>SNORKEL PIPELINE || GEN + Disc (BERT)</td> <td>0.49</td> <td>0.4</td> <td>0.44</td> <td>0.86</td> </tr> <tr> <td>SNORKEL PIPELINE || GEN + Disc (LogReg*)</td> <td>0.68</td> <td>0.65</td> <td>0.67</td> <td>0.91</td> </tr> <tr> <td>GENERATIVE STAND ALONE || GEN</td> <td>0.69</td> <td>0.66</td> <td>0.68</td> <td>0.92</td> </tr> <tr> <td>GENERATIVE STAND ALONE || GEN + MST-short</td> <td>0.73</td> <td>0.71</td> <td>0.72</td> <td>0.93</td> </tr> </tbody></table>
Table 2
table_2
D19-1234
8
emnlp2019
As seen in Table 2 on STAC test data, GEN dramatically outperformed our deep learning baselines, BiLSTM, BERT, and BERT + LogReg* architectures on gold labels, as well as the LAST baseline, which attaches every DU in a dialogue to the DU directly preceding it. In addition, stand alone GEN also outperformed all the coupled Snorkel models, in which GEN is combined with an added discriminative step, by up to a 30 point improvement in F1 score (GEN vs. GEN+BiLSTM). We did not expect this, given that adding a discriminative model in Snorkel is meant to generalize, and hence improve, what GEN learns.
[1, 1, 2]
['As seen in Table 2 on STAC test data, GEN dramatically outperformed our deep learning baselines, BiLSTM, BERT, and BERT + LogReg* architectures on gold labels, as well as the LAST baseline, which attaches every DU in a dialogue to the DU directly preceding it.', 'In addition, stand alone GEN also outperformed all the coupled Snorkel models, in which GEN is combined with an added discriminative step, by up to a 30 point improvement in F1 score (GEN vs. GEN+BiLSTM).', 'We did not expect this, given that adding a discriminative model in Snorkel is meant to generalize, and hence improve, what GEN learns.']
[['GEN', 'BiLSTM on Gold labels', 'GEN + Disc (BiLSTM)', 'BERT on Gold labels', 'BERT+LogReg* on Gold labels'], ['GEN', 'GENERATIVE STAND ALONE', 'SNORKEL PIPELINE', 'GEN + Disc (BiLSTM)', 'F1 score'], ['SNORKEL PIPELINE', 'GEN']]
1
D19-1239table_4
Performance of the different ADR detection techniques on the Twitter and Reddit test sets.
3
[['Twitter', 'Technique', 'QuickUMLS'], ['Twitter', 'Technique', 'CRF'], ['Twitter', 'Technique', 'BLSTM-RNN'], ['Twitter', 'Technique', 'CRF+VAE'], ['Twitter', 'Technique', 'BLSTM-RNN+VAE'], ['Reddit', 'Technique', 'QuickUMLS'], ['Reddit', 'Technique', 'CRF'], ['Reddit', 'Technique', 'BLSTM-RNN'], ['Reddit', 'Technique', 'CRF+VAE'], ['Reddit', 'Technique', 'BLSTM-RNN+VAE']]
1
[['Precision'], ['Recall'], ['Fscore']]
[['0.47', '0.34', '0.39'], ['0.67', '0.42', '0.51'], ['0.61', '0.87', '0.72'], ['0.68', '0.49', '0.57'], ['0.71', '0.85', '0.77'], ['0.14', '0.21', '0.17'], ['0.72', '0.47', '0.57'], ['0.67', '0.28', '0.39'], ['0.69', '0.52', '0.6'], ['0.63', '0.29', '0.4']]
column
['Precision', 'Recall', 'Fscore']
['CRF+VAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>Fscore</th> </tr> </thead> <tbody> <tr> <td>Twitter || Technique || QuickUMLS</td> <td>0.47</td> <td>0.34</td> <td>0.39</td> </tr> <tr> <td>Twitter || Technique || CRF</td> <td>0.67</td> <td>0.42</td> <td>0.51</td> </tr> <tr> <td>Twitter || Technique || BLSTM-RNN</td> <td>0.61</td> <td>0.87</td> <td>0.72</td> </tr> <tr> <td>Twitter || Technique || CRF+VAE</td> <td>0.68</td> <td>0.49</td> <td>0.57</td> </tr> <tr> <td>Twitter || Technique || BLSTM-RNN+VAE</td> <td>0.71</td> <td>0.85</td> <td>0.77</td> </tr> <tr> <td>Reddit || Technique || QuickUMLS</td> <td>0.14</td> <td>0.21</td> <td>0.17</td> </tr> <tr> <td>Reddit || Technique || CRF</td> <td>0.72</td> <td>0.47</td> <td>0.57</td> </tr> <tr> <td>Reddit || Technique || BLSTM-RNN</td> <td>0.67</td> <td>0.28</td> <td>0.39</td> </tr> <tr> <td>Reddit || Technique || CRF+VAE</td> <td>0.69</td> <td>0.52</td> <td>0.6</td> </tr> <tr> <td>Reddit || Technique || BLSTM-RNN+VAE</td> <td>0.63</td> <td>0.29</td> <td>0.4</td> </tr> </tbody></table>
Table 4
table_4
D19-1239
7
emnlp2019
5 Results and Discussions 5.1 Comparison with ADR Detectors . In the first experiment, we compare our approach (i.e. trained with 100% of the labeled training data, with 1 sample generated for each sample in the LC) against the different ADR detector techniques described in Section 4.3. Table 4 reports the precision, recall, and F1-measure of all the baselines in comparison to the proposed approach CRF+VAE on the Twitter and Reddit datasets. We make the following observations: QuickUMLS is outperformed by all the other methods. The result shows that dictionary-based approaches are not able to cover concepts that do not have a reference in the UMLS dictionary, and produce false positives by labeling irrelevant words such as “maybe”, “energy”, “condition”, “illness”, or “worse” as positive.
[2, 2, 1, 1, 2]
['5 Results and Discussions 5.1 Comparison with ADR Detectors .', 'In the first experiment, we compare our approach (i.e. trained with 100% of the labeled training data, with 1 sample generated for each sample in the LC) against the different ADR detector techniques described in Section 4.3.', 'Table 4 reports the precision, recall, and F1-measure of all the baselines in comparison to the proposed approach CRF+VAE on the Twitter and Reddit datasets.', 'We make the following observations: QuickUMLS is outperformed by all the other methods.', 'The result shows that dictionary-based approaches are not able to cover concepts that do not have a reference in the UMLS dictionary, and produce false positives by labeling irrelevant words such as “maybe”, “energy”, “condition”, “illness”, or “worse” as positive.']
[None, None, ['CRF+VAE', 'Precision', 'Recall', 'Fscore'], ['QuickUMLS'], None]
1
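The F-scores in the record above are the harmonic mean of precision and recall. A minimal sketch, which also reproduces the QuickUMLS-on-Reddit entry from its precision and recall:

```python
# F1: harmonic mean of precision and recall, as reported in the tables.
def f1_score(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# QuickUMLS on Reddit: P=0.14, R=0.21 yields F1 ~ 0.17.
print(round(f1_score(0.14, 0.21), 2))  # -> 0.17
```

This is how the `0.17` Fscore entry for QuickUMLS on Reddit follows from its precision (0.14) and recall (0.21).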
D19-1247table_6
Performance with and without the pre-trained KB embeddings from TransE.
4
[['Model', 'Elsahar et al. (2018)', 'TransE', 'TRUE'], ['Model', 'Elsahar et al. (2018)', 'TransE', 'FALSE'], ['Model', 'Our Model ans loss', 'TransE', 'TRUE'], ['Model', 'Our Model ans loss', 'TransE', 'FALSE']]
1
[['BLEU4'], ['ROUGE-L'], ['METEOR']]
[['36.56', '58.09', '34.41'], ['33.67', '55.57', '33.2'], ['41.72', '69.31', '48.13'], ['41.55', '68.59', '47.52']]
column
['BLEU4', 'ROUGE-L', 'METEOR']
['Our Model ans loss']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU4</th> <th>ROUGE-L</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || Elsahar et al. (2018) || TransE || TRUE</td> <td>36.56</td> <td>58.09</td> <td>34.41</td> </tr> <tr> <td>Model || Elsahar et al. (2018) || TransE || FALSE</td> <td>33.67</td> <td>55.57</td> <td>33.2</td> </tr> <tr> <td>Model || Our Model ans loss || TransE || TRUE</td> <td>41.72</td> <td>69.31</td> <td>48.13</td> </tr> <tr> <td>Model || Our Model ans loss || TransE || FALSE</td> <td>41.55</td> <td>68.59</td> <td>47.52</td> </tr> </tbody></table>
Table 6
table_6
D19-1247
8
emnlp2019
4.7.1 Without Pre-trained KB Embeddings. Pre-trained KB embeddings may provide rich structured relational information among entities. However, this heavily relies on large-scale triplets, which is time- and resource-intensive. To investigate the effectiveness of pre-trained KB embeddings for KBQG, we report the performance of KBQG with and without pre-trained KB embeddings, obtained by simply applying TransE. Table 6 shows that the performance of KBQG is degraded without TransE embeddings. In comparison, Elsahar et al. (2018) show an obvious degradation on all metrics, while there is only a slight decline in our model. We believe that this may be owed to the context-augmented fact encoder, since our model drops to 40.87 on the BLEU4 score without the context-augmented fact encoder and TransE embeddings.
[2, 2, 2, 2, 1, 1, 1]
['4.7.1 Without Pre-trained KB Embeddings.', 'Pre-trained KB embeddings may provide rich structured relational information among entities.', 'However, this heavily relies on large-scale triplets, which is time- and resource-intensive.', 'To investigate the effectiveness of pre-trained KB embeddings for KBQG, we report the performance of KBQG with and without pre-trained KB embeddings, obtained by simply applying TransE.', 'Table 6 shows that the performance of KBQG is degraded without TransE embeddings.', 'In comparison, Elsahar et al. (2018) show an obvious degradation on all metrics, while there is only a slight decline in our model.', 'We believe that this may be owed to the context-augmented fact encoder, since our model drops to 40.87 on the BLEU4 score without the context-augmented fact encoder and TransE embeddings.']
[None, None, None, ['TransE'], ['TransE'], ['Our Model ans loss', 'Elsahar et al. (2018)'], ['Our Model ans loss', 'BLEU4']]
1
D19-1252table_3
Results on XQA. The average column is the average of the fr and de results.
2
[['Machine translate at training (TRANSLATE-TRAIN)', 'XLM (Lample and Conneau, 2019)'], ['Machine translate at training (TRANSLATE-TRAIN)', 'Unicoder'], ['Evaluation of cross-lingual sentence encoders (Cross-lingual TEST)', 'XLM (Lample and Conneau, 2019)'], ['Evaluation of cross-lingual sentence encoders (Cross-lingual TEST)', 'Unicoder'], ['Multi-language Fine-tuning', 'BERT (Devlin et al., 2018)'], ['Multi-language Fine-tuning', 'XLM (Lample and Conneau, 2019)'], ['Multi-language Fine-tuning', 'Unicoder']]
1
[['en'], ['fr'], ['de'], ['average']]
[['80.2', '65.1', '63.3', '64.2'], ['81.1', '66.2', '66.5', '66.4'], ['80.2', '62.3', '61.7', '62'], ['81.1', '64.1', '63.7', '63.9'], ['76.4', '61.6', '64.6', '63.1'], ['80.7', '67.1', '68.2', '67.7'], ['81.4', '69.3', '70.1', '69.7']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Multi-language Fine-tuning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en</th> <th>fr</th> <th>de</th> <th>average</th> </tr> </thead> <tbody> <tr> <td>Machine translate at training (TRANSLATE-TRAIN) || XLM (Lample and Conneau, 2019)</td> <td>80.2</td> <td>65.1</td> <td>63.3</td> <td>64.2</td> </tr> <tr> <td>Machine translate at training (TRANSLATE-TRAIN) || Unicoder</td> <td>81.1</td> <td>66.2</td> <td>66.5</td> <td>66.4</td> </tr> <tr> <td>Evaluation of cross-lingual sentence encoders (Cross-lingual TEST) || XLM (Lample and Conneau, 2019)</td> <td>80.2</td> <td>62.3</td> <td>61.7</td> <td>62</td> </tr> <tr> <td>Evaluation of cross-lingual sentence encoders (Cross-lingual TEST) || Unicoder</td> <td>81.1</td> <td>64.1</td> <td>63.7</td> <td>63.9</td> </tr> <tr> <td>Multi-language Fine-tuning || BERT (Devlin et al., 2018)</td> <td>76.4</td> <td>61.6</td> <td>64.6</td> <td>63.1</td> </tr> <tr> <td>Multi-language Fine-tuning || XLM (Lample and Conneau, 2019)</td> <td>80.7</td> <td>67.1</td> <td>68.2</td> <td>67.7</td> </tr> <tr> <td>Multi-language Fine-tuning || Unicoder</td> <td>81.4</td> <td>69.3</td> <td>70.1</td> <td>69.7</td> </tr> </tbody></table>
Table 3
table_3
D19-1252
6
emnlp2019
Second, Multi-language fine-tuning is helpful for finding the relation between languages, as we analyze below. Table 3 shows it can bring a significant boost in cross-lingual language understanding performance. With the help of Multi-language fine-tuning, Unicoder is improved by 1.6% accuracy on XNLI and 3.3% on XQA. In Table 3, we show that Multi-language Fine-tuning on 15 languages is better than TRANSLATE-TRAIN, which only fine-tunes on 1 language. In this sub-section, we try more settings to analyze the relation between the number of languages and fine-tuning performance.
[2, 1, 1, 1, 2]
['Second, Multi-language fine-tuning is helpful for finding the relation between languages, as we analyze below.', 'Table 3 shows it can bring a significant boost in cross-lingual language understanding performance.', 'With the help of Multi-language fine-tuning, Unicoder is improved by 1.6% accuracy on XNLI and 3.3% on XQA.', 'In Table 3, we show that Multi-language Fine-tuning on 15 languages is better than TRANSLATE-TRAIN, which only fine-tunes on 1 language.', 'In this sub-section, we try more settings to analyze the relation between the number of languages and fine-tuning performance.']
[None, None, ['Multi-language Fine-tuning', 'Unicoder'], ['Multi-language Fine-tuning', 'Machine translate at training (TRANSLATE-TRAIN)'], None]
1
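Per the caption above, the "average" column in the XQA table is simply the mean of the fr and de scores. A one-line check against the XLM (TRANSLATE-TRAIN) row:

```python
# The "average" column is the mean of the fr and de accuracies,
# rounded to one decimal place as in the table.
def fr_de_average(fr: float, de: float) -> float:
    return round((fr + de) / 2, 1)

# XLM under TRANSLATE-TRAIN: fr=65.1, de=63.3 -> 64.2, matching the table.
print(fr_de_average(65.1, 63.3))  # -> 64.2
```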
D19-1254table_2
Performance of AdaMRC compared with baseline models on three datasets, using SAN as the MRC model.
3
[['Method', 'SQuAD → NewsQA', 'SAN'], ['Method', 'SQuAD → NewsQA', 'SynNet + SAN'], ['Method', 'SQuAD → NewsQA', 'AdaMRC'], ['Method', 'SQuAD → NewsQA', 'AdaMRC with GT questions'], ['Method', 'NewsQA → SQuAD', 'SAN'], ['Method', 'NewsQA → SQuAD', 'SynNet + SAN'], ['Method', 'NewsQA → SQuAD', 'AdaMRC'], ['Method', 'NewsQA → SQuAD', 'AdaMRC with GT questions'], ['Method', 'SQuAD → MS MARCO', 'SAN'], ['Method', 'SQuAD → MS MARCO', 'SynNet + SAN'], ['Method', 'SQuAD → MS MARCO', 'AdaMRC'], ['Method', 'SQuAD → MS MARCO', 'AdaMRC with GT questions'], ['Method', 'MS MARCO → SQuAD', 'SAN'], ['Method', 'MS MARCO → SQuAD', 'SynNet + SAN'], ['Method', 'MS MARCO → SQuAD', 'AdaMRC'], ['Method', 'MS MARCO → SQuAD', 'AdaMRC with GT questions']]
1
[['EM'], ['F1']]
[['36.68', '52.79'], ['35.19', '49.61'], ['38.46', '54.20'], ['39.37', '54.63'], ['56.83', '68.62'], ['50.34', '62.42'], ['58.20', '69.75'], ['58.82', '70.14'], ['13.06', '25.80'], ['12.52', '25.47'], ['14.09', '26.09'], ['15.59', '26.40'], ['27.06', '40.07'], ['23.67', '36.79'], ['27.92', '40.69'], ['27.79', '41.47']]
column
['EM', 'F1']
['AdaMRC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || SQuAD → NewsQA || SAN</td> <td>36.68</td> <td>52.79</td> </tr> <tr> <td>Method || SQuAD → NewsQA || SynNet + SAN</td> <td>35.19</td> <td>49.61</td> </tr> <tr> <td>Method || SQuAD → NewsQA || AdaMRC</td> <td>38.46</td> <td>54.20</td> </tr> <tr> <td>Method || SQuAD → NewsQA || AdaMRC with GT questions</td> <td>39.37</td> <td>54.63</td> </tr> <tr> <td>Method || NewsQA → SQuAD || SAN</td> <td>56.83</td> <td>68.62</td> </tr> <tr> <td>Method || NewsQA → SQuAD || SynNet + SAN</td> <td>50.34</td> <td>62.42</td> </tr> <tr> <td>Method || NewsQA → SQuAD || AdaMRC</td> <td>58.20</td> <td>69.75</td> </tr> <tr> <td>Method || NewsQA → SQuAD || AdaMRC with GT questions</td> <td>58.82</td> <td>70.14</td> </tr> <tr> <td>Method || SQuAD → MS MARCO || SAN</td> <td>13.06</td> <td>25.80</td> </tr> <tr> <td>Method || SQuAD → MS MARCO || SynNet + SAN</td> <td>12.52</td> <td>25.47</td> </tr> <tr> <td>Method || SQuAD → MS MARCO || AdaMRC</td> <td>14.09</td> <td>26.09</td> </tr> <tr> <td>Method || SQuAD → MS MARCO || AdaMRC with GT questions</td> <td>15.59</td> <td>26.40</td> </tr> <tr> <td>Method || MS MARCO → SQuAD || SAN</td> <td>27.06</td> <td>40.07</td> </tr> <tr> <td>Method || MS MARCO → SQuAD || SynNet + SAN</td> <td>23.67</td> <td>36.79</td> </tr> <tr> <td>Method || MS MARCO → SQuAD || AdaMRC</td> <td>27.92</td> <td>40.69</td> </tr> <tr> <td>Method || MS MARCO → SQuAD || AdaMRC with GT questions</td> <td>27.79</td> <td>41.47</td> </tr> </tbody></table>
Table 2
table_2
D19-1254
6
emnlp2019
Table 2 summarizes the experimental results. We observe that the proposed method consistently outperforms SAN and the SynNet+SAN model on all datasets. In the SQuAD → NewsQA setting, where the source-domain dataset is SQuAD and the target-domain dataset is NewsQA, AdaMRC achieves 38.46% and 54.20% in terms of EM and F1 scores, outperforming the pre-trained SAN by 1.78% (EM) and 1.41% (F1), respectively, as well as surpassing SynNet by 3.27% (EM) and 4.59% (F1), respectively. Similar improvements are also observed in the NewsQA → SQuAD, SQuAD → MS MARCO and MS MARCO → SQuAD settings, which demonstrates the effectiveness of the proposed model.
[1, 1, 1, 1]
['Table 2 summarizes the experimental results.', 'We observe that the proposed method consistently outperforms SAN and the SynNet+SAN model on all datasets.', 'In the SQuAD → NewsQA setting, where the source-domain dataset is SQuAD and the target-domain dataset is NewsQA, AdaMRC achieves 38.46% and 54.20% in terms of EM and F1 scores, outperforming the pre-trained SAN by 1.78% (EM) and 1.41% (F1), respectively, as well as surpassing SynNet by 3.27% (EM) and 4.59% (F1), respectively.', 'Similar improvements are also observed in the NewsQA → SQuAD, SQuAD → MS MARCO and MS MARCO → SQuAD settings, which demonstrates the effectiveness of the proposed model.']
[None, ['AdaMRC', 'AdaMRC with GT questions'], ['SQuAD → NewsQA', 'AdaMRC', 'EM', 'F1', 'SAN', 'SynNet + SAN'], ['AdaMRC', 'EM', 'F1', 'NewsQA → SQuAD', 'SQuAD → MS MARCO', 'MS MARCO → SQuAD']]
1
D19-1255table_3
Human evaluation of KEAG and state-of-the-art answer generation models. Scores range in [1, 5].
2
[['Model', 'gQA'], ['Model', 'gQA w/ KBLSTM'], ['Model', 'gQA w/ CRWE'], ['Model', 'MHPGM'], ['Model', 'KEAG']]
1
[['Syntactic'], ['Correct']]
[['3.78', '3.54'], ['3.98', '3.62'], ['3.91', '3.69'], ['4.1', '3.81'], ['4.18', '4.03']]
column
['Syntactic', 'Correct']
['KEAG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntactic</th> <th>Correct</th> </tr> </thead> <tbody> <tr> <td>Model || gQA</td> <td>3.78</td> <td>3.54</td> </tr> <tr> <td>Model || gQA w/ KBLSTM</td> <td>3.98</td> <td>3.62</td> </tr> <tr> <td>Model || gQA w/ CRWE</td> <td>3.91</td> <td>3.69</td> </tr> <tr> <td>Model || MHPGM</td> <td>4.1</td> <td>3.81</td> </tr> <tr> <td>Model || KEAG</td> <td>4.18</td> <td>4.03</td> </tr> </tbody></table>
Table 3
table_3
D19-1255
7
emnlp2019
Table 3 reports the human evaluation scores of KEAG and state-of-the-art answer generation models. The KEAG model surpasses all the others in generating correct answers syntactically and substantively. In terms of syntactic correctness, KEAG and MHPGM both perform well thanks to their architectures of composing answer text and integrating knowledge. On the other hand, KEAG significantly outperforms all compared models in generating substantively correct answers, which demonstrates its power in exploiting external knowledge.
[1, 1, 2, 1]
['Table 3 reports the human evaluation scores of KEAG and state-of-the-art answer generation models.', 'The KEAG model surpasses all the others in generating correct answers syntactically and substantively.', 'In terms of syntactic correctness, KEAG and MHPGM both perform well thanks to their architectures of composing answer text and integrating knowledge.', 'On the other hand, KEAG significantly outperforms all compared models in generating substantively correct answers, which demonstrates its power in exploiting external knowledge.']
[['KEAG', 'gQA', 'gQA w/ KBLSTM', 'gQA w/ CRWE', 'MHPGM'], ['KEAG', 'Correct'], ['KEAG', 'MHPGM'], ['KEAG', 'Correct']]
1
D19-1256table_3
Results on ARC Easy test
2
[['Model', 'Random guess'], ['Model', 'IR Solver'], ['Model', 'Reading Strategies (previous SOTA)'], ['Model', 'Attentive Ranker (ours)']]
1
[['Accuracy']]
[['25.00%'], ['62.55%'], ['68.90%'], ['72.30%']]
column
['Accuracy']
['Attentive Ranker (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Random guess</td> <td>25.00%</td> </tr> <tr> <td>Model || IR Solver</td> <td>62.55%</td> </tr> <tr> <td>Model || Reading Strategies (previous SOTA)</td> <td>68.90%</td> </tr> <tr> <td>Model || Attentive Ranker (ours)</td> <td>72.30%</td> </tr> </tbody></table>
Table 3
table_3
D19-1256
7
emnlp2019
Second, we verified the model performance on the ARC test sets in order to check how the model generalizes to unseen data and to compare it with other top models on the ARC public leaderboard (https://leaderboard.allenai.org/arc/submissions/public). A summary of the results is reported in Table 3 and Table 4. In both cases, our Attentive Ranker model outperforms the current state-of-the-art (SOTA) approach, proving that performing a semantic ranking is indeed very effective for QA systems.
[2, 1, 1]
['Second, we verified the model performance on the ARC test sets in order to check how the model generalizes to unseen data and to compare it with other top models on the ARC public leaderboard (https://leaderboard.allenai.org/arc/submissions/public).', 'A summary of the results is reported in Table 3 and Table 4.', 'In both cases, our Attentive Ranker model outperforms the current state-of-the-art (SOTA) approach, proving that performing a semantic ranking is indeed very effective for QA systems.']
[None, None, ['Attentive Ranker (ours)', 'Reading Strategies (previous SOTA)']]
1
D19-1256table_6
Downstream model performance on
6
[['Dataset', 'Val.', '# docs', 'Top 1', 'Ranking', 'TF-IDF'], ['Dataset', 'Val.', '# docs', 'Top 1', 'Ranking', 'Ours'], ['Dataset', 'Val.', '# docs', 'Top 10', 'Ranking', 'TF-IDF'], ['Dataset', 'Val.', '# docs', 'Top 10', 'Ranking', 'Ours'], ['Dataset', 'Test', '# docs', 'Top 1', 'Ranking', 'TF-IDF'], ['Dataset', 'Test', '# docs', 'Top 1', 'Ranking', 'Ours'], ['Dataset', 'Test', '# docs', 'Top 10', 'Ranking', 'TF-IDF'], ['Dataset', 'Test', '# docs', 'Top 10', 'Ranking', 'Ours']]
1
[['Accuracy (D)']]
[['35.59%'], ['38.3%(+2.71)'], ['35.93%'], ['43.72%(+7.79)'], ['34.93%'], ['37.51%(+3.58)'], ['37.08%'], ['40%(+2.92)']]
column
['Accuracy (D)']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (D)</th> </tr> </thead> <tbody> <tr> <td>Dataset || Val. || # docs || Top 1 || Ranking || TF-IDF</td> <td>35.59%</td> </tr> <tr> <td>Dataset || Val. || # docs || Top 1 || Ranking || Ours</td> <td>38.3%(+2.71)</td> </tr> <tr> <td>Dataset || Val. || # docs || Top 10 || Ranking || TF-IDF</td> <td>35.93%</td> </tr> <tr> <td>Dataset || Val. || # docs || Top 10 || Ranking || Ours</td> <td>43.72%(+7.79)</td> </tr> <tr> <td>Dataset || Test || # docs || Top 1 || Ranking || TF-IDF</td> <td>34.93%</td> </tr> <tr> <td>Dataset || Test || # docs || Top 1 || Ranking || Ours</td> <td>37.51%(+3.58)</td> </tr> <tr> <td>Dataset || Test || # docs || Top 10 || Ranking || TF-IDF</td> <td>37.08%</td> </tr> <tr> <td>Dataset || Test || # docs || Top 10 || Ranking || Ours</td> <td>40%(+2.92)</td> </tr> </tbody></table>
Table 6
table_6
D19-1256
8
emnlp2019
Table 6 shows that using documents ranked by our attentive neural network always leads to a performance increase in downstream models, compared to TF-IDF. On the validation set, the improvement is considerably higher (+7.79) due to a possible over-fitting of the hyperparameters during the Attentive Ranker's training.
[1, 1]
['Table 6 shows that using documents ranked by our attentive neural network always leads to a performance increase in downstream models, compared to TF-IDF.', "On the validation set, the improvement is considerably higher (+7.79) due to a possible over-fitting of the hyperparameters during the Attentive Ranker's training."]
[['Ours', 'TF-IDF'], None]
1