Dataset schema. Each record in the dump below lists its 21 field values in this order:

table_id_paper        string         lengths 15 to 15
caption               string         lengths 14 to 1.88k
row_header_level      int32          values 1 to 9
row_headers           large_string   lengths 15 to 1.75k
column_header_level   int32          values 1 to 6
column_headers        large_string   lengths 7 to 1.01k
contents              large_string   lengths 18 to 2.36k
metrics_loc           string         2 distinct values
metrics_type          large_string   lengths 5 to 532
target_entity         large_string   lengths 2 to 330
table_html_clean      large_string   lengths 274 to 7.88k
table_name            string         9 distinct values
table_id              string         9 distinct values
paper_id              string         lengths 8 to 8
page_no               int32          values 1 to 13
dir                   string         8 distinct values
description           large_string   lengths 103 to 3.8k
class_sentence        string         lengths 3 to 120
sentences             large_string   lengths 110 to 3.92k
header_mention        string         lengths 12 to 1.8k
valid                 int32          values 0 to 1
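The list-valued fields above (row_headers, column_headers, contents, class_sentence, sentences, header_mention) are serialized as Python-style literals inside strings. Below is a minimal parsing sketch, assuming a record is available as a plain dict of raw strings keyed by the field names above; the parse_record helper and the example dict are illustrative, not part of the dataset itself.

```python
import ast

# Fields whose string values encode Python list literals (per the schema above).
LIST_FIELDS = {"row_headers", "column_headers", "contents",
               "class_sentence", "sentences", "header_mention"}
INT_FIELDS = {"row_header_level", "column_header_level", "page_no", "valid"}

def parse_record(record):
    """Decode list-valued fields and cast integer fields of one raw record."""
    parsed = dict(record)
    for field, value in record.items():
        if field in LIST_FIELDS:
            parsed[field] = ast.literal_eval(value)   # e.g. "[['seq2seq', 'P'], ...]"
        elif field in INT_FIELDS:
            parsed[field] = int(value)
    return parsed

# Illustrative usage with a truncated record (values taken from the first record below).
example = {
    "table_id_paper": "P18-1097table_2",
    "column_header_level": "2",
    "column_headers": "[['seq2seq', 'P'], ['seq2seq', 'R'], ['seq2seq', 'F0.5']]",
    "valid": "1",
}
print(parse_record(example)["column_headers"][0])  # -> ['seq2seq', 'P']
```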
P18-1097table_2
Performance of seq2seq for GEC with different learning (row) and inference (column) methods on the CoNLL-2014 dataset. (+LM) denotes decoding with the RNN language model through shallow fusion. The last 3 systems (with ★) use the additional non-public Lang-8 data for training. • Whether the fluency boost learning mechanism is helpful for training the error correction model, and which of the strategies (back-boost, self-boost, dual-boost) is the most effective?
2
[['Model', 'normal seq2seq'], ['Model', 'back-boost'], ['Model', 'self-boost'], ['Model', 'dual-boost'], ['Model', 'back-boost (+native)'], ['Model', 'self-boost (+native)'], ['Model', 'dual-boost (+native)'], ['Model', 'back-boost (+native)★'], ['Model', 'self-boost (+native)★'], ['Model', 'dual-boost (+native)★']]
2
[['seq2seq', 'P'], ['seq2seq', 'R'], ['seq2seq', 'F0.5'], ['fluency boost', 'P'], ['fluency boost', 'R'], ['fluency boost', 'F0.5'], ['seq2seq (+LM)', 'P'], ['seq2seq (+LM)', 'R'], ['seq2seq (+LM)', 'F0.5'], ['fluency boost (+LM)', 'P'], ['fluency boost (+LM)', 'R'], ['fluency boost (+LM)', 'F0.5']]
[['61.06', '18.49', '41.81', '61.56', '18.85', '42.37', '61.75', '23.3', '46.42', '61.94', '23.7', '46.83'], ['61.66', '19.54', '43.09', '61.43', '19.61', '43.07', '61.47', '24.74', '47.4', '61.24', '25.01', '47.48'], ['61.64', '19.83', '43.35', '61.5', '19.9', '43.36', '62.13', '24.45', '47.49', '61.67', '24.76', '47.51'], ['62.03', '20.82', '44.44', '61.64', '21.19', '44.61', '62.22', '25.49', '48.3', '61.64', '26.45', '48.69'], ['63.93', '22.03', '46.31', '63.95', '22.12', '46.4', '62.04', '27.43', '49.54', '61.98', '27.7', '49.68'], ['64.33', '22.1', '46.54', '64.14', '22.19', '46.54', '62.18', '27.59', '49.71', '61.64', '28.37', '49.93'], ['65.77', '21.92', '46.98', '65.82', '22.14', '47.19', '62.64', '27.4', '49.83', '62.7', '27.69', '50.04'], ['67.37', '24.31', '49.75', '67.25', '24.35', '49.73', '64.61', '28.44', '51.51', '64.46', '28.78', '51.66'], ['66.52', '25.13', '50.03', '66.78', '25.33', '50.31', '63.82', '30.15', '52.17', '63.34', '31.63', '52.21'], ['66.34', '25.39', '50.16', '66.45', '25.51', '50.3', '64.72', '30.06', '52.59', '64.47', '30.48', '52.72']]
column
['P', 'R', 'F0.5', 'P', 'R', 'F0.5', 'P', 'R', 'F0.5', 'P', 'R', 'F0.5']
['fluency boost']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>seq2seq || P</th> <th>seq2seq || R</th> <th>seq2seq || F0.5</th> <th>fluency boost || P</th> <th>fluency boost || R</th> <th>fluency boost || F0.5</th> <th>seq2seq (+LM) || P</th> <th>seq2seq (+LM) || R</th> <th>seq2seq (+LM) || F0.5</th> <th>fluency boost (+LM) || P</th> <th>fluency boost (+LM) || R</th> <th>fluency boost (+LM) || F0.5</th> </tr> </thead> <tbody> <tr> <td>Model || normal seq2seq</td> <td>61.06</td> <td>18.49</td> <td>41.81</td> <td>61.56</td> <td>18.85</td> <td>42.37</td> <td>61.75</td> <td>23.3</td> <td>46.42</td> <td>61.94</td> <td>23.7</td> <td>46.83</td> </tr> <tr> <td>Model || back-boost</td> <td>61.66</td> <td>19.54</td> <td>43.09</td> <td>61.43</td> <td>19.61</td> <td>43.07</td> <td>61.47</td> <td>24.74</td> <td>47.4</td> <td>61.24</td> <td>25.01</td> <td>47.48</td> </tr> <tr> <td>Model || self-boost</td> <td>61.64</td> <td>19.83</td> <td>43.35</td> <td>61.5</td> <td>19.9</td> <td>43.36</td> <td>62.13</td> <td>24.45</td> <td>47.49</td> <td>61.67</td> <td>24.76</td> <td>47.51</td> </tr> <tr> <td>Model || dual-boost</td> <td>62.03</td> <td>20.82</td> <td>44.44</td> <td>61.64</td> <td>21.19</td> <td>44.61</td> <td>62.22</td> <td>25.49</td> <td>48.3</td> <td>61.64</td> <td>26.45</td> <td>48.69</td> </tr> <tr> <td>Model || back-boost (+native)</td> <td>63.93</td> <td>22.03</td> <td>46.31</td> <td>63.95</td> <td>22.12</td> <td>46.4</td> <td>62.04</td> <td>27.43</td> <td>49.54</td> <td>61.98</td> <td>27.7</td> <td>49.68</td> </tr> <tr> <td>Model || self-boost (+native)</td> <td>64.33</td> <td>22.1</td> <td>46.54</td> <td>64.14</td> <td>22.19</td> <td>46.54</td> <td>62.18</td> <td>27.59</td> <td>49.71</td> <td>61.64</td> <td>28.37</td> <td>49.93</td> </tr> <tr> <td>Model || dual-boost (+native)</td> <td>65.77</td> <td>21.92</td> <td>46.98</td> <td>65.82</td> <td>22.14</td> <td>47.19</td> <td>62.64</td> <td>27.4</td> <td>49.83</td> <td>62.7</td> <td>27.69</td> <td>50.04</td> </tr> <tr> <td>Model || back-boost (+native)★</td> <td>67.37</td> <td>24.31</td> <td>49.75</td> <td>67.25</td> <td>24.35</td> <td>49.73</td> <td>64.61</td> <td>28.44</td> <td>51.51</td> <td>64.46</td> <td>28.78</td> <td>51.66</td> </tr> <tr> <td>Model || self-boost (+native)★</td> <td>66.52</td> <td>25.13</td> <td>50.03</td> <td>66.78</td> <td>25.33</td> <td>50.31</td> <td>63.82</td> <td>30.15</td> <td>52.17</td> <td>63.34</td> <td>31.63</td> <td>52.21</td> </tr> <tr> <td>Model || dual-boost (+native)★</td> <td>66.34</td> <td>25.39</td> <td>50.16</td> <td>66.45</td> <td>25.51</td> <td>50.3</td> <td>64.72</td> <td>30.06</td> <td>52.59</td> <td>64.47</td> <td>30.48</td> <td>52.72</td> </tr> </tbody></table>
Table 2
table_2
P18-1097
6
acl2018
The effectiveness of various inference approaches can be observed by comparing the results in Table 2 by column. Compared to the normal seq2seq inference and seq2seq (+LM) baselines, fluency boost inference brings about on average 0.14 and 0.18 gain on F0.5 respectively, which is a significant improvement, demonstrating that multi-round editing by fluency boost inference is effective. Take our best system (the last row in Table 2) as an example, among 1,312 sentences in the CoNLL-2014 dataset, seq2seq inference with shallow fusion LM edits 566 sentences. In contrast, fluency boost inference additionally edits 23 sentences during the second round inference, improving F0.5 from 52.59 to 52.72.
[1, 1, 1, 1]
['The effectiveness of various inference approaches can be observed by comparing the results in Table 2 by column.', 'Compared to the normal seq2seq inference and seq2seq (+LM) baselines, fluency boost inference brings about on average 0.14 and 0.18 gain on F0.5 respectively, which is a significant improvement, demonstrating that multi-round editing by fluency boost inference is effective.', 'Take our best system (the last row in Table 2) as an example, among 1,312 sentences in the CoNLL-2014 dataset, seq2seq inference with shallow fusion LM edits 566 sentences.', 'In contrast, fluency boost inference additionally edits 23 sentences during the second round inference, improving F0.5 from 52.59 to 52.72.']
[None, ['seq2seq', 'seq2seq (+LM)', 'fluency boost', 'fluency boost (+LM)', 'F0.5'], ['self-boost (+native)★', 'dual-boost (+native)★'], ['fluency boost (+LM)', 'F0.5', 'dual-boost (+native)★']]
1
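The description above states that fluency boost inference brings average F0.5 gains of 0.14 (without LM) and 0.18 (with LM) over the corresponding seq2seq baselines. That arithmetic can be re-derived directly from the contents field of this record; the snippet below is only a verification sketch, with the 10x12 value grid copied verbatim from the record.

```python
# F0.5 sits at column indices 2 (seq2seq), 5 (fluency boost),
# 8 (seq2seq +LM) and 11 (fluency boost +LM) in each row.
contents = [
    ['61.06', '18.49', '41.81', '61.56', '18.85', '42.37', '61.75', '23.3', '46.42', '61.94', '23.7', '46.83'],
    ['61.66', '19.54', '43.09', '61.43', '19.61', '43.07', '61.47', '24.74', '47.4', '61.24', '25.01', '47.48'],
    ['61.64', '19.83', '43.35', '61.5', '19.9', '43.36', '62.13', '24.45', '47.49', '61.67', '24.76', '47.51'],
    ['62.03', '20.82', '44.44', '61.64', '21.19', '44.61', '62.22', '25.49', '48.3', '61.64', '26.45', '48.69'],
    ['63.93', '22.03', '46.31', '63.95', '22.12', '46.4', '62.04', '27.43', '49.54', '61.98', '27.7', '49.68'],
    ['64.33', '22.1', '46.54', '64.14', '22.19', '46.54', '62.18', '27.59', '49.71', '61.64', '28.37', '49.93'],
    ['65.77', '21.92', '46.98', '65.82', '22.14', '47.19', '62.64', '27.4', '49.83', '62.7', '27.69', '50.04'],
    ['67.37', '24.31', '49.75', '67.25', '24.35', '49.73', '64.61', '28.44', '51.51', '64.46', '28.78', '51.66'],
    ['66.52', '25.13', '50.03', '66.78', '25.33', '50.31', '63.82', '30.15', '52.17', '63.34', '31.63', '52.21'],
    ['66.34', '25.39', '50.16', '66.45', '25.51', '50.3', '64.72', '30.06', '52.59', '64.47', '30.48', '52.72'],
]

rows = [[float(x) for x in row] for row in contents]
avg_gain = sum(r[5] - r[2] for r in rows) / len(rows)        # fluency boost vs. seq2seq
avg_gain_lm = sum(r[11] - r[8] for r in rows) / len(rows)    # same comparison with +LM
print(round(avg_gain, 2), round(avg_gain_lm, 2))             # -> 0.14 0.18
```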
P18-1103table_1
Experimental results of DAM and other comparison approaches on Ubuntu Corpus V1 and Douban Conversation Corpus.
1
[['DualEncoderlstm'], ['DualEncoderbilstm'], ['MV-LSTM'], ['Match-LSTM'], ['Multiview'], ['DL2R'], ['SMNdynamic'], ['DAM'], ['DAMfirst'], ['DAMlast'], ['DAMself'], ['DAMcross']]
2
[['Ubuntu Corpus', 'R2@1'], ['Ubuntu Corpus', 'R10@1'], ['Ubuntu Corpus', 'R10@2'], ['Ubuntu Corpus', 'R10@5'], ['Douban Conversation Corpus', 'MAP'], ['Douban Conversation Corpus', 'MRR'], ['Douban Conversation Corpus', 'P@1'], ['Douban Conversation Corpus', 'R10@1'], ['Douban Conversation Corpus', 'R10@2'], ['Douban Conversation Corpus', 'R10@5']]
[['0.901', '0.638', '0.784', '0.949', '0.485', '0.527', '0.32', '0.187', '0.343', '0.72'], ['0.895', '0.63', '0.78', '0.944', '0.479', '0.514', '0.313', '0.184', '0.33', '0.716'], ['0.906', '0.653', '0.804', '0.946', '0.498', '0.538', '0.348', '0.202', '0.351', '0.71'], ['0.904', '0.653', '0.799', '0.944', '0.5', '0.537', '0.345', '0.202', '0.348', '0.72'], ['0.908', '0.662', '0.801', '0.951', '0.505', '0.543', '0.342', '0.202', '0.35', '0.729'], ['0.899', '0.626', '0.783', '0.944', '0.488', '0.527', '0.33', '0.193', '0.342', '0.705'], ['0.926', '0.726', '0.847', '0.961', '0.529', '0.569', '0.397', '0.233', '0.396', '0.724'], ['0.938', '0.767', '0.874', '0.969', '0.55', '0.601', '0.427', '0.254', '0.41', '0.757'], ['0.927', '0.736', '0.854', '0.962', '0.528', '0.579', '0.4', '0.229', '0.396', '0.741'], ['0.932', '0.752', '0.861', '0.965', '0.539', '0.583', '0.408', '0.242', '0.407', '0.748'], ['0.931', '0.741', '0.859', '0.964', '0.527', '0.574', '0.382', '0.221', '0.403', '0.75'], ['0.932', '0.749', '0.863', '0.966', '0.535', '0.585', '0.4', '0.234', '0.411', '0.733']]
column
['R2@1', 'R10@1', 'R10@2', 'R10@5', 'MAP', 'MRR', 'P@1', 'R10@1', 'R10@2', 'R10@5']
['DAM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ubuntu Corpus || R2@1</th> <th>Ubuntu Corpus || R10@1</th> <th>Ubuntu Corpus || R10@2</th> <th>Ubuntu Corpus || R10@5</th> <th>Douban Conversation Corpus || MAP</th> <th>Douban Conversation Corpus || MRR</th> <th>Douban Conversation Corpus || P@1</th> <th>Douban Conversation Corpus || R10@1</th> <th>Douban Conversation Corpus || R10@2</th> <th>Douban Conversation Corpus || R10@5</th> </tr> </thead> <tbody> <tr> <td>DualEncoderlstm</td> <td>0.901</td> <td>0.638</td> <td>0.784</td> <td>0.949</td> <td>0.485</td> <td>0.527</td> <td>0.32</td> <td>0.187</td> <td>0.343</td> <td>0.72</td> </tr> <tr> <td>DualEncoderbilstm</td> <td>0.895</td> <td>0.63</td> <td>0.78</td> <td>0.944</td> <td>0.479</td> <td>0.514</td> <td>0.313</td> <td>0.184</td> <td>0.33</td> <td>0.716</td> </tr> <tr> <td>MV-LSTM</td> <td>0.906</td> <td>0.653</td> <td>0.804</td> <td>0.946</td> <td>0.498</td> <td>0.538</td> <td>0.348</td> <td>0.202</td> <td>0.351</td> <td>0.71</td> </tr> <tr> <td>Match-LSTM</td> <td>0.904</td> <td>0.653</td> <td>0.799</td> <td>0.944</td> <td>0.5</td> <td>0.537</td> <td>0.345</td> <td>0.202</td> <td>0.348</td> <td>0.72</td> </tr> <tr> <td>Multiview</td> <td>0.908</td> <td>0.662</td> <td>0.801</td> <td>0.951</td> <td>0.505</td> <td>0.543</td> <td>0.342</td> <td>0.202</td> <td>0.35</td> <td>0.729</td> </tr> <tr> <td>DL2R</td> <td>0.899</td> <td>0.626</td> <td>0.783</td> <td>0.944</td> <td>0.488</td> <td>0.527</td> <td>0.33</td> <td>0.193</td> <td>0.342</td> <td>0.705</td> </tr> <tr> <td>SMNdynamic</td> <td>0.926</td> <td>0.726</td> <td>0.847</td> <td>0.961</td> <td>0.529</td> <td>0.569</td> <td>0.397</td> <td>0.233</td> <td>0.396</td> <td>0.724</td> </tr> <tr> <td>DAM</td> <td>0.938</td> <td>0.767</td> <td>0.874</td> <td>0.969</td> <td>0.55</td> <td>0.601</td> <td>0.427</td> <td>0.254</td> <td>0.41</td> <td>0.757</td> </tr> <tr> <td>DAMfirst</td> <td>0.927</td> <td>0.736</td> <td>0.854</td> <td>0.962</td> <td>0.528</td> <td>0.579</td> <td>0.4</td> <td>0.229</td> <td>0.396</td> <td>0.741</td> </tr> <tr> <td>DAMlast</td> <td>0.932</td> <td>0.752</td> <td>0.861</td> <td>0.965</td> <td>0.539</td> <td>0.583</td> <td>0.408</td> <td>0.242</td> <td>0.407</td> <td>0.748</td> </tr> <tr> <td>DAMself</td> <td>0.931</td> <td>0.741</td> <td>0.859</td> <td>0.964</td> <td>0.527</td> <td>0.574</td> <td>0.382</td> <td>0.221</td> <td>0.403</td> <td>0.75</td> </tr> <tr> <td>DAMcross</td> <td>0.932</td> <td>0.749</td> <td>0.863</td> <td>0.966</td> <td>0.535</td> <td>0.585</td> <td>0.4</td> <td>0.234</td> <td>0.411</td> <td>0.733</td> </tr> </tbody></table>
Table 1
table_1
P18-1103
6
acl2018
Table 1 shows the evaluation results of DAM as well as all comparison models. As demonstrated, DAM significantly outperforms other competitors on both Ubuntu Corpus and Douban Conversation Corpus, including SMNdynamic, which is the state-of-the-art baseline, demonstrating the superior power of attention mechanism in matching response with multi-turn context. Besides, both the performances of DAMfirst and DAMself decrease a lot compared with DAM, which shows the effectiveness of self-attention and cross-attention. Both DAMfirst and DAMlast underperform DAM, which demonstrates the benefits of using multigrained representations. Also the absence of self-attention-match brings down the precision, as shown in DAMcross, exhibiting the necessity of jointly considering textual relevance and dependency information in response selection. One notable point is that, while DAMfirst is able to achieve close performance to SMNdynamic, it is about 2.3 times faster than SMNdynamic in our implementation as it is very simple in computation. We believe that DAMfirst is more suitable to the scenario that has limitations in computation time or memories but requires high precision, such as industry application or working as a component in other neural networks like GANs.
[1, 1, 1, 1, 1, 1, 2]
['Table 1 shows the evaluation results of DAM as well as all comparison models.', 'As demonstrated, DAM significantly outperforms other competitors on both Ubuntu Corpus and Douban Conversation Corpus, including SMNdynamic, which is the state-of-the-art baseline, demonstrating the superior power of attention mechanism in matching response with multi-turn context.', 'Besides, both the performances of DAMfirst and DAMself decrease a lot compared with DAM, which shows the effectiveness of self-attention and cross-attention.', 'Both DAMfirst and DAMlast underperform DAM, which demonstrates the benefits of using multigrained representations.', 'Also the absence of self-attention-match brings down the precision, as shown in DAMcross, exhibiting the necessity of jointly considering textual relevance and dependency information in response selection.', 'One notable point is that, while DAMfirst is able to achieve close performance to SMNdynamic, it is about 2.3 times faster than SMNdynamic in our implementation as it is very simple in computation.', 'We believe that DAMfirst is more suitable to the scenario that has limitations in computation time or memories but requires high precision, such as industry application or working as a component in other neural networks like GANs.']
[['DAM'], ['DAM', 'Ubuntu Corpus', 'Douban Conversation Corpus', 'SMNdynamic'], ['DAMfirst', 'DAMself', 'DAM'], ['DAMfirst', 'DAMlast', 'DAM'], ['DAMcross'], ['DAMfirst', 'SMNdynamic'], ['DAMfirst']]
1
P18-1108table_2
Test set performance comparison on the CTB dataset
3
[['Model', 'Single Model', 'Charniak (2000)'], ['Model', 'Single Model', 'Zhu et al. (2013)'], ['Model', 'Single Model', 'Wang et al. (2015)'], ['Model', 'Single Model', 'Watanabe and Sumita (2015)'], ['Model', 'Single Model', 'Dyer et al. (2016)'], ['Model', 'Single Model', 'Liu and Zhang (2017b)'], ['Model', 'Single Model', 'Liu and Zhang (2017a)'], ['Model', 'Our Model', '-'], ['Model', 'Semi-supervised', 'Zhu et al. (2013)'], ['Model', 'Semi-supervised', 'Wang and Xue (2014)'], ['Model', 'Semi-supervised', 'Wang et al. (2015)'], ['Model', 'Re-ranking', 'Charniak and Johnson (2005)'], ['Model', 'Re-ranking', 'Dyer et al. (2016)']]
1
[['LP'], ['LR'], ['F1']]
[['82.1', '79.6', '80.8'], ['84.3', '82.1', '83.2'], ['-', '-', '83.2'], ['-', '-', '84.3'], ['-', '-', '84.6'], ['85.9', '85.2', '85.5'], ['-', '-', '86.1'], ['86.6', '86.4', '86.5'], ['86.8', '84.4', '85.6'], ['-', '-', '86.3'], ['-', '-', '86.6'], ['83.8', '80.8', '82.3'], ['-', '-', '86.9']]
column
['LP', 'LR', 'F1']
['Our Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LP</th> <th>LR</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Single Model || Charniak (2000)</td> <td>82.1</td> <td>79.6</td> <td>80.8</td> </tr> <tr> <td>Model || Single Model || Zhu et al. (2013)</td> <td>84.3</td> <td>82.1</td> <td>83.2</td> </tr> <tr> <td>Model || Single Model || Wang et al. (2015)</td> <td>-</td> <td>-</td> <td>83.2</td> </tr> <tr> <td>Model || Single Model || Watanabe and Sumita (2015)</td> <td>-</td> <td>-</td> <td>84.3</td> </tr> <tr> <td>Model || Single Model || Dyer et al. (2016)</td> <td>-</td> <td>-</td> <td>84.6</td> </tr> <tr> <td>Model || Single Model || Liu and Zhang (2017b)</td> <td>85.9</td> <td>85.2</td> <td>85.5</td> </tr> <tr> <td>Model || Single Model || Liu and Zhang (2017a)</td> <td>-</td> <td>-</td> <td>86.1</td> </tr> <tr> <td>Model || Our Model || -</td> <td>86.6</td> <td>86.4</td> <td>86.5</td> </tr> <tr> <td>Model || Semi-supervised || Zhu et al. (2013)</td> <td>86.8</td> <td>84.4</td> <td>85.6</td> </tr> <tr> <td>Model || Semi-supervised || Wang and Xue (2014)</td> <td>-</td> <td>-</td> <td>86.3</td> </tr> <tr> <td>Model || Semi-supervised || Wang et al. (2015)</td> <td>-</td> <td>-</td> <td>86.6</td> </tr> <tr> <td>Model || Re-ranking || Charniak and Johnson (2005)</td> <td>83.8</td> <td>80.8</td> <td>82.3</td> </tr> <tr> <td>Model || Re-ranking || Dyer et al. (2016)</td> <td>-</td> <td>-</td> <td>86.9</td> </tr> </tbody></table>
Table 2
table_2
P18-1108
6
acl2018
Table 2 reports our results compared to other benchmarks. To the best of our knowledge, we set a new state-of-the-art for single-model parsing, achieving 86.5 F1 on the test set.
[1, 1]
['Table 2 reports our results compared to other benchmarks.', 'To the best of our knowledge, we set a new state-of-the-art for single-model parsing, achieving 86.5 F1 on the test set.']
[None, ['Our Model', 'F1']]
1
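A record's row_headers, column_headers, and contents line up one-to-one with the rows and columns of table_html_clean, so the table can be rebuilt without touching the HTML. A small sketch using pandas (an assumption on my part, not a loader shipped with the data), restricted to a few rows of the P18-1108 record above:

```python
import pandas as pd

# A subset of rows copied from the record above: the first three single-model
# baselines plus the paper's own model.
row_headers = [
    ['Model', 'Single Model', 'Charniak (2000)'],
    ['Model', 'Single Model', 'Zhu et al. (2013)'],
    ['Model', 'Single Model', 'Wang et al. (2015)'],
    ['Model', 'Our Model', '-'],
]
column_headers = [['LP'], ['LR'], ['F1']]
contents = [
    ['82.1', '79.6', '80.8'],
    ['84.3', '82.1', '83.2'],
    ['-', '-', '83.2'],
    ['86.6', '86.4', '86.5'],
]

index = pd.MultiIndex.from_tuples([tuple(h) for h in row_headers])  # 3 header levels
columns = [h[0] for h in column_headers]                            # 1 header level
df = pd.DataFrame(contents, index=index, columns=columns)
print(df.loc[('Model', 'Our Model', '-'), 'F1'])  # -> 86.5
```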
P18-1110table_4
Performance of RSP on QBANKDEV.
5
[['Training Data', 'WSJ', '40k', 'QBANK', '0'], ['Training Data', 'WSJ', '0', 'QBANK', '2k'], ['Training Data', 'WSJ', '40k', 'QBANK', '2k'], ['Training Data', 'WSJ', '40k', 'QBANK', '50'], ['Training Data', 'WSJ', '40k', 'QBANK', '100'], ['Training Data', 'WSJ', '40k', 'QBANK', '400']]
1
[['Rec.'], ['Prec.'], ['F1']]
[['91.07', '88.77', '89.91'], ['94.44', '96.23', '95.32'], ['95.84', '97.02', '96.43'], ['93.85', '95.91', '94.87'], ['95.08', '96.06', '95.57'], ['94.94', '97.05', '95.99']]
column
['Rec.', 'Prec.', 'F1']
['F1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec.</th> <th>Prec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Training Data || WSJ || 40k || QBANK || 0</td> <td>91.07</td> <td>88.77</td> <td>89.91</td> </tr> <tr> <td>Training Data || WSJ || 0 || QBANK || 2k</td> <td>94.44</td> <td>96.23</td> <td>95.32</td> </tr> <tr> <td>Training Data || WSJ || 40k || QBANK || 2k</td> <td>95.84</td> <td>97.02</td> <td>96.43</td> </tr> <tr> <td>Training Data || WSJ || 40k || QBANK || 50</td> <td>93.85</td> <td>95.91</td> <td>94.87</td> </tr> <tr> <td>Training Data || WSJ || 40k || QBANK || 100</td> <td>95.08</td> <td>96.06</td> <td>95.57</td> </tr> <tr> <td>Training Data || WSJ || 40k || QBANK || 400</td> <td>94.94</td> <td>97.05</td> <td>95.99</td> </tr> </tbody></table>
Table 4
table_4
P18-1110
5
acl2018
Surprisingly, with only 50 annotated questions (see Table 4), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%. This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.
[1, 1]
['Surprisingly, with only 50 annotated questions (see Table 4), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.', 'This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.']
[['F1', 'WSJ', 'QBANK'], ['F1', 'WSJ', 'QBANK']]
1
P18-1110table_5
Performance of RSP on GENIADEV.
5
[['Training Data', 'WSJ', '40k', 'GENIA', '0'], ['Training Data', 'WSJ', '0', 'GENIA', '14k'], ['Training Data', 'WSJ', '40k', 'GENIA', '14k'], ['Training Data', 'WSJ', '40k', 'GENIA', '50'], ['Training Data', 'WSJ', '40k', 'GENIA', '100'], ['Training Data', 'WSJ', '40k', 'GENIA', '400']]
1
[['Rec.'], ['Prec.'], ['F1']]
[['72.51', '88.84', '79.85'], ['88.04', '92.3', '90.12'], ['88.24', '92.33', '90.24'], ['82.3', '90.55', '86.23'], ['83.94', '89.97', '86.85'], ['85.52', '91.01', '88.18']]
column
['Rec.', 'Prec.', 'F1']
['F1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec.</th> <th>Prec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Training Data || WSJ || 40k || GENIA || 0</td> <td>72.51</td> <td>88.84</td> <td>79.85</td> </tr> <tr> <td>Training Data || WSJ || 0 || GENIA || 14k</td> <td>88.04</td> <td>92.3</td> <td>90.12</td> </tr> <tr> <td>Training Data || WSJ || 40k || GENIA || 14k</td> <td>88.24</td> <td>92.33</td> <td>90.24</td> </tr> <tr> <td>Training Data || WSJ || 40k || GENIA || 50</td> <td>82.3</td> <td>90.55</td> <td>86.23</td> </tr> <tr> <td>Training Data || WSJ || 40k || GENIA || 100</td> <td>83.94</td> <td>89.97</td> <td>86.85</td> </tr> <tr> <td>Training Data || WSJ || 40k || GENIA || 400</td> <td>85.52</td> <td>91.01</td> <td>88.18</td> </tr> </tbody></table>
Table 5
table_5
P18-1110
5
acl2018
On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005), we see a similar, if somewhat less dramatic, trend. See Table 5. With 50 annotated sentences, performance on GENIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010), the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed. That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.
[1, 1, 1, 1]
['On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005), we see a similar, if somewhat less dramatic, trend.', 'See Table 5.', "With 50 annotated sentences, performance on GENIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010), the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", 'That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.']
[['GENIA'], None, ['GENIA', 'F1'], ['GENIA', 'F1']]
1
P18-1111table_2
Results of the proposed method and the baselines on the SemEval 2013 task.
3
[['Method', 'Baselines', 'SFS (Versley 2013)'], ['Method', 'Baselines', 'IIITH (Surtani et al. 2013)'], ['Method', 'Baselines', 'MELODI (Van de Cruys et al. 2013)'], ['Method', 'Baselines', 'SemEval 2013 Baseline (Hendrickx et al. 2013)'], ['Method', 'This paper', 'Baseline'], ['Method', 'This paper', 'Our method']]
1
[['isomorphic'], ['non-isomorphic']]
[['23.1', '17.9'], ['23.1', '25.8'], ['13', '54.8'], ['13.8', '40.6'], ['3.8', '16.1'], ['28.2', '28.4']]
column
['F1', 'F1']
['Our method']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>isomorphic</th> <th>non-isomorphic</th> </tr> </thead> <tbody> <tr> <td>Method || Baselines || SFS (Versley 2013)</td> <td>23.1</td> <td>17.9</td> </tr> <tr> <td>Method || Baselines || IIITH (Surtani et al. 2013)</td> <td>23.1</td> <td>25.8</td> </tr> <tr> <td>Method || Baselines || MELODI (Van de Cruys et al. 2013)</td> <td>13</td> <td>54.8</td> </tr> <tr> <td>Method || Baselines || SemEval 2013 Baseline (Hendrickx et al. 2013)</td> <td>13.8</td> <td>40.6</td> </tr> <tr> <td>Method || This paper || Baseline</td> <td>3.8</td> <td>16.1</td> </tr> <tr> <td>Method || This paper || Our method</td> <td>28.2</td> <td>28.4</td> </tr> </tbody></table>
Table 2
table_2
P18-1111
7
acl2018
Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings. Our method outperforms all the methods in the isomorphic setting. In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision. The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.
[1, 1, 1, 2]
['Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.', 'Our method outperforms all the methods in the isomorphic setting.', 'In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.', 'The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.']
[['Our method', 'Baselines'], ['Our method', 'Baselines', 'isomorphic'], ['Our method', 'SFS (Versley 2013)', 'IIITH (Surtani et al. 2013)', 'non-isomorphic'], ['Our method', 'Baseline']]
1
P18-1111table_4
Classification results. For each dataset split, the top part consists of baseline methods and the bottom part of methods from this paper. The best performance in each part appears in bold.
4
[['Dataset & Split', 'Tratz fine Random', 'Method', 'Tratz and Hovy (2010)'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'Dima (2016)'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'Shwartz and Waterson (2018)'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'distributional'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'paraphrase'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'integrated'], ['Dataset & Split', 'Tratz fine Lexical', 'Method', 'Tratz and Hovy (2010)'], ['Dataset & Split', 'Tratz fine Lexical', 'Method', 'Dima (2016)'], ['Dataset & Split', 'Tratz fine Lexical', 'Method', 'Shwartz and Waterson (2018)'], ['Dataset & Split', 'Tratz fine Lexical', 'Method', 'distributional'], ['Dataset & Split', 'Tratz fine Lexical', 'Method', 'paraphrase'], ['Dataset & Split', 'Tratz fine Lexical', 'Method', 'integrated'], ['Dataset & Split', 'Tratz coarse Random', 'Method', 'Tratz and Hovy (2010)'], ['Dataset & Split', 'Tratz coarse Random', 'Method', 'Dima (2016)'], ['Dataset & Split', 'Tratz coarse Random', 'Method', 'Shwartz and Waterson (2018)'], ['Dataset & Split', 'Tratz coarse Random', 'Method', 'distributional'], ['Dataset & Split', 'Tratz coarse Random', 'Method', 'paraphrase'], ['Dataset & Split', 'Tratz coarse Random', 'Method', 'integrated'], ['Dataset & Split', 'Tratz coarse Lexical', 'Method', 'Tratz and Hovy (2010)'], ['Dataset & Split', 'Tratz coarse Lexical', 'Method', 'Dima (2016)'], ['Dataset & Split', 'Tratz coarse Lexical', 'Method', 'Shwartz and Waterson (2018)'], ['Dataset & Split', 'Tratz coarse Lexical', 'Method', 'distributional'], ['Dataset & Split', 'Tratz coarse Lexical', 'Method', 'paraphrase'], ['Dataset & Split', 'Tratz coarse Lexical', 'Method', 'integrated']]
1
[['F1']]
[['0.739'], ['0.725'], ['0.714'], ['0.677'], ['0.505'], ['0.673'], ['0.34'], ['0.334'], ['0.429'], ['0.356'], ['0.333'], ['0.37'], ['0.76'], ['0.775'], ['0.736'], ['0.689'], ['0.557'], ['0.7'], ['0.391'], ['0.372'], ['0.478'], ['0.37'], ['0.345'], ['0.393']]
column
['F1']
['integrated']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset &amp; Split || Tratz fine Random || Method || Tratz and Hovy (2010)</td> <td>0.739</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Random || Method || Dima (2016)</td> <td>0.725</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Random || Method || Shwartz and Waterson (2018)</td> <td>0.714</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Random || Method || distributional</td> <td>0.677</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Random || Method || paraphrase</td> <td>0.505</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Random || Method || integrated</td> <td>0.673</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Lexical || Method || Tratz and Hovy (2010)</td> <td>0.34</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Lexical || Method || Dima (2016)</td> <td>0.334</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Lexical || Method || Shwartz and Waterson (2018)</td> <td>0.429</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Lexical || Method || distributional</td> <td>0.356</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Lexical || Method || paraphrase</td> <td>0.333</td> </tr> <tr> <td>Dataset &amp; Split || Tratz fine Lexical || Method || integrated</td> <td>0.37</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Random || Method || Tratz and Hovy (2010)</td> <td>0.76</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Random || Method || Dima (2016)</td> <td>0.775</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Random || Method || Shwartz and Waterson (2018)</td> <td>0.736</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Random || Method || distributional</td> <td>0.689</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Random || Method || paraphrase</td> <td>0.557</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Random || Method || integrated</td> <td>0.7</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Lexical || Method || Tratz and Hovy (2010)</td> <td>0.391</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Lexical || Method || Dima (2016)</td> <td>0.372</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Lexical || Method || Shwartz and Waterson (2018)</td> <td>0.478</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Lexical || Method || distributional</td> <td>0.37</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Lexical || Method || paraphrase</td> <td>0.345</td> </tr> <tr> <td>Dataset &amp; Split || Tratz coarse Lexical || Method || integrated</td> <td>0.393</td> </tr> </tbody></table>
Table 4
table_4
P18-1111
8
acl2018
Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits. The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods. The contribution of the paraphrase component is especially noticeable in the lexical splits. As expected, the integrated method in Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model. The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.
[1, 1, 1, 1, 2]
["Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", 'The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.', 'The contribution of the paraphrase component is especially noticeable in the lexical splits.', 'As expected, the integrated method in Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.', 'The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.']
[['Tratz fine Random', 'Tratz fine Lexical', 'Tratz coarse Random', 'Tratz coarse Lexical'], ['paraphrase', 'distributional', 'integrated'], ['Tratz coarse Lexical', 'Tratz fine Lexical'], ['Shwartz and Waterson (2018)', 'Tratz coarse Lexical', 'integrated', 'Tratz fine Lexical'], ['Tratz coarse Lexical', 'integrated', 'Tratz fine Lexical']]
1
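The description of the record above claims that the integrated model improves on the distributional model in 3 of the 4 settings; this can be checked mechanically against the F1 values in the table. The dictionary below is my own restructuring of those values, for verification only.

```python
# F1 of the distributional vs. integrated variants per split, from the table above.
f1 = {
    "Tratz fine Random":    {"distributional": 0.677, "integrated": 0.673},
    "Tratz fine Lexical":   {"distributional": 0.356, "integrated": 0.370},
    "Tratz coarse Random":  {"distributional": 0.689, "integrated": 0.700},
    "Tratz coarse Lexical": {"distributional": 0.370, "integrated": 0.393},
}
wins = [split for split, v in f1.items() if v["integrated"] > v["distributional"]]
print(f"{len(wins)} of {len(f1)} settings: {wins}")
# -> 3 of 4 settings (all except Tratz fine Random)
```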
P18-1112table_3
With and without sentiment
4
[['Corpus', 'Objective', 'Subcorpus Sentiment?', 'With'], ['Corpus', 'Objective', 'Subcorpus Sentiment?', 'Without'], ['Corpus', 'Subjective', 'Subcorpus Sentiment?', 'With'], ['Corpus', 'Subjective', 'Subcorpus Sentiment?', 'Without'], ['Random Embeddings', '-', '-', '-']]
2
[['Sentiment', 'Amazon'], ['Sentiment', 'RT'], ['Subjectivity', '-'], ['Topic', '-']]
[['81.8', '75.2', '90.7', '83.1'], ['76.1', '67.2', '87.8', '82.6'], ['85.5', '78.0', '90.3', '82.5'], ['79.8', '71.0', '89.1', '82.2'], ['76.1', '62.2', '80.1', '71.5']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['With', 'Without']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment || Amazon</th> <th>Sentiment || RT</th> <th>Subjectivity || -</th> <th>Topic || -</th> </tr> </thead> <tbody> <tr> <td>Corpus || Objective || Subcorpus Sentiment? || With</td> <td>81.8</td> <td>75.2</td> <td>90.7</td> <td>83.1</td> </tr> <tr> <td>Corpus || Objective || Subcorpus Sentiment? || Without</td> <td>76.1</td> <td>67.2</td> <td>87.8</td> <td>82.6</td> </tr> <tr> <td>Corpus || Subjective || Subcorpus Sentiment? || With</td> <td>85.5</td> <td>78.0</td> <td>90.3</td> <td>82.5</td> </tr> <tr> <td>Corpus || Subjective || Subcorpus Sentiment? || Without</td> <td>79.8</td> <td>71.0</td> <td>89.1</td> <td>82.2</td> </tr> <tr> <td>Random Embeddings || - || - || -</td> <td>76.1</td> <td>62.2</td> <td>80.1</td> <td>71.5</td> </tr> </tbody></table>
Table 3
table_3
P18-1112
5
acl2018
To control for the “amount” of sentiment in the Subjective and Objective corpora, we use the sentiment lexicon compiled by Hu and Liu (2004). For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement. We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments. Table 3 shows the results, including that of random word embeddings for reference. The sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification. Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon.
[2, 1, 2, 1, 1, 1]
['To control for the “amount” of sentiment in the Subjective and Objective corpora, we use the sentiment lexicon compiled by Hu and Liu (2004).', 'For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.', 'We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.', 'Table 3 shows the results, including that of random word embeddings for reference.', 'The sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.', 'Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon.']
[['Subjective', 'Objective'], ['Subcorpus Sentiment?', 'With', 'Without'], None, ['Random Embeddings'], ['Sentiment', 'Subjectivity', 'Topic'], ['Without', 'Subjective', 'Objective', 'Sentiment', 'Random Embeddings', 'Amazon']]
1
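The description of this record outlines how the With/Without Sentiment subcorpora were built: a sentence goes to the With subcorpus if it contains at least one lexicon word, to Without otherwise, and the larger side is then downsampled so both match in size. Below is a rough sketch of that procedure under those assumptions; the function, toy corpus, and toy lexicon are illustrative and not taken from the paper's code.

```python
import random

def split_by_lexicon(sentences, lexicon, seed=0):
    """Split tokenized sentences into (with_sentiment, without_sentiment),
    downsampling the larger subcorpus so both end up the same size."""
    with_sent = [s for s in sentences if any(tok in lexicon for tok in s)]
    without_sent = [s for s in sentences if not any(tok in lexicon for tok in s)]
    n = min(len(with_sent), len(without_sent))
    rng = random.Random(seed)
    return rng.sample(with_sent, n), rng.sample(without_sent, n)

# Toy usage (placeholder data, not the Hu and Liu (2004) lexicon).
lexicon = {"great", "terrible"}
corpus = [["a", "great", "movie"], ["the", "plot", "unfolds"], ["terrible", "acting"]]
with_s, without_s = split_by_lexicon(corpus, lexicon)
print(len(with_s), len(without_s))  # -> 1 1
```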
P18-1112table_4
Comparison of Sentiment-Infused Word Embeddings on Sentiment Classification Task
3
[['Corpus/Category', 'Amazon', 'Amazon Instant Video'], ['Corpus/Category', 'Amazon', 'Android Apps'], ['Corpus/Category', 'Amazon', 'Automotive'], ['Corpus/Category', 'Amazon', 'Baby'], ['Corpus/Category', 'Amazon', 'Beauty'], ['Corpus/Category', 'Amazon', 'Books'], ['Corpus/Category', 'Amazon', 'CD & Vinyl'], ['Corpus/Category', 'Amazon', 'Cell Phones'], ['Corpus/Category', 'Amazon', 'Clothing'], ['Corpus/Category', 'Amazon', 'Digital Music'], ['Corpus/Category', 'Amazon', 'Electronics'], ['Corpus/Category', 'Amazon', 'Grocery & Food'], ['Corpus/Category', 'Amazon', 'Health'], ['Corpus/Category', 'Amazon', 'Home & Kitchen'], ['Corpus/Category', 'Amazon', 'Kindle Store'], ['Corpus/Category', 'Amazon', 'Movies & TV'], ['Corpus/Category', 'Amazon', 'Musical Instruments'], ['Corpus/Category', 'Amazon', 'Office'], ['Corpus/Category', 'Amazon', 'Garden'], ['Corpus/Category', 'Amazon', 'Pet Supplies'], ['Corpus/Category', 'Amazon', 'Sports & Outdoors'], ['Corpus/Category', 'Amazon', 'Tools'], ['Corpus/Category', 'Amazon', 'Toys & Games'], ['Corpus/Category', 'Amazon', 'Video Games'], ['Corpus/Category', 'Average', '-'], ['Corpus/Category', 'Rotten Tomatoes', '-']]
3
[['Objective Embeddings', 'Word2Vec', '-'], ['Objective Embeddings', 'Retrofitting', '-'], ['Objective Embeddings', 'Refining', '-'], ['Objective Embeddings', 'SentiVec', 'Spherical'], ['Objective Embeddings', 'SentiVec', 'Logistic'], ['Subjective Embeddings', 'Word2Vec', '-'], ['Subjective Embeddings', 'Retrofitting', '-'], ['Subjective Embeddings', 'Refining', '-'], ['Subjective Embeddings', 'SentiVec', 'Spherical'], ['Subjective Embeddings', 'SentiVec', 'Logistic']]
[['84.1', '84.1', '81.9', '84.9*', '84.9*', '87.8', '87.8', '86.9', '88.1', '88.2'], ['83.0', '83.0', '80.9', '84.0*', '84.0*', '86.3', '86.3', '85.0', '86.6', '86.5'], ['80.7', '80.7', '78.8', '81.0', '81.3', '85.1', '85.1', '83.8', '84.9', '85.0'], ['80.9', '80.9', '78.6', '82.1', '82.2*', '84.2', '84.2', '82.8', '84.4', '84.6'], ['81.8', '81.8', '79.8', '82.4', '82.7*', '85.2', '85.2', '83.5', '85.2', '85.4'], ['80.9', '80.9', '78.9', '81.0', '81.3', '85.3', '85.3', '83.6', '85.3', '85.5'], ['79.4', '79.4', '77.6', '79.4', '79.9', '83.5', '83.5', '81.9', '83.7', '83.6'], ['82.2', '82.2', '80.0', '82.9', '83.0*', '86.8', '86.8', '85.3', '86.8', '87.0'], ['82.6', '82.6', '80.7', '83.8', '84.0*', '86.3', '86.3', '84.7', '86.4', '86.8'], ['82.3', '82.3', '80.5', '82.8', '83.0*', '86.3', '86.3', '84.6', '86.1', '86.3'], ['81.0', '81.0', '78.8', '80.9', '81.3', '85.2', '85.2', '83.6', '85.3', '85.3'], ['81.7', '81.7', '79.4', '83.1*', '83.1*', '85.0', '85.0', '83.7', '85.1', '85.6*'], ['79.7', '79.7', '77.9', '80.4*', '80.4', '84.0', '84.0', '82.3', '84.0', '84.3'], ['81.6', '81.6', '79.5', '82.1', '82.1', '85.4', '85.4', '83.9', '85.3', '85.4'], ['84.7', '84.7', '83.2', '85.2', '85.4*', '88.3', '88.3', '87.2', '88.3', '88.6'], ['81.4', '81.4', '78.5', '81.9', '81.9', '85.2', '85.2', '83.5', '85.4', '85.5'], ['81.7', '81.6', '79.7', '82.4', '82.4', '85.8', '85.8', '84.1', '85.9', '85.7'], ['82.0', '82.0', '80.0', '83.0*', '82.9', '86.1', '86.1', '84.5', '86.4', '86.5*'], ['80.4', '80.4', '77.9', '81.0', '81.5', '84.1', '84.1', '82.5', '84.3', '84.6*'], ['79.7', '79.7', '77.5', '80.4', '80.2', '83.2', '83.2', '81.5', '83.4', '83.8'], ['80.8', '80.8', '79.1', '81.3*', '81.2', '84.6', '84.6', '83.1', '84.3', '84.7'], ['81.0', '81.0', '79.3', '81.0', '81.3', '84.7', '84.7', '83.2', '84.8', '84.9'], ['83.8', '83.8', '82.0', '84.7', '84.9*', '87.2', '87.2', '85.7', '87.1', '87.5'], ['80.3', '80.3', '77.4', '81.5', '81.7*', '84.9', '84.9', '83.2', '85.0', '84.9'], ['81.6', '81.6', '79.5', '82.2', '82.4', '85.4', '85.4', '83.9', '85.5', '85.7'], ['75.6', '75.6', '73.4', '75.8*', '75.4', '77.9', '77.9', '76.7', '77.7', '77.9']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['SentiVec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Objective Embeddings || Word2Vec || -</th> <th>Objective Embeddings || Retrofitting || -</th> <th>Objective Embeddings || Refining || -</th> <th>Objective Embeddings || SentiVec || Spherical</th> <th>Objective Embeddings || SentiVec || Logistic</th> <th>Subjective Embeddings || Word2Vec || -</th> <th>Subjective Embeddings || Retrofitting || -</th> <th>Subjective Embeddings || Refining || -</th> <th>Subjective Embeddings || SentiVec || Spherical</th> <th>Subjective Embeddings || SentiVec || Logistic</th> </tr> </thead> <tbody> <tr> <td>Corpus/Category || Amazon || Amazon Instant Video</td> <td>84.1</td> <td>84.1</td> <td>81.9</td> <td>84.9*</td> <td>84.9*</td> <td>87.8</td> <td>87.8</td> <td>86.9</td> <td>88.1</td> <td>88.2</td> </tr> <tr> <td>Corpus/Category || Amazon || Android Apps</td> <td>83.0</td> <td>83.0</td> <td>80.9</td> <td>84.0*</td> <td>84.0*</td> <td>86.3</td> <td>86.3</td> <td>85.0</td> <td>86.6</td> <td>86.5</td> </tr> <tr> <td>Corpus/Category || Amazon || Automotive</td> <td>80.7</td> <td>80.7</td> <td>78.8</td> <td>81.0</td> <td>81.3</td> <td>85.1</td> <td>85.1</td> <td>83.8</td> <td>84.9</td> <td>85.0</td> </tr> <tr> <td>Corpus/Category || Amazon || Baby</td> <td>80.9</td> <td>80.9</td> <td>78.6</td> <td>82.1</td> <td>82.2*</td> <td>84.2</td> <td>84.2</td> <td>82.8</td> <td>84.4</td> <td>84.6</td> </tr> <tr> <td>Corpus/Category || Amazon || Beauty</td> <td>81.8</td> <td>81.8</td> <td>79.8</td> <td>82.4</td> <td>82.7*</td> <td>85.2</td> <td>85.2</td> <td>83.5</td> <td>85.2</td> <td>85.4</td> </tr> <tr> <td>Corpus/Category || Amazon || Books</td> <td>80.9</td> <td>80.9</td> <td>78.9</td> <td>81.0</td> <td>81.3</td> <td>85.3</td> <td>85.3</td> <td>83.6</td> <td>85.3</td> <td>85.5</td> </tr> <tr> <td>Corpus/Category || Amazon || CD &amp; Vinyl</td> <td>79.4</td> <td>79.4</td> <td>77.6</td> <td>79.4</td> <td>79.9</td> <td>83.5</td> <td>83.5</td> <td>81.9</td> <td>83.7</td> <td>83.6</td> </tr> <tr> <td>Corpus/Category || Amazon || Cell Phones</td> <td>82.2</td> <td>82.2</td> <td>80.0</td> <td>82.9</td> <td>83.0*</td> <td>86.8</td> <td>86.8</td> <td>85.3</td> <td>86.8</td> <td>87.0</td> </tr> <tr> <td>Corpus/Category || Amazon || Clothing</td> <td>82.6</td> <td>82.6</td> <td>80.7</td> <td>83.8</td> <td>84.0*</td> <td>86.3</td> <td>86.3</td> <td>84.7</td> <td>86.4</td> <td>86.8</td> </tr> <tr> <td>Corpus/Category || Amazon || Digital Music</td> <td>82.3</td> <td>82.3</td> <td>80.5</td> <td>82.8</td> <td>83.0*</td> <td>86.3</td> <td>86.3</td> <td>84.6</td> <td>86.1</td> <td>86.3</td> </tr> <tr> <td>Corpus/Category || Amazon || Electronics</td> <td>81.0</td> <td>81.0</td> <td>78.8</td> <td>80.9</td> <td>81.3</td> <td>85.2</td> <td>85.2</td> <td>83.6</td> <td>85.3</td> <td>85.3</td> </tr> <tr> <td>Corpus/Category || Amazon || Grocery &amp; Food</td> <td>81.7</td> <td>81.7</td> <td>79.4</td> <td>83.1*</td> <td>83.1*</td> <td>85.0</td> <td>85.0</td> <td>83.7</td> <td>85.1</td> <td>85.6*</td> </tr> <tr> <td>Corpus/Category || Amazon || Health</td> <td>79.7</td> <td>79.7</td> <td>77.9</td> <td>80.4*</td> <td>80.4</td> <td>84.0</td> <td>84.0</td> <td>82.3</td> <td>84.0</td> <td>84.3</td> </tr> <tr> <td>Corpus/Category || Amazon || Home &amp; Kitchen</td> <td>81.6</td> <td>81.6</td> <td>79.5</td> <td>82.1</td> <td>82.1</td> <td>85.4</td> <td>85.4</td> <td>83.9</td> <td>85.3</td> <td>85.4</td> </tr> <tr> <td>Corpus/Category || Amazon || Kindle Store</td> <td>84.7</td> <td>84.7</td> <td>83.2</td> 
<td>85.2</td> <td>85.4*</td> <td>88.3</td> <td>88.3</td> <td>87.2</td> <td>88.3</td> <td>88.6</td> </tr> <tr> <td>Corpus/Category || Amazon || Movies &amp; TV</td> <td>81.4</td> <td>81.4</td> <td>78.5</td> <td>81.9</td> <td>81.9</td> <td>85.2</td> <td>85.2</td> <td>83.5</td> <td>85.4</td> <td>85.5</td> </tr> <tr> <td>Corpus/Category || Amazon || Musical Instruments</td> <td>81.7</td> <td>81.6</td> <td>79.7</td> <td>82.4</td> <td>82.4</td> <td>85.8</td> <td>85.8</td> <td>84.1</td> <td>85.9</td> <td>85.7</td> </tr> <tr> <td>Corpus/Category || Amazon || Office</td> <td>82.0</td> <td>82.0</td> <td>80.0</td> <td>83.0*</td> <td>82.9</td> <td>86.1</td> <td>86.1</td> <td>84.5</td> <td>86.4</td> <td>86.5*</td> </tr> <tr> <td>Corpus/Category || Amazon || Garden</td> <td>80.4</td> <td>80.4</td> <td>77.9</td> <td>81.0</td> <td>81.5</td> <td>84.1</td> <td>84.1</td> <td>82.5</td> <td>84.3</td> <td>84.6*</td> </tr> <tr> <td>Corpus/Category || Amazon || Pet Supplies</td> <td>79.7</td> <td>79.7</td> <td>77.5</td> <td>80.4</td> <td>80.2</td> <td>83.2</td> <td>83.2</td> <td>81.5</td> <td>83.4</td> <td>83.8</td> </tr> <tr> <td>Corpus/Category || Amazon || Sports &amp; Outdoors</td> <td>80.8</td> <td>80.8</td> <td>79.1</td> <td>81.3*</td> <td>81.2</td> <td>84.6</td> <td>84.6</td> <td>83.1</td> <td>84.3</td> <td>84.7</td> </tr> <tr> <td>Corpus/Category || Amazon || Tools</td> <td>81.0</td> <td>81.0</td> <td>79.3</td> <td>81.0</td> <td>81.3</td> <td>84.7</td> <td>84.7</td> <td>83.2</td> <td>84.8</td> <td>84.9</td> </tr> <tr> <td>Corpus/Category || Amazon || Toys &amp; Games</td> <td>83.8</td> <td>83.8</td> <td>82.0</td> <td>84.7</td> <td>84.9*</td> <td>87.2</td> <td>87.2</td> <td>85.7</td> <td>87.1</td> <td>87.5</td> </tr> <tr> <td>Corpus/Category || Amazon || Video Games</td> <td>80.3</td> <td>80.3</td> <td>77.4</td> <td>81.5</td> <td>81.7*</td> <td>84.9</td> <td>84.9</td> <td>83.2</td> <td>85.0</td> <td>84.9</td> </tr> <tr> <td>Corpus/Category || Average || -</td> <td>81.6</td> <td>81.6</td> <td>79.5</td> <td>82.2</td> <td>82.4</td> <td>85.4</td> <td>85.4</td> <td>83.9</td> <td>85.5</td> <td>85.7</td> </tr> <tr> <td>Corpus/Category || Rotten Tomatoes || -</td> <td>75.6</td> <td>75.6</td> <td>73.4</td> <td>75.8*</td> <td>75.4</td> <td>77.9</td> <td>77.9</td> <td>76.7</td> <td>77.7</td> <td>77.9</td> </tr> </tbody></table>
Table 4
table_4
P18-1112
8
acl2018
Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes. For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold. An asterisk indicates statistically significant results at 5% in comparison to Word2Vec. Both SentiVec variants outperform Word2Vec in the vast majority of the cases. The degree of outperformance is higher for the Objective than the Subjective word embeddings. This is a reasonable trend given our previous findings in Section 3. As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources. Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label. SentiVec also outperforms the two baselines that benefit from the same lexical resources. Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point). Refining makes the word embeddings perform worse on the sentiment classification task.
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1]
['Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.', 'For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.', 'An asterisk indicates statistically significant results at 5% in comparison to Word2Vec.', 'Both SentiVec variants outperform Word2Vec in the vast majority of the cases.', 'The degree of outperformance is higher for the Objective than the Subjective word embeddings.', 'This is a reasonable trend given our previous findings in Section 3.', 'As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.', 'Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.', 'SentiVec also outperforms the two baselines that benefit from the same lexical resources.', 'Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).', 'Refining makes the word embeddings perform worse on the sentiment classification task.']
[['Amazon', 'Rotten Tomatoes'], ['Objective Embeddings', 'Subjective Embeddings'], ['Word2Vec'], ['Word2Vec', 'SentiVec', 'Objective Embeddings', 'Subjective Embeddings'], ['SentiVec', 'Objective Embeddings', 'Subjective Embeddings'], None, ['Corpus/Category'], None, ['SentiVec', 'Retrofitting', 'Refining'], ['Retrofitting', 'Word2Vec', 'Objective Embeddings', 'Subjective Embeddings'], ['Refining']]
1
P18-1112table_5
Comparison of Word Embeddings on Subjectivity and Topic Classification Tasks
3
[['Corpus/Category', 'Topic', 'Computers'], ['Corpus/Category', 'Topic', 'Misc'], ['Corpus/Category', 'Topic', 'Politics'], ['Corpus/Category', 'Topic', 'Recreation'], ['Corpus/Category', 'Topic', 'Religion'], ['Corpus/Category', 'Topic', 'Science'], ['Corpus/Category', 'Average', '-'], ['Corpus/Category', '-', '-']]
3
[['Objective Embeddings', 'Word2Vec', '-'], ['Objective Embeddings', 'Retrofitting', '-'], ['Objective Embeddings', 'Refining', '-'], ['Objective Embeddings', 'SentiVec', 'Spherical'], ['Objective Embeddings', 'SentiVec', 'Logistic'], ['Subjective Embeddings', 'Word2Vec', '-'], ['Subjective Embeddings', 'Retrofitting', '-'], ['Subjective Embeddings', 'Refining', '-'], ['Subjective Embeddings', 'SentiVec', 'Spherical'], ['Subjective Embeddings', 'SentiVec', 'Logistic']]
[['79.8', '79.8', '79.6', '79.6', '79.8', '79.8', '79.8', '79.8', '79.7', '79.7'], ['89.8', '89.8', '89.7', '89.8', '90.0', '90.4', '90.4', '90.6', '90.4', '90.3'], ['84.6', '84.6', '84.4', '84.5', '84.6', '83.8', '83.8', '83.5', '83.6', '83.5'], ['83.4', '83.4', '83.1', '83.1', '83.2', '82.6', '82.6', '82.5', '82.7', '82.8'], ['84.6', '84.6', '84.5', '84.5', '84.6', '84.2', '84.2', '84.2', '84.1', '84.2'], ['78.2', '78.2', '78.2', '78.1', '78.3', '76.4', '76.4', '76.1', '76.7', '76.6'], ['83.4', '83.4', '83.2', '83.3', '83.4', '82.8', '82.8', '82.8', '82.9', '82.8'], ['90.6', '90.6', '90.0', '90.6', '90.6', '90.6', '90.6', '90.3', '90.7', '90.8']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['SentiVec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Objective Embeddings || Word2Vec || -</th> <th>Objective Embeddings || Retrofitting || -</th> <th>Objective Embeddings || Refining || -</th> <th>Objective Embeddings || SentiVec || Spherical</th> <th>Objective Embeddings || SentiVec || Logistic</th> <th>Subjective Embeddings || Word2Vec || -</th> <th>Subjective Embeddings || Retrofitting || -</th> <th>Subjective Embeddings || Refining || -</th> <th>Subjective Embeddings || SentiVec || Spherical</th> <th>Subjective Embeddings || SentiVec || Logistic</th> </tr> </thead> <tbody> <tr> <td>Corpus/Category || Topic || Computers</td> <td>79.8</td> <td>79.8</td> <td>79.6</td> <td>79.6</td> <td>79.8</td> <td>79.8</td> <td>79.8</td> <td>79.8</td> <td>79.7</td> <td>79.7</td> </tr> <tr> <td>Corpus/Category || Topic || Misc</td> <td>89.8</td> <td>89.8</td> <td>89.7</td> <td>89.8</td> <td>90.0</td> <td>90.4</td> <td>90.4</td> <td>90.6</td> <td>90.4</td> <td>90.3</td> </tr> <tr> <td>Corpus/Category || Topic || Politics</td> <td>84.6</td> <td>84.6</td> <td>84.4</td> <td>84.5</td> <td>84.6</td> <td>83.8</td> <td>83.8</td> <td>83.5</td> <td>83.6</td> <td>83.5</td> </tr> <tr> <td>Corpus/Category || Topic || Recreation</td> <td>83.4</td> <td>83.4</td> <td>83.1</td> <td>83.1</td> <td>83.2</td> <td>82.6</td> <td>82.6</td> <td>82.5</td> <td>82.7</td> <td>82.8</td> </tr> <tr> <td>Corpus/Category || Topic || Religion</td> <td>84.6</td> <td>84.6</td> <td>84.5</td> <td>84.5</td> <td>84.6</td> <td>84.2</td> <td>84.2</td> <td>84.2</td> <td>84.1</td> <td>84.2</td> </tr> <tr> <td>Corpus/Category || Topic || Science</td> <td>78.2</td> <td>78.2</td> <td>78.2</td> <td>78.1</td> <td>78.3</td> <td>76.4</td> <td>76.4</td> <td>76.1</td> <td>76.7</td> <td>76.6</td> </tr> <tr> <td>Corpus/Category || Average || -</td> <td>83.4</td> <td>83.4</td> <td>83.2</td> <td>83.3</td> <td>83.4</td> <td>82.8</td> <td>82.8</td> <td>82.8</td> <td>82.9</td> <td>82.8</td> </tr> <tr> <td>Corpus/Category || - || -</td> <td>90.6</td> <td>90.6</td> <td>90.0</td> <td>90.6</td> <td>90.6</td> <td>90.6</td> <td>90.6</td> <td>90.3</td> <td>90.7</td> <td>90.8</td> </tr> </tbody></table>
Table 5
table_5
P18-1112
8
acl2018
Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods. Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.
[1, 1]
['Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.', 'Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.']
[['Computers', 'Misc', 'Politics', 'Recreation', 'Religion'], ['SentiVec', 'Retrofitting', 'Refining', 'Topic']]
1
P18-1113table_1
Metaphor identification results. NB: * denotes that our model outperforms the baseline significantly, based on two-tailed paired t-test with p < 0.001.
3
[['Method', 'Phrase', 'Shutova et al. (2016)'], ['Method', 'Phrase', 'Rei et al. (2017)'], ['Method', 'Phrase', 'SIM-CBOWI+O'], ['Method', 'Phrase', 'SIM-SGI+O'], ['Method', 'Sent.', 'Melamud et al. (2016)'], ['Method', 'Sent.', 'SIM-SGI'], ['Method', 'Sent.', 'SIM-SGI+O'], ['Method', 'Sent.', 'SIM-CBOWI'], ['Method', 'Sent.', 'SIM-CBOWI+O']]
1
[['P'], ['R'], ['F1']]
[['0.67', '0.76', '0.71'], ['0.74', '0.76', '0.74'], ['0.66', '0.78', '0.72'], ['0.68', '0.82', '0.74*'], ['0.60', '0.80', '0.69'], ['0.56', '0.95', '0.70'], ['0.62', '0.89', '0.73'], ['0.59', '0.91', '0.72'], ['0.66', '0.88', '0.75*']]
column
['P', 'R', 'F1']
['SIM-SGI+O', 'SIM-CBOWI+O']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Phrase || Shutova et al. (2016)</td> <td>0.67</td> <td>0.76</td> <td>0.71</td> </tr> <tr> <td>Method || Phrase || Rei et al. (2017)</td> <td>0.74</td> <td>0.76</td> <td>0.74</td> </tr> <tr> <td>Method || Phrase || SIM-CBOWI+O</td> <td>0.66</td> <td>0.78</td> <td>0.72</td> </tr> <tr> <td>Method || Phrase || SIM-SGI+O</td> <td>0.68</td> <td>0.82</td> <td>0.74*</td> </tr> <tr> <td>Method || Sent. || Melamud et al. (2016)</td> <td>0.60</td> <td>0.80</td> <td>0.69</td> </tr> <tr> <td>Method || Sent. || SIM-SGI</td> <td>0.56</td> <td>0.95</td> <td>0.70</td> </tr> <tr> <td>Method || Sent. || SIM-SGI+O</td> <td>0.62</td> <td>0.89</td> <td>0.73</td> </tr> <tr> <td>Method || Sent. || SIM-CBOWI</td> <td>0.59</td> <td>0.91</td> <td>0.72</td> </tr> <tr> <td>Method || Sent. || SIM-CBOWI+O</td> <td>0.66</td> <td>0.88</td> <td>0.75*</td> </tr> </tbody></table>
Table 1
table_1
P18-1113
7
acl2018
Table 1 shows the performance of our model and the baselines on the task of metaphor identification. All the results for our models are based on a threshold of 0.6, which is empirically determined based on the development set. For sentence level metaphor identification, it can be observed that all our models outperform the baseline (Melamud et al., 2016), with SIM-CBOWI+O giving the highest F1 score of 75% which is a 6% gain over the baseline. We also see that models based on both input and output vectors (i.e., SIM-CBOWI+O and SIM-SGI+O) yield better performance than the models based on input vectors only (i.e., SIM-CBOWI and SIM-SGI). Such an observation supports our assumption that using input and output vectors can better model similarity between words that have different types of POS, than simply using input vectors. When comparing CBOW and Skip-gram based models, we see that CBOW based models generally achieve better performance in precision whereas Skip-gram based models perform better in recall. In terms of phrase level metaphor identification, we compare our best performing models (i.e., SIM-CBOWI+O and SIM-SGI+O) against the approaches of Shutova et al. (2016) and Rei et al. (2017). In contrast to the sentence level evaluation in which SIM-CBOWI+O gives the best performance, SIM-SGI+O performs best for the phrase level evaluation. This is likely due to the fact that Skip-gram is trained by using a centre word to maximise the probability of each context word, whereas CBOW uses the average of context word input vectors to maximise the probability of the centre word. Thus, Skip-gram performs better in modelling one-word context, while CBOW has better performance in modelling multi-context words. When comparing to the baselines, our model SIM-SGI+O significantly outperforms the word embedding based approach by Shutova et al. (2016), and gives the same performance as the deep supervised method (Rei et al., 2017) which requires a large amount of labelled data for training and is costly in training time. SIM-CBOWI+O and SIM-SGI+O are also evaluated with different thresholds for both phrase and sentence level metaphor identification.
[1, 2, 1, 1, 2, 1, 1, 1, 2, 2, 1, 2]
['Table 1 shows the performance of our model and the baselines on the task of metaphor identification.', 'All the results for our models are based on a threshold of 0.6, which is empirically determined based on the developing set.', 'For sentence level metaphor identification, it can be observed that all our models outperform the baseline (Melamud et al., 2016), with SIM-CBOWI+O giving the highest F1 score of 75% which is a 6% gain over the baseline.', 'We also see that models based on both input and output vectors (i.e., SIM-CBOWI+O and SIM-SGI+O) yield better performance than the models based on input vectors only (i.e., SIM-CBOWI and SIM-SGI).', 'Such an observation supports our assumption that using input and output vectors can better model similarity between words that have different types of POS, than simply using input vectors.', 'When comparing CBOW and Skip-gram based models, we see that CBOW based models generally achieve better performance in precision whereas Skip-gram based models perform better in recall.', 'In terms of phrase level metaphor identification, we compare our best performing models (i.e., SIM-CBOWI+O and SIM-SGI+O) against the approaches of Shutova et al. (2016) and Rei et al. (2017).', 'In contrast to the sentence level evaluation in which SIM-CBOWI+O gives the best performance, SIM-SGI+O performs best for the phrase level evaluation.', 'This is likely due to the fact that Skip-gram is trained by using a centre word to maximise the probability of each context word, whereas CBOW uses the average of context word input vectors to maximise the probability of the centre word.', 'Thus, Skip-gram performs better in modelling one-word context, while CBOW has better performance in modelling multi-context words.', 'When comparing to the baselines, our model SIM-SGI+O significantly outperforms the word embedding based approach by Shutova et al. (2016), and gives the same performance as the deep supervised method (Rei et al., 2017) which requires a large amount of labelled data for training and cost in training time.', 'SIM-CBOWI+O and SIM-SGI+O are also evaluated with different thresholds for both phrase and sentence level metaphor identification.']
[None, ['SIM-CBOWI+O', 'SIM-SGI+O', 'SIM-SGI', 'SIM-CBOWI'], ['Melamud et al. (2016)', 'Sent.', 'SIM-SGI', 'SIM-SGI+O', 'SIM-CBOWI', 'F1'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'SIM-CBOWI', 'SIM-SGI'], None, ['P', 'R', 'SIM-CBOWI+O', 'SIM-SGI+O', 'SIM-SGI', 'SIM-CBOWI'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'Shutova et al. (2016)', 'Rei et al. (2017)'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'Phrase', 'Sent.'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'Phrase', 'Sent.'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'Phrase', 'Sent.'], ['SIM-SGI+O', 'Shutova et al. (2016)', 'Rei et al. (2017)'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'Phrase', 'Sent.']]
1
P18-1114table_2
Performance comparison (%) of our LMMs and the baselines on two basic NLP tasks (word similarity & syntactic analogy) and one downstream task (text classification). The bold digits indicate the best performances.
1
[['Wordsim-353'], ['RW'], ['RG-65'], ['SCWS'], ['Men-3k'], ['WS-353-REL'], ['Syntactic Analogy'], ['Text Classification']]
1
[['CBOW'], ['Skip-gram'], ['GloVe'], ['EMM'], ['LMM-A'], ['LMM-S'], ['LMM-M']]
[['58.77', '61.94', '49.40', '60.01', '62.05', '63.13', '61.54'], ['40.58', '36.42', '33.40', '40.83', '43.12', '42.14', '40.51'], ['56.50', '62.81', '59.92', '60.85', '62.51', '62.49', '63.07'], ['63.13', '60.20', '47.98', '60.28', '61.86', '61.71', '63.02'], ['68.07', '66.30', '60.56', '66.76', '66.26', '68.36', '64.65'], ['49.72', '57.05', '47.46', '54.48', '56.14', '58.47', '55.19'], ['13.46', '13.14', '13.94', '17.34', '20.38', '17.59', '18.30'], ['78.26', '79.40', '77.01', '80.00', '80.67', '80.59', '81.28']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['LMM-A', 'LMM-S', 'LMM-M']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CBOW</th> <th>Skip-gram</th> <th>GloVe</th> <th>EMM</th> <th>LMM-A</th> <th>LMM-S</th> <th>LMM-M</th> </tr> </thead> <tbody> <tr> <td>Wordsim-353</td> <td>58.77</td> <td>61.94</td> <td>49.40</td> <td>60.01</td> <td>62.05</td> <td>63.13</td> <td>61.54</td> </tr> <tr> <td>RW</td> <td>40.58</td> <td>36.42</td> <td>33.40</td> <td>40.83</td> <td>43.12</td> <td>42.14</td> <td>40.51</td> </tr> <tr> <td>RG-65</td> <td>56.50</td> <td>62.81</td> <td>59.92</td> <td>60.85</td> <td>62.51</td> <td>62.49</td> <td>63.07</td> </tr> <tr> <td>SCWS</td> <td>63.13</td> <td>60.20</td> <td>47.98</td> <td>60.28</td> <td>61.86</td> <td>61.71</td> <td>63.02</td> </tr> <tr> <td>Men-3k</td> <td>68.07</td> <td>66.30</td> <td>60.56</td> <td>66.76</td> <td>66.26</td> <td>68.36</td> <td>64.65</td> </tr> <tr> <td>WS-353-REL</td> <td>49.72</td> <td>57.05</td> <td>47.46</td> <td>54.48</td> <td>56.14</td> <td>58.47</td> <td>55.19</td> </tr> <tr> <td>Syntactic Analogy</td> <td>13.46</td> <td>13.14</td> <td>13.94</td> <td>17.34</td> <td>20.38</td> <td>17.59</td> <td>18.30</td> </tr> <tr> <td>Text Classification</td> <td>78.26</td> <td>79.40</td> <td>77.01</td> <td>80.00</td> <td>80.67</td> <td>80.59</td> <td>81.28</td> </tr> </tbody></table>
Table 2
table_2
P18-1114
8
acl2018
Word similarity is conducted to test the semantic information which is encoded in word embeddings, and the results are listed in Table 2 (first 6 rows). We observe that our models surpass the comparative baselines on five datasets. Compared with the base model CBOW, it is remarkable that our models approximately achieve improvements of more than 5% and 7%, respectively, in the performance on the golden standard Wordsim-353 and RG-65. On WS-353-REL, the difference between CBOW and LMM-S even reaches 8%. The advantage demonstrates the effectiveness of our methods. Based on our strategy, more semantic information will be captured in corpus when adding more latent meanings in the context window. By incorporating morphemes, EMM also performs better than other baselines but fails to get the performance as well as ours. Actually, EMM mainly tunes the distributions of words in vector space to let the morpheme-similar words gather closer, which means it just encodes more morphological properties into word embeddings but lacks the ability to capture more semantic information. Specially, because of the medium size corpus and the experimental settings, GloVe doesn't perform as well as that described in (Pennington et al., 2014). 5.2 The Results on Syntactic Analogy. In (Mikolov et al., 2013c), the dataset is divided into adjectives, nouns and verbs. For brevity, we only report performance on the whole dataset. As the middle row of Table 2 shows, all of our models outperform the comparative baselines to a great extent. Compared with CBOW, the advantage of LMM-A even reaches to 7%. Besides, we observe that the suffix of "b" usually is the same as the suffix of "d" when answering question "a is to b as c is to d". Based on our strategy, morpheme-similar words will not only gather closer but have a trend to group near the latent meanings of their morphemes, which makes our embeddings have the advantage to deal with the syntactic analogy problem. EMM also performs well on this task but is still weaker than our models. Actually, syntactic analogy is also a semantics-related task because "c" and "d" are with similar meanings. Since our models are better to capture semantic information, they lead to higher performance than the explicitly morphology-based models. The results are displayed in the bottom row of Table 2. Since we simply use the average embedding of words as the feature vector for 10-categorization classification, the overall classification accuracies of all models are merely around 80%. However, the classification accuracies of our LMMs still surpass all the baselines, especially CBOW and GloVe.
[1, 1, 1, 1, 2, 2, 1, 2, 1, 0, 0, 0, 1, 1, 0, 2, 1, 0, 2, 1, 1, 1]
['Word similarity is conducted to test the semantic information which is encoded in word embeddings, and the results are listed in Table 2 (first 6 rows).', 'We observe that our models surpass the comparative baselines on five datasets.', 'Compared with the base model CBOW, it is remarkable that our models approximately achieve improvements of more than 5% and 7%, respectively, in the performance on the golden standard Wordsim-353 and RG-65.', 'On WS-353-REL, the difference between CBOW and LMM-S even reaches 8%.', 'The advantage demonstrates the effectiveness of our methods.', 'Based on our strategy, more semantic information will be captured in corpus when adding more latent meanings in the context window.', 'By incorporating morphemes, EMM also performs better than other baselines but fails to get the performance as well as ours.', 'Actually, EMM mainly tunes the distributions of words in vector space to let the morpheme-similar words gather closer, which means it just encodes more morphological properties into word embeddings but lacks the ability to capture more semantic information.', "Specially, because of the medium size corpus and the experimental settings, GloVe doesn't perform as well as that described in (Pennington et al., 2014).", '5.2 The Results on Syntactic Analogy.', 'In (Mikolov et al., 2013c), the dataset is divided into adjectives, nouns and verbs.', 'For brevity, we only report performance on the whole dataset.', 'As the middle row of Table 2 shows, all of our models outperform the comparative baselines to a great extent.', 'Compared with CBOW, the advantage of LMM-A even reaches to 7%.', 'Besides, we observe that the suffix of "b" usually is the same as the suffix of "d" when answering question "a is to b as c is to d".', 'Based on our strategy, morpheme-similar words will not only gather closer but have a trend to group near the latent meanings of their morphemes, which makes our embeddings have the advantage to deal with the syntactic analogy problem.', 'EMM also performs well on this task but is still weaker than our models.', 'Actually, syntactic analogy is also a semantics-related task because "c" and "d" are with similar meanings.', 'Since our models are better to capture semantic information, they lead to higher performance than the explicitly morphology-based models.', 'The results are displayed in the bottom row of Table 2.', 'Since we simply use the average embedding of words as the feature vector for 10-categorization classification, the overall classification accuracies of all models are merely around 80%.', 'However, the classification accuracies of our LMMs still surpass all the baselines, especially CBOW and GloVe.']
[None, ['LMM-A', 'LMM-S', 'LMM-M'], ['CBOW', 'Wordsim-353', 'RG-65'], ['WS-353-REL', 'CBOW', 'LMM-S'], ['LMM-A', 'LMM-S', 'LMM-M'], None, ['EMM', 'LMM-A', 'LMM-S', 'LMM-M'], ['EMM'], ['GloVe'], None, None, None, ['Syntactic Analogy', 'EMM', 'LMM-A', 'LMM-S', 'LMM-M'], ['Syntactic Analogy', 'CBOW', 'LMM-A'], None, None, ['Syntactic Analogy', 'EMM', 'LMM-A', 'LMM-S', 'LMM-M'], None, ['EMM', 'LMM-A', 'LMM-S', 'LMM-M'], ['Text Classification'], ['Text Classification'], ['Text Classification', 'LMM-A', 'LMM-S', 'LMM-M', 'CBOW', 'Skip-gram', 'GloVe']]
1
P18-1118table_4
Our Memory-to-Context Source Memory NMT variants vs. S-NMT and Source context NMT baselines. bold: Best performance, †, ♠, ♣, ♦: Statistically significantly better than only S-NMT, S-NMT & Jean et al. (2017), S-NMT & Wang et al. (2017), all baselines, respectively.
1
[['Jean et al. (2017)'], ['Wang et al. (2017)'], ['S-NMT'], ['S-NMT + src mem'], ['S-NMT + both mems']]
3
[['BLEU', 'Fr→En', '-'], ['BLEU', 'De→En', 'NC-11'], ['BLEU', 'De→En', 'NC-16'], ['BLEU', 'Et→En', '-'], ['METEOR', 'Fr→En', '-'], ['METEOR', 'De→En', 'NC-11'], ['METEOR', 'De→En', 'NC-16'], ['METEOR', 'Et→En', '-']]
[['21.95', '6.04', '10.26', '21.67', '24.10', '11.61', '15.56', '25.77'], ['21.87', '5.49', '10.14', '22.06', '24.13', '11.05', '15.20', '26.00'], ['20.85', '5.24', '9.18', '20.42', '23.27', '10.90', '14.35', '24.65'], ['21.91', '6.26', '10.20', '22.10', '24.04', '11.52', '15.45', '25.92'], ['22.00', '6.57', '10.54', '22.32', '24.40', '12.24', '16.18', '26.34']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'METEOR', 'METEOR', 'METEOR', 'METEOR']
['S-NMT + src mem', 'S-NMT + both mems']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || Fr→En || -</th> <th>BLEU || De→En || NC-11</th> <th>BLEU || De→En || NC-16</th> <th>BLEU || Et→En || -</th> <th>METEOR || Fr→En || -</th> <th>METEOR || De→En || NC-11</th> <th>METEOR || De→En || NC-16</th> <th>METEOR || Et→En || -</th> </tr> </thead> <tbody> <tr> <td>Jean et al. (2017)</td> <td>21.95</td> <td>6.04</td> <td>10.26</td> <td>21.67</td> <td>24.10</td> <td>11.61</td> <td>15.56</td> <td>25.77</td> </tr> <tr> <td>Wang et al. (2017)</td> <td>21.87</td> <td>5.49</td> <td>10.14</td> <td>22.06</td> <td>24.13</td> <td>11.05</td> <td>15.20</td> <td>26.00</td> </tr> <tr> <td>S-NMT</td> <td>20.85</td> <td>5.24</td> <td>9.18</td> <td>20.42</td> <td>23.27</td> <td>10.90</td> <td>14.35</td> <td>24.65</td> </tr> <tr> <td>S-NMT + src mem</td> <td>21.91</td> <td>6.26</td> <td>10.20</td> <td>22.10</td> <td>24.04</td> <td>11.52</td> <td>15.45</td> <td>25.92</td> </tr> <tr> <td>S-NMT + both mems</td> <td>22.00</td> <td>6.57</td> <td>10.54</td> <td>22.32</td> <td>24.40</td> <td>12.24</td> <td>16.18</td> <td>26.34</td> </tr> </tbody></table>
Table 4
table_4
P18-1118
7
acl2018
Table 4 shows a comparison of our Memory-to-Context model variants with source context-NMT models (Jean et al., 2017; Wang et al., 2017). For German→English, our S-NMT+src mem model is comparable to Jean et al. (2017) but outperforms Wang et al. (2017) for one test set according to BLEU, and for both test sets according to METEOR. For Estonian→English, our model outperforms Jean et al. (2017). Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).
[1, 1, 1, 2]
['Table 4 shows a comparison of our Memory-to-Context model variants with source context-NMT models (Jean et al., 2017; Wang et al., 2017).', 'For German→English, our S-NMT+src mem model is comparable to Jean et al. (2017) but outperforms Wang et al. (2017) for one test set according to BLEU, and for both test sets according to METEOR.', 'For Estonian→English, our model outperforms Jean et al. (2017). Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).', 'Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).']
[['S-NMT', 'S-NMT + src mem', 'S-NMT + both mems', 'Jean et al. (2017)', 'Wang et al. (2017)'], ['De→En', 'S-NMT + src mem', 'Jean et al. (2017)', 'Wang et al. (2017)', 'BLEU', 'METEOR'], ['Et→En', 'S-NMT + src mem', 'S-NMT + both mems', 'Jean et al. (2017)'], ['S-NMT + src mem', 'S-NMT + both mems']]
1
P18-1119table_1
Results on LGL, WikToR (WIK) and GeoVirus (GEO). Lower AUC and Average Error are better while higher Acc@161km is better. Figures in brackets are scores on identical subsets of each dataset. †Only the AUC decimal part shown. ‡Average Error rounded up to the nearest 100km.
2
[['Geocoder', 'CamCoder'], ['Geocoder', 'Edinburgh'], ['Geocoder', 'Yahoo!'], ['Geocoder', 'Population'], ['Geocoder', 'CLAVIN'], ['Geocoder', 'GeoTxt'], ['Geocoder', 'Topocluster'], ['Geocoder', 'Santos et al.']]
2
[['Area Under Curve', 'LGL'], ['Area Under Curve', 'WIK'], ['Area Under Curve', 'GEO'], ['Average Error', 'LGL'], ['Average Error', 'WIK'], ['Average Error', 'GEO'], ['Accuracy@161km', 'LGL'], ['Accuracy@161km', 'WIK'], ['Accuracy@161km', 'GEO']]
[['22 (18)', '33 (37)', '31 (32)', '7 (5)', '11 (9)', '3 (3)', '76 (83)', '65 (57)', '82 (80)'], ['25 (22)', '53 (58)', '33 (34)', '8 (8)', '31 (30)', '5 (4)', '76 (80)', '42 (36)', '78 (78)'], ['34 (35)', '44 (53)', '40 (44)', '6 (5)', '23 (25)', '3 (3)', '72 (75)', '52 (39)', '70 (65)'], ['27 (22)', '68 (71)', '32 (32)', '12 (10)', '45 (42)', '5 (3)', '70 (79)', '22 (14)', '80 (80)'], ['26 (20)', '70 (69)', '32 (33)', '13 (9)', '43 (39)', '6 (5)', '71 (80)', '16 (16)', '79 (80)'], ['29 (21)', '70 (71)', '33 (34)', '14 (9)', '47 (45)', '6 (5)', '68 (80)', '18 (14)', '79 (79)'], ['38 (36)', '63 (66)', 'NA', '12 (8)', '38 (35)', 'NA', '63 (71)', '26 (20)', 'NA'], ['NA', 'NA', 'NA', '8', 'NA', 'NA', '71', 'NA', 'NA']]
column
['Area Under Curve', 'Area Under Curve', 'Area Under Curve', 'Average Error', 'Average Error', 'Average Error', 'Accuracy@161km', 'Accuracy@161km', 'Accuracy@161km']
['CamCoder']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Area Under Curve || LGL</th> <th>Area Under Curve || WIK</th> <th>Area Under Curve || GEO</th> <th>Average Error || LGL</th> <th>Average Error || WIK</th> <th>Average Error || GEO</th> <th>Accuracy@161km || LGL</th> <th>Accuracy@161km || WIK</th> <th>Accuracy@161km || GEO</th> </tr> </thead> <tbody> <tr> <td>Geocoder || CamCoder</td> <td>22 (18)</td> <td>33 (37)</td> <td>31 (32)</td> <td>7 (5)</td> <td>11 (9)</td> <td>3 (3)</td> <td>76 (83)</td> <td>65 (57)</td> <td>82 (80)</td> </tr> <tr> <td>Geocoder || Edinburgh</td> <td>25 (22)</td> <td>53 (58)</td> <td>33 (34)</td> <td>8 (8)</td> <td>31 (30)</td> <td>5 (4)</td> <td>76 (80)</td> <td>42 (36)</td> <td>78 (78)</td> </tr> <tr> <td>Geocoder || Yahoo!</td> <td>34 (35)</td> <td>44 (53)</td> <td>40 (44)</td> <td>6 (5)</td> <td>23 (25)</td> <td>3 (3)</td> <td>72 (75)</td> <td>52 (39)</td> <td>70 (65)</td> </tr> <tr> <td>Geocoder || Population</td> <td>27 (22)</td> <td>68 (71)</td> <td>32 (32)</td> <td>12 (10)</td> <td>45 (42)</td> <td>5 (3)</td> <td>70 (79)</td> <td>22 (14)</td> <td>80 (80)</td> </tr> <tr> <td>Geocoder || CLAVIN</td> <td>26 (20)</td> <td>70 (69)</td> <td>32 (33)</td> <td>13 (9)</td> <td>43 (39)</td> <td>6 (5)</td> <td>71 (80)</td> <td>16 (16)</td> <td>79 (80)</td> </tr> <tr> <td>Geocoder || GeoTxt</td> <td>29 (21)</td> <td>70 (71)</td> <td>33 (34)</td> <td>14 (9)</td> <td>47 (45)</td> <td>6 (5)</td> <td>68 (80)</td> <td>18 (14)</td> <td>79 (79)</td> </tr> <tr> <td>Geocoder || Topocluster</td> <td>38 (36)</td> <td>63 (66)</td> <td>NA</td> <td>12 (8)</td> <td>38 (35)</td> <td>NA</td> <td>63 (71)</td> <td>26 (20)</td> <td>NA</td> </tr> <tr> <td>Geocoder || Santos et al.</td> <td>NA</td> <td>NA</td> <td>NA</td> <td>8</td> <td>NA</td> <td>NA</td> <td>71</td> <td>NA</td> <td>NA</td> </tr> </tbody></table>
Table 1
table_1
P18-1119
7
acl2018
Each system geoparses its particular majority of the dataset to obtain a representative data sample, shown in Table 1 as strongly correlated scores for subsets of different sizes, with which to assess model performance. Table 1 also shows scores in brackets for the overlapping partition of all systems in order to compare performance on identical instances: GeoVirus 601 (26%), LGL 787 (17%) and WikToR 2,202 (9%). The geocoding difficulty based on the ambiguity of each dataset is: LGL (moderate to hard), WIK (very hard), GEO (easy to moderate). A population baseline also features in the evaluation. The baseline is conceptually simple: choose the candidate with the highest population, akin to the most frequent word sense in WSD. Table 1 shows the effectiveness of this heuristic, which is competitive with many geocoders, even outperforming some. However, the baseline is not effective on WikToR as the dataset was deliberately constructed as a tough ambiguity test. Table 1 shows how several geocoders mirror the behaviour of the population baseline. This simple but effective heuristic is rarely used in system comparisons, and where evaluated (Santos et al., 2015; Leidner, 2008), it is inconsistent with expected figures (due to unpublished resources, we are unable to investigate). We note that no single computational paradigm dominates Table 1. The rule-based (Edinburgh, GeoTxt, CLAVIN), (Topocluster), machine learning (CamCoder, Santos) and other (Yahoo!, Population) geocoders occupy different ranks across the three datasets. Due to space constraints, Table 1 does not show figures for another type of scenario we tested, a shorter lexical context, using 200 words instead of the standard 400. CamCoder proved to be robust to reduced context, with only a small performance decline. Using the same format as Table 1, AUC errors for LGL increased from 22 (18) to 23 (19), WIK from 33 (37) to 37 (40) and GEO remained the same at 31 (32). This means that reducing model input size to save computational resources would still deliver accurate results.
[1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2]
['Each system geoparses its particular majority of the dataset to obtain a representative data sample, shown in Table 1 as strongly correlated scores for subsets of different sizes, with which to assess model performance.', 'Table 1 also shows scores in brackets for the overlapping partition of all systems in order to compare performance on identical instances: GeoVirus 601 (26%), LGL 787 (17%) and WikToR 2,202 (9%).', 'The geocoding difficulty based on the ambiguity of each dataset is: LGL (moderate to hard), WIK (very hard), GEO (easy to moderate).', 'A population baseline also features in the evaluation.', 'The baseline is conceptually simple: choose the candidate with the highest population, akin to the most frequent word sense in WSD.', 'Table 1 shows the effectiveness of this heuristic, which is competitive with many geocoders, even outperforming some.', 'However, the baseline is not effective on WikToR as the dataset was deliberately constructed as a tough ambiguity test.', 'Table 1 shows how several geocoders mirror the behaviour of the population baseline.', 'This simple but effective heuristic is rarely used in system comparisons, and where evaluated (Santos et al., 2015; Leidner, 2008), it is inconsistent with expected figures (due to unpublished resources, we are unable to investigate).', 'We note that no single computational paradigm dominates Table 1.', 'The rule-based (Edinburgh, GeoTxt, CLAVIN), (Topocluster), machine learning (CamCoder, Santos) and other (Yahoo!, Population) geocoders occupy different ranks across the three datasets.', 'Due to space constraints, Table 1 does not show figures for another type of scenario we tested, a shorter lexical context, using 200 words instead of the standard 400.', 'CamCoder proved to be robust to reduced context, with only a small performance decline.', 'Using the same format as Table 1, AUC errors for LGL increased from 22 (18) to 23 (19), WIK from 33 (37) to 37 (40) and GEO remained the same at 31 (32).', 'This means that reducing model input size to save computational resources would still deliver accurate results.']
[None, ['LGL', 'WIK', 'GEO'], ['Geocoder', 'LGL', 'WIK', 'GEO'], None, None, ['Geocoder'], ['WIK'], None, None, None, ['Edinburgh', 'GeoTxt', 'CLAVIN', 'Topocluster', 'CamCoder', 'Santos et al.', 'Yahoo!', 'Population'], None, ['CamCoder'], ['CamCoder', 'Area Under Curve', 'LGL', 'WIK', 'GEO'], ['CamCoder', 'Area Under Curve', 'LGL', 'WIK', 'GEO']]
1
P18-1129table_2
The dependency parsing results. Significance test (Nilsson and Nivre, 2008) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01.
1
[['Baseline'], ['Ensemble'], ['Distill (reference alpha=1.0)'], ['Distill (exploration T=1.0)'], ['Distill (both)'], ['Ballesteros et al. (2016) (dyn. oracle)'], ['Andor et al. (2016) (local B=1)'], ['Buckman et al. (2016) (local B=8)'], ['Andor et al. (2016) (local B=32)'], ['Andor et al. (2016) (global B=32)'], ['Dozat and Manning (2016)'], ['Kuncoro et al. (2016)'], ['Kuncoro et al. (2017)']]
1
[['LAS']]
[['90.83'], ['92.73'], ['91.99'], ['92.00'], ['92.14'], ['91.42'], ['91.02'], ['91.19'], ['91.70'], ['92.79'], ['94.08'], ['92.06'], ['94.60']]
column
['LAS']
['Ensemble', 'Distill (reference alpha=1.0)', 'Distill (exploration T=1.0)', 'Distill (both)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>90.83</td> </tr> <tr> <td>Ensemble</td> <td>92.73</td> </tr> <tr> <td>Distill (reference alpha=1.0)</td> <td>91.99</td> </tr> <tr> <td>Distill (exploration T=1.0)</td> <td>92.00</td> </tr> <tr> <td>Distill (both)</td> <td>92.14</td> </tr> <tr> <td>Ballesteros et al. (2016) (dyn. oracle)</td> <td>91.42</td> </tr> <tr> <td>Andor et al. (2016) (local B=1)</td> <td>91.02</td> </tr> <tr> <td>Buckman et al. (2016) (local B=8)</td> <td>91.19</td> </tr> <tr> <td>Andor et al. (2016) (local B=32)</td> <td>91.70</td> </tr> <tr> <td>Andor et al. (2016) (global B=32)</td> <td>92.79</td> </tr> <tr> <td>Dozat and Manning (2016)</td> <td>94.08</td> </tr> <tr> <td>Kuncoro et al. (2016)</td> <td>92.06</td> </tr> <tr> <td>Kuncoro et al. (2017)</td> <td>94.60</td> </tr> </tbody></table>
Table 2
table_2
P18-1129
6
acl2018
Table 2 shows our PTB experimental results. From this result, we can see that the ensemble model outperforms the baseline model by 1.90 in LAS. For our distillation from reference, when setting alpha = 1.0, best performance on development set is achieved and the test LAS is 91.99. We also compare our parser with the other parsers in Table 2. The second group shows the greedy transition-based parsers in previous literatures. Andor et al. (2016) presented an alternative state representation and explored both greedy and beam search decoding. (Ballesteros et al., 2016) explores training the greedy parser with dynamic oracle. Our distillation parser outperforms all these greedy counterparts. The third group shows parsers trained on different techniques including decoding with beam search (Buckman et al., 2016; Andor et al., 2016), training transition-based parser with beam search (Andor et al., 2016), graph-based parsing (Dozat and Manning, 2016), distilling a graph-based parser from the output of 20 parsers (Kuncoro et al., 2016), and converting constituent parsing results to dependencies (Kuncoro et al., 2017). Our distillation parser still outperforms its transition-based counterparts but lags the others.
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 2 shows our PTB experimental results.', 'From this result, we can see that the ensemble model outperforms the baseline model by 1.90 in LAS.', 'For our distillation from reference, when setting alpha = 1.0, best performance on development set is achieved and the test LAS is 91.99.', 'We also compare our parser with the other parsers in Table 2.', 'The second group shows the greedy transition-based parsers in previous literatures.', 'Andor et al. (2016) presented an alternative state representation and explored both greedy and beam search decoding.', '(Ballesteros et al., 2016) explores training the greedy parser with dynamic oracle.', 'Our distillation parser outperforms all these greedy counterparts.', 'The third group shows parsers trained on different techniques including decoding with beam search (Buckman et al., 2016; Andor et al., 2016), training transition-based parser with beam search (Andor et al., 2016), graph-based parsing (Dozat and Manning, 2016), distilling a graph-based parser from the output of 20 parsers (Kuncoro et al., 2016), and converting constituent parsing results to dependencies (Kuncoro et al., 2017).', 'Our distillation parser still outperforms its transition-based counterparts but lags the others.']
[None, ['Ensemble', 'Baseline', 'LAS'], ['Distill (reference alpha=1.0)', 'LAS'], ['Distill (reference alpha=1.0)', 'Distill (exploration T=1.0)', 'Distill (both)', 'Ballesteros et al. (2016) (dyn. oracle)', 'Andor et al. (2016) (local B=1)'], ['Ballesteros et al. (2016) (dyn. oracle)', 'Andor et al. (2016) (local B=1)'], ['Andor et al. (2016) (local B=1)'], ['Ballesteros et al. (2016) (dyn. oracle)'], ['Distill (reference alpha=1.0)', 'Distill (exploration T=1.0)', 'Distill (both)', 'Ballesteros et al. (2016) (dyn. oracle)', 'Andor et al. (2016) (local B=1)'], ['Buckman et al. (2016) (local B=8)', 'Andor et al. (2016) (local B=32)', 'Andor et al. (2016) (global B=32)', 'Dozat and Manning (2016)', 'Kuncoro et al. (2016)', 'Kuncoro et al. (2017)'], ['Distill (reference alpha=1.0)', 'Distill (exploration T=1.0)', 'Distill (both)', 'Buckman et al. (2016) (local B=8)', 'Andor et al. (2016) (local B=32)', 'Andor et al. (2016) (global B=32)', 'Dozat and Manning (2016)', 'Kuncoro et al. (2016)', 'Kuncoro et al. (2017)']]
1
P18-1129table_3
The machine translation results. MIXER denotes that of Ranzato et al. (2015), BSO denotes that of Wiseman and Rush (2016). Significance test (Koehn, 2004) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01.
1
[['Baseline'], ['Ensemble'], ['Distill (reference alpha=0.8)'], ['Distill (exploration T=0.1)'], ['Distill (both)'], ['MIXER'], ['BSO (local B=1)'], ['BSO (global B=1)']]
1
[['BLEU']]
[['22.79'], ['26.26'], ['24.76'], ['24.64'], ['25.44'], ['20.73'], ['22.53'], ['23.83']]
column
['BLEU']
['Ensemble', 'Distill (reference alpha=0.8)', 'Distill (exploration T=0.1)', 'Distill (both)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>22.79</td> </tr> <tr> <td>Ensemble</td> <td>26.26</td> </tr> <tr> <td>Distill (reference alpha=0.8)</td> <td>24.76</td> </tr> <tr> <td>Distill (exploration T=0.1)</td> <td>24.64</td> </tr> <tr> <td>Distill (both)</td> <td>25.44</td> </tr> <tr> <td>MIXER</td> <td>20.73</td> </tr> <tr> <td>BSO (local B=1)</td> <td>22.53</td> </tr> <tr> <td>BSO (global B=1)</td> <td>23.83</td> </tr> </tbody></table>
Table 3
table_3
P18-1129
6
acl2018
Table 3 shows the experimental results on the IWSLT 2014 dataset. Similar to the PTB parsing results, the ensemble of 10 translators outperforms the baseline translator by 3.47 in BLEU score. Distilling from the ensemble by following the reference leads to a single translator of 24.76 BLEU score. Like in the parsing experiments, sharpening the distribution when exploring the search space is more helpful to the model's performance, but the differences when T ≤ 0.2 are not significant as shown in Figure 3. We set T = 0.1 in our distillation from exploration experiments since it achieves the best development score. Table 3 shows the exploration result of a BLEU score of 24.64 and it slightly lags the best reference model. Distilling from both the reference and exploration improves the single model's performance by a large margin and achieves a BLEU score of 25.44. We also compare our model with other translation models including the one trained with reinforcement learning (Ranzato et al., 2015) and that using beam search in training (Wiseman and Rush, 2016). Our distillation translator outperforms these models.
[1, 1, 1, 0, 0, 1, 1, 1, 1]
['Table 3 shows the experimental results on the IWSLT 2014 dataset.', 'Similar to the PTB parsing results, the ensemble of 10 translators outperforms the baseline translator by 3.47 in BLEU score.', 'Distilling from the ensemble by following the reference leads to a single translator of 24.76 BLEU score.', "Like in the parsing experiments, sharpening the distribution when exploring the search space is more helpful to the model's performance, but the differences when T ≤ 0.2 are not significant as shown in Figure 3.", 'We set T = 0.1 in our distillation from exploration experiments since it achieves the best development score.', 'Table 3 shows the exploration result of a BLEU score of 24.64 and it slightly lags the best reference model.', "Distilling from both the reference and exploration improves the single model's performance by a large margin and achieves a BLEU score of 25.44.", 'We also compare our model with other translation models including the one trained with reinforcement learning (Ranzato et al., 2015) and that using beam search in training (Wiseman and Rush, 2016).', 'Our distillation translator outperforms these models.']
[None, ['Ensemble', 'Baseline', 'BLEU'], ['Distill (reference alpha=0.8)', 'BLEU'], None, None, ['Distill (exploration T=0.1)', 'BLEU'], ['Distill (both)', 'BLEU'], ['MIXER', 'BSO (local B=1)', 'BSO (global B=1)'], ['Distill (reference alpha=0.8)', 'Distill (exploration T=0.1)', 'Distill (both)']]
1
P18-1129table_4
The ranking performance of parsers’ output distributions evaluated in MAP on “problematic” states.
1
[['Baseline'], ['Ensemble'], ['Distill (both)']]
1
[['optimal-yet-ambiguous'], ['non-optimal']]
[['68.59', '89.59'], ['74.19', '90.90'], ['81.15', '91.38']]
column
['MAP', 'MAP']
['Ensemble', 'Distill (both)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>optimal-yet-ambiguous</th> <th>non-optimal</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>68.59</td> <td>89.59</td> </tr> <tr> <td>Ensemble</td> <td>74.19</td> <td>90.90</td> </tr> <tr> <td>Distill (both)</td> <td>81.15</td> <td>91.38</td> </tr> </tbody></table>
Table 4
table_4
P18-1129
8
acl2018
The comparison in Table 4 shows that the ensemble model significantly outperforms the baseline on ambiguous and non-optimal states. This observation indicates the ensemble output distribution is more informative thus generalizes well on problematic states and achieves better performance. We also observe that the distillation model performs better than both the baseline and ensemble. We attribute this to the fact that the distillation model is learned from exploration.
[1, 2, 1, 2]
['The comparison in Table 4 shows that the ensemble model significantly outperforms the baseline on ambiguous and non-optimal states.', 'This observation indicates the ensemble output distribution is more informative thus generalizes well on problematic states and achieves better performance.', 'We also observe that the distillation model performs better than both the baseline and ensemble.', 'We attribute this to the fact that the distillation model is learned from exploration.']
[['Ensemble', 'Baseline', 'optimal-yet-ambiguous', 'non-optimal'], ['Ensemble'], ['Distill (both)', 'Ensemble', 'Baseline', 'optimal-yet-ambiguous', 'non-optimal'], ['Distill (both)']]
1
P18-1130table_1
UAS and LAS of four versions of our model on test sets for three languages, together with top-performing parsing systems. “T” and “G” indicate transition- and graph-based models, respectively. For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation. For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.
3
[['System', 'Chen and Manning (2014)', 'T'], ['System', 'Ballesteros et al. (2015)', 'T'], ['System', 'Dyer et al. (2015)', 'T'], ['System', 'Bohnet and Nivre (2012)', 'T'], ['System', 'Ballesteros et al. (2016)', 'T'], ['System', 'Kiperwasser and Goldberg (2016)', 'T'], ['System', 'Weiss et al. (2015)', 'T'], ['System', 'Andor et al. (2016)', 'T'], ['System', 'Kiperwasser and Goldberg (2016)', 'G'], ['System', 'Wang and Chang (2016)', 'G'], ['System', 'Cheng et al. (2016)', 'G'], ['System', 'Kuncoro et al. (2016)', 'G'], ['System', 'Ma and Hovy (2017)', 'G'], ['System', 'BIAF: Dozat and Manning (2017)', 'G'], ['System', 'BIAF: re-impl', 'G'], ['System', 'STACKPTR: Org', 'T'], ['System', 'STACKPTR: +gpar', 'T'], ['System', 'STACKPTR: +sib', 'T'], ['System', 'STACKPTR: Full', 'T']]
2
[['English', 'UAS'], ['English', 'LAS'], ['Chinese', 'UAS'], ['Chinese', 'LAS'], ['German', 'UAS'], ['German', 'LAS']]
[['91.8', '89.6', '83.9', '82.4', '-', '-'], ['91.63', '89.44', '85.30', '83.72', '88.83', '86.10'], ['93.1', '90.9', '87.2', '85.7', '-', '-'], ['93.33', '21.22', '87.3', '85.9', '91.4', '89.4'], ['93.56', '91.42', '87.65', '86.21', '-', '-'], ['93.9', '91.9', '87.6', '86.1', '-', '-'], ['94.26', '92.41', '-', '-', '-', '-'], ['94.61', '92.79', '-', '-', '90.91', '89.15'], ['93.1', '91.0', '86.6', '85.1', '-', '-'], ['94.08', '91.82', '87.55', '86.23', '-', '-'], ['94.10', '91.49', '88.1', '85.7', '-', '-'], ['94.26', '92.06', '88.87', '87.30', '91.60', '89.24'], ['94.88', '92.98', '89.05', '87.74', '92.58', '90.54'], ['95.74', '94.08', '89.30', '88.23', '93.46', '91.44'], ['95.84', '94.21', '90.43', '89.14', '93.85', '92.32'], ['95.77', '94.12', '90.48', '89.19', '93.59', '92.06'], ['95.78', '94.12', '90.49', '89.19', '93.65', '92.12'], ['95.85', '94.18', '90.43', '89.15', '93.76', '92.21'], ['95.87', '94.19', '90.59', '89.29', '93.65', '92.11']]
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['STACKPTR: Full']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>English || UAS</th> <th>English || LAS</th> <th>Chinese || UAS</th> <th>Chinese || LAS</th> <th>German || UAS</th> <th>German || LAS</th> </tr> </thead> <tbody> <tr> <td>System || Chen and Manning (2014) || T</td> <td>91.8</td> <td>89.6</td> <td>83.9</td> <td>82.4</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Ballesteros et al. (2015) || T</td> <td>91.63</td> <td>89.44</td> <td>85.30</td> <td>83.72</td> <td>88.83</td> <td>86.10</td> </tr> <tr> <td>System || Dyer et al. (2015) || T</td> <td>93.1</td> <td>90.9</td> <td>87.2</td> <td>85.7</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Bohnet and Nivre (2012) || T</td> <td>93.33</td> <td>21.22</td> <td>87.3</td> <td>85.9</td> <td>91.4</td> <td>89.4</td> </tr> <tr> <td>System || Ballesteros et al. (2016) || T</td> <td>93.56</td> <td>91.42</td> <td>87.65</td> <td>86.21</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Kiperwasser and Goldberg (2016) || T</td> <td>93.9</td> <td>91.9</td> <td>87.6</td> <td>86.1</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Weiss et al. (2015) || T</td> <td>94.26</td> <td>92.41</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Andor et al. (2016) || T</td> <td>94.61</td> <td>92.79</td> <td>-</td> <td>-</td> <td>90.91</td> <td>89.15</td> </tr> <tr> <td>System || Kiperwasser and Goldberg (2016) || G</td> <td>93.1</td> <td>91.0</td> <td>86.6</td> <td>85.1</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Wang and Chang (2016) || G</td> <td>94.08</td> <td>91.82</td> <td>87.55</td> <td>86.23</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Cheng et al. (2016) || G</td> <td>94.10</td> <td>91.49</td> <td>88.1</td> <td>85.7</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Kuncoro et al. (2016) || G</td> <td>94.26</td> <td>92.06</td> <td>88.87</td> <td>87.30</td> <td>91.60</td> <td>89.24</td> </tr> <tr> <td>System || Ma and Hovy (2017) || G</td> <td>94.88</td> <td>92.98</td> <td>89.05</td> <td>87.74</td> <td>92.58</td> <td>90.54</td> </tr> <tr> <td>System || BIAF: Dozat and Manning (2017) || G</td> <td>95.74</td> <td>94.08</td> <td>89.30</td> <td>88.23</td> <td>93.46</td> <td>91.44</td> </tr> <tr> <td>System || BIAF: re-impl || G</td> <td>95.84</td> <td>94.21</td> <td>90.43</td> <td>89.14</td> <td>93.85</td> <td>92.32</td> </tr> <tr> <td>System || STACKPTR: Org || T</td> <td>95.77</td> <td>94.12</td> <td>90.48</td> <td>89.19</td> <td>93.59</td> <td>92.06</td> </tr> <tr> <td>System || STACKPTR: +gpar || T</td> <td>95.78</td> <td>94.12</td> <td>90.49</td> <td>89.19</td> <td>93.65</td> <td>92.12</td> </tr> <tr> <td>System || STACKPTR: +sib || T</td> <td>95.85</td> <td>94.18</td> <td>90.43</td> <td>89.15</td> <td>93.76</td> <td>92.21</td> </tr> <tr> <td>System || STACKPTR: Full || T</td> <td>95.87</td> <td>94.19</td> <td>90.59</td> <td>89.29</td> <td>93.65</td> <td>92.11</td> </tr> </tbody></table>
Table 1
table_1
P18-1130
7
acl2018
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison. Note that the results of STACKPTR and our re-implementation of BIAF are the average of 5 repetitions instead of a single run. Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers. Our re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017), demonstrating the effectiveness of the character-level information. Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English. On German, the performance is competitive with BIAF, and significantly better than other models.
[1, 2, 1, 1, 1, 1]
['Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.', 'Note that the results of STACKPTR and our re-implementation of BIAF are the average of 5 repetitions instead of a single run.', 'Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.', 'Our re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017), demonstrating the effectiveness of the character-level information.', 'Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.', 'On German, the performance is competitive with BIAF, and significantly better than other models.']
[['STACKPTR: Org', 'STACKPTR: +gpar', 'STACKPTR: +sib', 'STACKPTR: Full', 'UAS', 'LAS'], ['BIAF: re-impl', 'STACKPTR: Org', 'STACKPTR: +gpar', 'STACKPTR: +sib', 'STACKPTR: Full'], ['STACKPTR: Full', 'English', 'Chinese', 'German'], ['BIAF: re-impl', 'BIAF: Dozat and Manning (2017)'], ['UAS', 'LAS', 'Chinese', 'English', 'STACKPTR: Full'], ['STACKPTR: Org', 'STACKPTR: +gpar', 'STACKPTR: +sib', 'STACKPTR: Full', 'UAS', 'LAS', 'German', 'BIAF: re-impl']]
1
P18-1130table_2
Parsing performance on the test data of PTB with different versions of POS tags.
2
[['POS', 'Gold'], ['POS', 'Pred'], ['POS', 'None']]
1
[['UAS'], ['LAS'], ['UCM'], ['LCM']]
[['96.12±0.03', '95.06±0.05', '62.22±0.33', '55.74±0.44'], ['95.87±0.04', '94.19±0.04', '61.43±0.49', '49.68±0.47'], ['95.90±0.05', '94.21±0.04', '61.58±0.39', '49.87±0.46']]
column
['UAS', 'LAS', 'UCM', 'LCM']
['Gold']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> <th>UCM</th> <th>LCM</th> </tr> </thead> <tbody> <tr> <td>POS || Gold</td> <td>96.12±0.03</td> <td>95.06±0.05</td> <td>62.22±0.33</td> <td>55.74±0.44</td> </tr> <tr> <td>POS || Pred</td> <td>95.87±0.04</td> <td>94.19±0.04</td> <td>61.43±0.49</td> <td>49.68±0.47</td> </tr> <tr> <td>POS || None</td> <td>95.90±0.05</td> <td>94.21±0.04</td> <td>61.58±0.39</td> <td>49.87±0.46</td> </tr> </tbody></table>
Table 2
table_2
P18-1130
7
acl2018
Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB. The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information. The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags. It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger has relatively high accuracy (accuracy > 97% in this experiment on PTB).
[1, 1, 1, 1]
['Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.', 'The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.', 'The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.', "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger has relatively high accuracy (accuracy > 97% in this experiment on PTB)."]
[['POS', 'UAS', 'LAS', 'UCM', 'LCM'], ['Gold', 'Pred', 'None'], ['Pred', 'None'], ['Pred', 'None']]
1
P18-1130table_4
UAS and LAS on both the development and test datasets of 12 treebanks from UD Treebanks, together with BIAF for comparison.
1
[['bg'], ['ca'], ['cs'], ['de'], ['en'], ['es'], ['fr'], ['it'], ['nl'], ['no'], ['ro'], ['ru']]
3
[['Dev', 'BIAF', 'UAS'], ['Dev', 'BIAF', 'LAS'], ['Dev', 'STACKPTR', 'UAS'], ['Dev', 'STACKPTR', 'LAS'], ['Test', 'BIAF', 'UAS'], ['Test', 'BIAF', 'LAS'], ['Test', 'STACKPTR', 'UAS'], ['Test', 'STACKPTR', 'LAS']]
[['93.92±0.13', '89.05±0.11', '94.09±0.16', '89.17±0.14', '94.30±0.16', '90.04±0.16', '94.31±0.06', '89.96±0.07'], ['94.21±0.05', '91.97±0.06', '94.47±0.02', '92.51±0.05', '94.36±0.06', '92.05±0.07', '94.47±0.02', '92.39±0.02'], ['94.14±0.03', '90.89±0.04', '94.33±0.04', '91.24±0.05', '94.06±0.04', '90.60±0.05', '94.21±0.06', '90.94±0.07'], ['91.89±0.11', '88.39±0.17', '92.26±0.11', '88.79±0.15', '90.26±0.19', '86.11±0.25', '90.26±0.07', '86.16±0.01'], ['92.51±0.08', '90.50±0.07', '92.47±0.03', '90.46±0.02', '91.91±0.17', '89.82±0.16', '91.93±0.07', '89.83±0.06'], ['93.46±0.05', '91.13±0.07', '93.54±0.06', '91.34±0.05', '93.72±0.07', '91.33±0.08', '93.77±0.07', '91.52±0.07'], ['95.05±0.04', '92.76±0.07', '94.97±0.04', '92.57±0.06', '92.62±0.15', '89.51±0.14', '92.90±0.20', '89.88±0.23'], ['94.89±0.12', '92.58±0.12', '94.93±0.09', '92.90±0.10', '94.75±0.12', '92.72±0.12', '94.70±0.07', '92.55±0.09'], ['93.39±0.08', '90.90±0.07', '93.94±0.11', '91.67±0.08', '93.44±0.09', '91.04±0.06', '93.98±0.05', '91.73±0.07'], ['95.44±0.05', '93.73±0.05', '95.52±0.08', '93.80±0.08', '95.28±0.05', '93.58±0.05', '95.33±0.03', '93.62±0.03'], ['91.97±0.13', '85.38±0.03', '92.06±0.08', '85.58±0.12', '91.94±0.07', '85.61±0.13', '91.80±0.11', '85.34±0.21'], ['93.81±0.05', '91.85±0.06', '94.11±0.07', '92.29±0.10', '94.40±0.03', '92.68±0.04', '94.69±0.04', '93.07±0.03']]
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['STACKPTR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || BIAF || UAS</th> <th>Dev || BIAF || LAS</th> <th>Dev || STACKPTR || UAS</th> <th>Dev || STACKPTR || LAS</th> <th>Test || BIAF || UAS</th> <th>Test || BIAF || LAS</th> <th>Test || STACKPTR || UAS</th> <th>Test || STACKPTR || LAS</th> </tr> </thead> <tbody> <tr> <td>bg</td> <td>93.92±0.13</td> <td>89.05±0.11</td> <td>94.09±0.16</td> <td>89.17±0.14</td> <td>94.30±0.16</td> <td>90.04±0.16</td> <td>94.31±0.06</td> <td>89.96±0.07</td> </tr> <tr> <td>ca</td> <td>94.21±0.05</td> <td>91.97±0.06</td> <td>94.47±0.02</td> <td>92.51±0.05</td> <td>94.36±0.06</td> <td>92.05±0.07</td> <td>94.47±0.02</td> <td>92.39±0.02</td> </tr> <tr> <td>cs</td> <td>94.14±0.03</td> <td>90.89±0.04</td> <td>94.33±0.04</td> <td>91.24±0.05</td> <td>94.06±0.04</td> <td>90.60±0.05</td> <td>94.21±0.06</td> <td>90.94±0.07</td> </tr> <tr> <td>de</td> <td>91.89±0.11</td> <td>88.39±0.17</td> <td>92.26±0.11</td> <td>88.79±0.15</td> <td>90.26±0.19</td> <td>86.11±0.25</td> <td>90.26±0.07</td> <td>86.16±0.01</td> </tr> <tr> <td>en</td> <td>92.51±0.08</td> <td>90.50±0.07</td> <td>92.47±0.03</td> <td>90.46±0.02</td> <td>91.91±0.17</td> <td>89.82±0.16</td> <td>91.93±0.07</td> <td>89.83±0.06</td> </tr> <tr> <td>es</td> <td>93.46±0.05</td> <td>91.13±0.07</td> <td>93.54±0.06</td> <td>91.34±0.05</td> <td>93.72±0.07</td> <td>91.33±0.08</td> <td>93.77±0.07</td> <td>91.52±0.07</td> </tr> <tr> <td>fr</td> <td>95.05±0.04</td> <td>92.76±0.07</td> <td>94.97±0.04</td> <td>92.57±0.06</td> <td>92.62±0.15</td> <td>89.51±0.14</td> <td>92.90±0.20</td> <td>89.88±0.23</td> </tr> <tr> <td>it</td> <td>94.89±0.12</td> <td>92.58±0.12</td> <td>94.93±0.09</td> <td>92.90±0.10</td> <td>94.75±0.12</td> <td>92.72±0.12</td> <td>94.70±0.07</td> <td>92.55±0.09</td> </tr> <tr> <td>nl</td> <td>93.39±0.08</td> <td>90.90±0.07</td> <td>93.94±0.11</td> <td>91.67±0.08</td> <td>93.44±0.09</td> <td>91.04±0.06</td> <td>93.98±0.05</td> <td>91.73±0.07</td> </tr> <tr> <td>no</td> <td>95.44±0.05</td> <td>93.73±0.05</td> <td>95.52±0.08</td> <td>93.80±0.08</td> <td>95.28±0.05</td> <td>93.58±0.05</td> <td>95.33±0.03</td> <td>93.62±0.03</td> </tr> <tr> <td>ro</td> <td>91.97±0.13</td> <td>85.38±0.03</td> <td>92.06±0.08</td> <td>85.58±0.12</td> <td>91.94±0.07</td> <td>85.61±0.13</td> <td>91.80±0.11</td> <td>85.34±0.21</td> </tr> <tr> <td>ru</td> <td>93.81±0.05</td> <td>91.85±0.06</td> <td>94.11±0.07</td> <td>92.29±0.10</td> <td>94.40±0.03</td> <td>92.68±0.04</td> <td>94.69±0.04</td> <td>93.07±0.03</td> </tr> </tbody></table>
Table 4
table_4
P18-1130
9
acl2018
Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language. First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages: all with UAS higher than 90%. On nine languages (Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish), STACKPTR outperforms BIAF for both UAS and LAS. On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF. On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.
[1, 1, 1, 1, 1]
['Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.', 'First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages: all with UAS higher than 90%.', 'On nine languages (Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish), STACKPTR outperforms BIAF for both UAS and LAS.', 'On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.', 'On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.']
[['BIAF', 'STACKPTR'], ['BIAF', 'STACKPTR', 'UAS'], ['ca', 'cs', 'nl', 'de', 'en', 'fr', 'no', 'ru', 'es', 'STACKPTR', 'UAS', 'LAS'], ['bg', 'STACKPTR', 'UAS', 'BIAF', 'LAS'], ['it', 'ro', 'BIAF']]
1
P18-1132table_2
Number agreement error rates for various LSTM language models, broken down by the number of attractors. The top two rows represent the random and majority class baselines, while the next row (†) is the reported result from Linzen et al. (2016) for an LSTM language model with 50 hidden units (some entries, denoted by ≈, are approximately derived from a chart, since Linzen et al. (2016) did not provide a full table of results). We report results of our LSTM implementations of various hidden layer sizes, along with our re-run of the Jozefowicz et al. (2016) language model, in the next five rows. We lastly report the performance of a state of the art character LSTM baseline with a large model capacity (Melis et al., 2018).
1
[['Random'], ['Majority'], ['LSTM H=50'], ['Our LSTM H=50'], ['Our LSTM H=150'], ['Our LSTM H=250'], ['Our LSTM H=350'], ['1B Word LSTM (repl)'], ['Char LSTM']]
1
[['n=0'], ['n=1'], ['n=2'], ['n=3'], ['n=4']]
[['50.0', '50.0', '50.0', '50.0', '50.0'], ['32.0', '32.0', '32.0', '32.0', '32.0'], ['6.8', '32.6', '≈50', '≈65', '≈70'], ['2.4', '8.0', '15.7', '26.1', '34.65'], ['1.5', '4.5', '9.0', '14.3', '17.6'], ['1.4', '3.3', '5.9', '9.7', '13.9'], ['1.3', '3.0', '5.7', '9.7', '13.8'], ['2.8', '8.0', '14.0', '21.8', '20.0'], ['1.2', '5.5', '11.8', '20.4', '27.8']]
column
['error', 'error', 'error', 'error', 'error']
['Our LSTM H=350']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>n=0</th> <th>n=1</th> <th>n=2</th> <th>n=3</th> <th>n=4</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>50.0</td> <td>50.0</td> <td>50.0</td> <td>50.0</td> <td>50.0</td> </tr> <tr> <td>Majority</td> <td>32.0</td> <td>32.0</td> <td>32.0</td> <td>32.0</td> <td>32.0</td> </tr> <tr> <td>LSTM H=50</td> <td>6.8</td> <td>32.6</td> <td>≈50</td> <td>≈65</td> <td>≈70</td> </tr> <tr> <td>Our LSTM H=50</td> <td>2.4</td> <td>8.0</td> <td>15.7</td> <td>26.1</td> <td>34.65</td> </tr> <tr> <td>Our LSTM H=150</td> <td>1.5</td> <td>4.5</td> <td>9.0</td> <td>14.3</td> <td>17.6</td> </tr> <tr> <td>Our LSTM H=250</td> <td>1.4</td> <td>3.3</td> <td>5.9</td> <td>9.7</td> <td>13.9</td> </tr> <tr> <td>Our LSTM H=350</td> <td>1.3</td> <td>3.0</td> <td>5.7</td> <td>9.7</td> <td>13.8</td> </tr> <tr> <td>1B Word LSTM (repl)</td> <td>2.8</td> <td>8.0</td> <td>14.0</td> <td>21.8</td> <td>20.0</td> </tr> <tr> <td>Char LSTM</td> <td>1.2</td> <td>5.5</td> <td>11.8</td> <td>20.4</td> <td>27.8</td> </tr> </tbody></table>
Table 2
table_2
P18-1132
3
acl2018
Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement. For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps. As demonstrated on the last row of Table 2, we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.
[1, 1, 1]
['Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.', 'For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.', 'As demonstrated on the last row of Table 2, we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.']
[['Our LSTM H=350'], ['LSTM H=50', 'Our LSTM H=50', 'Our LSTM H=150', 'Our LSTM H=250', 'Our LSTM H=350'], ['Char LSTM', '1B Word LSTM (repl)']]
1
P18-1136table_5
Evaluation on In-Car Assistant. Human, rule-based and KV Retrieval Net evaluation (with *) are reported from (Eric et al., 2017), which are not directly comparable. Mem2Seq achieves highest BLEU and entity F1 score over baselines.
1
[['Human*'], ['Rule-Based*'], ['KV Retrieval Net*'], ['Seq2Seq'], ['+Attn'], ['Ptr-Unk'], ['Mem2Seq H1'], ['Mem2Seq H3'], ['Mem2Seq H6']]
1
[['BLEU'], ['Ent. F1'], ['Sch. F1'], ['Wea. F1'], ['Nav. F1']]
[['13.5', '60.7', '64.3', '61.6', '55.2'], ['6.6', '43.8', '61.3', '39.5', '40.4'], ['13.2', '48.0', '62.9', '47.0', '41.3'], ['8.4', '10.3', '09.7', '14.1', '07.0'], ['9.3', '19.9', '23.4', '25.6', '10.8'], ['8.3', '22.7', '26.9', '26.7', '14.9'], ['11.6', '32.4', '39.8', '33.6', '24.6'], ['12.6', '33.4', '49.3', '32.8', '20.0'], ['9.9', '23.6', '34.3', '33.0', '4.4']]
column
['BLEU', 'Ent. F1', 'Sch. F1', 'Wea. F1', 'Nav. F1']
['Mem2Seq H3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>Ent. F1</th> <th>Sch. F1</th> <th>Wea. F1</th> <th>Nav. F1</th> </tr> </thead> <tbody> <tr> <td>Human*</td> <td>13.5</td> <td>60.7</td> <td>64.3</td> <td>61.6</td> <td>55.2</td> </tr> <tr> <td>Rule-Based*</td> <td>6.6</td> <td>43.8</td> <td>61.3</td> <td>39.5</td> <td>40.4</td> </tr> <tr> <td>KV Retrieval Net*</td> <td>13.2</td> <td>48.0</td> <td>62.9</td> <td>47.0</td> <td>41.3</td> </tr> <tr> <td>Seq2Seq</td> <td>8.4</td> <td>10.3</td> <td>09.7</td> <td>14.1</td> <td>07.0</td> </tr> <tr> <td>+Attn</td> <td>9.3</td> <td>19.9</td> <td>23.4</td> <td>25.6</td> <td>10.8</td> </tr> <tr> <td>Ptr-Unk</td> <td>8.3</td> <td>22.7</td> <td>26.9</td> <td>26.7</td> <td>14.9</td> </tr> <tr> <td>Mem2Seq H1</td> <td>11.6</td> <td>32.4</td> <td>39.8</td> <td>33.6</td> <td>24.6</td> </tr> <tr> <td>Mem2Seq H3</td> <td>12.6</td> <td>33.4</td> <td>49.3</td> <td>32.8</td> <td>20.0</td> </tr> <tr> <td>Mem2Seq H6</td> <td>9.9</td> <td>23.6</td> <td>34.3</td> <td>33.0</td> <td>4.4</td> </tr> </tbody></table>
Table 5
table_5
P18-1136
5
acl2018
In Table 5, our model can achieve highest 12.6 BLEU score. In addition, Mem2Seq has shown promising results in terms of Entity F1 scores (33.4%), which are, in general, much higher than those of other baselines. Note that the numbers reported from Eric et al. (2017) are not directly comparable to ours as we mention below. The other baselines such as Seq2Seq or PtrUnk especially have worse performances in this dataset since it is very inefficient for RNN methods to encode longer KB information, which is the advantage of Mem2Seq. Furthermore, we observe an interesting phenomenon that humans can easily achieve a high entity F1 score with a low BLEU score. This implies that stronger reasoning ability over entities (hops) is crucial, but the results may not be similar to the golden answer. We believe humans can produce good answers even with a low BLEU score, since there could be different ways to express the same concepts. Note that the results of KV Retrieval Net baseline reported in Table 5 come from the original paper (Eric et al., 2017) of In-Car Assistant, where they simplified the task by mapping the expression of entities to a canonical form using named entity recognition (NER) and linking.
[1, 1, 2, 1, 1, 2, 2, 2]
['In Table 5, our model can achieve highest 12.6 BLEU score.', 'In addition, Mem2Seq has shown promising results in terms of Entity F1 scores (33.4%), which are, in general, much higher than those of other baselines.', 'Note that the numbers reported from Eric et al. (2017) are not directly comparable to ours as we mention below.', 'The other baselines such as Seq2Seq or PtrUnk especially have worse performances in this dataset since it is very inefficient for RNN methods to encode longer KB information, which is the advantage of Mem2Seq.', 'Furthermore, we observe an interesting phenomenon that humans can easily achieve a high entity F1 score with a low BLEU score.', 'This implies that stronger reasoning ability over entities (hops) is crucial, but the results may not be similar to the golden answer.', 'We believe humans can produce good answers even with a low BLEU score, since there could be different ways to express the same concepts.', 'Note that the results of KV Retrieval Net baseline reported in Table 5 come from the original paper (Eric et al., 2017) of In-Car Assistant, where they simplified the task by mapping the expression of entities to a canonical form using named entity recognition (NER) and linking.']
[['Mem2Seq H3', 'BLEU'], ['Mem2Seq H3', 'Ent. F1'], ['Human*', 'Rule-Based*', 'KV Retrieval Net*'], ['Seq2Seq', 'Ptr-Unk', 'Mem2Seq H1', 'Mem2Seq H3', 'Mem2Seq H6'], ['Human*', 'Ent. F1', 'Sch. F1', 'Wea. F1', 'Nav. F1'], ['Human*', 'Ent. F1', 'Sch. F1', 'Wea. F1', 'Nav. F1'], ['Human*', 'BLEU'], ['KV Retrieval Net*']]
1
P18-1138table_3
Evaluation results on factoid question answering dialogues.
2
[['model', 'LSTM'], ['model', 'HRED'], ['model', 'GenDS'], ['model', 'NKD-ori'], ['model', 'NKD-gated'], ['model', 'NKD-atte']]
1
[['accuracy (%)'], ['recall (%)']]
[['7.8', '7.5'], ['3.7', '3.9'], ['70.3', '63.1'], ['67.0', '56.2'], ['77.6', '77.3'], ['55.1', '46.6']]
column
['accuracy (%)', 'recall (%)']
['NKD-ori', 'NKD-gated', 'NKD-atte']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>accuracy (%)</th> <th>recall (%)</th> </tr> </thead> <tbody> <tr> <td>model || LSTM</td> <td>7.8</td> <td>7.5</td> </tr> <tr> <td>model || HRED</td> <td>3.7</td> <td>3.9</td> </tr> <tr> <td>model || GenDS</td> <td>70.3</td> <td>63.1</td> </tr> <tr> <td>model || NKD-ori</td> <td>67.0</td> <td>56.2</td> </tr> <tr> <td>model || NKD-gated</td> <td>77.6</td> <td>77.3</td> </tr> <tr> <td>model || NKD-atte</td> <td>55.1</td> <td>46.6</td> </tr> </tbody></table>
Table 3
table_3
P18-1138
6
acl2018
Table 3 displays the accuracy and recall of entities on factoid question answering dialogues. The performance of NKD is slightly better than the specific QA solution GenDS, while LSTM and HRED which are designed for chit-chat almost fail in this task. All the variants of NKD models are capable of generating entities with an accuracy of 60% to 70%, and NKD-gated achieves the best performance with an accuracy of 77.6% and a recall of 77.3%.
[1, 1, 1]
['Table 3 displays the accuracy and recall of entities on factoid question answering dialogues.', 'The performance of NKD is slightly better than the specific QA solution GenDS, while LSTM and HRED which are designed for chit-chat almost fail in this task.', 'All the variants of NKD models are capable of generating entities with an accuracy of 60% to 70%, and NKD-gated achieves the best performance with an accuracy of 77.6% and a recall of 77.3%.']
[['accuracy (%)', 'recall (%)'], ['NKD-gated', 'GenDS', 'LSTM', 'HRED'], ['NKD-ori', 'NKD-gated', 'NKD-atte', 'accuracy (%)', 'recall (%)']]
1
P18-1138table_4
Evaluation results on entire dataset.
2
[['model', 'LSTM'], ['model', 'HRED'], ['model', 'GenDS'], ['model', 'NKD-ori'], ['model', 'NKD-gated'], ['model', 'NKD-atte']]
1
[['accuracy (%)'], ['recall (%)'], ['entity number']]
[['2.6', '2.5', '1.65'], ['1.4', '1.5', '1.79'], ['20.9', '17.4', '1.34'], ['22.9', '19.7', '2.55'], ['24.8', '25.6', '1.59'], ['18.4', '16.0', '3.41']]
column
['accuracy (%)', 'recall (%)', 'entity number']
['NKD-gated']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>accuracy (%)</th> <th>recall (%)</th> <th>entity number</th> </tr> </thead> <tbody> <tr> <td>model || LSTM</td> <td>2.6</td> <td>2.5</td> <td>1.65</td> </tr> <tr> <td>model || HRED</td> <td>1.4</td> <td>1.5</td> <td>1.79</td> </tr> <tr> <td>model || GenDS</td> <td>20.9</td> <td>17.4</td> <td>1.34</td> </tr> <tr> <td>model || NKD-ori</td> <td>22.9</td> <td>19.7</td> <td>2.55</td> </tr> <tr> <td>model || NKD-gated</td> <td>24.8</td> <td>25.6</td> <td>1.59</td> </tr> <tr> <td>model || NKD-atte</td> <td>18.4</td> <td>16.0</td> <td>3.41</td> </tr> </tbody></table>
Table 4
table_4
P18-1138
6
acl2018
Table 4 lists the accuracy and recall of entities on the entire dataset including both the factoid QA and knowledge grounded chit-chats. Not surprisingly, both NKD-ori and NKD-gated outperform GenDS on the entire dataset, and the relative improvement over GenDS is even higher than the improvement in QA dialogues. It confirms that although NKD and GenDS are comparable in answering factoid questions, NKD is better at introducing the knowledge entities for knowledge grounded chit-chats. All the NKD variants in Table 4 generate more entities than GenDS. LSTM and HRED also produce a certain amount of entities, but are of low accuracies and recalls. We also noticed that NKD-gated achieves the highest accuracy and recall, but generates fewer entities compared with NKD-ori and NKD-atte, whereas NKD-atte generates more entities but also with relatively low accuracies and recalls. This demonstrates that NKD-gated not only learns to generate more entities but also maintains the quality (with a relatively high accuracy and recall).
[1, 1, 2, 1, 1, 1, 2]
['Table 4 lists the accuracy and recall of entities on the entire dataset including both the factoid QA and knowledge grounded chit-chats.', 'Not surprisingly, both NKD-ori and NKD-gated outperform GenDS on the entire dataset, and the relative improvement over GenDS is even higher than the improvement in QA dialogues.', 'It confirms that although NKD and GenDS are comparable in answering factoid questions, NKD is better at introducing the knowledge entities for knowledge grounded chit-chats.', 'All the NKD variants in Table 4 generate more entities than GenDS.', 'LSTM and HRED also produce a certain amount of entities, but are of low accuracies and recalls.', 'We also noticed that NKD-gated achieves the highest accuracy and recall, but generates fewer entities compared with NKD-ori and NKD-atte, whereas NKD-atte generates more entities but also with relatively low accuracies and recalls.', 'This demonstrates that NKD-gated not only learns to generate more entities but also maintains the quality (with a relatively high accuracy and recall).']
[['accuracy (%)', 'recall (%)', 'entity number'], ['NKD-ori', 'NKD-gated', 'GenDS'], ['NKD-ori', 'NKD-gated', 'GenDS'], ['NKD-ori', 'NKD-gated', 'NKD-atte', 'entity number'], ['LSTM', 'HRED', 'entity number', 'accuracy (%)', 'recall (%)'], ['NKD-gated', 'NKD-ori', 'NKD-atte', 'accuracy (%)', 'recall (%)', 'entity number'], ['NKD-gated', 'accuracy (%)', 'recall (%)', 'entity number']]
1
P18-1138table_5
Human evaluation result.
2
[['model', 'LSTM'], ['model', 'HRED'], ['model', 'GenDS'], ['model', 'NKD-ori'], ['model', 'NKD-gated'], ['model', 'NKD-atte']]
1
[['Fluency'], ['Appropriateness of knowledge'], ['Entire Correctness']]
[['2.52', '0.88', '0.8'], ['2.48', '0.36', '0.32'], ['2.76', '1.36', '1.34'], ['2.42', '1.92', '1.58'], ['2.08', '1.72', '1.44'], ['2.7', '1.54', '1.38']]
column
['Fluency', 'Appropriateness of knowledge', 'Entire Correctness']
['NKD-ori', 'NKD-gated', 'NKD-atte']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Appropriateness of knowledge</th> <th>Entire Correctness</th> </tr> </thead> <tbody> <tr> <td>model || LSTM</td> <td>2.52</td> <td>0.88</td> <td>0.8</td> </tr> <tr> <td>model || HRED</td> <td>2.48</td> <td>0.36</td> <td>0.32</td> </tr> <tr> <td>model || GenDS</td> <td>2.76</td> <td>1.36</td> <td>1.34</td> </tr> <tr> <td>model || NKD-ori</td> <td>2.42</td> <td>1.92</td> <td>1.58</td> </tr> <tr> <td>model || NKD-gated</td> <td>2.08</td> <td>1.72</td> <td>1.44</td> </tr> <tr> <td>model || NKD-atte</td> <td>2.7</td> <td>1.54</td> <td>1.38</td> </tr> </tbody></table>
Table 5
table_5
P18-1138
7
acl2018
The results of human evaluation in Table 5 also validate the superiority of the proposed model, especially on appropriateness. Responses generated by LSTM and HRED are of high fluency, but are simply repetitions, or even dull responses as “I don’t know.”, “Good.”. NKD-gated is more adept at incorporating the knowledge base with respect to appropriateness and correctness, while NKD-atte generates more fluent responses. NKD-ori is a compromise, and obtains the best correctness in completing an entire dialogue. Four evaluators rated the scores independently. The pairwise Cohen’s Kappa agreement scores are 0.67 on fluency, 0.54 on appropriateness, and 0.60 on entire correctness, which indicate a strong annotator agreement.
[1, 1, 1, 1, 2, 2]
['The results of human evaluation in Table 5 also validate the superiority of the proposed model, especially on appropriateness.', 'Responses generated by LSTM and HRED are of high fluency, but are simply repetitions, or even dull responses as “I don’t know.”, “Good.”.', 'NKD-gated is more adept at incorporating the knowledge base with respect to appropriateness and correctness, while NKD-atte generates more fluent responses.', 'NKD-ori is a compromise, and obtains the best correctness in completing an entire dialogue.', 'Four evaluators rated the scores independently.', 'The pairwise Cohen’s Kappa agreement scores are 0.67 on fluency, 0.54 on appropriateness, and 0.60 on entire correctness, which indicate a strong annotator agreement.']
[None, ['LSTM', 'HRED', 'Fluency'], ['NKD-gated', 'Appropriateness of knowledge', 'Entire Correctness', 'NKD-atte', 'Fluency'], ['NKD-ori', 'Appropriateness of knowledge', 'Entire Correctness'], None, ['Fluency', 'Appropriateness of knowledge', 'Entire Correctness']]
1
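As an aside on the inter-annotator agreement cited in the record above, the sketch below shows one way pairwise Cohen's kappa could be computed and averaged over annotator pairs. It is illustrative only: the rating lists are hypothetical, scikit-learn is assumed to be available, and the original authors' exact aggregation procedure is not reproduced here.

```python
# Illustrative sketch only: pairwise Cohen's kappa over hypothetical 0-3
# ratings from four annotators; assumes scikit-learn is installed.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical fluency ratings (one list per annotator, same response order).
ratings = {
    "A": [3, 2, 3, 1, 2, 3, 0, 2],
    "B": [3, 2, 2, 1, 2, 3, 1, 2],
    "C": [2, 2, 3, 1, 1, 3, 0, 2],
    "D": [3, 3, 3, 1, 2, 2, 0, 2],
}

# One common reading of "pairwise" agreement: average kappa over annotator pairs.
pair_scores = [
    cohen_kappa_score(ratings[a], ratings[b])
    for a, b in combinations(ratings, 2)
]
print(f"mean pairwise kappa: {sum(pair_scores) / len(pair_scores):.2f}")
```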
P18-1141table_3
Roundtrip translation (mean/median accuracy) and sentiment analysis (F1) results for wordbased (WORD) and character-based (CHAR) multilingual embeddings. N (coverage): # queries contained in the embedding space. The best result across WORD and CHAR is set in bold.
1
[['RTSIMPLE'], ['BOW'], ['S-ID'], ['SAMPLE'], ['CLIQUE'], ['N(t)'], ['N(t)-CLIQUE'], ['N(t)-CC'], ['N(t)-EDGE']]
4
[['roundtrip translation', 'WORD', 'S1', 'µ'], ['roundtrip translation', 'WORD', 'S1', 'Md'], ['roundtrip translation', 'WORD', 'R1', 'µ'], ['roundtrip translation', 'WORD', 'R1', 'Md'], ['roundtrip translation', 'WORD', 'S4', 'µ'], ['roundtrip translation', 'WORD', 'S4', 'Md'], ['roundtrip translation', 'WORD', 'S16', 'µ'], ['roundtrip translation', 'WORD', 'S16', 'Md'], ['roundtrip translation', 'WORD', '-', 'N'], ['roundtrip translation', 'CHAR', 'S1', 'µ'], ['roundtrip translation', 'CHAR', 'S1', 'Md'], ['roundtrip translation', 'CHAR', 'R1', 'µ'], ['roundtrip translation', 'CHAR', 'R1', 'Md'], ['roundtrip translation', 'CHAR', 'S4', 'µ'], ['roundtrip translation', 'CHAR', 'S4', 'Md'], ['roundtrip translation', 'CHAR', 'S16', 'µ'], ['roundtrip translation', 'CHAR', 'S16', 'Md'], ['roundtrip translation', 'CHAR', '-', 'N'], ['sentiment analysis', 'WORD', '-', 'pos'], ['sentiment analysis', 'WORD', '-', 'neg'], ['sentiment analysis', 'CHAR', '-', 'pos'], ['sentiment analysis', 'CHAR', '-', 'neg']]
[['33', '24', '37', '36', '', '', '', '', '67', '24', '13', '32', '21', '', '', '', '', '70', '', '', '', ''], ['7', '5', '8', '7', '13', '12', '26', '28', '69', '3', '2', '3', '2', '5', '4', '10', '11', '70', '33', '81', '13', '83'], ['46', '46', '52', '55', '63', '76', '79', '91', '65', '9', '5', '9', '5', '14', '9', '25', '22', '70', '79', '88', '65', '86'], ['33', '23', '43', '42', '54', '59', '82', '96', '65', '53', '59', '59', '72', '67', '85', '79', '99', '58', '82', '89', '77', '89'], ['43', '36', '59', '63', '67', '77', '93', '99', '69', '42', '46', '48', '55', '60', '76', '73', '98', '53', '84', '89', '69', '88'], ['54', '59', '61', '69', '80', '87', '94', '100', '69', '50', '53', '54', '59', '73', '82', '90', '99', '66', '82', '89', '87', '90'], ['11', '0', '11', '0', '16', '0', '22', '0', '18', '39', '45', '41', '47', '58', '74', '76', '94', '56', '22', '84', '61', '84'], ['3', '0', '3', '0', '5', '0', '7', '0', '5', '11', '0', '11', '0', '16', '0', '25', '0', '21', '4', '84', '40', '83'], ['35', '30', '43', '36', '56', '55', '7', '94', '69', '39', '29', '49', '52', '64', '78', '88', '100', '63', '84', '90', '84', '89']]
column
['µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'N', 'µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'N', 'pos', 'neg', 'pos', 'neg']
['N(t)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>roundtrip translation || WORD || S1 || µ</th> <th>roundtrip translation || WORD || S1 || Md</th> <th>roundtrip translation || WORD || R1 || µ</th> <th>roundtrip translation || WORD || R1 || Md</th> <th>roundtrip translation || WORD || S4 || µ</th> <th>roundtrip translation || WORD || S4 || Md</th> <th>roundtrip translation || WORD || S16 || µ</th> <th>roundtrip translation || WORD || S16 || Md</th> <th>roundtrip translation || WORD || - || N</th> <th>roundtrip translation || CHAR || S1 || µ</th> <th>roundtrip translation || CHAR || S1 || Md</th> <th>roundtrip translation || CHAR || R1 || µ</th> <th>roundtrip translation || CHAR || R1 || Md</th> <th>roundtrip translation || CHAR || S4 || µ</th> <th>roundtrip translation || CHAR || S4 || Md</th> <th>roundtrip translation || CHAR || S16 || µ</th> <th>roundtrip translation || CHAR || S16 || Md</th> <th>roundtrip translation || CHAR || - || N</th> <th>sentiment analysis || WORD || - || pos</th> <th>sentiment analysis || WORD || - || neg</th> <th>sentiment analysis || CHAR || - || pos</th> <th>sentiment analysis || CHAR || - || neg</th> </tr> </thead> <tbody> <tr> <td>RTSIMPLE</td> <td>33</td> <td>24</td> <td>37</td> <td>36</td> <td></td> <td></td> <td></td> <td></td> <td>67</td> <td>24</td> <td>13</td> <td>32</td> <td>21</td> <td></td> <td></td> <td></td> <td></td> <td>70</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>BOW</td> <td>7</td> <td>5</td> <td>8</td> <td>7</td> <td>13</td> <td>12</td> <td>26</td> <td>28</td> <td>69</td> <td>3</td> <td>2</td> <td>3</td> <td>2</td> <td>5</td> <td>4</td> <td>10</td> <td>11</td> <td>70</td> <td>33</td> <td>81</td> <td>13</td> <td>83</td> </tr> <tr> <td>S-ID</td> <td>46</td> <td>46</td> <td>52</td> <td>55</td> <td>63</td> <td>76</td> <td>79</td> <td>91</td> <td>65</td> <td>9</td> <td>5</td> <td>9</td> <td>5</td> <td>14</td> <td>9</td> <td>25</td> <td>22</td> <td>70</td> <td>79</td> <td>88</td> <td>65</td> <td>86</td> </tr> <tr> <td>SAMPLE</td> <td>33</td> <td>23</td> <td>43</td> <td>42</td> <td>54</td> <td>59</td> <td>82</td> <td>96</td> <td>65</td> <td>53</td> <td>59</td> <td>59</td> <td>72</td> <td>67</td> <td>85</td> <td>79</td> <td>99</td> <td>58</td> <td>82</td> <td>89</td> <td>77</td> <td>89</td> </tr> <tr> <td>CLIQUE</td> <td>43</td> <td>36</td> <td>59</td> <td>63</td> <td>67</td> <td>77</td> <td>93</td> <td>99</td> <td>69</td> <td>42</td> <td>46</td> <td>48</td> <td>55</td> <td>60</td> <td>76</td> <td>73</td> <td>98</td> <td>53</td> <td>84</td> <td>89</td> <td>69</td> <td>88</td> </tr> <tr> <td>N(t)</td> <td>54</td> <td>59</td> <td>61</td> <td>69</td> <td>80</td> <td>87</td> <td>94</td> <td>100</td> <td>69</td> <td>50</td> <td>53</td> <td>54</td> <td>59</td> <td>73</td> <td>82</td> <td>90</td> <td>99</td> <td>66</td> <td>82</td> <td>89</td> <td>87</td> <td>90</td> </tr> <tr> <td>N(t)-CLIQUE</td> <td>11</td> <td>0</td> <td>11</td> <td>0</td> <td>16</td> <td>0</td> <td>22</td> <td>0</td> <td>18</td> <td>39</td> <td>45</td> <td>41</td> <td>47</td> <td>58</td> <td>74</td> <td>76</td> <td>94</td> <td>56</td> <td>22</td> <td>84</td> <td>61</td> <td>84</td> </tr> <tr> <td>N(t)-CC</td> <td>3</td> <td>0</td> <td>3</td> <td>0</td> <td>5</td> <td>0</td> <td>7</td> <td>0</td> <td>5</td> <td>11</td> <td>0</td> <td>11</td> <td>0</td> <td>16</td> <td>0</td> <td>25</td> <td>0</td> <td>21</td> <td>4</td> <td>84</td> <td>40</td> <td>83</td> </tr> <tr> <td>N(t)-EDGE</td> <td>35</td> <td>30</td> <td>43</td> 
<td>36</td> <td>56</td> <td>55</td> <td>7</td> <td>94</td> <td>69</td> <td>39</td> <td>29</td> <td>49</td> <td>52</td> <td>64</td> <td>78</td> <td>88</td> <td>100</td> <td>63</td> <td>84</td> <td>90</td> <td>84</td> <td>89</td> </tr> </tbody></table>
Table 3
table_3
P18-1141
7
acl2018
Table 3 presents evaluation results for roundtrip translation and sentiment analysis. Validity of roundtrip (RT) evaluation results. RTSIMPLE (line 1) is not competitive; e.g., its accuracy is lower by almost half compared to N(t). We also see that RT is an excellent differentiator of poor multilingual embeddings (e.g., BOW) vs. higher-quality ones like S-ID and N(t). This indicates that RT translation can serve as an effective evaluation measure. The concept-based multilingual embedding learning algorithms CLIQUE and N(t) (lines 5-6) consistently (except S1 WORD) outperform BOW and S-ID (lines 2-3) that are not based on concepts. BOW performs poorly in our low-resource setting; this is not surprising since BOW methods rely on large datasets and are therefore expected to fail in the face of severe sparseness. S-ID performs reasonably well for WORD, but even in that case it is outperformed by N(t), in some cases by a large margin, e.g., µ of 63 for S-ID vs. 80 for N(t) for S4. For CHAR, S-ID results are poor. On sentiment classification, N(t) also consistently outperforms S-ID. While S-ID provides a clearer signal to the embedding learner than BOW, it is still relatively crude to represent a word as - essentially - its binary vector of verse occurrence. Concept-based methods perform better because they can exploit the more informative dictionary graph. Comparison of graph-theoretic definitions of concepts: N(t)-CLIQUE, N(t)-CC. N(t) (line 6) has the most consistent good performance across tasks and evaluation measures. Postfiltering target neighborhoods down to cliques (line 7) and CCs (line 8) does not work. The reason is that the resulting number of concepts is too small; see, e.g., low coverages of N = 18 (N(t)-CLIQUE) and N = 5 (N(t)-CC) for WORD and N = 21 (N(t)-CC) for CHAR. N(t)-CLIQUE results are highly increased for CHAR, but still poorer by a large margin than the best methods. We can interpret this result as an instance of a precision-recall tradeoff: presumably the quality of the concepts found by N(t)-CLIQUE and N(t)-CC is better (higher precision), but there are too few of them (low recall) to get good evaluation numbers. Comparison of graph-theoretic definitions of concepts: CLIQUE. CLIQUE has strong performance for a subset of measures, e.g., ranks consistently second for RT (except S1 WORD) and sentiment analysis in WORD. Although CLIQUE is perhaps the most intuitive way of inducing a concept from a dictionary graph, it may suffer in relatively high-noise settings like ours. Comparison of graph-theoretic definitions of concepts: N(t) vs. N(t)-EDGE. Recall that N(t)-EDGE postfilters target neighborhoods by only considering pairs of pivot words that are linked by a dictionary edge. This “quality” filter does seem to work in some cases, e.g., best performance S16 Md for CHAR. But results for WORD are much poorer. SAMPLE performs best for CHAR: best results in five out of eight cases. However, its coverage is low: N = 58. This is also the reason that it does not perform well on sentiment analysis for CHAR (F1 = 77 for pos). Target neighborhoods N(t). The overall best method is N(t). It is the best method more often than any other method and in the other cases, it ranks second.
[1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 3 presents evaluation results for roundtrip translation and sentiment analysis.', 'Validity of roundtrip (RT) evaluation results.', 'RTSIMPLE (line 1) is not competitive; e.g., its accuracy is lower by almost half compared to N(t).', 'We also see that RT is an excellent differentiator of poor multilingual embeddings (e.g., BOW) vs. higher-quality ones like S-ID and N(t).', 'This indicates that RT translation can serve as an effective evaluation measure.', 'The concept-based multilingual embedding learning algorithms CLIQUE and N(t) (lines 5-6) consistently (except S1 WORD) outperform BOW and S-ID (lines 2-3) that are not based on concepts.', 'BOW performs poorly in our low-resource setting; this is not surprising since BOW methods rely on large datasets and are therefore expected to fail in the face of severe sparseness.', 'S-ID performs reasonably well for WORD, but even in that case it is outperformed by N(t), in some cases by a large margin, e.g., µ of 63 for S-ID vs. 80 for N(t) for S4.', 'For CHAR, S-ID results are poor.', 'On sentiment classification, N(t) also consistently outperforms S-ID.', 'While S-ID provides a clearer signal to the embedding learner than BOW, it is still relatively crude to represent a word as - essentially - its binary vector of verse occurrence.', 'Concept-based methods perform better because they can exploit the more informative dictionary graph.', 'Comparison of graph-theoretic definitions of concepts: N(t)-CLIQUE, N(t)-CC.', 'N(t) (line 6) has the most consistent good performance across tasks and evaluation measures.', 'Postfiltering target neighborhoods down to cliques (line 7) and CCs (line 8) does not work.', 'The reason is that the resulting number of concepts is too small; see, e.g., low coverages of N = 18 (N(t)-CLIQUE) and N = 5 (N(t)-CC) for WORD and N = 21 (N(t)-CC) for CHAR. N(t)-CLIQUE results are highly increased for CHAR, but still poorer by a large margin than the best methods.', 'We can interpret this result as an instance of a precision-recall tradeoff: presumably the quality of the concepts found by N(t)-CLIQUE and N(t)-CC is better (higher precision), but there are too few of them (low recall) to get good evaluation numbers.', 'Comparison of graph-theoretic definitions of concepts: CLIQUE.', 'CLIQUE has strong performance for a subset of measures, e.g., ranks consistently second for RT (except S1 WORD) and sentiment analysis in WORD.', 'Although CLIQUE is perhaps the most intuitive way of inducing a concept from a dictionary graph, it may suffer in relatively high-noise settings like ours.', 'Comparison of graph-theoretic definitions of concepts: N(t) vs. N(t)-EDGE.', 'Recall that N(t)-EDGE postfilters target neighborhoods by only considering pairs of pivot words that are linked by a dictionary edge.', 'This “quality” filter does seem to work in some cases, e.g., best performance S16 Md for CHAR.', 'But results for WORD are much poorer.', 'SAMPLE performs best for CHAR: best results in five out of eight cases.', 'However, its coverage is low: N = 58.', 'This is also the reason that it does not perform well on sentiment analysis for CHAR (F1 = 77 for pos).', 'Target neighborhoods N(t).', 'The overall best method is N(t).', 'It is the best method more often than any other method and in the other cases, it ranks second.']
[['roundtrip translation', 'sentiment analysis'], ['roundtrip translation'], ['RTSIMPLE', 'N(t)'], ['RTSIMPLE', 'S-ID', 'N(t)', 'BOW'], None, ['CLIQUE', 'N(t)'], ['BOW'], ['S-ID', 'WORD', 'N(t)', 'S4', 'µ'], ['CHAR', 'S-ID'], ['sentiment analysis', 'N(t)', 'S-ID'], ['S-ID', 'BOW'], ['CLIQUE'], ['N(t)-CLIQUE', 'N(t)-CC'], ['N(t)', 'N(t)-CLIQUE', 'N(t)-CC'], ['N(t)-CLIQUE', 'N(t)-CC'], ['N(t)-CLIQUE', 'N(t)-CC', 'N', 'WORD', 'CHAR', 'roundtrip translation'], ['N(t)-CLIQUE', 'N(t)-CC', 'roundtrip translation'], ['CLIQUE'], ['CLIQUE', 'roundtrip translation', 'sentiment analysis', 'WORD'], ['CLIQUE'], ['N(t)', 'N(t)-EDGE'], ['N(t)-EDGE'], ['N(t)-EDGE', 'CHAR', 'S16', 'Md'], ['N(t)-EDGE', 'WORD'], ['SAMPLE', 'roundtrip translation', 'CHAR'], ['SAMPLE', 'roundtrip translation', 'CHAR', 'N'], ['SAMPLE', 'sentiment analysis', 'pos'], ['N(t)'], ['N(t)'], ['N(t)']]
1
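The clique, connected-component, and target-neighborhood concept definitions compared in the record above can be made concrete on a toy dictionary graph. The sketch below is illustrative only: the edge list is hypothetical, networkx is assumed to be available, and this is not the authors' pipeline.

```python
# Illustrative sketch only: clique-, connected-component-, and neighborhood-
# style "concepts" on a tiny hypothetical dictionary graph (nodes are words,
# edges are translation pairs); assumes networkx is installed.
import networkx as nx

edges = [
    ("en:house", "de:haus"), ("de:haus", "fr:maison"), ("en:house", "fr:maison"),
    ("en:dog", "de:hund"),
]
g = nx.Graph(edges)

cliques = [set(c) for c in nx.find_cliques(g)]              # maximal cliques
components = [set(c) for c in nx.connected_components(g)]   # connected components
neighborhood = set(g.neighbors("de:haus"))                   # words adjacent to one target word

print("cliques:", cliques)
print("connected components:", components)
print("neighborhood of de:haus:", neighborhood)
```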
P18-1144table_4
Development results.
4
[['Input', 'Auto seg', 'Models', 'Word baseline'], ['Input', 'Auto seg', 'Models', 'Word+char LSTM'], ['Input', 'Auto seg', 'Models', "Word+char LSTM'"], ['Input', 'Auto seg', 'Models', 'Word+char+bichar LSTM'], ['Input', 'Auto seg', 'Models', 'Word+char CNN'], ['Input', 'Auto seg', 'Models', 'Word+char+bichar CNN'], ['Input', 'No seg', 'Models', 'Char baseline'], ['Input', 'No seg', 'Models', 'Char+softword'], ['Input', 'No seg', 'Models', 'Char+bichar'], ['Input', 'No seg', 'Models', 'Char+bichar+softword'], ['Input', 'No seg', 'Models', 'Lattice']]
1
[['P'], ['R'], ['F1']]
[['73.20', '57.05', '64.12'], ['71.98', '65.41', '68.54'], ['71.08', '65.83', '68.35'], ['72.63', '67.60', '70.03'], ['73.06', '66.29', '69.51'], ['72.01', '65.50', '68.60'], ['67.12', '58.42', '62.47'], ['69.30', '62.47', '65.71'], ['71.67', '64.02', '67.63'], ['72.64', '66.89', '69.64'], ['74.64', '68.83', '71.62']]
column
['P', 'R', 'F1']
['Lattice']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Input || Auto seg || Models || Word baseline</td> <td>73.20</td> <td>57.05</td> <td>64.12</td> </tr> <tr> <td>Input || Auto seg || Models || Word+char LSTM</td> <td>71.98</td> <td>65.41</td> <td>68.54</td> </tr> <tr> <td>Input || Auto seg || Models || Word+char LSTM'</td> <td>71.08</td> <td>65.83</td> <td>68.35</td> </tr> <tr> <td>Input || Auto seg || Models || Word+char+bichar LSTM</td> <td>72.63</td> <td>67.60</td> <td>70.03</td> </tr> <tr> <td>Input || Auto seg || Models || Word+char CNN</td> <td>73.06</td> <td>66.29</td> <td>69.51</td> </tr> <tr> <td>Input || Auto seg || Models || Word+char+bichar CNN</td> <td>72.01</td> <td>65.50</td> <td>68.60</td> </tr> <tr> <td>Input || No seg || Models || Char baseline</td> <td>67.12</td> <td>58.42</td> <td>62.47</td> </tr> <tr> <td>Input || No seg || Models || Char+softword</td> <td>69.30</td> <td>62.47</td> <td>65.71</td> </tr> <tr> <td>Input || No seg || Models || Char+bichar</td> <td>71.67</td> <td>64.02</td> <td>67.63</td> </tr> <tr> <td>Input || No seg || Models || Char+bichar+softword</td> <td>72.64</td> <td>66.89</td> <td>69.64</td> </tr> <tr> <td>Input || No seg || Models || Lattice</td> <td>74.64</td> <td>68.83</td> <td>71.62</td> </tr> </tbody></table>
Table 4
table_4
P18-1144
7
acl2018
As shown in Table 4, without using word segmentation, a character-based LSTM-CRF model gives a development F1-score of 62.47%. Adding character-bigram and softword representations as described in Section 3.1 increases the F1-score to 67.63% and 65.71%, respectively, demonstrating the usefulness of both sources of information. In addition, a combination of both gives a 69.64% F1-score, which is the best among various character representations. We thus choose this model in the remaining experiments. Word-based NER. Table 4 shows a variety of different settings for word-based Chinese NER. With automatic segmentation, a word-based LSTM CRF baseline gives a 64.12% F1-score, which is higher compared to the character-based baseline. This demonstrates that both word information and character information are useful for Chinese NER. The two methods of using character LSTM to enrich word representations in Section 3.2, namely word+char LSTM and word+char LSTM', lead to similar improvements. A CNN representation of character sequences gives a slightly higher F1-score compared to LSTM character representations. On the other hand, further using character bigram information leads to increased F1-score over word+char LSTM, but decreased F1-score over word+char CNN. A possible reason is that CNN inherently captures character n-gram information. As a result, we use word+char+bichar LSTM for word-based NER in the remaining experiments, which gives the best development results, and is structurally consistent with the state-of-the-art English NER models in the literature. As shown in Table 4, the lattice LSTM-CRF model gives a development F1-score of 71.62%, which is significantly higher compared with both the word-based and character-based methods, despite that it does not use character bigrams or word segmentation information. The fact that it significantly outperforms char+softword shows the advantage of lattice word information as compared with segmentor word information.
[1, 1, 1, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1]
['As shown in Table 4, without using word segmentation, a character-based LSTM-CRF model gives a development F1-score of 62.47%.', 'Adding character-bigram and softword representations as described in Section 3.1 increases the F1-score to 67.63% and 65.71%, respectively, demonstrating the usefulness of both sources of information.', 'In addition, a combination of both gives a 69.64% F1-score, which is the best among various character representations.', 'We thus choose this model in the remaining experiments.', 'Word-based NER.', 'Table 4 shows a variety of different settings for word-based Chinese NER.', 'With automatic segmentation, a word-based LSTM CRF baseline gives a 64.12% F1-score, which is higher compared to the character-based baseline.', 'This demonstrates that both word information and character information are useful for Chinese NER.', "The two methods of using character LSTM to enrich word representations in Section 3.2, namely word+char LSTM and word+char LSTM', lead to similar improvements.", 'A CNN representation of character sequences gives a slightly higher F1-score compared to LSTM character representations.', 'On the other hand, further using character bigram information leads to increased F1-score over word+char LSTM, but decreased F1-score over word+char CNN.', 'A possible reason is that CNN inherently captures character n-gram information.', 'As a result, we use word+char+bichar LSTM for word-based NER in the remaining experiments, which gives the best development results, and is structurally consistent with the state-of-the-art English NER models in the literature.', 'As shown in Table 4, the lattice LSTM-CRF model gives a development F1-score of 71.62%, which is significantly higher compared with both the word-based and character-based methods, despite that it does not use character bigrams or word segmentation information.', 'The fact that it significantly outperforms char+softword shows the advantage of lattice word information as compared with segmentor word information.']
[['F1', 'Char baseline'], ['F1', 'Char+softword', 'Char+bichar'], ['F1', 'Char+bichar+softword'], ['Char+bichar+softword'], None, ['Word+char LSTM', "Word+char LSTM'", 'Word+char+bichar LSTM', 'Word+char CNN', 'Word+char+bichar CNN'], ['Auto seg', 'F1', 'Word baseline', 'Char baseline'], ['Word baseline', 'Char baseline'], ['Word+char LSTM'], ['Word+char LSTM', 'Word+char CNN'], ['F1', 'Word+char+bichar CNN', 'Word+char LSTM', 'Word+char CNN'], ['Word+char CNN', 'Word+char+bichar CNN'], ['Word+char+bichar LSTM'], ['Lattice', 'F1'], ['Lattice', 'F1', 'Char+softword']]
1
P18-1145table_3
Performances of character-based methods on KBP2017Eval Trigger Identification task.
2
[['Model', 'FBRNN(Char)'], ['Model', 'NPN(IOB)'], ['Model', 'NPN(Task-specific)']]
1
[['P'], ['R'], ['F1']]
[['57.97', '36.92', '45.11'], ['60.96', '47.39', '53.32'], ['64.32', '53.16', '58.21']]
column
['P', 'R', 'F1']
['NPN(Task-specific)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || FBRNN(Char)</td> <td>57.97</td> <td>36.92</td> <td>45.11</td> </tr> <tr> <td>Model || NPN(IOB)</td> <td>60.96</td> <td>47.39</td> <td>53.32</td> </tr> <tr> <td>Model || NPN(Task-specific)</td> <td>64.32</td> <td>53.16</td> <td>58.21</td> </tr> </tbody></table>
Table 3
table_3
P18-1145
7
acl2018
Table 3 shows the results on KBP2017Eval. We can see that NPN(Task-specific) outperforms other methods significantly. We believe this is because: 1) FBRNN(Char) only regards tokens in the candidate table as potential trigger nuggets, which limits the choice of possible trigger nuggets and results in a very low recall rate. 2) To accurately identify a trigger, NPN(IOB) and conventional character-based methods require all characters in a trigger being classified correctly, which is very challenging (Zeng et al., 2016): many characters that appear in a trigger nugget will not serve as a part of a trigger nugget in the majority of contexts, thus they will be easily classified into “NIL”.
[1, 1, 2, 2]
['Table 3 shows the results on KBP2017Eval.', 'We can see that NPN(Task-specific) outperforms other methods significantly.', 'We believe this is because: 1) FBRNN(Char) only regards tokens in the candidate table as potential trigger nuggets, which limits the choice of possible trigger nuggets and results in a very low recall rate.', '2) To accurately identify a trigger, NPN(IOB) and conventional character-based methods require all characters in a trigger being classified correctly, which is very challenging (Zeng et al., 2016): many characters that appear in a trigger nugget will not serve as a part of a trigger nugget in the majority of contexts, thus they will be easily classified into “NIL”.']
[None, ['NPN(Task-specific)'], ['FBRNN(Char)'], ['NPN(IOB)']]
1
P18-1145table_6
Results of using different representation on Trigger Classification task on KBP2017Eval.
2
[['Model', 'DMCNN(Word)'], ['Model', 'NPN(Char)'], ['Model', 'NPN(Task-specific)']]
1
[['P'], ['R'], ['F1']]
[['54.81', '46.84', '50.51'], ['56.19', '43.88', '49.28'], ['57.63', '47.63', '52.15']]
column
['P', 'R', 'F1']
['NPN(Task-specific)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || DMCNN(Word)</td> <td>54.81</td> <td>46.84</td> <td>50.51</td> </tr> <tr> <td>Model || NPN(Char)</td> <td>56.19</td> <td>43.88</td> <td>49.28</td> </tr> <tr> <td>Model || NPN(Task-specific)</td> <td>57.63</td> <td>47.63</td> <td>52.15</td> </tr> </tbody></table>
Table 6
table_6
P18-1145
8
acl2018
Table 6 shows the experiment results. We can see that neither character-level nor word-level representation can achieve competitive results with the NPNs. This verified the necessity of hybrid representation.
[1, 1, 1]
['Table 6 shows the experiment results.', 'We can see that neither character-level nor word-level representation can achieve competitive results with the NPNs.', 'This verified the necessity of hybrid representation.']
[None, ['NPN(Task-specific)', 'DMCNN(Word)', 'NPN(Char)'], ['NPN(Task-specific)']]
1
P18-1150table_2
TEST results. “(200K)”, “(2M)” and “(20M)” represent training with the corresponding number of additional sentences from Gigaword.
2
[['Model', 'PBMT'], ['Model', 'SNRG'], ['Model', 'Tree2Str'], ['Model', 'MSeq2seq+Anon'], ['Model', 'Graph2seq+copy'], ['Model', 'Graph2seq+charLSTM+copy'], ['Model', 'MSeq2seq+Anon (200K)'], ['Model', 'MSeq2seq+Anon (2M)'], ['Model', 'Seq2seq+charLSTM+copy (200K)'], ['Model', 'Seq2seq+charLSTM+copy (2M)'], ['Model', 'Graph2seq+charLSTM+copy (200K)'], ['Model', 'Graph2seq+charLSTM+copy (2M)']]
1
[['BLEU']]
[['26.9'], ['25.6'], ['23.0'], ['22.0'], ['22.7'], ['23.3'], ['27.4'], ['32.3'], ['27.4'], ['31.7'], ['28.2'], ['33.0']]
column
['BLEU']
['Graph2seq+charLSTM+copy (200K)', 'Graph2seq+charLSTM+copy (2M)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || PBMT</td> <td>26.9</td> </tr> <tr> <td>Model || SNRG</td> <td>25.6</td> </tr> <tr> <td>Model || Tree2Str</td> <td>23.0</td> </tr> <tr> <td>Model || MSeq2seq+Anon</td> <td>22.0</td> </tr> <tr> <td>Model || Graph2seq+copy</td> <td>22.7</td> </tr> <tr> <td>Model || Graph2seq+charLSTM+copy</td> <td>23.3</td> </tr> <tr> <td>Model || MSeq2seq+Anon (200K)</td> <td>27.4</td> </tr> <tr> <td>Model || MSeq2seq+Anon (2M)</td> <td>32.3</td> </tr> <tr> <td>Model || Seq2seq+charLSTM+copy (200K)</td> <td>27.4</td> </tr> <tr> <td>Model || Seq2seq+charLSTM+copy (2M)</td> <td>31.7</td> </tr> <tr> <td>Model || Graph2seq+charLSTM+copy (200K)</td> <td>28.2</td> </tr> <tr> <td>Model || Graph2seq+charLSTM+copy (2M)</td> <td>33.0</td> </tr> </tbody></table>
Table 2
table_2
P18-1150
9
acl2018
Table 2 compares our final results with existing work. MSeq2seq+Anon (Konstas et al., 2017) is an attentional multi-layer sequence-to-sequence model trained with the anonymized data. PBMT (Pourdamghani et al., 2016) adopts a phrase-based model for machine translation (Koehn et al., 2003) on the input of linearized AMR graph, SNRG (Song et al., 2017) uses synchronous node replacement grammar for parsing the AMR graph while generating the text, and Tree2Str (Flanigan et al., 2016b) converts AMR graphs into trees by splitting the re-entrances before using a tree transducer to generate the results. Graph2seq+charLSTM+copy achieves a BLEU score of 23.3, which is 1.3 points better than MSeq2seq+Anon trained on the same AMR corpus. In addition, our model without character LSTM is still 0.7 BLEU points higher than MSeq2seq+Anon. Note that MSeq2seq+Anon relies on anonymization, which requires additional manual work for defining mapping rules, thus limiting its usability on other languages and domains. The neural models tend to underperform statistical models when trained on limited (16K) gold data, but performs better with scaled silver data (Konstas et al., 2017). Following Konstas et al. (2017), we also evaluate our model using both the AMR corpus and sampled sentences from Gigaword. Using additional 200K or 2M gigaword sentences, Graph2seq+charLSTM+copy achieves BLEU scores of 28.2 and 33.0, respectively, which are 0.8 and 0.7 BLEU points better than MSeq2seq+Anon using the same amount of data, respectively. The BLEU scores are 5.3 and 10.1 points better than the result when it is only trained with the AMR corpus, respectively. This shows that our model can benefit from scaled data with automatically generated AMR graphs, and it is more effective than MSeq2seq+Anon using the same amount of data.
[1, 2, 2, 1, 1, 2, 2, 1, 1, 1, 2]
['Table 2 compares our final results with existing work.', 'MSeq2seq+Anon (Konstas et al., 2017) is an attentional multi-layer sequence-to-sequence model trained with the anonymized data.', 'PBMT (Pourdamghani et al., 2016) adopts a phrase-based model for machine translation (Koehn et al., 2003) on the input of linearized AMR graph, SNRG (Song et al., 2017) uses synchronous node replacement grammar for parsing the AMR graph while generating the text, and Tree2Str (Flanigan et al., 2016b) converts AMR graphs into trees by splitting the re-entrances before using a tree transducer to generate the results.', 'Graph2seq+charLSTM+copy achieves a BLEU score of 23.3, which is 1.3 points better than MSeq2seq+Anon trained on the same AMR corpus.', 'In addition, our model without character LSTM is still 0.7 BLEU points higher than MSeq2seq+Anon.', 'Note that MSeq2seq+Anon relies on anonymization, which requires additional manual work for defining mapping rules, thus limiting its usability on other languages and domains.', 'The neural models tend to underperform statistical models when trained on limited (16K) gold data, but performs better with scaled silver data (Konstas et al., 2017).', 'Following Konstas et al. (2017), we also evaluate our model using both the AMR corpus and sampled sentences from Gigaword.', 'Using additional 200K or 2M gigaword sentences, Graph2seq+charLSTM+copy achieves BLEU scores of 28.2 and 33.0, respectively, which are 0.8 and 0.7 BLEU points better than MSeq2seq+Anon using the same amount of data, respectively.', 'The BLEU scores are 5.3 and 10.1 points better than the result when it is only trained with the AMR corpus, respectively.', 'This shows that our model can benefit from scaled data with automatically generated AMR graphs, and it is more effective than MSeq2seq+Anon using the same amount of data.']
[None, ['MSeq2seq+Anon'], ['PBMT', 'SNRG', 'Tree2Str'], ['BLEU', 'Graph2seq+charLSTM+copy', 'MSeq2seq+Anon'], ['BLEU', 'MSeq2seq+Anon', 'Graph2seq+copy'], ['MSeq2seq+Anon'], None, None, ['BLEU', 'Graph2seq+charLSTM+copy (200K)', 'Graph2seq+charLSTM+copy (2M)', 'MSeq2seq+Anon (200K)', 'MSeq2seq+Anon (2M)'], ['BLEU', 'Graph2seq+charLSTM+copy (200K)', 'Graph2seq+charLSTM+copy (2M)'], ['Graph2seq+charLSTM+copy (200K)', 'Graph2seq+charLSTM+copy (2M)', 'MSeq2seq+Anon']]
1
P18-1151table_4
Human evaluation results.
3
[['Model', 'Existing Models', 'BLSTM'], ['Model', 'Existing Models', 'SMT'], ['Model', 'Existing Models', 'TFF'], ['Model', 'Adapted Model', 'TLSTM'], ['Model', 'Our Proposed', 'GTR-LSTM']]
3
[['Dataset/Metric', 'Seen', 'Correctness'], ['Dataset/Metric', 'Seen', 'Grammar'], ['Dataset/Metric', 'Seen', 'Fluency'], ['Dataset/Metric', 'Unseen', 'Correctness'], ['Dataset/Metric', 'Unseen', 'Grammar'], ['Dataset/Metric', 'Unseen', 'Fluency'], ['Dataset/Metric', 'GKB', 'Correctness'], ['Dataset/Metric', 'GKB', 'Grammar'], ['Dataset/Metric', 'GKB', 'Fluency']]
[['2.25', '2.33', '2.29', '1.53', '1.71', '1.68', '1.54', '1.84', '1.84'], ['2.03', '2.11', '2.07', '1.36', '1.48', '1.44', '1.81', '1.99', '1.89'], ['1.77', '1.91', '1.88', '1.44', '1.69', '1.66', '1.71', '1.99', '1.96'], ['2.53', '2.61', '2.55', '1.75', '1.93', '1.86', '2.21', '2.38', '2.35'], ['2.64', '2.66', '2.57', '1.96', '2.04', '1.99', '2.29', '2.42', '2.41']]
column
['Correctness', 'Grammar', 'Fluency', 'Correctness', 'Grammar', 'Fluency', 'Correctness', 'Grammar', 'Fluency']
['GTR-LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset/Metric || Seen || Correctness</th> <th>Dataset/Metric || Seen || Grammar</th> <th>Dataset/Metric || Seen || Fluency</th> <th>Dataset/Metric || Unseen || Correctness</th> <th>Dataset/Metric || Unseen || Grammar</th> <th>Dataset/Metric || Unseen || Fluency</th> <th>Dataset/Metric || GKB || Correctness</th> <th>Dataset/Metric || GKB || Grammar</th> <th>Dataset/Metric || GKB || Fluency</th> </tr> </thead> <tbody> <tr> <td>Model || Existing Models || BLSTM</td> <td>2.25</td> <td>2.33</td> <td>2.29</td> <td>1.53</td> <td>1.71</td> <td>1.68</td> <td>1.54</td> <td>1.84</td> <td>1.84</td> </tr> <tr> <td>Model || Existing Models || SMT</td> <td>2.03</td> <td>2.11</td> <td>2.07</td> <td>1.36</td> <td>1.48</td> <td>1.44</td> <td>1.81</td> <td>1.99</td> <td>1.89</td> </tr> <tr> <td>Model || Existing Models || TFF</td> <td>1.77</td> <td>1.91</td> <td>1.88</td> <td>1.44</td> <td>1.69</td> <td>1.66</td> <td>1.71</td> <td>1.99</td> <td>1.96</td> </tr> <tr> <td>Model || Adapted Model || TLSTM</td> <td>2.53</td> <td>2.61</td> <td>2.55</td> <td>1.75</td> <td>1.93</td> <td>1.86</td> <td>2.21</td> <td>2.38</td> <td>2.35</td> </tr> <tr> <td>Model || Our Proposed || GTR-LSTM</td> <td>2.64</td> <td>2.66</td> <td>2.57</td> <td>1.96</td> <td>2.04</td> <td>1.99</td> <td>2.29</td> <td>2.42</td> <td>2.41</td> </tr> </tbody></table>
Table 4
table_4
P18-1151
9
acl2018
Table 4 shows the results of the human evaluations. The results confirm the automatic evaluation in which our proposed model achieves the best scores.
[1, 1]
['Table 4 shows the results of the human evaluations.', 'The results confirm the automatic evaluation in which our proposed model achieves the best scores.']
[None, ['GTR-LSTM']]
1
P18-1152table_4
Crowd-sourced ablation evaluation of generations on TripAdvisor. Each ablation uses only one discriminative communication model, and is compared to ADAPTIVELM.
2
[['Ablation vs. LM', 'REPETITION ONLY'], ['Ablation vs. LM', 'ENTAILMENT ONLY'], ['Ablation vs. LM', 'RELEVANCE ONLY'], ['Ablation vs. LM', 'LEXICAL STYLE ONLY'], ['Ablation vs. LM', 'ALL']]
1
[['Repetition'], ['Contradiction'], ['Relevance'], ['Clarity'], ['Better'], ['Neither'], ['Worse']]
[['+0.63', '+0.30', '+0.37', '+0.42', '50%', '23%', '27%'], ['+0.01', '+0.02', '+0.05', '-0.10', '39%', '20%', '41%'], ['-0.19', '+0.09', '+0.10', '+0.060', '36%', '22%', '42%'], ['+0.11', '+0.16', '+0.20', '+0.16', '38%', '25%', '38%'], ['+0.23', '-0.02', '+0.19', '-0.03', '47%', '19%', '34%']]
column
['Repetition', 'Contradiction', 'Relevance', 'Clarity', 'Better', 'Neither', 'Worse']
['Ablation vs. LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Repetition</th> <th>Contradiction</th> <th>Relevance</th> <th>Clarity</th> <th>Better</th> <th>Neither</th> <th>Worse</th> </tr> </thead> <tbody> <tr> <td>Ablation vs. LM || REPETITION ONLY</td> <td>+0.63</td> <td>+0.30</td> <td>+0.37</td> <td>+0.42</td> <td>50%</td> <td>23%</td> <td>27%</td> </tr> <tr> <td>Ablation vs. LM || ENTAILMENT ONLY</td> <td>+0.01</td> <td>+0.02</td> <td>+0.05</td> <td>-0.10</td> <td>39%</td> <td>20%</td> <td>41%</td> </tr> <tr> <td>Ablation vs. LM || RELEVANCE ONLY</td> <td>-0.19</td> <td>+0.09</td> <td>+0.10</td> <td>+0.060</td> <td>36%</td> <td>22%</td> <td>42%</td> </tr> <tr> <td>Ablation vs. LM || LEXICAL STYLE ONLY</td> <td>+0.11</td> <td>+0.16</td> <td>+0.20</td> <td>+0.16</td> <td>38%</td> <td>25%</td> <td>38%</td> </tr> <tr> <td>Ablation vs. LM || ALL</td> <td>+0.23</td> <td>-0.02</td> <td>+0.19</td> <td>-0.03</td> <td>47%</td> <td>19%</td> <td>34%</td> </tr> </tbody></table>
Table 4
table_4
P18-1152
8
acl2018
To investigate the effect of individual discriminators on the overall performance, we report the results of ablations of our model in Table 4. For each ablation we include only one of the communication modules, and train a single mixture coefficient for combining that module and the language model. The diagonal of Table 4 contains only positive numbers, indicating that each discriminator does help with the purpose it was designed for. Interestingly, most discriminators help with most aspects of writing, but all except repetition fail to actually improve the overall quality over ADAPTIVELM.
[1, 2, 2, 1]
['To investigate the effect of individual discriminators on the overall performance, we report the results of ablations of our model in Table 4.', 'For each ablation we include only one of the communication modules, and train a single mixture coefficient for combining that module and the language model.', 'The diagonal of Table 4 contains only positive numbers, indicating that each discriminator does help with the purpose it was designed for.', 'Interestingly, most discriminators help with most aspects of writing, but all except repetition fail to actually improve the overall quality over ADAPTIVELM.']
[None, None, None, ['Ablation vs. LM', 'Repetition', 'Contradiction', 'Relevance', 'Clarity']]
1
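The ablation setup described in the record above (one discriminative communication module combined with the language model through a single mixture coefficient) can be pictured as weighted reranking of candidate continuations. The sketch below is a minimal illustration under that assumption, not the authors' implementation; the scorers, weights, and candidates are hypothetical.

```python
# Minimal sketch, not the authors' code: rerank candidate continuations by a
# language-model log-probability plus weighted scores from discriminative
# modules (an ablation keeps exactly one module and one learned weight).
from typing import Callable, Dict, List, Tuple

def rerank(
    candidates: List[str],
    lm_logprob: Callable[[str], float],
    modules: Dict[str, Callable[[str], float]],
    weights: Dict[str, float],
) -> List[Tuple[float, str]]:
    scored = []
    for text in candidates:
        score = lm_logprob(text)
        for name, scorer in modules.items():
            score += weights[name] * scorer(text)
        scored.append((score, text))
    return sorted(scored, reverse=True)

# Hypothetical toy scorers: the "language model" just prefers shorter strings,
# and the repetition module penalizes repeated tokens.
candidates = ["the room was clean and clean", "the room was clean and quiet"]
best = rerank(
    candidates,
    lm_logprob=lambda t: -0.1 * len(t.split()),
    modules={"repetition": lambda t: -float(len(t.split()) - len(set(t.split())))},
    weights={"repetition": 2.0},
)[0][1]
print(best)  # -> "the room was clean and quiet"
```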
P18-1154table_3
Performance of baselines and our model with different subsets of features as per various quantitative measures. (S = Score, M = Move, T = Threat features) On all data subsets, our model outperforms the TEMP and NN baselines. Among proposed models, GAC performs better than GAC-sparse & RAW in general. For NN, GAC-sparse and GAC methods, we experiment with multiple feature combinations and report only the best as per BLEU scores.
4
[['Dataset', 'MoveDesc', 'Features', 'TEMP'], ['Dataset', 'MoveDesc', 'Features', 'NN (M+T+S)'], ['Dataset', 'MoveDesc', 'Features', 'RAW'], ['Dataset', 'MoveDesc', 'Features', 'GAC-sparse'], ['Dataset', 'MoveDesc', 'Features', 'GAC (M+T)'], ['Dataset', 'Quality', 'Features', 'TEMP'], ['Dataset', 'Quality', 'Features', 'NN (M+T)'], ['Dataset', 'Quality', 'Features', 'RAW'], ['Dataset', 'Quality', 'Features', 'GAC-sparse'], ['Dataset', 'Quality', 'Features', 'GAC(M+T+S)'], ['Dataset', 'Comparative', 'Features', 'NN (M)'], ['Dataset', 'Comparative', 'Features', 'RAW'], ['Dataset', 'Comparative', 'Features', 'GAC-sparse'], ['Dataset', 'Comparative', 'Features', 'GAC(M+T)']]
1
[['BLEU'], ['BLEU-2'], ['Diversity']]
[['0.72', '20.77', '4.43'], ['1.28', '21.07', '7.85'], ['1.13', '13.74', '2.37'], ['1.76', '21.49', '4.29'], ['1.85', '23.35', '4.72'], ['16.17', '47.29', '1.16'], ['5.98', '42.97', '4.52'], ['16.92', '47.72', '1.07'], ['14.98', '51.46', '2.63'], ['16.94', '47.65', '1.01'], ['1.28', '24.49', '6.97'], ['2.80', '23.26', '3.03'], ['3.58', '25.28', '2.18'], ['3.51', '29.48', '3.64']]
column
['BLEU', 'BLEU-2', 'Diversity']
['GAC (M+T)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>BLEU-2</th> <th>Diversity</th> </tr> </thead> <tbody> <tr> <td>Dataset || MoveDesc || Features || TEMP</td> <td>0.72</td> <td>20.77</td> <td>4.43</td> </tr> <tr> <td>Dataset || MoveDesc || Features || NN (M+T+S)</td> <td>1.28</td> <td>21.07</td> <td>7.85</td> </tr> <tr> <td>Dataset || MoveDesc || Features || RAW</td> <td>1.13</td> <td>13.74</td> <td>2.37</td> </tr> <tr> <td>Dataset || MoveDesc || Features || GAC-sparse</td> <td>1.76</td> <td>21.49</td> <td>4.29</td> </tr> <tr> <td>Dataset || MoveDesc || Features || GAC (M+T)</td> <td>1.85</td> <td>23.35</td> <td>4.72</td> </tr> <tr> <td>Dataset || Quality || Features || TEMP</td> <td>16.17</td> <td>47.29</td> <td>1.16</td> </tr> <tr> <td>Dataset || Quality || Features || NN (M+T)</td> <td>5.98</td> <td>42.97</td> <td>4.52</td> </tr> <tr> <td>Dataset || Quality || Features || RAW</td> <td>16.92</td> <td>47.72</td> <td>1.07</td> </tr> <tr> <td>Dataset || Quality || Features || GAC-sparse</td> <td>14.98</td> <td>51.46</td> <td>2.63</td> </tr> <tr> <td>Dataset || Quality || Features || GAC(M+T+S)</td> <td>16.94</td> <td>47.65</td> <td>1.01</td> </tr> <tr> <td>Dataset || Comparative || Features || NN (M)</td> <td>1.28</td> <td>24.49</td> <td>6.97</td> </tr> <tr> <td>Dataset || Comparative || Features || RAW</td> <td>2.80</td> <td>23.26</td> <td>3.03</td> </tr> <tr> <td>Dataset || Comparative || Features || GAC-sparse</td> <td>3.58</td> <td>25.28</td> <td>2.18</td> </tr> <tr> <td>Dataset || Comparative || Features || GAC(M+T)</td> <td>3.51</td> <td>29.48</td> <td>3.64</td> </tr> </tbody></table>
Table 3
table_3
P18-1154
6
acl2018
Table 3 shows the BLEU and BLEU-2 scores for the proposed model under different subsets of features. Overall BLEU scores are low, likely due to the inherent variance in the language generation task (Novikova et al., 2017) , although a precursory examination of the outputs for data points selected randomly from test set indicated that they were reasonable.
[1, 1]
['Table 3 shows the BLEU and BLEU-2 scores for the proposed model under different subsets of features.', 'Overall BLEU scores are low, likely due to the inherent variance in the language generation task (Novikova et al., 2017) , although a precursory examination of the outputs for data points selected randomly from test set indicated that they were reasonable.']
[['BLEU', 'BLEU-2'], ['BLEU', 'BLEU-2']]
1
P18-1156table_1
Comparison between various RC datasets
2
[['Metrics for Comparative Analysis', 'Avg. word distance'], ['Metrics for Comparative Analysis', 'Avg. sentence distance'], ['Metrics for Comparative Analysis', 'Number of sentences for inferencing'], ['Metrics for Comparative Analysis', '% of instances where both Query&Answer entities were found in passage'], ['Metrics for Comparative Analysis', '% of instances where Only Query entities were found in passage'], ['Metrics for Comparative Analysis', '% Length of the Longest Common sequence of non-stop words in Query (w.r.t Query Length) and Plot']]
1
[['Movie QA'], ['NarrativeQA over plot-summaries'], ['SelfRC'], ['ParaphraseRC']]
[['20.67', '24.94', '13.4', '45.3'], ['1.67', '1.95', '1.34', '2.7'], ['2.3', '1.95', '1.51', '2.47'], ['67.96', '59.4', '58.79', '12.25'], ['59.61', '61.77', '63.39', '47.05'], ['25', '26.26', '38', '21']]
row
['Avg. word distance', 'Avg. sentence distance', 'Number of sentences for inferencing', '% of instances where both Query&Answer entities were found in passage', '% of instances where Only Query entities were found in passage', '% Length of the Longest Common sequence of non-stop words in Query (w.r.t Query Length) and Plot']
['SelfRC', 'ParaphraseRC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Movie QA</th> <th>NarrativeQA over plot-summaries</th> <th>SelfRC</th> <th>ParaphraseRC</th> </tr> </thead> <tbody> <tr> <td>Metrics for Comparative Analysis || Avg. word distance</td> <td>20.67</td> <td>24.94</td> <td>13.4</td> <td>45.3</td> </tr> <tr> <td>Metrics for Comparative Analysis || Avg. sentence distance</td> <td>1.67</td> <td>1.95</td> <td>1.34</td> <td>2.7</td> </tr> <tr> <td>Metrics for Comparative Analysis || Number of sentences for inferencing</td> <td>2.3</td> <td>1.95</td> <td>1.51</td> <td>2.47</td> </tr> <tr> <td>Metrics for Comparative Analysis || % of instances where both Query&amp;Answerentities were found in passage</td> <td>67.96</td> <td>59.4</td> <td>58.79</td> <td>12.25</td> </tr> <tr> <td>Metrics for Comparative Analysis || % of instances where Only Query entities were found in passage</td> <td>59.61</td> <td>61.77</td> <td>63.39</td> <td>47.05</td> </tr> <tr> <td>Metrics for Comparative Analysis || % Length of the Longest Common sequence of non-stop words in Query (w.r.t Query Length) and Plot</td> <td>25</td> <td>26.26</td> <td>38</td> <td>21</td> </tr> </tbody></table>
Table 1
table_1
P18-1156
5
acl2018
In Table 1, we compare various RC datasets with two embodiments of our dataset i.e. the SelfRC and ParaphraseRC. We use NER and noun phrase/verb phrase extraction over the entire dataset to identify key entities in the question, plot and answer which is in turn used to compute the metrics mentioned in the table. The metrics Avg word distance and Avg sentence distance indicate the average distance (in terms of words/sentences) between the occurrence of the question entities and closest occurrence of the answer entities in the passage. Number of sentences for inferencing is indicative of the minimum number of sentences required to cover all the question and answer entities. It is evident that tackling ParaphraseRC is much harder than the others on account of (i) larger distance between the query and answer, (ii) low word-overlap between query & passage, and (iii) higher number of sentences required to infer an answer.
[1, 1, 2, 2, 1]
['In Table 1, we compare various RC datasets with two embodiments of our dataset i.e. the SelfRC and ParaphraseRC.', 'We use NER and noun phrase/verb phrase extraction over the entire dataset to identify key entities in the question, plot and answer which is in turn used to compute the metrics mentioned in the table.', 'The metrics Avg word distance and Avg sentence distance indicate the average distance (in terms of words/sentences) between the occurrence of the question entities and closest occurrence of the answer entities in the passage.', 'Number of sentences for inferencing is indicative of the minimum number of sentences required to cover all the question and answer entities.', 'It is evident that tackling ParaphraseRC is much harder than the others on account of (i) larger distance between the query and answer, (ii) low word-overlap between query & passage, and (iii) higher number of sentences required to infer an answer.']
[['SelfRC', 'ParaphraseRC'], ['Movie QA', 'NarrativeQA over plot-summaries'], ['Avg. word distance', 'Avg. sentence distance'], ['Number of sentences for inferencing'], ['ParaphraseRC']]
1
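Note on the metrics in the record above (P18-1156, Table 1): as an illustration only, the following minimal Python sketch shows one way the 'Avg. word distance' and 'Number of sentences for inferencing' metrics could be computed once question and answer entities have been extracted. The single-token entity matching and the greedy sentence-cover heuristic are simplifying assumptions made for this sketch, not the authors' exact procedure.

# Illustrative sketch (assumptions noted above), not the authors' implementation.

def word_positions(tokens, entity):
    # Token indices where a (single-token) entity occurs in the passage.
    return [i for i, tok in enumerate(tokens) if tok == entity]

def avg_word_distance(passage_tokens, question_entities, answer_entities):
    # For each question-entity occurrence, distance (in tokens) to the closest
    # occurrence of any answer entity; averaged over all such occurrences.
    answer_pos = [p for e in answer_entities for p in word_positions(passage_tokens, e)]
    distances = []
    for q in question_entities:
        for qp in word_positions(passage_tokens, q):
            if answer_pos:
                distances.append(min(abs(qp - ap) for ap in answer_pos))
    return sum(distances) / len(distances) if distances else None

def min_sentences_for_inference(sentences, entities):
    # Greedy set-cover estimate of how many sentences are needed to cover all entities.
    remaining = {e for e in entities if any(e in s for s in sentences)}
    count = 0
    while remaining:
        best = max(sentences, key=lambda s: len({e for e in remaining if e in s}))
        covered = {e for e in remaining if e in best}
        if not covered:
            break
        remaining -= covered
        count += 1
    return count

if __name__ == "__main__":
    passage = "John met Mary in Paris . They visited the Louvre together .".split()
    print(avg_word_distance(passage, ["John"], ["Paris"]))  # 4.0
    print(min_sentences_for_inference(
        ["John met Mary in Paris .", "They visited the Louvre together ."],
        ["John", "Louvre"]))  # 2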
P18-1156table_3
Performance of the SpanModel and GenModel on the Span Test subset and the Full Test Set of the Self and ParaphraseRC.
2
[['SelfRC', 'SpanModel'], ['SelfRC', 'GenModel (with augmented training data)'], ['ParaphraseRC', 'SpanModel'], ['ParaphraseRC', 'SpanModel with Preprocessed Data'], ['ParaphraseRC', 'GenModel (with augmented training data)']]
2
[['Span Test', 'Acc.'], ['Span Test', 'F1'], ['Span Test', 'BLEU'], ['Full Test', 'Acc.'], ['Full Test', 'F1'], ['Full Test', 'BLEU']]
[['46.14', '57.49', '22.98', '37.53', '50.56', '7.47'], ['16.45', '26.97', '7.61', '15.31', '24.05', '5.50'], ['17.93', '26.27', '9.39', '9.78', '16.33', '2.60'], ['27.49', '35.10', '12.78', '14.92', '21.53', '2.75'], ['12.66', '19.48', '4.41', '5.42', '9.64', '1.75']]
column
['Acc.', 'F1', 'BLEU', 'Acc.', 'F1', 'BLEU']
['SelfRC', 'ParaphraseRC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Span Test || Acc.</th> <th>Span Test || F1</th> <th>Span Test || BLEU</th> <th>Full Test || Acc.</th> <th>Full Test || F1</th> <th>Full Test || BLEU</th> </tr> </thead> <tbody> <tr> <td>SelfRC || SpanModel</td> <td>46.14</td> <td>57.49</td> <td>22.98</td> <td>37.53</td> <td>50.56</td> <td>7.47</td> </tr> <tr> <td>SelfRC || GenModel (with augmented training data)</td> <td>16.45</td> <td>26.97</td> <td>7.61</td> <td>15.31</td> <td>24.05</td> <td>5.50</td> </tr> <tr> <td>ParaphraseRC || SpanModel</td> <td>17.93</td> <td>26.27</td> <td>9.39</td> <td>9.78</td> <td>16.33</td> <td>2.60</td> </tr> <tr> <td>ParaphraseRC || SpanModel with Preprocessed Data</td> <td>27.49</td> <td>35.10</td> <td>12.78</td> <td>14.92</td> <td>21.53</td> <td>2.75</td> </tr> <tr> <td>ParaphraseRC || GenModel (with augmented training data)</td> <td>12.66</td> <td>19.48</td> <td>4.41</td> <td>5.42</td> <td>9.64</td> <td>1.75</td> </tr> </tbody></table>
Table 3
table_3
P18-1156
8
acl2018
SpanModel v/s GenModel: Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of Table 3 we see that the SpanModel clearly outperforms the GenModel. This is not very surprising for two reasons. First, around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match an exact span in the document so the SpanModel still has scope to do well on these answers. On the other hand, even if the first stage of the GenModel predicts the span correctly, the second stage could make an error in generating the correct answer from it because generation is a harder problem. For the second stage, it is expected that the GenModel should learn to copy the predicted span to produce the answer output (as is required in most cases) and only occasionally where necessary, generate an answer. However, surprisingly the GenModel fails to even do this. Manual inspection of the generated answers shows that in many cases the generator ends up generating either more or fewer words compared to the true answer. This demonstrates the clear scope for the GenModel to perform better. SelfRC v/s ParaphraseRC: Comparing the SelfRC and ParaphraseRC numbers in Table 3, we observe that the performance of the models clearly drops for the latter task, thus validating our hypothesis that ParaphraseRC is indeed a much harder task. Finally, comparing the SpanModel with and without Paraphrasing in Table 3 for ParaphraseRC, we observe that the pre-processing step indeed improves the performance of the Span Detection Model.
[1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1]
['SpanModel v/s GenModel:', 'Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of Table 3 we see that the SpanModel clearly outperforms the GenModel.', 'This is not very surprising for two reasons.', 'First, around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match an exact span in the document so the SpanModel still has scope to do well on these answers.', 'On the other hand, even if the first stage of the GenModel predicts the span correctly, the second stage could make an error in generating the correct answer from it because generation is a harder problem.', 'For the second stage, it is expected that the GenModel should learn to copy the predicted span to produce the answer output (as is required in most cases) and only occasionally where necessary, generate an answer.', 'However, surprisingly the GenModel fails to even do this.', 'Manual inspection of the generated answers shows that in many cases the generator ends up generating either more or fewer words compared to the true answer.', 'This demonstrates the clear scope for the GenModel to perform better.', 'SelfRC v/s ParaphraseRC:', 'Comparing the SelfRC and ParaphraseRC numbers in Table 3, we observe that the performance of the models clearly drops for the latter task, thus validating our hypothesis that ParaphraseRC is indeed a much harder task.', 'Finally, comparing the SpanModel with and without Paraphrasing in Table 3 for ParaphraseRC, we observe that the pre-processing step indeed improves the performance of the Span Detection Model.']
[None, ['SpanModel', 'GenModel (with augmented training data)', 'SpanModel with Preprocessed Data'], None, ['SpanModel', 'SpanModel with Preprocessed Data'], ['GenModel (with augmented training data)'], ['GenModel (with augmented training data)'], ['GenModel (with augmented training data)'], ['GenModel (with augmented training data)'], ['GenModel (with augmented training data)'], ['SelfRC', 'ParaphraseRC'], ['SelfRC', 'ParaphraseRC'], ['ParaphraseRC', 'SpanModel', 'SpanModel with Preprocessed Data']]
1
P18-1157table_1
Main results—Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics.
2
[['Answer Module', 'Standard 1-step'], ['Answer Module', 'Fixed 5-step with Memory Network (prediction from final step)'], ['Answer Module', 'Fixed 5-step with Memory Network (prediction averaged from all steps)'], ['Answer Module', 'Dynamic steps (max 5) with ReasoNet'], ['Answer Module', 'Stochastic Answer Network (SAN) Fixed 5-step']]
1
[['EM'], ['F1']]
[['75.139', '83.367'], ['75.033', '83.327'], ['75.256', '83.215'], ['75.355', '83.360'], ['76.235', '84.056']]
column
['EM', 'F1']
['Stochastic Answer Network (SAN) Fixed 5-step']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Answer Module || Standard 1-step</td> <td>75.139</td> <td>83.367</td> </tr> <tr> <td>Answer Module || Fixed 5-step with Memory Network (prediction from final step)</td> <td>75.033</td> <td>83.327</td> </tr> <tr> <td>Answer Module || Fixed 5-step with Memory Network (prediction averaged from all steps)</td> <td>75.256</td> <td>83.215</td> </tr> <tr> <td>Answer Module || Dynamic steps (max 5) with ReasoNet</td> <td>75.355</td> <td>83.360</td> </tr> <tr> <td>Answer Module || Stochastic Answer Network (SAN) Fixed 5-step</td> <td>76.235</td> <td>84.056</td> </tr> </tbody></table>
Table 1
table_1
P18-1157
6
acl2018
The main results in terms of EM and F1 are shown in Table 1. We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models. Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM. SAN also outperforms a 5-step memory net with averaging, which implies averaging predictions is not the only thing that led to SAN's superior results; indeed, stochastic prediction dropout is an effective technique.
[1, 1, 1, 1]
['The main results in terms of EM and F1 are shown in Table 1.', 'We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models.', 'Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM.', "SAN also outperforms a 5-step memory net with averaging, which implies averaging predictions is not the only thing that led to SAN's superior results; indeed, stochastic prediction dropout is an effective technique."]
[['EM', 'F1'], ['Stochastic Answer Network (SAN) Fixed 5-step', 'EM', 'F1'], ['Standard 1-step', 'EM', 'F1', 'Dynamic steps (max 5) with ReasoNet'], ['Stochastic Answer Network (SAN) Fixed 5-step']]
1
P18-1157table_4
Effect of number of steps: best and worst results are boldfaced.
2
[['SAN', '1 step'], ['SAN', '2 step'], ['SAN', '3 step'], ['SAN', '4 step'], ['SAN', '5 step'], ['SAN', '6 step'], ['SAN', '7 step'], ['SAN', '8 step'], ['SAN', '9 step'], ['SAN', '10 step']]
1
[['EM'], ['F1']]
[['75.38', '83.29'], ['75.43', '83.41'], ['75.89', '83.57'], ['75.92', '83.85'], ['76.24', '84.06'], ['75.99', '83.72'], ['76.04', '83.92'], ['76.03', '83.82'], ['75.95', '83.75'], ['76.04', '83.89']]
column
['EM', 'F1']
['SAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>SAN || 1 step</td> <td>75.38</td> <td>83.29</td> </tr> <tr> <td>SAN || 2 step</td> <td>75.43</td> <td>83.41</td> </tr> <tr> <td>SAN || 3 step</td> <td>75.89</td> <td>83.57</td> </tr> <tr> <td>SAN || 4 step</td> <td>75.92</td> <td>83.85</td> </tr> <tr> <td>SAN || 5 step</td> <td>76.24</td> <td>84.06</td> </tr> <tr> <td>SAN || 6 step</td> <td>75.99</td> <td>83.72</td> </tr> <tr> <td>SAN || 7 step</td> <td>76.04</td> <td>83.92</td> </tr> <tr> <td>SAN || 8 step</td> <td>76.03</td> <td>83.82</td> </tr> <tr> <td>SAN || 9 step</td> <td>75.95</td> <td>83.75</td> </tr> <tr> <td>SAN || 10 step</td> <td>76.04</td> <td>83.89</td> </tr> </tbody></table>
Table 4
table_4
P18-1157
7
acl2018
Table 4 shows the development set scores for T = 1 to T = 10. We observe that there is a gradual improvement as we increase T = 1 to T = 5, but after 5 steps the improvements have saturated. In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standard deviation of 0.142 and a spread of 0.426 (for EM), we believe that the T = 10 result does not statistically differ from the T = 5 result. In summary, we think it is useful to perform some approximate hyper-parameter tuning for the number of steps, but it is not necessary to find the exact optimal value.
[1, 1, 1, 2]
['Table 4 shows the development set scores for T = 1 to T = 10.', 'We observe that there is a gradual improvement as we increase T = 1 to T = 5, but after 5 steps the improvements have saturated.', 'In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standard deviation of 0.142 and a spread of 0.426 (for EM), we believe that the T = 10 result does not statistically differ from the T = 5 result.', 'In summary, we think it is useful to perform some approximate hyper-parameter tuning for the number of steps, but it is not necessary to find the exact optimal value.']
[['1 step', '2 step', '3 step', '4 step', '5 step', '6 step', '7 step', '8 step', '9 step', '10 step'], ['1 step', '2 step', '3 step', '4 step', '5 step'], ['EM', 'F1', '10 step', '5 step'], None]
1
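Note on the seed-variance argument in the record above (P18-1157, Table 4 discussion): the quoted standard deviation (0.142) and spread (0.426) are ordinary statistics over EM scores from runs with different random initializations. The snippet below is illustrative only; the EM values are hypothetical placeholders, not the paper's actual per-seed scores.

from statistics import mean, stdev

em_runs = [76.05, 76.24, 76.47]  # hypothetical EM scores from different random seeds
print(f"mean   = {mean(em_runs):.3f}")
print(f"stdev  = {stdev(em_runs):.3f}")               # sample standard deviation
print(f"spread = {max(em_runs) - min(em_runs):.3f}")  # max minus min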
P18-1157table_5
Test performance on the adversarial SQuAD dataset in F1 score.
2
[['Single model:', 'LR (Rajpurkar et al., 2016)'], ['Single model:', 'SEDT (Liu et al., 2017a)'], ['Single model:', 'BiDAF (Seo et al., 2016)'], ['Single model:', 'jNet (Zhang et al., 2017)'], ['Single model:', 'ReasoNet(Shen et al., 2017)'], ['Single model:', 'RaSoR(Lee et al., 2016)'], ['Single model:', 'Mnemonic(Hu et al., 2017)'], ['Single model:', 'QANet(Yu et al., 2018)'], ['Single model:', 'Standard 1-step in Table 1'], ['Single model:', 'SAN']]
1
[['AddSent'], ['AddOneSent']]
[['23.2', '30.3'], ['33.9', '44.8'], ['34.3', '45.7'], ['37.9', '47.0'], ['39.4', '50.3'], ['39.5', '49.5'], ['46.6', '56.0'], ['45.2', '55.7'], ['45.4', '55.8'], ['46.6', '56.5']]
column
['F1', 'F1']
['SAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AddSent</th> <th>AddOneSent</th> </tr> </thead> <tbody> <tr> <td>Single model: || LR (Rajpurkar et al., 2016)</td> <td>23.2</td> <td>30.3</td> </tr> <tr> <td>Single model: || SEDT (Liu et al., 2017a)</td> <td>33.9</td> <td>44.8</td> </tr> <tr> <td>Single model: || BiDAF (Seo et al., 2016)</td> <td>34.3</td> <td>45.7</td> </tr> <tr> <td>Single model: || jNet (Zhang et al., 2017)</td> <td>37.9</td> <td>47.0</td> </tr> <tr> <td>Single model: || ReasoNet(Shen et al., 2017)</td> <td>39.4</td> <td>50.3</td> </tr> <tr> <td>Single model: || RaSoR(Lee et al., 2016)</td> <td>39.5</td> <td>49.5</td> </tr> <tr> <td>Single model: || Mnemonic(Hu et al., 2017)</td> <td>46.6</td> <td>56.0</td> </tr> <tr> <td>Single model: || QANet(Yu et al., 2018)</td> <td>45.2</td> <td>55.7</td> </tr> <tr> <td>Single model: || Standard 1-step in Table 1</td> <td>45.4</td> <td>55.8</td> </tr> <tr> <td>Single model: || SAN</td> <td>46.6</td> <td>56.5</td> </tr> </tbody></table>
Table 5
table_5
P18-1157
8
acl2018
The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN's superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent.
[1]
["The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN's superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent."]
[['AddSent', 'AddOneSent', 'SAN', 'Standard 1-step in Table 1']]
1
P18-1157table_7
MS MARCO devset results.
2
[['SingleModel', 'ReasoNet++(Shen et al. 2017)'], ['SingleModel', 'V-Net(Wang et al. 2018)'], ['SingleModel', 'Standard 1-step in Table 1'], ['SingleModel', 'SAN']]
1
[['ROUGE'], ['BLEU']]
[['38.01', '38.62'], ['45.65', '-'], ['42.30', '42.39'], ['46.14', '43.85']]
column
['ROUGE', 'BLEU']
['SAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>SingleModel || ReasoNet++(Shen et al. 2017)</td> <td>38.01</td> <td>38.62</td> </tr> <tr> <td>SingleModel || V-Net(Wang et al. 2018)</td> <td>45.65</td> <td>-</td> </tr> <tr> <td>SingleModel || Standard 1-step in Table 1</td> <td>42.30</td> <td>42.39</td> </tr> <tr> <td>SingleModel || SAN</td> <td>46.14</td> <td>43.85</td> </tr> </tbody></table>
Table 7
table_7
P18-1157
9
acl2018
The results in Table 7 show that SAN outperforms V-Net (Wang et al., 2018) and becomes the new state of the art.
[1]
['The results in Table 7 show that SAN outperforms V-Net (Wang et al., 2018) and becomes the new state of the art.']
[['V-Net(Wang et al. 2018)', 'SAN']]
1
P18-1160table_5
Results on the dev set of SQuAD (First two) and NewsQA (Last). For Top k, we use k = 1 and k = 3 for SQuAD and NewsQA, respectively. We compare with GNR (Raiman and Miller, 2017), FusionNet (Huang et al., 2018) and FastQA (Weissenborn et al., 2017), which are the model leveraging sentence selection for question answering, and the published state-of-the-art models on SQuAD and NewsQA, respectively.
1
[['FULL'], ['ORACLE'], ['MINIMAL(Top k)'], ['MINIMAL(Dyn)'], ['GNR'], ['FastQA'], ['FusionNet']]
2
[['SQuAD (with S-Reader)', 'F1'], ['SQuAD (with S-Reader)', 'EM'], ['SQuAD (with S-Reader)', 'Train Sp'], ['SQuAD (with S-Reader)', 'Infer Sp'], ['SQuAD (with DCN+)', 'F1'], ['SQuAD (with DCN+)', 'EM'], ['SQuAD (with DCN+)', 'Train Sp'], ['SQuAD (with DCN+)', 'Infer Sp'], ['NewsQA (with S-Reader)', 'F1'], ['NewsQA (with S-Reader)', 'EM'], ['NewsQA (with S-Reader)', 'Train Sp'], ['NewsQA (with S-Reader)', 'Infer Sp']]
[['79.9', '71', 'x1.0', 'x1.0', '83.1', '74.5', 'x1.0', 'x1.0', '63.8', '50.7', 'x1.0', 'x1.0'], ['84.3', '74.9', 'x6.7', 'x5.1', '85.1', '76', 'x3.0', 'x5.1', '75.5', '59.2', 'x18.8', 'x21.7'], ['78.7', '69.9', 'x6.7', 'x5.1', '79.2', '70.7', 'x3.0', 'x5.1', '62.3', '49.3', 'x15.0', 'x6.9'], ['79.8', '70.9', 'x6.7', 'x3.6', '80.6', '72', 'x3.0', 'x3.7', '63.2', '50.1', 'x15.0', 'x5.3'], ['-', '-', '-', '-', '85.0a', '66.6a', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '78.5', '70.3', '-', '-', '56.1', '43.7', '-', '-'], ['-', '-', '-', '-', '83.6', '75.3', '-', '-', '-', '-', '-', '-']]
column
['F1', 'EM', 'Train Sp', 'Infer Sp', 'F1', 'EM', 'Train Sp', 'Infer Sp', 'F1', 'EM', 'Train Sp', 'Infer Sp']
['MINIMAL(Top k)', 'MINIMAL(Dyn)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SQuAD (with S-Reader) || F1</th> <th>SQuAD (with S-Reader) || EM</th> <th>SQuAD (with S-Reader) || Train Sp</th> <th>SQuAD (with S-Reader) || Infer Sp</th> <th>SQuAD (with DCN+) || F1</th> <th>SQuAD (with DCN+) || EM</th> <th>SQuAD (with DCN+) || Train Sp</th> <th>SQuAD (with DCN+) || Infer Sp</th> <th>NewsQA (with S-Reader) || F1</th> <th>NewsQA (with S-Reader) || EM</th> <th>NewsQA (with S-Reader) || Train Sp</th> <th>NewsQA (with S-Reader) || Infer Sp</th> </tr> </thead> <tbody> <tr> <td>FULL</td> <td>79.9</td> <td>71</td> <td>x1.0</td> <td>x1.0</td> <td>83.1</td> <td>74.5</td> <td>x1.0</td> <td>x1.0</td> <td>63.8</td> <td>50.7</td> <td>x1.0</td> <td>x1.0</td> </tr> <tr> <td>ORACLE</td> <td>84.3</td> <td>74.9</td> <td>x6.7</td> <td>x5.1</td> <td>85.1</td> <td>76</td> <td>x3.0</td> <td>x5.1</td> <td>75.5</td> <td>59.2</td> <td>x18.8</td> <td>x21.7</td> </tr> <tr> <td>MINIMAL(Top k)</td> <td>78.7</td> <td>69.9</td> <td>x6.7</td> <td>x5.1</td> <td>79.2</td> <td>70.7</td> <td>x3.0</td> <td>x5.1</td> <td>62.3</td> <td>49.3</td> <td>x15.0</td> <td>x6.9</td> </tr> <tr> <td>MINIMAL(Dyn)</td> <td>79.8</td> <td>70.9</td> <td>x6.7</td> <td>x3.6</td> <td>80.6</td> <td>72</td> <td>x3.0</td> <td>x3.7</td> <td>63.2</td> <td>50.1</td> <td>x15.0</td> <td>x5.3</td> </tr> <tr> <td>GNR</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>85.0a</td> <td>66.6a</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>FastQA</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>78.5</td> <td>70.3</td> <td>-</td> <td>-</td> <td>56.1</td> <td>43.7</td> <td>-</td> <td>-</td> </tr> <tr> <td>FusionNet</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>83.6</td> <td>75.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 5
table_5
P18-1160
6
acl2018
Table 5 shows results in the task of QA on SQuAD and NewsQA. MINIMAL is more efficient in training and inference than FULL. With S-Reader, MINIMAL achieves a 6.7x training and 3.6x inference speedup on SQuAD, and a 15.0x training and 6.9x inference speedup on NewsQA. In addition to the speedup, MINIMAL achieves comparable results to FULL (using S-Reader, 79.9 vs 79.8 F1 on SQuAD and 63.8 vs 63.2 F1 on NewsQA).
[1, 1, 1, 1]
['Table 5 shows results in the task of QA on SQuAD and NewsQA.', 'MINIMAL is more efficient in training and inference than FULL.', 'With S-Reader, MINIMAL achieves a 6.7x training and 3.6x inference speedup on SQuAD, and a 15.0x training and 6.9x inference speedup on NewsQA.', 'In addition to the speedup, MINIMAL achieves comparable results to FULL (using S-Reader, 79.9 vs 79.8 F1 on SQuAD and 63.8 vs 63.2 F1 on NewsQA).']
[['SQuAD (with S-Reader)', 'SQuAD (with DCN+)', 'NewsQA (with S-Reader)'], ['MINIMAL(Top k)', 'MINIMAL(Dyn)', 'FULL'], ['SQuAD (with S-Reader)', 'NewsQA (with S-Reader)', 'MINIMAL(Top k)', 'MINIMAL(Dyn)', 'Train Sp', 'Infer Sp'], ['SQuAD (with S-Reader)', 'FULL', 'MINIMAL(Dyn)', 'NewsQA (with S-Reader)', 'F1']]
1
P18-1160table_8
Results on the dev-full set of TriviaQA (Wikipedia) and the dev set of SQuAD-Open. Full results (including the dev-verified set on TriviaQA) are in Appendix C. For training FULL and MINIMAL on TriviaQA, we use 10 paragraphs and 20 sentences, respectively. For training FULL and MINIMAL on SQuAD-Open, we use 20 paragraphs and 20 sentences, respectively. For evaluating FULL and MINIMAL, we use 40 paragraphs and 5-20 sentences, respectively. ‘n sent’ indicates the number of sentences used during inference. ‘Acc’ indicates accuracy of whether answer text is contained in selected context. ‘Sp’ indicates inference speed. We compare with the results from the sentences selected by the TF-IDF method and our selector (Dyn). We also compare with published Rank1-3 models. For TriviaQA (Wikipedia), they are Neural Cascades (Swayamdipta et al., 2018), Reading Twice for Natural Language Understanding (Weissenborn, 2017) and Mnemonic Reader (Hu et al., 2017). For SQuAD-Open, they are DrQA (Chen et al., 2017) (Multitask), R3 (Wang et al., 2018) and DrQA (Plain).
2
[['FULL', '-'], ['MINIMAL', 'TF-IDF'], ['MINIMAL', 'TF-IDF'], ['MINIMAL', 'Our Selector'], ['MINIMAL', 'Our Selector'], ['Rank 1', '-'], ['Rank 2', '-'], ['Rank 3', '-']]
2
[['TriviaQA (Wikipedia)', 'n sent'], ['TriviaQA (Wikipedia)', 'Acc'], ['TriviaQA (Wikipedia)', 'Sp'], ['TriviaQA (Wikipedia)', 'F1'], ['TriviaQA (Wikipedia)', 'EM'], ['SQuAD-Open', 'n sent'], ['SQuAD-Open', 'Acc'], ['SQuAD-Open', 'Sp'], ['SQuAD-Open', 'F1'], ['SQuAD-Open', 'EM']]
[['69', '95.9', 'x1.0', '59.6', '53.5', '124', '76.9', 'x1.0', '41.0', '33.1'], ['5', '73.0', 'x13.8', '51.9', '45.8', '5', '46.1', 'x12.4', '36.6', '29.6'], ['10', '79.9', 'x6.9', '57.2', '51.5', '10', '54.3', 'x6.2', '39.8', '32.5'], ['5.0', '84.9', 'x13.8', '59.5', '54.0', '5.3', '58.9', 'x11.7', '42.3', '34.6'], ['10.5', '90.9', 'x6.6', '60.5', '54.9', '10.7', '64.0', 'x5.8', '42.5', '34.7'], ['-', '-', '-', '56.0a', '51.6a', '2376a', '77.8', '-', '-', '29.8'], ['-', '-', '-', '55.1a', '48.6a', '-', '-', '-', '37.5', '29.1'], ['-', '-', '-', '52.9b', '46.9a', '2376a', '77.8', '-', '-', '28.4']]
column
['n sent', 'Acc', 'Sp', 'F1', 'EM', 'n sent', 'Acc', 'Sp', 'F1', 'EM']
['Our Selector']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TriviaQA (Wikipedia) || n sent</th> <th>TriviaQA (Wikipedia) || Acc</th> <th>TriviaQA (Wikipedia) || Sp</th> <th>TriviaQA (Wikipedia) || F1</th> <th>TriviaQA (Wikipedia) || EM</th> <th>SQuAD-Open || n sent</th> <th>SQuAD-Open || Acc</th> <th>SQuAD-Open || Sp</th> <th>SQuAD-Open || F1</th> <th>SQuAD-Open || EM</th> </tr> </thead> <tbody> <tr> <td>FULL || -</td> <td>69</td> <td>95.9</td> <td>x1.0</td> <td>59.6</td> <td>53.5</td> <td>124</td> <td>76.9</td> <td>x1.0</td> <td>41.0</td> <td>33.1</td> </tr> <tr> <td>MINIMAL || TF-IDF</td> <td>5</td> <td>73.0</td> <td>x13.8</td> <td>51.9</td> <td>45.8</td> <td>5</td> <td>46.1</td> <td>x12.4</td> <td>36.6</td> <td>29.6</td> </tr> <tr> <td>MINIMAL || TF-IDF</td> <td>10</td> <td>79.9</td> <td>x6.9</td> <td>57.2</td> <td>51.5</td> <td>10</td> <td>54.3</td> <td>x6.2</td> <td>39.8</td> <td>32.5</td> </tr> <tr> <td>MINIMAL || Our Selector</td> <td>5.0</td> <td>84.9</td> <td>x13.8</td> <td>59.5</td> <td>54.0</td> <td>5.3</td> <td>58.9</td> <td>x11.7</td> <td>42.3</td> <td>34.6</td> </tr> <tr> <td>MINIMAL || Our Selector</td> <td>10.5</td> <td>90.9</td> <td>x6.6</td> <td>60.5</td> <td>54.9</td> <td>10.7</td> <td>64.0</td> <td>x5.8</td> <td>42.5</td> <td>34.7</td> </tr> <tr> <td>Rank 1 || -</td> <td>-</td> <td>-</td> <td>-</td> <td>56.0a</td> <td>51.6a</td> <td>2376a</td> <td>77.8</td> <td>-</td> <td>-</td> <td>29.8</td> </tr> <tr> <td>Rank 2 || -</td> <td>-</td> <td>-</td> <td>-</td> <td>55.1a</td> <td>48.6a</td> <td>-</td> <td>-</td> <td>-</td> <td>37.5</td> <td>29.1</td> </tr> <tr> <td>Rank 3 || -</td> <td>-</td> <td>-</td> <td>-</td> <td>52.9b</td> <td>46.9a</td> <td>2376a</td> <td>77.8</td> <td>-</td> <td>-</td> <td>28.4</td> </tr> </tbody></table>
Table 8
table_8
P18-1160
8
acl2018
Table 8 shows results on TriviaQA (Wikipedia) and SQuAD-Open. First, MINIMAL obtains higher F1 and EM over FULL, with an inference speedup of up to 13.8x. Second, the model with our sentence selector (Dyn) achieves higher F1 and EM over the model with the TF-IDF selector. For example, on the development-full set, with 5 sentences per question on average, the model with Dyn achieves 59.5 F1 while the model with the TF-IDF method achieves 51.9 F1. Third, we outperform the published state-of-the-art on both datasets.
[1, 1, 1, 1, 1]
['Table 8 shows results on TriviaQA (Wikipedia) and SQuAD-Open.', 'First, MINIMAL obtains higher F1 and EM over FULL, with an inference speedup of up to 13.8x.', 'Second, the model with our sentence selector (Dyn) achieves higher F1 and EM over the model with the TF-IDF selector.', 'For example, on the development-full set, with 5 sentences per question on average, the model with Dyn achieves 59.5 F1 while the model with the TF-IDF method achieves 51.9 F1.', 'Third, we outperform the published state-of-the-art on both datasets.']
[['TriviaQA (Wikipedia)', 'SQuAD-Open'], ['TriviaQA (Wikipedia)', 'SQuAD-Open', 'FULL', 'Our Selector', 'Sp', 'F1', 'EM'], ['Our Selector', 'TF-IDF', 'F1', 'EM'], ['TriviaQA (Wikipedia)', 'F1', 'Our Selector', 'TF-IDF'], ['Our Selector', 'Rank 1', 'Rank 2', 'Rank 3']]
1
P18-1160table_9
Results on the dev set of SQuADAdversarial. We compare with RaSOR (Lee et al., 2016), ReasoNet (Shen et al., 2017) and Mnemonic Reader (Hu et al., 2017), the previous state-of-the-art on SQuAD-Adversarial, where the numbers are from Jia and Liang (2017).
3
[['SQuAD-Adversarial', 'DCN+', 'FULL'], ['SQuAD-Adversarial', 'DCN+', 'ORACLE'], ['SQuAD-Adversarial', 'DCN+', 'MINIMAL'], ['SQuAD-Adversarial', 'S-Reader', 'FULL'], ['SQuAD-Adversarial', 'S-Reader', 'ORACLE'], ['SQuAD-Adversarial', 'S-Reader', 'MINIMAL'], ['SQuAD-Adversarial', 'RaSOR', '-'], ['SQuAD-Adversarial', 'ReasoNet', '-'], ['SQuAD-Adversarial', 'Mnemonic Reader', '-']]
2
[['AddSent', 'F1'], ['AddSent', 'EM'], ['AddSent', 'Sp'], ['AddOneSent', 'F1'], ['AddOneSent', 'EM'], ['AddOneSent', 'Sp']]
[['52.6', '46.2', 'x0.7', '63.5', '56.8', 'x0.7'], ['84.2', '75.3', 'x4.3', '84.5', '75.8', 'x4.3'], ['59.7', '52.2', 'x4.3', '67.5', '60.1', 'x4.3'], ['57.7', '51.1', 'x1.0', '66.5', '59.7', 'x1.0'], ['82.5', '74.1', 'x6.0', '82.9', '74.6', 'x6.0'], ['58.5', '51.5', 'x6.0', '66.5', '59.5', 'x6.0'], ['39.5', '-', '-', '49.5', '-', '-'], ['39.4', '-', '-', '50.3', '-', '-'], ['46.6', '-', '-', '56.0', '-', '-']]
column
['F1', 'EM', 'Sp', 'F1', 'EM', 'Sp']
['MINIMAL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AddSent || F1</th> <th>AddSent || EM</th> <th>AddSent || Sp</th> <th>AddOneSent || F1</th> <th>AddOneSent || EM</th> <th>AddOneSent || Sp</th> </tr> </thead> <tbody> <tr> <td>SQuAD-Adversarial || DCN+ || FULL</td> <td>52.6</td> <td>46.2</td> <td>x0.7</td> <td>63.5</td> <td>56.8</td> <td>x0.7</td> </tr> <tr> <td>SQuAD-Adversarial || DCN+ || ORACLE</td> <td>84.2</td> <td>75.3</td> <td>x4.3</td> <td>84.5</td> <td>75.8</td> <td>x4.3</td> </tr> <tr> <td>SQuAD-Adversarial || DCN+ || MINIMAL</td> <td>59.7</td> <td>52.2</td> <td>x4.3</td> <td>67.5</td> <td>60.1</td> <td>x4.3</td> </tr> <tr> <td>SQuAD-Adversarial || S-Reader || FULL</td> <td>57.7</td> <td>51.1</td> <td>x1.0</td> <td>66.5</td> <td>59.7</td> <td>x1.0</td> </tr> <tr> <td>SQuAD-Adversarial || S-Reader || ORACLE</td> <td>82.5</td> <td>74.1</td> <td>x6.0</td> <td>82.9</td> <td>74.6</td> <td>x6.0</td> </tr> <tr> <td>SQuAD-Adversarial || S-Reader || MINIMAL</td> <td>58.5</td> <td>51.5</td> <td>x6.0</td> <td>66.5</td> <td>59.5</td> <td>x6.0</td> </tr> <tr> <td>SQuAD-Adversarial || RaSOR || -</td> <td>39.5</td> <td>-</td> <td>-</td> <td>49.5</td> <td>-</td> <td>-</td> </tr> <tr> <td>SQuAD-Adversarial || ReasoNet || -</td> <td>39.4</td> <td>-</td> <td>-</td> <td>50.3</td> <td>-</td> <td>-</td> </tr> <tr> <td>SQuAD-Adversarial || Mnemonic Reader || -</td> <td>46.6</td> <td>-</td> <td>-</td> <td>56.0</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 9
table_9
P18-1160
8
acl2018
Table 9 shows that MINIMAL outperforms FULL, achieving the new state-of-the-art by a large margin (+11.1 and +11.5 F1 on AddSent and AddOneSent, respectively).
[1]
['Table 9 shows that MINIMAL outperforms FULL, achieving the new state-of-the-art by a large margin (+11.1 and +11.5 F1 on AddSent and AddOneSent, respectively).']
[['MINIMAL', 'FULL', 'AddSent', 'AddOneSent', 'Mnemonic Reader']]
1
P18-1165table_4
Results on TED test data for training with estimated (E) and direct (D) rewards from simulation (S), humans (H) and filtered (F) human ratings. Significant (p ≤ 0.05) differences to the baseline are marked with ★. For RL experiments we show three runs with different random seeds, mean and standard deviation in subscript.
5
[['Model', 'Baseline', 'Rewards', '-', '-'], ['Model', 'RL', 'Rewards', 'D', 'S'], ['Model', 'OPL', 'Rewards', 'D', 'S'], ['Model', 'RL+MSE', 'Rewards', 'E', 'S'], ['Model', 'RL+PW', 'Rewards', 'E', 'S'], ['Model', 'OPL', 'Rewards', 'D', 'H'], ['Model', 'RL+MSE', 'Rewards', 'E', 'H'], ['Model', 'RL+PW', 'Rewards', 'E', 'H'], ['Model', 'RL+MSE', 'Rewards', 'E', 'F']]
1
[['BLEU'], ['METEOR'], ['BEER']]
[['27.0', '30.7', '59.48'], ['32.5★ ±0.01', '33.7★ ±0.01', '63.47★ ±0.10'], ['27.5★', '30.9★', '59.62★'], ['28.2★ ±0.09', '31.6★ ±0.04', '60.23★ ±0.14'], ['27.8★ ±0.01', '31.2★ ±0.01', '59.83★ ±0.04'], ['27.5★', '30.9★', '59.72★'], ['28.1★ ±0.01', '31.5★ ±0.01', '60.21★ ±0.12'], ['27.8★ ±0.09', '31.3★ ±0.09', '59.88★ ±0.23'], ['28.1★ ±0.20', '31.6★ ±0.10', '60.29★ ±0.13']]
column
['BLEU', 'METEOR', 'BEER']
['RL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>METEOR</th> <th>BEER</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline || Rewards || - || -</td> <td>27.0</td> <td>30.7</td> <td>59.48</td> </tr> <tr> <td>Model || RL || Rewards || D || S</td> <td>32.5★ ±0.01</td> <td>33.7★ ±0.01</td> <td>63.47★ ±0.10</td> </tr> <tr> <td>Model || OPL || Rewards || D || S</td> <td>27.5★</td> <td>30.9★</td> <td>59.62★</td> </tr> <tr> <td>Model || RL+MSE || Rewards || E || S</td> <td>28.2★ ±0.09</td> <td>31.6★ ±0.04</td> <td>60.23★ ±0.14</td> </tr> <tr> <td>Model || RL+PW || Rewards || E || S</td> <td>27.8★ ±0.01</td> <td>31.2★ ±0.01</td> <td>59.83★ ±0.04</td> </tr> <tr> <td>Model || OPL || Rewards || D || H</td> <td>27.5★</td> <td>30.9★</td> <td>59.72★</td> </tr> <tr> <td>Model || RL+MSE || Rewards || E || H</td> <td>28.1★ ±0.01</td> <td>31.5★ ±0.01</td> <td>60.21★ ±0.12</td> </tr> <tr> <td>Model || RL+PW || Rewards || E || H</td> <td>27.8★ ±0.09</td> <td>31.3★ ±0.09</td> <td>59.88★ ±0.23</td> </tr> <tr> <td>Model || RL+MSE || Rewards || E || F</td> <td>28.1★ ±0.20</td> <td>31.6★ ±0.10</td> <td>60.29★ ±0.13</td> </tr> </tbody></table>
Table 4
table_4
P18-1165
9
acl2018
Table 4 lists the results for this simulation experiment in rows 2-5 (S). If unlimited clean feedback was given (RL with direct simulated rewards), improvements of over 5 BLEU can be achieved. When limiting the amount of feedback to a log of 800 translations, the improvements over the baseline are only marginal (OPL). When replacing the direct reward by the simulated reward estimators from §5, i.e. having unlimited amounts of approximately clean rewards, however, improvements of 1.2 BLEU for MSE estimators (RL+MSE) and 0.8 BLEU for pairwise estimators (RL+PW) are found. This suggests that the reward estimation model helps to tackle the challenge of generalization over a small set of ratings. Table 4 shows the results for training with human rewards in rows 6-8. The improvements for OPL are very similar to OPL with simulated rewards, both suffering from overfitting. For RL we observe that the MSE-based reward estimator (RL+MSE) leads to significantly higher improvements than the pairwise reward estimator (RL+PW), the same trend as for simulated ratings. Finally, the improvement of 1.1 BLEU over the baseline showcases that we are able to improve NMT with only a small number of human rewards.
[1, 1, 1, 1, 2, 1, 1, 1, 1]
['Table 4 lists the results for this simulation experiment in rows 2-5 (S).', 'If unlimited clean feedback was given (RL with direct simulated rewards), improvements of over 5 BLEU can be achieved.', 'When limiting the amount of feedback to a log of 800 translations, the improvements over the baseline are only marginal (OPL).', 'When replacing the direct reward by the simulated reward estimators from §5, i.e. having unlimited amounts of approximately clean rewards, however, improvements of 1.2 BLEU for MSE estimators (RL+MSE) and 0.8 BLEU for pairwise estimators (RL+PW) are found.', 'This suggests that the reward estimation model helps to tackle the challenge of generalization over a small set of ratings.', 'Table 4 shows the results for training with human rewards in rows 6-8.', 'The improvements for OPL are very similar to OPL with simulated rewards, both suffering from overfitting.', 'For RL we observe that the MSE-based reward estimator (RL+MSE) leads to significantly higher improvements than the pairwise reward estimator (RL+PW), the same trend as for simulated ratings.', 'Finally, the improvement of 1.1 BLEU over the baseline showcases that we are able to improve NMT with only a small number of human rewards.']
[['Baseline', 'RL', 'OPL', 'RL+MSE', 'RL+PW'], ['BLEU', 'RL'], ['BLEU', 'OPL'], ['BLEU', 'RL+MSE', 'RL+PW'], None, ['OPL', 'RL+MSE', 'RL+PW'], ['OPL'], ['RL+MSE', 'RL+PW'], ['Baseline', 'RL+MSE']]
1
P18-1166table_4
Detokenized BLEU scores for WMT17 translation tasks. Results are reported with multi-bleu-detok.perl. “winner” denotes the translation results generated by the WMT17 winning systems. Δd indicates the difference between our model and the Transformer.
1
[['En→De'], ['De→En'], ['En→Fi'], ['Fi→En'], ['En→Lv'], ['Lv→En'], ['En→Ru'], ['Ru→En'], ['En→Tr'], ['Tr→En'], ['En→Cs'], ['Cs→En']]
2
[['Case-sensitive BLEU', 'winner'], ['Case-sensitive BLEU', 'Transformer'], ['Case-sensitive BLEU', 'Our Model'], ['Case-sensitive BLEU', 'Δd'], ['Case-insensitive BLEU', 'winner'], ['Case-insensitive BLEU', 'Transformer'], ['Case-insensitive BLEU', 'Our Model'], ['Case-insensitive BLEU', 'Δd']]
[['28.3', '27.33', '27.22', '-0.11', '28.9', '27.92', '27.80', '-0.12'], ['35.1', '32.63', '32.73', '+0.10', '36.5', '34.06', '34.13', '+0.07'], ['20.7', '21.00', '20.87', '-0.13', '21.1', '21.54', '21.47', '-0.07'], ['20.5', '25.19', '24.78', '-0.41', '21.4', '26.22', '25.74', '-0.48'], ['21.1', '16.83', '16.63', '-0.20', '21.6', '17.42', '17.23', '-0.19'], ['21.9', '17.57', '17.51', '-0.06', '22.9', '18.48', '18.30', '-0.18'], ['29.8', '27.82', '27.73', '-0.09', '29.8', '27.83', '27.74', '-0.09'], ['34.7', '31.51', '31.36', '-0.15', '35.6', '32.59', '32.36', '-0.23'], ['18.1', '12.11', '11.59', '-0.52', '18.4', '12.56', '12.03', '-0.53'], ['20.1', '16.19', '15.84', '-0.35', '20.9', '16.93', '16.57', '-0.36'], ['23.5', '21.53', '21.12', '-0.41', '24.1', '22.07', '21.66', '-0.41'], ['30.9', '27.49', '27.45', '-0.04', '31.9', '28.41', '28.33', '-0.08']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Our Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Case-sensitive BLEU || winner</th> <th>Case-sensitive BLEU || Transformer</th> <th>Case-sensitive BLEU || Our Model</th> <th>Case-sensitive BLEU || Δd</th> <th>Case-insensitive BLEU || winner</th> <th>Case-insensitive BLEU || Transformer</th> <th>Case-insensitive BLEU || Our Model</th> <th>Case-insensitive BLEU || Δd</th> </tr> </thead> <tbody> <tr> <td>En→De</td> <td>28.3</td> <td>27.33</td> <td>27.22</td> <td>-0.11</td> <td>28.9</td> <td>27.92</td> <td>27.80</td> <td>-0.12</td> </tr> <tr> <td>De→En</td> <td>35.1</td> <td>32.63</td> <td>32.73</td> <td>+0.10</td> <td>36.5</td> <td>34.06</td> <td>34.13</td> <td>+0.07</td> </tr> <tr> <td>En→Fi</td> <td>20.7</td> <td>21.00</td> <td>20.87</td> <td>-0.13</td> <td>21.1</td> <td>21.54</td> <td>21.47</td> <td>-0.07</td> </tr> <tr> <td>Fi→En</td> <td>20.5</td> <td>25.19</td> <td>24.78</td> <td>-0.41</td> <td>21.4</td> <td>26.22</td> <td>25.74</td> <td>-0.48</td> </tr> <tr> <td>En→Lv</td> <td>21.1</td> <td>16.83</td> <td>16.63</td> <td>-0.20</td> <td>21.6</td> <td>17.42</td> <td>17.23</td> <td>-0.19</td> </tr> <tr> <td>Lv→En</td> <td>21.9</td> <td>17.57</td> <td>17.51</td> <td>-0.06</td> <td>22.9</td> <td>18.48</td> <td>18.30</td> <td>-0.18</td> </tr> <tr> <td>En→Ru</td> <td>29.8</td> <td>27.82</td> <td>27.73</td> <td>-0.09</td> <td>29.8</td> <td>27.83</td> <td>27.74</td> <td>-0.09</td> </tr> <tr> <td>Ru→En</td> <td>34.7</td> <td>31.51</td> <td>31.36</td> <td>-0.15</td> <td>35.6</td> <td>32.59</td> <td>32.36</td> <td>-0.23</td> </tr> <tr> <td>En→Tr</td> <td>18.1</td> <td>12.11</td> <td>11.59</td> <td>-0.52</td> <td>18.4</td> <td>12.56</td> <td>12.03</td> <td>-0.53</td> </tr> <tr> <td>Tr→En</td> <td>20.1</td> <td>16.19</td> <td>15.84</td> <td>-0.35</td> <td>20.9</td> <td>16.93</td> <td>16.57</td> <td>-0.36</td> </tr> <tr> <td>En→Cs</td> <td>23.5</td> <td>21.53</td> <td>21.12</td> <td>-0.41</td> <td>24.1</td> <td>22.07</td> <td>21.66</td> <td>-0.41</td> </tr> <tr> <td>Cs→En</td> <td>30.9</td> <td>27.49</td> <td>27.45</td> <td>-0.04</td> <td>31.9</td> <td>28.41</td> <td>28.33</td> <td>-0.08</td> </tr> </tbody></table>
Table 4
table_4
P18-1166
8
acl2018
Table 4 shows the overall results on 12 translation directions. We also provide the results from WMT17 winning systems. Notice that unlike the Transformer and our model, these winner systems typically use model ensemble, system combination and large-scale monolingual corpus. Although different languages have different linguistic and syntactic structures, our model consistently yields rather competitive results against the Transformer on all language pairs in both directions. Particularly, on the De→En translation task, our model achieves a slight improvement of 0.10/0.07 case-sensitive/case-insensitive BLEU points over the Transformer. The largest performance gap between our model and the Transformer occurs on the En→Tr translation task, where our model is lower than the Transformer by 0.52/0.53 case-sensitive/case-insensitive BLEU points. We conjecture that this difference may be due to the small training corpus of the En-Tr task. In all, these results suggest that our AAN is able to perform comparably to Transformer on different language pairs with different scales of training data.
[1, 1, 2, 1, 1, 1, 2, 1]
['Table 4 shows the overall results on 12 translation directions.', 'We also provide the results from WMT17 winning systems.', 'Notice that unlike the Transformer and our model, these winner systems typically use model ensemble, system combination and large-scale monolingual corpus.', 'Although different languages have different linguistic and syntactic structures, our model consistently yields rather competitive results against the Transformer on all language pairs in both directions.', 'Particularly, on the De→En translation task, our model achieves a slight improvement of 0.10/0.07 case-sensitive/case-insensitive BLEU points over the Transformer.', 'The largest performance gap between our model and the Transformer occurs on the En→Tr translation task, where our model is lower than the Transformer by 0.52/0.53 case-sensitive/case-insensitive BLEU points.', 'We conjecture that this difference may be due to the small training corpus of the En-Tr task.', 'In all, these results suggest that our AAN is able to perform comparably to Transformer on different language pairs with different scales of training data.']
[None, ['winner'], ['Transformer', 'Our Model', 'winner'], ['Our Model', 'Transformer'], ['Our Model', 'De→En', 'Δd', 'Transformer'], ['Our Model', 'Transformer', 'En→Tr'], ['En→Tr'], ['Our Model', 'Transformer']]
1
P18-1171table_1
Performance breakdown of each transition phase.
1
[['Peng et al. (2018)'], ['Soft+feats'], ['Hard+feats']]
1
[['ShiftOrPop'], ['PushIndex'], ['ArcBinary'], ['ArcLabel']]
[['0.87', '0.87', '0.83', '0.81'], ['0.93', '0.84', '0.91', '0.75'], ['0.94', '0.85', '0.93', '0.77']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Soft+feats', 'Hard+feats']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ShiftOrPop</th> <th>PushIndex</th> <th>ArcBinary</th> <th>ArcLabel</th> </tr> </thead> <tbody> <tr> <td>Peng et al. (2018)</td> <td>0.87</td> <td>0.87</td> <td>0.83</td> <td>0.81</td> </tr> <tr> <td>Soft+feats</td> <td>0.93</td> <td>0.84</td> <td>0.91</td> <td>0.75</td> </tr> <tr> <td>Hard+feats</td> <td>0.94</td> <td>0.85</td> <td>0.93</td> <td>0.77</td> </tr> </tbody></table>
Table 1
table_1
P18-1171
7
acl2018
Table 1 shows the phase-wise accuracy of our sequence-to-sequence model. Peng et al. (2018) use a separate feedforward network to predict each phase independently. We use the same alignment from the SemEval dataset as in Peng et al. (2018) to avoid differences resulting from the aligner. Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention. We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset. The sequence-to-sequence models perform better than the feedforward model of Peng et al. (2018) on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases. On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.
[1, 2, 2, 2, 1, 1, 1]
['Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.', 'Peng et al. (2018) use a separate feedforward network to predict each phase independently.', 'We use the same alignment from the SemEval dataset as in Peng et al. (2018) to avoid differences resulting from the aligner.', 'Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention.', 'We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.', 'The sequence-to-sequence models perform better than the feedforward model of Peng et al. (2018) on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases.', 'On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.']
[None, ['Peng et al. (2018)'], ['Peng et al. (2018)'], ['Soft+feats', 'Hard+feats'], ['Hard+feats', 'Soft+feats', 'ShiftOrPop', 'PushIndex', 'ArcBinary', 'ArcLabel'], ['Soft+feats', 'Hard+feats', 'Peng et al. (2018)', 'ShiftOrPop', 'ArcBinary'], ['Soft+feats', 'Hard+feats', 'Peng et al. (2018)', 'PushIndex', 'ArcLabel']]
1
P18-1171table_4
Comparison to other AMR parsers. *Model has been trained on the previous release of the corpus (LDC2014T12).
2
[['System', 'Buys and Blunsom (2017)'], ['System', 'Konstas et al. (2017)'], ['System', 'Ballesteros and Al-Onaizan (2017)*'], ['System', 'Damonte et al. (2017)'], ['System', 'Peng et al. (2018)'], ['System', 'Wang et al. (2015b)'], ['System', 'Wang et al. (2015a)'], ['System', 'Flanigan et al. (2016)'], ['System', 'Wang and Xue (2017)'], ['System', 'Ours soft attention'], ['System', 'Ours hard attention']]
1
[['P'], ['R'], ['F']]
[['-', '-', '0.60'], ['0.60', '0.65', '0.62'], ['-', '-', '0.64'], ['-', '-', '0.64'], ['0.69', '0.59', '0.64'], ['0.64', '0.62', '0.63'], ['0.70', '0.63', '0.66'], ['0.70', '0.65', '0.67'], ['0.72', '0.65', '0.68'], ['0.68', '0.63', '0.65'], ['0.69', '0.64', '0.66']]
column
['P', 'R', 'F']
['Ours soft attention', 'Ours hard attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>System || Buys and Blunsom (2017)</td> <td>-</td> <td>-</td> <td>0.60</td> </tr> <tr> <td>System || Konstas et al. (2017)</td> <td>0.60</td> <td>0.65</td> <td>0.62</td> </tr> <tr> <td>System || Ballesteros and Al-Onaizan (2017)*</td> <td>-</td> <td>-</td> <td>0.64</td> </tr> <tr> <td>System || Damonte et al. (2017)</td> <td>-</td> <td>-</td> <td>0.64</td> </tr> <tr> <td>System || Peng et al. (2018)</td> <td>0.69</td> <td>0.59</td> <td>0.64</td> </tr> <tr> <td>System || Wang et al. (2015b)</td> <td>0.64</td> <td>0.62</td> <td>0.63</td> </tr> <tr> <td>System || Wang et al. (2015a)</td> <td>0.70</td> <td>0.63</td> <td>0.66</td> </tr> <tr> <td>System || Flanigan et al. (2016)</td> <td>0.70</td> <td>0.65</td> <td>0.67</td> </tr> <tr> <td>System || Wang and Xue (2017)</td> <td>0.72</td> <td>0.65</td> <td>0.68</td> </tr> <tr> <td>System || Ours soft attention</td> <td>0.68</td> <td>0.63</td> <td>0.65</td> </tr> <tr> <td>System || Ours hard attention</td> <td>0.69</td> <td>0.64</td> <td>0.66</td> </tr> </tbody></table>
Table 4
table_4
P18-1171
8
acl2018
Table 4 shows the comparison with other AMR parsers. The first three systems are some competitive neural models. We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017). Konstas et al. (2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use self-training on 20M unlabeled Gigaword sentences. Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction. Our model also outperforms the stack-LSTM model by Ballesteros and Al-Onaizan (2017), while their model is evaluated on the previous release of LDC2014T12. We also show the performance of some of the best-performing models. While our hard attention achieves slightly lower performance in comparison with Wang et al. (2015a) and Wang and Xue (2017), it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complementary to ours. The alignment from the aligner and the concept identification also play an important role for improving the performance. Wang and Xue (2017) propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.
[1, 1, 1, 2, 2, 1, 1, 1, 2, 2]
['Table 4 shows the comparison with other AMR parsers.', 'The first three systems are some competitive neural models.', 'We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017).', 'Konstas et al. (2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use self-training on 20M unlabeled Gigaword sentences.', 'Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.', 'Our model also outperforms the stack-LSTM model by Ballesteros and Al-Onaizan (2017), while their model is evaluated on the previous release of LDC2014T12.', 'We also show the performance of some of the best-performing models.', 'While our hard attention achieves slightly lower performance in comparison with Wang et al. (2015a) and Wang and Xue (2017), it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complementary to ours.', 'The alignment from the aligner and the concept identification also play an important role for improving the performance.', 'Wang and Xue (2017) propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.']
[None, ['Buys and Blunsom (2017)', 'Konstas et al. (2017)', 'Ballesteros and Al-Onaizan (2017)*'], ['Ours soft attention', 'Ours hard attention', 'Buys and Blunsom (2017)', 'F'], ['Konstas et al. (2017)'], ['Ours soft attention', 'Ours hard attention'], ['Ours soft attention', 'Ours hard attention', 'Ballesteros and Al-Onaizan (2017)*', 'F'], None, ['Ours soft attention', 'Ours hard attention', 'Wang et al. (2015a)', 'Wang and Xue (2017)', 'P', 'R', 'F'], None, ['Wang and Xue (2017)']]
1
P18-1173table_2
Test accuracy of sentiment classification on Stanford Sentiment Treebank. Bold font indicates the best performance.
2
[['Model', 'BILSTM'], ['Model', 'PIPELINE'], ['Model', 'STE'], ['Model', 'SPIGOT']]
1
[['Accuracy (%)']]
[['84.8'], ['85.7'], ['85.4'], ['86.3']]
column
['Accuracy (%)']
['SPIGOT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || BILSTM</td> <td>84.8</td> </tr> <tr> <td>Model || PIPELINE</td> <td>85.7</td> </tr> <tr> <td>Model || STE</td> <td>85.4</td> </tr> <tr> <td>Model || SPIGOT</td> <td>86.3</td> </tr> </tbody></table>
Table 2
table_2
P18-1173
8
acl2018
Table 2 compares our SPIGOT method to three baselines. Pipelined semantic dependency predictions brings 0.9% absolute improvement in classification accuracy, and SPIGOT outperforms all baselines. In this task STE achieves slightly worse performance than a fixed pre-trained PIPELINE.
[1, 1, 1]
['Table 2 compares our SPIGOT method to three baselines.', 'Pipelined semantic dependency predictions brings 0.9% absolute improvement in classification accuracy, and SPIGOT outperforms all baselines.', 'In this task STE achieves slightly worse performance than a fixed pre-trained PIPELINE.']
[['SPIGOT', 'BILSTM', 'PIPELINE', 'STE'], ['PIPELINE', 'BILSTM', 'SPIGOT', 'Accuracy (%)'], ['STE', 'PIPELINE']]
1
P18-1173table_3
Syntactic parsing performance (in unlabeled attachment score, UAS) and DM semantic parsing performance (in labeled F1) on different groups of the development data. Both systems predict the same syntactic parses for instances from SAME, and they disagree on instances from DIFF (§5). tree, we consider three cases: (a) h′ is a head of m in the semantic graph; (b) h′ is a modifier of m in the semantic graph; (c) h is the modifier of m in the semantic graph. The first two reflect modifications to the syntactic parse that rearrange semantically linked words to be neighbors. Under (c), the semantic parser removes a syntactic dependency that reverses the direction of a semantic dependency. These cases account for 17.6%, 10.9%, and 12.8%, respectively (41.2% combined) of the total changes. Making these changes, of course, is complicated, since they often require other modifications to maintain well-formedness of the tree. Figure 2 gives an example.
6
[['Split', 'SAME', '# Sent.', '1011', 'Model', 'PIPELINE'], ['Split', 'SAME', '# Sent.', '1011', 'Model', 'SPIGOT'], ['Split', 'DIFF', '# Sent.', '681', 'Model', 'PIPELINE'], ['Split', 'DIFF', '# Sent.', '681', 'Model', 'SPIGOT']]
1
[['UAS'], ['DM']]
[['97.4', '94.0'], ['97.4', '94.3'], ['91.3', '88.1'], ['89.6', '89.2']]
column
['UAS', 'DM']
['SPIGOT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>DM</th> </tr> </thead> <tbody> <tr> <td>Split || SAME || # Sent. || 1011 || Model || PIPELINE</td> <td>97.4</td> <td>94.0</td> </tr> <tr> <td>Split || SAME || # Sent. || 1011 || Model || SPIGOT</td> <td>97.4</td> <td>94.3</td> </tr> <tr> <td>Split || DIFF || # Sent. || 681 || Model || PIPELINE</td> <td>91.3</td> <td>88.1</td> </tr> <tr> <td>Split || DIFF || # Sent. || 681 || Model || SPIGOT</td> <td>89.6</td> <td>89.2</td> </tr> </tbody></table>
Table 3
table_3
P18-1173
8
acl2018
Table 3 compares a pipelined system to one jointly trained using SPIGOT. We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems' syntactic predictions agree (SAME), or not (DIFF). The second group includes sentences with much lower syntactic parsing accuracy (91.3 vs. 97.4 UAS), and SPIGOT further reduces this to 89.6. Even though these changes hurt syntactic parsing accuracy, they lead to a 1.1% absolute gain in labeled F1 for semantic parsing.
[1, 2, 1, 1]
['Table 3 compares a pipelined system to one jointly trained using SPIGOT.', "We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems' syntactic predictions agree (SAME), or not (DIFF).", 'The second group includes sentences with much lower syntactic parsing accuracy (91.3 vs. 97.4 UAS), and SPIGOT further reduces this to 89.6.', 'Even though these changes hurt syntactic parsing accuracy, they lead to a 1.1% absolute gain in labeled F1 for semantic parsing.']
[['PIPELINE', 'SPIGOT'], ['SAME', 'DIFF'], ['PIPELINE', 'SPIGOT', 'UAS'], ['PIPELINE', 'SPIGOT', 'DM']]
1
P18-1177table_2
Evaluation results for question generation.
2
[['Models', 'Baseline (Du et al. 2017) (w/o answer)'], ['Models', 'Seq2seq + copy (w/ answer)'], ['Models', 'ContextNQG: Seq2seq + copy (w/ full context + answer)'], ['Models', 'CorefNQG'], ['Models', 'CorefNQG - gating'], ['Models', 'CorefNQG - mention-pair score']]
2
[['Training set', 'BLEU-3'], ['Training set', 'BLEU-4'], ['Training set', 'METEOR'], ['Training set w/ noisy examples', 'BLEU-3'], ['Training set w/ noisy examples', 'BLEU-4'], ['Training set w/ noisy examples', 'METEOR']]
[['17.50', '12.28', '16.62', '15.81', '10.78', '15.31'], ['20.01', '14.31', '18.50', '19.61', '13.96', '18.19'], ['20.31', '14.58', '18.84', '19.57', '14.05', '18.19'], ['20.90', '15.16', '19.12', '20.19', '14.52', '18.59'], ['20.68', '14.84', '18.98', '20.08', '14.40', '18.64'], ['20.56', '14.75', '18.85', '19.73', '14.13', '18.38']]
column
['BLEU-3', 'BLEU-4', 'METEOR', 'BLEU-3', 'BLEU-4', 'METEOR']
['CorefNQG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training set || BLEU-3</th> <th>Training set || BLEU-4</th> <th>Training set || METEOR</th> <th>Training set w/ noisy examples || BLEU-3</th> <th>Training set w/ noisy examples || BLEU-4</th> <th>Training set w/ noisy examples || METEOR</th> </tr> </thead> <tbody> <tr> <td>Models || Baseline (Du et al. 2017) (w/o answer)</td> <td>17.50</td> <td>12.28</td> <td>16.62</td> <td>15.81</td> <td>10.78</td> <td>15.31</td> </tr> <tr> <td>Models || Seq2seq + copy (w/ answer)</td> <td>20.01</td> <td>14.31</td> <td>18.50</td> <td>19.61</td> <td>13.96</td> <td>18.19</td> </tr> <tr> <td>Models || ContextNQG: Seq2seq + copy (w/ full context + answer)</td> <td>20.31</td> <td>14.58</td> <td>18.84</td> <td>19.57</td> <td>14.05</td> <td>18.19</td> </tr> <tr> <td>Models || CorefNQG</td> <td>20.90</td> <td>15.16</td> <td>19.12</td> <td>20.19</td> <td>14.52</td> <td>18.59</td> </tr> <tr> <td>Models || CorefNQG - gating</td> <td>20.68</td> <td>14.84</td> <td>18.98</td> <td>20.08</td> <td>14.40</td> <td>18.64</td> </tr> <tr> <td>Models || CorefNQG - mention-pair score</td> <td>20.56</td> <td>14.75</td> <td>18.85</td> <td>19.73</td> <td>14.13</td> <td>18.38</td> </tr> </tbody></table>
Table 2
table_2
P18-1177
7
acl2018
Table 2 shows the BLEU-{3, 4} and METEOR scores of different models. Our CorefNQG outperforms the seq2seq baseline of Du et al. (2017) by a large margin. This shows that the copy mechanism, answer features and coreference resolution all aid question generation. In addition, CorefNQG outperforms both Seq2seq+Copy models significantly, whether or not they have access to the full context. This demonstrates that the coreference knowledge encoded with the gating network explicitly helps with the training and generation: it is more difficult for the neural sequence model to learn the coreference knowledge in a latent way. (See input 1 in Figure 3 for an example.). We also show in Table 2 the results of the QG models trained on the training set augmented with noisy examples with predicted answer spans. There is a consistent but acceptable drop for each model on this new training set, given the inaccuracy of predicted answer spans. We see that CorefNQG still outperforms the baseline models across all metrics.
[1, 1, 2, 1, 2, 1, 1, 1]
['Table 2 shows the BLEU-{3, 4} and METEOR scores of different models.', 'Our CorefNQG outperforms the seq2seq baseline of Du et al. (2017) by a large margin.', 'This shows that the copy mechanism, answer features and coreference resolution all aid question generation.', 'In addition, CorefNQG outperforms both Seq2seq+Copy models significantly, whether or not they have access to the full context.', 'This demonstrates that the coreference knowledge encoded with the gating network explicitly helps with the training and generation: it is more difficult for the neural sequence model to learn the coreference knowledge in a latent way. (See input 1 in Figure 3 for an example.).', 'We also show in Table 2 the results of the QG models trained on the training set augmented with noisy examples with predicted answer spans.', 'There is a consistent but acceptable drop for each model on this new training set, given the inaccuracy of predicted answer spans.', 'We see that CorefNQG still outperforms the baseline models across all metrics.']
[['BLEU-3', 'BLEU-4', 'METEOR'], ['CorefNQG', 'Baseline (Du et al. 2017) (w/o answer)', 'Training set'], ['CorefNQG'], ['CorefNQG', 'Seq2seq + copy (w/ answer)', 'ContextNQG: Seq2seq + copy (w/ full context + answer)', 'Training set'], ['CorefNQG'], ['Training set w/ noisy examples'], ['Training set', 'Training set w/ noisy examples'], ['CorefNQG', 'Baseline (Du et al. 2017) (w/o answer)']]
1
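The BLEU-3/BLEU-4 figures quoted in the record above are cumulative n-gram precision scores. As a minimal, hedged sketch (not from the paper; the question pair and the use of NLTK's smoothed sentence-level BLEU are assumptions), they can be reproduced as follows:

```python
# Illustrative sketch: sentence-level BLEU-3/BLEU-4 with NLTK, the kind of n-gram
# overlap metric reported in the record above; the example sentences are hypothetical.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["who", "created", "the", "first", "transistor", "?"]   # hypothetical gold question
candidate = ["who", "invented", "the", "first", "transistor", "?"]  # hypothetical generated question

smooth = SmoothingFunction().method1  # avoids zero scores when a higher-order n-gram is missing
bleu3 = sentence_bleu([reference], candidate, weights=(1/3, 1/3, 1/3), smoothing_function=smooth)
bleu4 = sentence_bleu([reference], candidate, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)
print(f"BLEU-3 = {bleu3:.3f}, BLEU-4 = {bleu4:.3f}")
```

Papers typically report corpus-level BLEU with the official scripts; this sketch only illustrates the per-sentence computation.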
P18-1177table_6
Performance of the neural machine reading comprehension model (no initialization with pretrained embeddings) on our generated corpus.
1
[['DocReader (Chen et al. 2017)']]
2
[['Exact Match', 'Dev'], ['Exact Match', 'Test'], ['F-1', 'Dev'], ['F-1', 'Test']]
[['82.33', '81.65', '88.20', '87.79']]
column
['Exact Match', 'Exact Match', 'F-1', 'F-1']
['DocReader (Chen et al. 2017)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match || Dev</th> <th>Exact Match || Test</th> <th>F-1 || Dev</th> <th>F-1 || Test</th> </tr> </thead> <tbody> <tr> <td>DocReader (Chen et al. 2017)</td> <td>82.33</td> <td>81.65</td> <td>88.20</td> <td>87.79</td> </tr> </tbody></table>
Table 6
table_6
P18-1177
9
acl2018
Table 6 shows the performance of a top-performing system for the SQuAD dataset (Document Reader (Chen et al., 2017)) when applied to the development and test set portions of our generated dataset. The system was trained on the training set portion of our dataset. We use the SQuAD evaluation scripts, which calculate exact match (EM) and F-1 scores. Performance of the neural machine reading model is reasonable.
[1, 2, 1, 1]
['Table 6 shows the performance of a top-performing system for the SQuAD dataset (Document Reader (Chen et al., 2017)) when applied to the development and test set portions of our generated dataset.', 'The system was trained on the training set portion of our dataset.', 'We use the SQuAD evaluation scripts, which calculate exact match (EM) and F-1 scores.', 'Performance of the neural machine reading model is reasonable.']
[['DocReader (Chen et al. 2017)'], None, ['Exact Match', 'F-1'], ['DocReader (Chen et al. 2017)']]
1
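The record above refers to the SQuAD evaluation script's exact match (EM) and token-level F1. A simplified sketch of those two measures, assuming whitespace tokenization and lowercasing only (the official script applies fuller answer normalization):

```python
# Illustrative sketch (not the official SQuAD script): exact match and token-level F1
# between a predicted answer span and a gold answer.
from collections import Counter

def exact_match(prediction: str, gold: str) -> float:
    return float(prediction.strip().lower() == gold.strip().lower())

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)   # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))        # 0.0
print(round(f1_score("the Eiffel Tower", "Eiffel Tower"), 3))  # 0.8
```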
P18-1178table_3
Performance of our method and competing models on the MS-MARCO test set
2
[['Model', 'FastQA Ext (Weissenborn et al. 2017)'], ['Model', 'Prediction (Wang and Jiang 2016)'], ['Model', 'ReasoNet (Shen et al. 2017)'], ['Model', 'R-Net (Wang et al. 2017c)'], ['Model', 'S-Net (Tan et al. 2017)'], ['Model', 'Our Model'], ['Model', 'S-Net (Ensemble)'], ['Model', 'Our Model (Ensemble)'], ['Model', 'Human']]
1
[['ROUGE-L'], ['BLEU-1']]
[['33.67', '33.93'], ['37.33', '40.72'], ['38.81', '39.86'], ['42.89', '42.22'], ['45.23', '43.78'], ['46.15', '44.47'], ['46.65', '44.78'], ['46.66', '45.41'], ['47', '46']]
column
['ROUGE-L', 'BLEU-1']
['Our Model', 'Our Model (Ensemble)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> <th>BLEU-1</th> </tr> </thead> <tbody> <tr> <td>Model || FastQA Ext (Weissenborn et al. 2017)</td> <td>33.67</td> <td>33.93</td> </tr> <tr> <td>Model || Prediction (Wang and Jiang 2016)</td> <td>37.33</td> <td>40.72</td> </tr> <tr> <td>Model || ReasoNet (Shen et al. 2017)</td> <td>38.81</td> <td>39.86</td> </tr> <tr> <td>Model || R-Net (Wang et al. 2017c)</td> <td>42.89</td> <td>42.22</td> </tr> <tr> <td>Model || S-Net (Tan et al. 2017)</td> <td>45.23</td> <td>43.78</td> </tr> <tr> <td>Model || Our Model</td> <td>46.15</td> <td>44.47</td> </tr> <tr> <td>Model || S-Net (Ensemble)</td> <td>46.65</td> <td>44.78</td> </tr> <tr> <td>Model || Our Model (Ensemble)</td> <td>46.66</td> <td>45.41</td> </tr> <tr> <td>Model || Human</td> <td>47</td> <td>46</td> </tr> </tbody></table>
Table 3
table_3
P18-1178
6
acl2018
Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set. We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002). As we can see, for both metrics, our single model outperforms all the other competing models by a clear margin, which is remarkable considering the near-human performance. If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al. (2017), especially in terms of BLEU-1.
[1, 1, 1, 1]
['Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.', 'We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002).', 'As we can see, for both metrics, our single model outperforms all the other competing models by a clear margin, which is remarkable considering the near-human performance.', 'If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al. (2017), especially in terms of BLEU-1.']
[['Our Model', 'FastQA Ext (Weissenborn et al. 2017)', 'Prediction (Wang and Jiang 2016)', 'ReasoNet (Shen et al. 2017)', 'R-Net (Wang et al. 2017c)', 'S-Net (Tan et al. 2017)'], ['ROUGE-L', 'BLEU-1'], ['Our Model', 'ROUGE-L', 'BLEU-1'], ['Our Model (Ensemble)', 'ROUGE-L', 'BLEU-1', 'S-Net (Ensemble)']]
1
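ROUGE-L, the main metric in the record above, is an F-measure over the longest common subsequence (LCS) of candidate and reference tokens (Lin, 2004). A simplified single-reference sketch follows; the beta value and the example strings are assumptions, not taken from the paper or the official toolkit.

```python
# Illustrative sketch (simplified, single-reference): ROUGE-L as an LCS-based F-measure.
def lcs_length(a, b):
    # classic dynamic-programming longest common subsequence
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    # beta weights recall over precision; the exact value is an assumption here
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)

print(round(rouge_l("the answer is forty two", "the answer is 42"), 3))
```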
P18-1181table_2
Component evaluation for the language model (“Ppl” = perplexity), pentameter model (“Stress Acc”), and rhyme model (“Rhyme F1”). Each number is an average across 10 runs.
2
[['Model', 'LM'], ['Model', 'LM*'], ['Model', 'LM**'], ['Model', 'LM**-C'], ['Model', 'LM**+PM+RM'], ['Model', 'Stress-BL'], ['Model', 'Rhyme-BL'], ['Model', 'Rhyme-EM']]
1
[['Ppl'], ['Stress Acc'], ['Rhyme F1']]
[['90.13', '-', '-'], ['84.23', '-', '-'], ['80.41', '-', '-'], ['83.68', '-', '-'], ['80.22', '0.74', '0.91'], ['-', '0.80', '-'], ['-', '-', '0.74'], ['-', '-', '0.71']]
column
['Ppl', 'Stress Acc', 'Rhyme F1']
['LM**+PM+RM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ppl</th> <th>Stress Acc</th> <th>Rhyme F1</th> </tr> </thead> <tbody> <tr> <td>Model || LM</td> <td>90.13</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LM*</td> <td>84.23</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LM**</td> <td>80.41</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LM**-C</td> <td>83.68</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LM**+PM+RM</td> <td>80.22</td> <td>0.74</td> <td>0.91</td> </tr> <tr> <td>Model || Stress-BL</td> <td>-</td> <td>0.80</td> <td>-</td> </tr> <tr> <td>Model || Rhyme-BL</td> <td>-</td> <td>-</td> <td>0.74</td> </tr> <tr> <td>Model || Rhyme-EM</td> <td>-</td> <td>-</td> <td>0.71</td> </tr> </tbody></table>
Table 2
table_2
P18-1181
7
acl2018
Perplexity on the test partition is detailed in Table 2. Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM**. The inferior performance of LM**-C compared to LM** demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks. The full model LM**+PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly. We present stress accuracy in Table 2. LM**+PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors. Table 2 details the rhyming results. The rhyme model performs very strongly at F1 > 0.90, well above both baselines. Rhyme-EM performs poorly because it operates at the word level (i.e. it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.
[1, 1, 1, 1, 1, 1, 1, 1, 2]
['Perplexity on the test partition is detailed in Table 2.', 'Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM**.', 'The inferior performance of LM**-C compared to LM** demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.', 'The full model LM**+PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.', 'We present stress accuracy in Table 2.', 'LM**+PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.', 'Table 2 details the rhyming results.', 'The rhyme model performs very strongly at F1 > 0.90, well above both baselines.', 'Rhyme-EM performs poorly because it operates at the word level (i.e. it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.']
[['Ppl'], ['LM', 'LM**', 'Ppl'], ['LM**-C', 'LM**', 'Ppl'], ['LM**+PM+RM', 'Ppl'], ['Stress Acc'], ['LM**+PM+RM'], ['Rhyme F1'], ['LM**+PM+RM', 'Rhyme-BL', 'Rhyme-EM'], ['Rhyme-EM']]
1
P18-1182table_1
(1) Accuracy (Acc.) and String Edit Distance (SED) results in the prediction of all referring expressions; (2) Accuracy (Acc.), Precision (Prec.), Recall (Rec.) and F-Score results in the prediction of pronominal forms; and (3) Accuracy (Acc.) and BLEU score results of the texts with the generated referring expressions. Rankings were determined by statistical significance.
1
[['OnlyNames'], ['Ferreira'], ['NeuralREG+Seq2Seq'], ['NeuralREG+CAtt'], ['NeuralREG+HierAtt']]
2
[['All References', 'Acc.'], ['All References', 'SED'], ['Pronouns', 'Acc.'], ['Pronouns', 'Prec.'], ['Pronouns', 'Rec.'], ['Pronouns', 'F-Score'], ['Text', 'Acc.'], ['Text', 'BLEU']]
[['0.53D', '4.05D', '-', '-', '-', '-', '0.15D', '69.03D'], ['0.61C', '3.18C', '0.43B', '0.57', '0.54', '0.55', '0.19C', '72.78C'], ['0.74A,B', '2.32A,B', '0.75A', '0.77', '0.78', '0.78', '0.28B', '79.27A,B'], ['0.74A', '2.25A', '0.75A', '0.73', '0.78', '0.75', '0.30A', '79.39A'], ['0.73B', '2.36B', '0.73A', '0.74', '0.77', '0.75', '0.28A,B', '79.01B']]
column
['Acc.', 'SED', 'Acc.', 'Prec.', 'Rec.', 'F-Score', 'Acc.', 'BLEU']
['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>All References || Acc.</th> <th>All References || SED</th> <th>Pronouns || Acc.</th> <th>Pronouns || Prec.</th> <th>Pronouns || Rec.</th> <th>Pronouns || F-Score</th> <th>Text || Acc.</th> <th>Text || BLEU</th> </tr> </thead> <tbody> <tr> <td>OnlyNames</td> <td>0.53D</td> <td>4.05D</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.15D</td> <td>69.03D</td> </tr> <tr> <td>Ferreira</td> <td>0.61C</td> <td>3.18C</td> <td>0.43B</td> <td>0.57</td> <td>0.54</td> <td>0.55</td> <td>0.19C</td> <td>72.78C</td> </tr> <tr> <td>NeuralREG+Seq2Seq</td> <td>0.74A,B</td> <td>2.32A,B</td> <td>0.75A</td> <td>0.77</td> <td>0.78</td> <td>0.78</td> <td>0.28B</td> <td>79.27A,B</td> </tr> <tr> <td>NeuralREG+CAtt</td> <td>0.74A</td> <td>2.25A</td> <td>0.75A</td> <td>0.73</td> <td>0.78</td> <td>0.75</td> <td>0.30A</td> <td>79.39A</td> </tr> <tr> <td>NeuralREG+HierAtt</td> <td>0.73B</td> <td>2.36B</td> <td>0.73A</td> <td>0.74</td> <td>0.77</td> <td>0.75</td> <td>0.28A,B</td> <td>79.01B</td> </tr> </tbody></table>
Table 1
table_1
P18-1182
9
acl2018
Table 1 summarizes the results for all models on all metrics on the test set and Table 2 depicts a text example lexicalized by each model. The first thing to note in the results of the first table is that the baselines in the top two rows performed quite strongly on this task, generating more than half of the referring expressions exactly as in the gold standard. The method based on Castro Ferreira et al. (2016) performed statistically better than OnlyNames on all metrics due to its capability, albeit to a limited extent, to predict pronominal references (which OnlyNames obviously cannot). We reported results on the test set for NeuralREG+Seq2Seq and NeuralREG+CAtt using dropout probability 0.3 and beam size 5, and NeuralREG+HierAtt with dropout probability of 0.3 and beam size of 1, selected based on the highest accuracy on the development set. Importantly, the three NeuralREG variant models statistically outperformed the two baseline systems. They achieved BLEU scores, text and referential accuracies as well as string edit distances in the range of 79.01-79.39, 28%-30%, 73%-74% and 2.25-2.36, respectively. This means that NeuralREG predicted 3 out of 4 references completely correctly, whereas the incorrect ones needed an average of 2 character-level post-editing operations to be equal to the gold standard. When considering the texts lexicalized with the referring expressions produced by NeuralREG, at least 28% of them are similar to the original texts. Especially noteworthy was the score on pronoun accuracy, indicating that the model was well capable of predicting when to generate a pronominal reference in our dataset. The results for the different decoding methods for NeuralREG were similar, with NeuralREG+CAtt performing slightly better in terms of the BLEU score, text accuracy and String Edit Distance. The more complex NeuralREG+HierAtt yielded the lowest results, even though the differences with the other two models were small and not even statistically significant in many of the cases.
[1, 1, 1, 2, 1, 1, 2, 2, 2, 1, 1]
['Table 1 summarizes the results for all models on all metrics on the test set and Table 2 depicts a text example lexicalized by each model.', 'The first thing to note in the results of the first table is that the baselines in the top two rows performed quite strongly on this task, generating more than half of the referring expressions exactly as in the gold standard.', 'The method based on Castro Ferreira et al. (2016) performed statistically better than OnlyNames on all metrics due to its capability, albeit to a limited extent, to predict pronominal references (which OnlyNames obviously cannot).', 'We reported results on the test set for NeuralREG+Seq2Seq and NeuralREG+CAtt using dropout probability 0.3 and beam size 5, and NeuralREG+HierAtt with dropout probability of 0.3 and beam size of 1, selected based on the highest accuracy on the development set.', 'Importantly, the three NeuralREG variant models statistically outperformed the two baseline systems.', 'They achieved BLEU scores, text and referential accuracies as well as string edit distances in the range of 79.01-79.39, 28%-30%, 73%-74% and 2.25-2.36, respectively.', 'This means that NeuralREG predicted 3 out of 4 references completely correctly, whereas the incorrect ones needed an average of 2 character-level post-editing operations to be equal to the gold standard.', 'When considering the texts lexicalized with the referring expressions produced by NeuralREG, at least 28% of them are similar to the original texts.', 'Especially noteworthy was the score on pronoun accuracy, indicating that the model was well capable of predicting when to generate a pronominal reference in our dataset.', 'The results for the different decoding methods for NeuralREG were similar, with NeuralREG+CAtt performing slightly better in terms of the BLEU score, text accuracy and String Edit Distance.', 'The more complex NeuralREG+HierAtt yielded the lowest results, even though the differences with the other two models were small and not even statistically significant in many of the cases.']
[None, ['OnlyNames', 'Ferreira'], ['OnlyNames', 'Ferreira'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt', 'Text', 'BLEU', 'All References', 'Acc.', 'SED'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+CAtt'], ['NeuralREG+HierAtt']]
1
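The SED column discussed above is a character-level string edit (Levenshtein) distance. A short sketch of that computation; the example strings are hypothetical, not taken from the dataset.

```python
# Illustrative sketch: character-level Levenshtein (string edit) distance; each
# insertion, deletion, or substitution costs 1, as in the SED metric above.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(edit_distance("Alan Bean", "Alan B. Bean"))  # 3: three characters inserted
```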
P18-1182table_3
Fluency, Grammaticality and Clarity results obtained in the human evaluation. Rankings were determined by statistical significance.
1
[['OnlyNames'], ['Ferreira'], ['NeuralREG+Seq2Seq'], ['NeuralREG+CAtt'], ['NeuralREG+HierAtt'], ['Original']]
1
[['Fluency'], ['Grammar'], ['Clarity']]
[['4.74C', '4.68B', '4.90B'], ['4.74C', '4.58B', '4.93B'], ['4.95B,C', '4.82A,B', '4.97B'], ['5.23A,B', '4.95A,B', '5.26A,B'], ['5.07B,C', '4.90A,B', '5.13A,B'], ['5.41A', '5.17A', '5.42A']]
column
['Fluency', 'Grammar', 'Clarity']
['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Grammar</th> <th>Clarity</th> </tr> </thead> <tbody> <tr> <td>OnlyNames</td> <td>4.74C</td> <td>4.68B</td> <td>4.90B</td> </tr> <tr> <td>Ferreira</td> <td>4.74C</td> <td>4.58B</td> <td>4.93B</td> </tr> <tr> <td>NeuralREG+Seq2Seq</td> <td>4.95B,C</td> <td>4.82A,B</td> <td>4.97B</td> </tr> <tr> <td>NeuralREG+CAtt</td> <td>5.23A,B</td> <td>4.95A,B</td> <td>5.26A,B</td> </tr> <tr> <td>NeuralREG+HierAtt</td> <td>5.07B,C</td> <td>4.90A,B</td> <td>5.13A,B</td> </tr> <tr> <td>Original</td> <td>5.41A</td> <td>5.17A</td> <td>5.42A</td> </tr> </tbody></table>
Table 3
table_3
P18-1182
9
acl2018
Table 3 summarizes the results. Inspection of the table reveals a clear pattern: all three neural models scored higher than the baselines on all metrics, with NeuralREG+CAtt in particular approaching the ratings for the original sentences, although, again, differences between the neural models were small. Concerning the size of the triple sets, we did not find any clear pattern. To test the statistical significance of the pairwise comparisons, we used the Wilcoxon signed-rank test corrected for multiple comparisons using the Bonferroni method. Unlike in the automatic evaluation, the differences between the two baselines were not statistically significant for the three metrics. In comparison with the neural models, NeuralREG+CAtt significantly outperformed the baselines in terms of fluency, whereas the other comparisons between baselines and neural models were not statistically significant. The results for the 3 different decoding methods of NeuralREG also did not reveal a significant difference. Finally, the original texts were rated significantly higher than both baselines in terms of the three metrics, and also higher than NeuralREG+Seq2Seq and NeuralREG+HierAtt in terms of fluency, and than NeuralREG+Seq2Seq in terms of clarity.
[1, 1, 1, 2, 2, 1, 1, 1]
['Table 3 summarizes the results.', 'Inspection of the table reveals a clear pattern: all three neural models scored higher than the baselines on all metrics, with NeuralREG+CAtt in particular approaching the ratings for the original sentences, although, again, differences between the neural models were small.', 'Concerning the size of the triple sets, we did not find any clear pattern.', 'To test the statistical significance of the pairwise comparisons, we used the Wilcoxon signed-rank test corrected for multiple comparisons using the Bonferroni method.', 'Unlike in the automatic evaluation, the differences between the two baselines were not statistically significant for the three metrics.', 'In comparison with the neural models, NeuralREG+CAtt significantly outperformed the baselines in terms of fluency, whereas the other comparisons between baselines and neural models were not statistically significant.', 'The results for the 3 different decoding methods of NeuralREG also did not reveal a significant difference.', 'Finally, the original texts were rated significantly higher than both baselines in terms of the three metrics, and also higher than NeuralREG+Seq2Seq and NeuralREG+HierAtt in terms of fluency, and than NeuralREG+Seq2Seq in terms of clarity.']
[None, ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt', 'Fluency', 'Grammar', 'Clarity'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], None, None, ['NeuralREG+CAtt', 'OnlyNames', 'Ferreira'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['Original', 'NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt', 'Fluency', 'Clarity']]
1
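The record above describes pairwise Wilcoxon signed-rank tests with a Bonferroni correction. A minimal sketch of that procedure with SciPy; the per-item ratings below are made up, not the paper's data.

```python
# Illustrative sketch (ratings are invented): pairwise Wilcoxon signed-rank tests
# with a Bonferroni correction, as in the significance testing described above.
from itertools import combinations
from scipy.stats import wilcoxon

ratings = {  # hypothetical per-item fluency ratings for three systems
    "OnlyNames":      [4, 5, 4, 5, 4, 5, 4, 4, 5, 4, 5, 4],
    "NeuralREG+CAtt": [5, 5, 5, 6, 5, 5, 5, 4, 6, 5, 6, 5],
    "Original":       [6, 6, 5, 6, 6, 6, 5, 5, 6, 6, 6, 5],
}

pairs = list(combinations(ratings, 2))
for a, b in pairs:
    stat, p = wilcoxon(ratings[a], ratings[b])
    p_bonferroni = min(1.0, p * len(pairs))  # Bonferroni: multiply by the number of comparisons
    print(f"{a} vs {b}: p = {p:.3f}, corrected p = {p_bonferroni:.3f}")
```

With only a dozen items per system the test may warn about small sample sizes; the real evaluation uses many more rated texts.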
P18-1186table_1
NED performance on the SnapCaptionsKB dataset at Top-1, 3, 5, 10, 50 accuracies. The classification is over 1M entities. Candidates generation methods: N/A, or over a fixed number of candidates generated with methods: m→e hash list and kNN (lexical neighbors).
6
[['Modalities', 'W', 'Model', 'ARNN (Eshel et al. 2017)', 'Candidates Generation', 'm→e list'], ['Modalities', 'W', 'Model', 'ARNN', 'Candidates Generation', '5-NN (lexical)'], ['Modalities', 'W', 'Model', 'ARNN', 'Candidates Generation', '10-NN (lexical)'], ['Modalities', 'W', 'Model', 'sDA-NED (He et al. 2013)', 'Candidates Generation', 'm→e list'], ['Modalities', 'W', 'Model', 'Zeroshot', 'Candidates Generation', 'N/A'], ['Modalities', 'W + C', 'Model', 'DZMNED', 'Candidates Generation', 'N/A'], ['Modalities', 'W + C', 'Model', 'DZMNED + Modality Attention', 'Candidates Generation', 'N/A'], ['Modalities', 'W + C + V', 'Model', 'DZMNED', 'Candidates Generation', 'N/A'], ['Modalities', 'W + C + V', 'Model', 'DZMNED + Modality Attention', 'Candidates Generation', 'N/A']]
2
[['Accuracy (%)', 'Top-1'], ['Accuracy (%)', 'Top-3'], ['Accuracy (%)', 'Top-5'], ['Accuracy (%)', 'Top-10'], ['Accuracy (%)', 'Top-50']]
[['51.2', '60.4', '66.5', '66.9', '66.9'], ['35.2', '43.3', '45.0', '-', '-'], ['31.9', '40.1', '44.5', '50.7', '-'], ['48.7', '57.3', '66.3', '66.9', '66.9'], ['43.6', '63.8', '67.1', '70.5', '77.2'], ['67.0', '72.7', '74.8', '76.8', '85'], ['67.8', '73.5', '74.8', '76.2', '84.6'], ['67.2', '74.6', '77.7', '80.5', '88.1'], ['68.1', '75.5', '78.2', '80.9', '87.9']]
column
['Accuracy (%)', 'Accuracy (%)', 'Accuracy (%)', 'Accuracy (%)', 'Accuracy (%)']
['DZMNED + Modality Attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || Top-1</th> <th>Accuracy (%) || Top-3</th> <th>Accuracy (%) || Top-5</th> <th>Accuracy (%) || Top-10</th> <th>Accuracy (%) || Top-50</th> </tr> </thead> <tbody> <tr> <td>Modalities || W || Model || ARNN (Eshel et al. 2017) || Candidates Generation || m→e list</td> <td>51.2</td> <td>60.4</td> <td>66.5</td> <td>66.9</td> <td>66.9</td> </tr> <tr> <td>Modalities || W || Model || ARNN || Candidates Generation || 5-NN (lexical)</td> <td>35.2</td> <td>43.3</td> <td>45.0</td> <td>-</td> <td>-</td> </tr> <tr> <td>Modalities || W || Model || ARNN || Candidates Generation || 10-NN (lexical)</td> <td>31.9</td> <td>40.1</td> <td>44.5</td> <td>50.7</td> <td>-</td> </tr> <tr> <td>Modalities || W || Model || sDA-NED (He et al. 2013) || Candidates Generation || m→e list</td> <td>48.7</td> <td>57.3</td> <td>66.3</td> <td>66.9</td> <td>66.9</td> </tr> <tr> <td>Modalities || W || Model || Zeroshot || Candidates Generation || N/A</td> <td>43.6</td> <td>63.8</td> <td>67.1</td> <td>70.5</td> <td>77.2</td> </tr> <tr> <td>Modalities || W + C || Model || DZMNED || Candidates Generation || N/A</td> <td>67.0</td> <td>72.7</td> <td>74.8</td> <td>76.8</td> <td>85</td> </tr> <tr> <td>Modalities || W + C || Model || DZMNED + Modality Attention || Candidates Generation || N/A</td> <td>67.8</td> <td>73.5</td> <td>74.8</td> <td>76.2</td> <td>84.6</td> </tr> <tr> <td>Modalities || W + C + V || Model || DZMNED || Candidates Generation || N/A</td> <td>67.2</td> <td>74.6</td> <td>77.7</td> <td>80.5</td> <td>88.1</td> </tr> <tr> <td>Modalities || W + C + V || Model || DZMNED + Modality Attention || Candidates Generation || N/A</td> <td>68.1</td> <td>75.5</td> <td>78.2</td> <td>80.9</td> <td>87.9</td> </tr> </tbody></table>
Table 1
table_1
P18-1186
7
acl2018
Table 1 shows the Top-1, 3, 5, 10, and 50 candidate retrieval accuracy results on the Snap Captions dataset. We see that the proposed approach significantly outperforms the baselines that use a fixed candidate generation method. Note that m→e hash list-based methods, which retrieve as candidates the KB entities that appear in the training set of captions only, have an upper performance limit of 66.9%, showing the limitation of fixed candidate generation methods for unseen entities in social media posts. k-NN methods, which retrieve lexical neighbors of the mention (in an attempt to perform soft normalization on mentions), also do not perform well. Our proposed zeroshot approaches, however, do not fix the candidate set in advance, and instead compare combined contextual and lexical similarities among all 1M KB entities, achieving a higher upper performance limit (Top-50 retrieval accuracy reaches 88.1%). This result indicates that the proposed zeroshot model is capable of predicting for unseen entities as well. The lexical sub-model can also be interpreted as functioning as a soft neural mapping of the mention to potential candidates, rather than heuristic matching to fixed candidates. In addition, when visual context is available (W+C+V), the performance generally improves over the textual models (W+C), showing that visual information can provide additional context for disambiguation. The modality attention module also adds a performance gain by re-weighting the modalities based on their informativeness.
[1, 1, 2, 2, 2, 1, 2, 1, 1]
['Table 1 shows the Top-1, 3, 5, 10, and 50 candidate retrieval accuracy results on the Snap Captions dataset.', 'We see that the proposed approach significantly outperforms the baselines that use a fixed candidate generation method.', 'Note that m→e hash list-based methods, which retrieve as candidates the KB entities that appear in the training set of captions only, have an upper performance limit of 66.9%, showing the limitation of fixed candidate generation methods for unseen entities in social media posts.', 'k-NN methods, which retrieve lexical neighbors of the mention (in an attempt to perform soft normalization on mentions), also do not perform well.', 'Our proposed zeroshot approaches, however, do not fix the candidate set in advance, and instead compare combined contextual and lexical similarities among all 1M KB entities, achieving a higher upper performance limit (Top-50 retrieval accuracy reaches 88.1%).', 'This result indicates that the proposed zeroshot model is capable of predicting for unseen entities as well.', 'The lexical sub-model can also be interpreted as functioning as a soft neural mapping of the mention to potential candidates, rather than heuristic matching to fixed candidates.', 'In addition, when visual context is available (W+C+V), the performance generally improves over the textual models (W+C), showing that visual information can provide additional context for disambiguation.', 'The modality attention module also adds a performance gain by re-weighting the modalities based on their informativeness.']
[['Accuracy (%)', 'Top-1', 'Top-3', 'Top-5', 'Top-10', 'Top-50'], ['DZMNED'], ['m→e list'], ['5-NN (lexical)', '10-NN (lexical)'], ['Zeroshot'], ['Zeroshot'], None, ['W + C + V', 'W + C'], ['DZMNED + Modality Attention']]
1
P18-1186table_2
MNED performance (Top-1, 5, 10 accuracies) on SnapCaptionsKB with varying qualities of KB embeddings. Model: DZMNED (W+C+V) method.
2
[['KB Embeddings', 'Trained with 1M entities'], ['KB Embeddings', 'Trained with 10K entities'], ['KB Embeddings', 'Random embeddings']]
1
[['Top-1'], ['Top-5'], ['Top-10']]
[['68.1', '78.2', '80.9'], ['60.3', '72.5', '75.9'], ['41.4', '45.8', '48.0']]
column
['accuracy', 'accuracy', 'accuracy']
['KB Embeddings']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Top-1</th> <th>Top-5</th> <th>Top-10</th> </tr> </thead> <tbody> <tr> <td>KB Embeddings || Trained with 1M entities</td> <td>68.1</td> <td>78.2</td> <td>80.9</td> </tr> <tr> <td>KB Embeddings || Trained with 10K entities</td> <td>60.3</td> <td>72.5</td> <td>75.9</td> </tr> <tr> <td>KB Embeddings || Random embeddings</td> <td>41.4</td> <td>45.8</td> <td>48.0</td> </tr> </tbody></table>
Table 2
table_2
P18-1186
7
acl2018
To characterize this aspect, we provide Table 2, which shows MNED performance with varying quality of embeddings as follows: KB embeddings learned from 1M knowledge graph entities (same as in the main experiments), from a 10K subset of entities (fewer triplets to train with in Eq. 3, hence lower quality), and random embeddings (poorest), while all the other parameters are kept the same. It can be seen that the performance drops notably with lower-quality KB embeddings. When the KB embeddings are replaced by random embeddings, contextual zeroshot matching to KB entities is effectively prevented and the network relies only on lexical similarities, achieving the poorest performance.
[1, 1, 1]
['To characterize this aspect, we provide Table 2, which shows MNED performance with varying quality of embeddings as follows: KB embeddings learned from 1M knowledge graph entities (same as in the main experiments), from a 10K subset of entities (fewer triplets to train with in Eq. 3, hence lower quality), and random embeddings (poorest), while all the other parameters are kept the same.', 'It can be seen that the performance drops notably with lower-quality KB embeddings.', 'When the KB embeddings are replaced by random embeddings, contextual zeroshot matching to KB entities is effectively prevented and the network relies only on lexical similarities, achieving the poorest performance.']
[['Trained with 1M entities', 'Trained with 10K entities', 'Random embeddings'], ['Trained with 1M entities', 'Trained with 10K entities', 'Random embeddings'], ['Random embeddings']]
1
P18-1188table_1
Ablation results on the validation set. We report R1, R2, R3, R4, RL and their average (Avg.). The first block of the table presents LEAD and POINTERNET which do not use any external information. LEAD is the baseline system selecting first three sentences. POINTERNET is the sentence extraction system of Cheng and Lapata. XNET is our model. The second and third blocks of the table present different variants of XNET. We experimented with three types of external information: title (TITLE), image captions (CAPTION) and the first sentence (FS) of the document. The bottom block of the table presents models with more than one type of external information. The best performing model (highlighted in boldface) is used on the test set.
3
[['MODELS', 'LEAD', '-'], ['MODELS', 'POINTERNET', '-'], ['MODELS', 'XNET+TITLE', '-'], ['MODELS', 'XNET+CAPTION', '-'], ['MODELS', 'XNET+FS', '-'], ['MODELS', 'Combination Models (XNET+)', 'TITLE+CAPTION'], ['MODELS', 'Combination Models (XNET+)', 'TITLE+FS'], ['MODELS', 'Combination Models (XNET+)', 'CAPTION+FS'], ['MODELS', 'Combination Models (XNET+)', 'TITLE+CAPTION+FS']]
1
[['R1'], ['R2'], ['R3'], ['R4'], ['RL'], ['Avg.']]
[['49.2', '18.9', '9.8', '6.0', '43.8', '25.5'], ['53.3', '19.7', '10.4', '6.4', '47.2', '27.4'], ['55.0', '21.6', '11.7', '7.5', '48.9', '28.9'], ['55.3', '21.3', '11.4', '7.2', '49.0', '28.8'], ['54.8', '21.1', '11.3', '7.2', '48.6', '28.6'], ['55.4', '21.8', '11.8', '7.5', '49.2', '29.2'], ['55.1', '21.6', '11.6', '7.4', '48.9', '28.9'], ['55.3', '21.5', '11.5', '7.3', '49.0', '28.9'], ['55.4', '21.5', '11.6', '7.4', '49.1', '29.0']]
column
['R1', 'R2', 'R3', 'R4', 'RL', 'Avg.']
['TITLE+CAPTION']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>R3</th> <th>R4</th> <th>RL</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>MODELS || LEAD || -</td> <td>49.2</td> <td>18.9</td> <td>9.8</td> <td>6.0</td> <td>43.8</td> <td>25.5</td> </tr> <tr> <td>MODELS || POINTERNET || -</td> <td>53.3</td> <td>19.7</td> <td>10.4</td> <td>6.4</td> <td>47.2</td> <td>27.4</td> </tr> <tr> <td>MODELS || XNET+TITLE || -</td> <td>55.0</td> <td>21.6</td> <td>11.7</td> <td>7.5</td> <td>48.9</td> <td>28.9</td> </tr> <tr> <td>MODELS || XNET+CAPTION || -</td> <td>55.3</td> <td>21.3</td> <td>11.4</td> <td>7.2</td> <td>49.0</td> <td>28.8</td> </tr> <tr> <td>MODELS || XNET+FS || -</td> <td>54.8</td> <td>21.1</td> <td>11.3</td> <td>7.2</td> <td>48.6</td> <td>28.6</td> </tr> <tr> <td>MODELS || Combination Models (XNET+) || TITLE+CAPTION</td> <td>55.4</td> <td>21.8</td> <td>11.8</td> <td>7.5</td> <td>49.2</td> <td>29.2</td> </tr> <tr> <td>MODELS || Combination Models (XNET+) || TITLE+FS</td> <td>55.1</td> <td>21.6</td> <td>11.6</td> <td>7.4</td> <td>48.9</td> <td>28.9</td> </tr> <tr> <td>MODELS || Combination Models (XNET+) || CAPTION+FS</td> <td>55.3</td> <td>21.5</td> <td>11.5</td> <td>7.3</td> <td>49.0</td> <td>28.9</td> </tr> <tr> <td>MODELS || Combination Models (XNET+) || TITLE+CAPTION+FS</td> <td>55.4</td> <td>21.5</td> <td>11.6</td> <td>7.4</td> <td>49.1</td> <td>29.0</td> </tr> </tbody></table>
Table 1
table_1
P18-1188
5
acl2018
We report the performance of several variants of XNET on the validation set in Table 1. We also compare them against the LEAD baseline and POINTERNET. These two systems do not use any additional information. Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET. When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information. Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001). The performance with TITLE and CAPTION is better than that with FS. We also tried possible combinations of TITLE, CAPTION and FS. All XNET models are superior to the ones without any external information. XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL, respectively). It is better than the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.
[1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1]
['We report the performance of several variants of XNET on the validation set in Table 1.', 'We also compare them against the LEAD baseline and POINTERNET.', 'These two systems do not use any additional information.', 'Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.', 'When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.', 'Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001).', 'The performance with TITLE and CAPTION is better than that with FS.', 'We also tried possible combinations of TITLE, CAPTION and FS.', 'All XNET models are superior to the ones without any external information.', 'XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL, respectively).', 'It is better than the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.']
[['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS', 'TITLE+CAPTION', 'TITLE+FS', 'CAPTION+FS', 'TITLE+CAPTION+FS'], ['LEAD', 'POINTERNET'], None, ['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS'], ['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS'], ['XNET+TITLE'], ['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS'], ['TITLE+CAPTION', 'TITLE+FS', 'CAPTION+FS', 'TITLE+CAPTION+FS'], ['TITLE+CAPTION', 'TITLE+FS', 'CAPTION+FS', 'TITLE+CAPTION+FS'], ['TITLE+CAPTION', 'R1', 'R2', 'R3', 'R4', 'RL'], ['TITLE+CAPTION', 'TITLE+FS', 'CAPTION+FS', 'TITLE+CAPTION+FS', 'LEAD', 'POINTERNET']]
1
P18-1188table_4
Results (in percentage) for answer selection comparing our approaches (bottom part) to baselines (top): AP-CNN (dos Santos et al., 2016), ABCNN (Yin et al., 2016), L.D.C (Wang and Jiang, 2017), KV-MemNN (Miller et al., 2016), and COMPAGGR, a state-of-the-art system by Wang et al. (2017). (WGT) WRD CNT stands for the (weighted) word count baseline. See text for more details.
1
[['WRD CNT'], ['WGT WRD CNT'], ['AP-CNN'], ['ABCNN'], ['L.D.C'], ['KV-MemNN'], ['LOCALISF'], ['ISF'], ['PAIRCNN'], ['COMPAGGR'], ['XNET'], ['XNETTOPK'], ['LRXNET'], ['XNET+']]
2
[['SQuAD', 'ACC'], ['SQuAD', 'MAP'], ['SQuAD', 'MRR'], ['WikiQA', 'ACC'], ['WikiQA', 'MAP'], ['WikiQA', 'MRR'], ['NewsQA', 'ACC'], ['NewsQA', 'MAP'], ['NewsQA', 'MRR'], ['MSMarco', 'ACC'], ['MSMarco', 'MAP'], ['MSMarco', 'MRR']]
[['77.84', '27.50', '27.77', '51.05', '48.91', '49.24', '44.67', '46.48', '46.91', '20.16', '19.37', '19.51'], ['78.43', '28.10', '28.38', '49.79', '50.99', '51.32', '45.24', '48.20', '48.64', '20.50', '20.06', '20.23'], ['-', '-', '-', '-', '68.86', '69.57', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '69.21', '71.08', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '70.58', '72.26', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '70.69', '72.65', '-', '-', '-', '-', '-', '-'], ['79.50', '27.78', '28.05', '49.79', '49.57', '50.11', '44.69', '48.40', '46.48', '20.21', '20.22', '20.39'], ['78.85', '28.09', '28.36', '48.52', '46.53', '46.72', '45.61', '48.57', '48.99', '20.52', '20.07', '20.23'], ['32.53', '46.34', '46.35', '32.49', '39.87', '38.71', '25.67', '40.16', '39.89', '14.92', '34.62', '35.14'], ['85.52', '91.05', '91.05', '60.76', '73.12', '74.06', '54.54', '67.63', '68.21', '32.05', '52.82', '53.43'], ['35.50', '58.46', '58.84', '54.43', '69.12', '70.22', '26.18', '42.28', '42.43', '15.45', '35.42', '35.97'], ['36.09', '59.70', '59.32', '55.00', '68.66', '70.24', '29.41', '46.69', '46.97', '17.04', '37.60', '38.16'], ['85.63', '91.10', '91.85', '63.29', '76.57', '75.10', '55.17', '68.92', '68.43', '32.92', '31.15', '30.41'], ['79.39', '87.32', '88.00', '57.08', '70.25', '71.28', '47.23', '61.81', '61.42', '23.07', '42.88', '43.42']]
column
['ACC', 'MAP', 'MRR', 'ACC', 'MAP', 'MRR', 'ACC', 'MAP', 'MRR', 'ACC', 'MAP', 'MRR']
['LRXNET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SQuAD || ACC</th> <th>SQuAD || MAP</th> <th>SQuAD || MRR</th> <th>WikiQA || ACC</th> <th>WikiQA || MAP</th> <th>WikiQA || MRR</th> <th>NewsQA || ACC</th> <th>NewsQA || MAP</th> <th>NewsQA || MRR</th> <th>MSMarco || ACC</th> <th>MSMarco || MAP</th> <th>MSMarco || MRR</th> </tr> </thead> <tbody> <tr> <td>WRD CNT</td> <td>77.84</td> <td>27.50</td> <td>27.77</td> <td>51.05</td> <td>48.91</td> <td>49.24</td> <td>44.67</td> <td>46.48</td> <td>46.91</td> <td>20.16</td> <td>19.37</td> <td>19.51</td> </tr> <tr> <td>WGT WRD CNT</td> <td>78.43</td> <td>28.10</td> <td>28.38</td> <td>49.79</td> <td>50.99</td> <td>51.32</td> <td>45.24</td> <td>48.20</td> <td>48.64</td> <td>20.50</td> <td>20.06</td> <td>20.23</td> </tr> <tr> <td>AP-CNN</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>68.86</td> <td>69.57</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>ABCNN</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>69.21</td> <td>71.08</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>L.D.C</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>70.58</td> <td>72.26</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>KV-MemNN</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>70.69</td> <td>72.65</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>LOCALISF</td> <td>79.50</td> <td>27.78</td> <td>28.05</td> <td>49.79</td> <td>49.57</td> <td>50.11</td> <td>44.69</td> <td>48.40</td> <td>46.48</td> <td>20.21</td> <td>20.22</td> <td>20.39</td> </tr> <tr> <td>ISF</td> <td>78.85</td> <td>28.09</td> <td>28.36</td> <td>48.52</td> <td>46.53</td> <td>46.72</td> <td>45.61</td> <td>48.57</td> <td>48.99</td> <td>20.52</td> <td>20.07</td> <td>20.23</td> </tr> <tr> <td>PAIRCNN</td> <td>32.53</td> <td>46.34</td> <td>46.35</td> <td>32.49</td> <td>39.87</td> <td>38.71</td> <td>25.67</td> <td>40.16</td> <td>39.89</td> <td>14.92</td> <td>34.62</td> <td>35.14</td> </tr> <tr> <td>COMPAGGR</td> <td>85.52</td> <td>91.05</td> <td>91.05</td> <td>60.76</td> <td>73.12</td> <td>74.06</td> <td>54.54</td> <td>67.63</td> <td>68.21</td> <td>32.05</td> <td>52.82</td> <td>53.43</td> </tr> <tr> <td>XNET</td> <td>35.50</td> <td>58.46</td> <td>58.84</td> <td>54.43</td> <td>69.12</td> <td>70.22</td> <td>26.18</td> <td>42.28</td> <td>42.43</td> <td>15.45</td> <td>35.42</td> <td>35.97</td> </tr> <tr> <td>XNETTOPK</td> <td>36.09</td> <td>59.70</td> <td>59.32</td> <td>55.00</td> <td>68.66</td> <td>70.24</td> <td>29.41</td> <td>46.69</td> <td>46.97</td> <td>17.04</td> <td>37.60</td> <td>38.16</td> </tr> <tr> <td>LRXNET</td> <td>85.63</td> <td>91.10</td> <td>91.85</td> <td>63.29</td> <td>76.57</td> <td>75.10</td> <td>55.17</td> <td>68.92</td> <td>68.43</td> <td>32.92</td> <td>31.15</td> <td>30.41</td> </tr> <tr> <td>XNET+</td> <td>79.39</td> <td>87.32</td> <td>88.00</td> <td>57.08</td> <td>70.25</td> <td>71.28</td> <td>47.23</td> <td>61.81</td> <td>61.42</td> <td>23.07</td> <td>42.88</td> <td>43.42</td> </tr> </tbody></table>
Table 4
table_4
P18-1188
8
acl2018
Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco. Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation. Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET. This means that just reading the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering. Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR. Our ensemble model LRXNET can ultimately surpass COMPAGGR on the majority of the datasets. This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection. Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique. Using it as a hard constraint, with XNETTOPK, does not achieve the best result. We believe that often the ISF score is a better indicator of answer presence in the vicinity of a certain candidate instead of in the candidate itself. As such, XNET+ is capable of using this feature in datasets with richer context. It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern. For the SQuAD dataset, the results are comparable (less than 1%). However, the improvement for WikiQA reaches around 3% and then the gap shrinks again for NewsQA, with an improvement of around 1%. This could be explained by the fact that each sample of SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA. Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises. Interestingly, our model lags behind COMPAGGR on the MSMarco dataset. It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets. As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly. This can be observed by the fact that XNET and PAIRCNN obtain comparable results. COMPAGGR performs better because comparing each candidate independently is a better strategy.
[1, 1, 1, 2, 1, 1, 0, 0, 1, 2, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1, 1]
['Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.', 'Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.', 'Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.', 'This means that just reading the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.', 'Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.', 'Our ensemble model LRXNET can ultimately surpass COMPAGGR on the majority of the datasets.', 'This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.', 'Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.', 'Using it as a hard constraint, with XNETTOPK, does not achieve the best result.', 'We believe that often the ISF score is a better indicator of answer presence in the vicinity of a certain candidate instead of in the candidate itself.', 'As such, XNET+ is capable of using this feature in datasets with richer context.', 'It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.', 'For the SQuAD dataset, the results are comparable (less than 1%).', 'However, the improvement for WikiQA reaches around 3% and then the gap shrinks again for NewsQA, with an improvement of around 1%.', 'This could be explained by the fact that each sample of SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.', 'Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.', 'Interestingly, our model lags behind COMPAGGR on the MSMarco dataset.', 'It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.', 'As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.', 'This can be observed by the fact that XNET and PAIRCNN obtain comparable results.', 'COMPAGGR performs better because comparing each candidate independently is a better strategy.']
[['SQuAD', 'WikiQA', 'NewsQA', 'MSMarco'], ['XNET', 'PAIRCNN'], ['ISF', 'XNET'], ['XNET'], ['XNET+'], ['LRXNET', 'COMPAGGR'], None, None, ['XNETTOPK'], ['ISF'], ['XNET+'], ['LRXNET'], ['SQuAD', 'LRXNET', 'COMPAGGR'], ['LRXNET', 'COMPAGGR', 'WikiQA', 'NewsQA'], ['WikiQA', 'NewsQA', 'SQuAD'], None, ['MSMarco', 'LRXNET', 'COMPAGGR'], ['MSMarco', 'WikiQA', 'NewsQA', 'SQuAD'], ['XNET+', 'LRXNET'], ['XNET+', 'PAIRCNN'], ['COMPAGGR']]
1
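The MAP and MRR columns in the record above are standard ranking metrics over candidate answer sentences. A small sketch of both; the relevance labels below are invented for illustration.

```python
# Illustrative sketch: Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR)
# for answer-sentence ranking, the metrics reported alongside accuracy above.
def average_precision(labels_ranked):
    # labels_ranked: relevance labels (1 = correct sentence) in ranked order
    hits, precisions = 0, []
    for i, rel in enumerate(labels_ranked, 1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(labels_ranked):
    for i, rel in enumerate(labels_ranked, 1):
        if rel:
            return 1.0 / i
    return 0.0

# hypothetical ranked relevance labels for two questions
questions = [[0, 1, 0, 1], [1, 0, 0, 0]]
print("MAP:", sum(average_precision(q) for q in questions) / len(questions))  # 0.75
print("MRR:", sum(reciprocal_rank(q) for q in questions) / len(questions))    # 0.75
```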
P18-1189table_1
Performance of our various models in an unsupervised setting (i.e., without labels or covariates) on the IMDB dataset using a 5,000-word vocabulary and 50 topics. The supplementary materials contain additional results for 20 newsgroups and Yahoo answers.
2
[['Model', 'LDA'], ['Model', 'SAGE'], ['Model', 'NVDM'], ['Model', 'SCHOLAR - B.G.'], ['Model', 'SCHOLAR'], ['Model', 'SCHOLAR + W.V.'], ['Model', 'SCHOLAR + REG.']]
1
[['Ppl.'], ['NPMI (int.)'], ['NPMI (ext.)'], ['Sparsity']]
[['1508', '0.13', '0.14', '0'], ['1767', '0.12', '0.12', '0.79'], ['1748', '0.06', '0.04', '0'], ['1889', '0.09', '0.13', '0'], ['1905', '0.14', '0.13', '0'], ['1991', '0.18', '0.17', '0'], ['2185', '0.10', '0.12', '0.58']]
column
['Ppl.', 'NPMI (int.)', 'NPMI (ext.)', 'Sparsity']
['SCHOLAR', 'SCHOLAR + W.V.', 'SCHOLAR + REG.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ppl.</th> <th>NPMI (int.)</th> <th>NPMI (ext.)</th> <th>Sparsity</th> </tr> </thead> <tbody> <tr> <td>Model || LDA</td> <td>1508</td> <td>0.13</td> <td>0.14</td> <td>0</td> </tr> <tr> <td>Model || SAGE</td> <td>1767</td> <td>0.12</td> <td>0.12</td> <td>0.79</td> </tr> <tr> <td>Model || NVDM</td> <td>1748</td> <td>0.06</td> <td>0.04</td> <td>0</td> </tr> <tr> <td>Model || SCHOLAR - B.G.</td> <td>1889</td> <td>0.09</td> <td>0.13</td> <td>0</td> </tr> <tr> <td>Model || SCHOLAR</td> <td>1905</td> <td>0.14</td> <td>0.13</td> <td>0</td> </tr> <tr> <td>Model || SCHOLAR + W.V.</td> <td>1991</td> <td>0.18</td> <td>0.17</td> <td>0</td> </tr> <tr> <td>Model || SCHOLAR + REG.</td> <td>2185</td> <td>0.10</td> <td>0.12</td> <td>0.58</td> </tr> </tbody></table>
Table 1
table_1
P18-1189
7
acl2018
We therefore use the same experimental setup as Srivastava and Sutton (2017) (learning rate, momentum, batch size, and number of epochs) and find the same general patterns as they reported (see Table 1 and supplementary material): our model returns more coherent topics than LDA, but at the cost of worse perplexity. SAGE, by contrast, attains very high levels of sparsity, but at the cost of worse perplexity and coherence than LDA. As expected, the NVDM produces relatively low perplexity, but very poor coherence, due to its lack of constraints on θ. Further experimentation revealed that the VAE framework involves a tradeoff among the scores; running for more epochs tends to result in better perplexity on held-out data, but at the cost of worse coherence. Adding regularization to encourage sparse topics has a similar effect as in SAGE, leading to worse perplexity and coherence, but it does create sparse topics. Interestingly, initializing the encoder with pretrained word2vec embeddings, and not updating them, returned a model with the best internal coherence of any model we considered for IMDB and Yahoo answers, and the second-best for 20 newsgroups.
[1, 1, 1, 2, 1, 1]
['We therefore use the same experimental setup as Srivastava and Sutton (2017) (learning rate, momentum, batch size, and number of epochs) and find the same general patterns as they reported (see Table 1 and supplementary material): our model returns more coherent topics than LDA, but at the cost of worse perplexity.', 'SAGE, by contrast, attains very high levels of sparsity, but at the cost of worse perplexity and coherence than LDA.', 'As expected, the NVDM produces relatively low perplexity, but very poor coherence, due to its lack of constraints on θ.', 'Further experimentation revealed that the VAE framework involves a tradeoff among the scores; running for more epochs tends to result in better perplexity on held-out data, but at the cost of worse coherence.', 'Adding regularization to encourage sparse topics has a similar effect as in SAGE, leading to worse perplexity and coherence, but it does create sparse topics.', 'Interestingly, initializing the encoder with pretrained word2vec embeddings, and not updating them, returned a model with the best internal coherence of any model we considered for IMDB and Yahoo answers, and the second-best for 20 newsgroups.']
[['LDA'], ['SAGE', 'Sparsity'], ['NVDM', 'NPMI (int.)', 'NPMI (ext.)'], None, ['SCHOLAR + REG.'], ['SCHOLAR + W.V.']]
1
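The NPMI coherence scores in the record above normalize pointwise mutual information by the negative log joint probability, estimated from document-level co-occurrence. A toy sketch; the mini-corpus and word pairs are made up.

```python
# Illustrative sketch: NPMI coherence for a pair of topic words, estimated from
# document co-occurrence counts, as used for the internal/external coherence above.
import math

docs = [  # hypothetical tokenized documents (as token sets)
    {"film", "actor", "plot", "scene"},
    {"film", "director", "scene"},
    {"actor", "director", "award"},
    {"music", "album", "song"},
]

def npmi(w1, w2, docs):
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum((w1 in d) and (w2 in d) for d in docs) / n
    if p12 == 0:
        return -1.0                        # never co-occur: minimum NPMI
    pmi = math.log(p12 / (p1 * p2))
    return pmi / (-math.log(p12))          # normalize to [-1, 1]

print(round(npmi("film", "scene", docs), 3))  # close to 1: strong co-occurrence
print(round(npmi("film", "song", docs), 3))   # -1: never co-occur
```

Topic-level coherence is then typically the average NPMI over the top word pairs of each topic.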
P18-1191table_4
Results for Span Detection on the dense development dataset. Span detection results are given with the cutoff threshold τ at 0.5, and at the value which maximizes F-score. The top chart lists precision, recall and F-score with exact span match, while the bottom reports matches where the intersection over union (IOU) is ≥ 0.5.
1
[['BIO'], ['Span (tau = 0.5)'], ['Span (tau = tau*)']]
2
[['Exact Match', 'P'], ['Exact Match', 'R'], ['Exact Match', 'F'], ['IOU ≥ 0.5', 'P'], ['IOU ≥ 0.5', 'R'], ['IOU ≥ 0.5', 'F']]
[['69.0', '75.9', '72.2', '80.4', '86.0', '83.1'], ['81.7', '80.9', '81.3', '87.5', '84.2', '85.8'], ['80.0', '84.7', '82.2', '83.8', '93.0', '88.1']]
column
['P', 'R', 'F', 'P', 'R', 'F']
['Span (tau = 0.5)', 'Span (tau = tau*)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match || P</th> <th>Exact Match || R</th> <th>Exact Match || F</th> <th>IOU ≥ 0.5 || P</th> <th>IOU ≥ 0.5 || R</th> <th>IOU ≥ 0.5 || F</th> </tr> </thead> <tbody> <tr> <td>BIO</td> <td>69.0</td> <td>75.9</td> <td>72.2</td> <td>80.4</td> <td>86.0</td> <td>83.1</td> </tr> <tr> <td>Span (tau = 0.5)</td> <td>81.7</td> <td>80.9</td> <td>81.3</td> <td>87.5</td> <td>84.2</td> <td>85.8</td> </tr> <tr> <td>Span (tau = tau*)</td> <td>80.0</td> <td>84.7</td> <td>82.2</td> <td>83.8</td> <td>93.0</td> <td>88.1</td> </tr> </tbody></table>
Table 4
table_4
P18-1191
6
acl2018
Table 4 shows span detection results on the development set. We report results for the span-based model at two threshold values τ: τ = 0.5, and τ = τ*, the value maximizing F1. The span-based model significantly improves over the BIO model in both precision and recall, although the difference is less pronounced under IOU matching.
[1, 1, 1]
['Table 4 shows span detection results on the development set.', 'We report results for the span-based model at two threshold values τ: τ = 0.5, and τ = τ*, the value maximizing F1.', 'The span-based model significantly improves over the BIO model in both precision and recall, although the difference is less pronounced under IOU matching.']
[None, ['Span (tau = 0.5)', 'Span (tau = tau*)'], ['Span (tau = 0.5)', 'Span (tau = tau*)', 'P', 'R']]
1
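The record above scores span detection both by exact match and by intersection over union (IOU) at a 0.5 threshold. A minimal sketch of span IOU over token indices; the spans are hypothetical.

```python
# Illustrative sketch: token-span intersection-over-union (IOU) and the "IOU >= 0.5"
# matching criterion used for span detection in the record above.
def span_iou(a, b):
    # spans are (start, end) token indices: inclusive start, exclusive end
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

gold = (3, 7)       # hypothetical gold span
predicted = (4, 8)  # hypothetical predicted span

iou = span_iou(gold, predicted)
print(f"IOU = {iou:.2f}, counts as a match: {iou >= 0.5}")  # IOU = 0.60, True
```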
P18-1191table_5
Question Generation results on the dense development set. EM Exact Match accuracy, PM Partial Match Accuracy, SA Slot-level accuracy
1
[['Local'], ['Seq.']]
1
[['EM'], ['PM'], ['SA']]
[['44.2', '62.0', '83.2'], ['47.2', '62.3', '82.9']]
column
['EM', 'PM', 'SA']
['Seq.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>PM</th> <th>SA</th> </tr> </thead> <tbody> <tr> <td>Local</td> <td>44.2</td> <td>62.0</td> <td>83.2</td> </tr> <tr> <td>Seq.</td> <td>47.2</td> <td>62.3</td> <td>82.9</td> </tr> </tbody></table>
Table 5
table_5
P18-1191
6
acl2018
Table 5 shows the results for question generation on the development set. The sequential model's exact match accuracy is significantly higher, while word-level accuracy is roughly comparable, reflecting the fact that the local model learns the slot-level posteriors.
[1, 1]
['Table 5 shows the results for question generation on the development set.', "The sequential model's exact match accuracy is significantly higher, while word-level accuracy is roughly comparable, reflecting the fact that the local model learns the slot-level posteriors."]
[None, ['Seq.', 'EM', 'PM', 'SA']]
1
P18-1192table_4
Results on the Chinese test set.
2
[['System (syntax-aware)', 'Zhao et al. (2009a)'], ['System (syntax-aware)', 'Bjorkelund et al. (2009)'], ['System (syntax-aware)', 'Roth and Lapata (2016)'], ['System (syntax-aware)', 'Marcheggiani and Titov (2017)'], ['System (syntax-aware)', 'Ours'], ['System (syntax-agnostic)', 'Marcheggiani et al. (2017)'], ['System (syntax-agnostic)', 'Ours']]
1
[['P'], ['R'], ['F1']]
[['80.4', '75.2', '77.7'], ['82.4', '75.1', '78.6'], ['83.2', '75.9', '79.4'], ['84.6', '80.4', '82.5'], ['84.2', '81.5', '82.8'], ['83.4', '79.1', '81.2'], ['84.5', '79.3', '81.8']]
column
['P', 'R', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System (syntax-aware) || Zhao et al. (2009a)</td> <td>80.4</td> <td>75.2</td> <td>77.7</td> </tr> <tr> <td>System (syntax-aware) || Bjorkelund et al. (2009)</td> <td>82.4</td> <td>75.1</td> <td>78.6</td> </tr> <tr> <td>System (syntax-aware) || Roth and Lapata (2016)</td> <td>83.2</td> <td>75.9</td> <td>79.4</td> </tr> <tr> <td>System (syntax-aware) || Marcheggiani and Titov (2017)</td> <td>84.6</td> <td>80.4</td> <td>82.5</td> </tr> <tr> <td>System (syntax-aware) || Ours</td> <td>84.2</td> <td>81.5</td> <td>82.8</td> </tr> <tr> <td>System (syntax-agnostic) || Marcheggiani et al. (2017)</td> <td>83.4</td> <td>79.1</td> <td>81.2</td> </tr> <tr> <td>System (syntax-agnostic) || Ours</td> <td>84.5</td> <td>79.3</td> <td>81.8</td> </tr> </tbody></table>
Table 4
table_4
P18-1192
5
acl2018
Table 4 presents the results on the Chinese test set. Even though we use the same parameters as for English, our model also outperforms the best reported results by 0.3% (syntax-aware) and 0.6% (syntax-agnostic) in F1 scores.
[1, 1]
['Table 4 presents the results on the Chinese test set.', 'Even though we use the same parameters as for English, our model also outperforms the best reported results by 0.3% (syntax-aware) and 0.6% (syntax-agnostic) in F1 scores.']
[None, ['Ours', 'Marcheggiani and Titov (2017)', 'System (syntax-aware)', 'F1', 'System (syntax-agnostic)']]
1
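The P/R/F1 columns in the P18-1192 records above are related by the standard harmonic-mean formula, so the reported F1 values can be sanity-checked directly from the table (a quick check, not anything from the paper itself):

```python
# F1 is the harmonic mean of precision and recall; e.g. the syntax-aware "Ours"
# row of Table 4 (P=84.2, R=81.5) gives F1 of roughly 82.8, matching the table.
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(84.2, 81.5), 1))  # -> 82.8
```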
P18-1192table_5
SRL results without predicate sense.
2
[['System(without predicate sense)', '1st-order'], ['System(without predicate sense)', '2nd-order'], ['System(without predicate sense)', '3rd-order'], ['System(without predicate sense)', 'Marcheggiani and Titov (2017)']]
1
[['P'], ['R'], ['F1']]
[['84.4', '82.6', '83.5'], ['84.8', '83.0', '83.9'], ['85.1', '83.3', '84.2'], ['85.2', '81.6', '83.3']]
column
['P', 'R', 'F1']
['1st-order', '2nd-order', '3rd-order']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System(without predicate sense) || 1st-order</td> <td>84.4</td> <td>82.6</td> <td>83.5</td> </tr> <tr> <td>System(without predicate sense) || 2nd-order</td> <td>84.8</td> <td>83.0</td> <td>83.9</td> </tr> <tr> <td>System(without predicate sense) || 3rd-order</td> <td>85.1</td> <td>83.3</td> <td>84.2</td> </tr> <tr> <td>System(without predicate sense) || Marcheggiani and Titov (2017)</td> <td>85.2</td> <td>81.6</td> <td>83.3</td> </tr> </tbody></table>
Table 5
table_5
P18-1192
6
acl2018
Table 5 shows the results from our syntax-aware model with lower-order argument pruning. Compared to the best previous model, our system still yields an improvement in recall of more than 1%, leading to improvements in F1 score. This demonstrates that refining the syntactic parse tree based candidate pruning does help in argument recognition.
[1, 1, 2]
['Table 5 shows the results from our syntax-aware model with lower-order argument pruning.', 'Compared to the best previous model, our system still yields an improvement in recall of more than 1%, leading to improvements in F1 score.', 'This demonstrates that refining the syntactic parse tree based candidate pruning does help in argument recognition.']
[['1st-order', '2nd-order', '3rd-order'], ['Marcheggiani and Titov (2017)', '1st-order', '2nd-order', '3rd-order', 'R', 'F1'], ['1st-order', '2nd-order', '3rd-order']]
1
P18-1192table_9
Results on the English test set, in terms of labeled attachment score for syntactic dependencies (LAS), semantic precision (P), semantic recall (R), semantic labeled F1 score (Sem-F1), and the ratio Sem-F1/LAS. A superscript * indicates LAS results from our personal communication with the authors.
2
[['System', 'Zhao et al. (2009c) [SRL-only]'], ['System', 'Zhao et al. (2009a) [Joint]'], ['System', 'Bjorkelund et al. (2010)'], ['System', 'Lei et al. (2015)'], ['System', 'Roth and Lapata (2016)'], ['System', 'Marcheggiani and Titov (2017)'], ['System', 'Ours + CoNLL-2009 predicted'], ['System', 'Ours + Auto syntax'], ['System', 'Ours + Gold syntax']]
1
[['LAS (%)'], ['P (%)'], ['R (%)'], ['Sem-F1 (%)'], ['Sem-F1/LAS (%)']]
[['86.0', '-', '-', '85.4', '99.3'], ['89.2', '-', '-', '86.2', '96.6'], ['89.8', '87.1', '84.5', '85.8', '95.6'], ['90.4', '-', '-', '86.6', '95.8'], ['89.8', '88.1', '85.3', '86.7', '96.5'], ['90.3*', '89.1', '86.8', '88.0', '97.5'], ['86.0', '89.7', '89.3', '89.5', '104.0'], ['90.0', '90.5', '89.3', '89.9', '99.9'], ['100', '91.0', '89.7', '90.3', '90.3']]
column
['LAS (%)', 'P (%)', 'R (%)', 'Sem-F1 (%)', 'Sem-F1/LAS (%)']
['Ours + CoNLL-2009 predicted', 'Ours + Auto syntax']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAS (%)</th> <th>P (%)</th> <th>R (%)</th> <th>Sem-F1 (%)</th> <th>Sem-F1/LAS (%)</th> </tr> </thead> <tbody> <tr> <td>System || Zhao et al. (2009c) [SRL-only]</td> <td>86.0</td> <td>-</td> <td>-</td> <td>85.4</td> <td>99.3</td> </tr> <tr> <td>System || Zhao et al. (2009a) [Joint]</td> <td>89.2</td> <td>-</td> <td>-</td> <td>86.2</td> <td>96.6</td> </tr> <tr> <td>System || Bjorkelund et al. (2010)</td> <td>89.8</td> <td>87.1</td> <td>84.5</td> <td>85.8</td> <td>95.6</td> </tr> <tr> <td>System || Lei et al. (2015)</td> <td>90.4</td> <td>-</td> <td>-</td> <td>86.6</td> <td>95.8</td> </tr> <tr> <td>System || Roth and Lapata (2016)</td> <td>89.8</td> <td>88.1</td> <td>85.3</td> <td>86.7</td> <td>96.5</td> </tr> <tr> <td>System || Marcheggiani and Titov (2017)</td> <td>90.3*</td> <td>89.1</td> <td>86.8</td> <td>88.0</td> <td>97.5</td> </tr> <tr> <td>System || Ours + CoNLL-2009 predicted</td> <td>86.0</td> <td>89.7</td> <td>89.3</td> <td>89.5</td> <td>104.0</td> </tr> <tr> <td>System || Ours + Auto syntax</td> <td>90.0</td> <td>90.5</td> <td>89.3</td> <td>89.9</td> <td>99.9</td> </tr> <tr> <td>System || Ours + Gold syntax</td> <td>100</td> <td>91.0</td> <td>89.7</td> <td>90.3</td> <td>90.3</td> </tr> </tbody></table>
Table 9
table_9
P18-1192
8
acl2018
Table 9 reports the performance of existing models in terms of the Sem-F1/LAS ratio on the CoNLL-2009 English test set. Interestingly, even though our system scores significantly lower than the others by 3.8% LAS in the syntactic component, we obtain the highest results on both Sem-F1 and the Sem-F1/LAS ratio. These results show that our SRL component is relatively much stronger. Moreover, the ratio comparison in Table 9 also shows that since the CoNLL-2009 shared task, most SRL works have actually benefited from an enhanced syntactic component rather than from an improved SRL component itself. No post-CoNLL SRL system, whether traditional or neural, exceeded the top systems of the CoNLL-2009 shared task: Zhao et al. (2009c) (SRL-only track, using the provided predicted syntax) and Zhao et al. (2009a) (Joint track, using a self-developed parser). We believe that this work is the first since the CoNLL-2009 shared task to report both a higher Sem-F1 and a higher Sem-F1/LAS ratio.
[1, 1, 2, 2, 2, 1]
['Table 9 reports the performance of existing models in terms of the Sem-F1/LAS ratio on the CoNLL-2009 English test set.', 'Interestingly, even though our system scores significantly lower than the others by 3.8% LAS in the syntactic component, we obtain the highest results on both Sem-F1 and the Sem-F1/LAS ratio.', 'These results show that our SRL component is relatively much stronger.', 'Moreover, the ratio comparison in Table 9 also shows that since the CoNLL-2009 shared task, most SRL works have actually benefited from an enhanced syntactic component rather than from an improved SRL component itself.', 'No post-CoNLL SRL system, whether traditional or neural, exceeded the top systems of the CoNLL-2009 shared task: Zhao et al. (2009c) (SRL-only track, using the provided predicted syntax) and Zhao et al. (2009a) (Joint track, using a self-developed parser).', 'We believe that this work is the first since the CoNLL-2009 shared task to report both a higher Sem-F1 and a higher Sem-F1/LAS ratio.']
[['Sem-F1 (%)', 'Sem-F1/LAS (%)'], ['Ours + CoNLL-2009 predicted', 'Ours + Auto syntax', 'LAS (%)', 'Sem-F1 (%)', 'Sem-F1/LAS (%)'], None, None, None, ['Sem-F1 (%)', 'Sem-F1/LAS (%)']]
1
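The last column of Table 9 in the record above is, per its caption, just Sem-F1 divided by LAS expressed as a percentage, which can be verified directly from the reported scores (a minimal check, not the authors' code):

```python
# Sem-F1/LAS column of Table 9: e.g. "Ours + Auto syntax" has Sem-F1 = 89.9, LAS = 90.0.
sem_f1, las = 89.9, 90.0
print(round(100.0 * sem_f1 / las, 1))  # -> 99.9, matching the table
```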
P18-1195table_1
MS-COCO's test set evaluation measures.
6
[['Loss', 'MLE', 'Reward', '-', 'Vsub', '-'], ['Loss', 'MLE + lambda H', 'Reward', '-', 'Vsub', '-'], ['Loss', 'Tok', 'Reward', 'Glove sim', 'Vsub', '-'], ['Loss', 'Tok', 'Reward', 'Glove sim rfreq', 'Vsub', '-'], ['Loss', 'Seq', 'Reward', 'Hamming', 'Vsub', 'V'], ['Loss', 'Seq', 'Reward', 'Hamming', 'Vsub', 'Vbatch'], ['Loss', 'Seq', 'Reward', 'Hamming', 'Vsub', 'Vrefs'], ['Loss', 'Seq lazy', 'Reward', 'Hamming', 'Vsub', 'V'], ['Loss', 'Seq lazy', 'Reward', 'Hamming', 'Vsub', 'Vbatch'], ['Loss', 'Seq lazy', 'Reward', 'Hamming', 'Vsub', 'Vrefs'], ['Loss', 'Seq', 'Reward', 'CIDER', 'Vsub', 'V'], ['Loss', 'Seq', 'Reward', 'CIDER', 'Vsub', 'Vbatch'], ['Loss', 'Seq', 'Reward', 'CIDER', 'Vsub', 'Vrefs'], ['Loss', 'Seq lazy', 'Reward', 'CIDER', 'Vsub', 'V'], ['Loss', 'Seq lazy', 'Reward', 'CIDER', 'Vsub', 'Vbatch'], ['Loss', 'Seq lazy', 'Reward', 'CIDER', 'Vsub', 'Vrefs'], ['Loss', 'Tok-Seq', 'Reward', 'Hamming', 'Vsub', 'V'], ['Loss', 'Tok-Seq', 'Reward', 'Hamming', 'Vsub', 'Vbatch'], ['Loss', 'Tok-Seq', 'Reward', 'Hamming', 'Vsub', 'Vrefs'], ['Loss', 'Tok-Seq', 'Reward', 'CIDER', 'Vsub', 'V'], ['Loss', 'Tok-Seq', 'Reward', 'CIDER', 'Vsub', 'Vbatch'], ['Loss', 'Tok-Seq', 'Reward', 'CIDER', 'Vsub', 'Vrefs']]
2
[['Without attention', 'BLEU-1'], ['Without attention', 'BLEU-4'], ['Without attention', 'CIDER'], ['With attention', 'BLEU-1'], ['With attention', 'BLEU-4'], ['With attention', 'CIDER']]
[['70.63', '30.14', '93.59', '73.40', '33.11', '101.63'], ['70.79', '30.29', '93.61', '72.68', '32.15', '99.77'], ['71.94', '31.27', '95.79', '73.49', '32.93', '102.33'], ['72.39', '31.76', '97.47', '74.01', '33.25', '102.81'], ['71.76', '31.16', '96.37', '73.12', '32.71', '101.25'], ['71.46', '31.15', '96.53', '73.26', '32.73', '101.90'], ['71.80', '31.63', '96.22', '73.53', '32.59', '102.33'], ['70.81', '30.43', '94.26', '73.29', '32.81', '101.58'], ['71.85', '31.13', '96.65', '73.43', '32.95', '102.03'], ['71.96', '31.23', '95.34', '73.53', '33.09', '101.89'], ['71.05', '30.46', '94.40', '73.08', '32.51', '101.84'], ['71.51', '31.17', '95.78', '73.50', '33.04', '102.98'], ['71.93', '31.41', '96.81', '73.42', '32.91', '102.23'], ['71.43', '31.18', '96.32', '73.55', '33.19', '102.94'], ['71.47', '31.00', '95.56', '73.18', '32.60', '101.30'], ['71.82', '31.06', '95.66', '73.92', '33.10', '102.64'], ['70.79', '30.43', '96.34', '73.68', '32.87', '101.11'], ['72.28', '31.65', '96.73', '73.86', '33.32', '102.90'], ['72.69', '32.30', '98.01', '73.56', '33.00', '101.72'], ['70.80', '30.55', '96.89', '73.31', '32.40', '100.33'], ['72.13', '31.71', '96.92', '73.61', '32.67', '101.41'], ['73.08', '32.82', '99.92', '74.28', '33.34', '103.81']]
column
['BLEU-1', 'BLEU-4', 'CIDER', 'BLEU-1', 'BLEU-4', 'CIDER']
['Tok-Seq']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Without attention || BLEU-1</th> <th>Without attention || BLEU-4</th> <th>Without attention || CIDER</th> <th>With attention || BLEU-1</th> <th>With attention || BLEU-4</th> <th>With attention || CIDER</th> </tr> </thead> <tbody> <tr> <td>Loss || MLE || Reward || - || Vsub || -</td> <td>70.63</td> <td>30.14</td> <td>93.59</td> <td>73.40</td> <td>33.11</td> <td>101.63</td> </tr> <tr> <td>Loss || MLE + lambda H || Reward || - || Vsub || -</td> <td>70.79</td> <td>30.29</td> <td>93.61</td> <td>72.68</td> <td>32.15</td> <td>99.77</td> </tr> <tr> <td>Loss || Tok || Reward || Glove sim || Vsub || -</td> <td>71.94</td> <td>31.27</td> <td>95.79</td> <td>73.49</td> <td>32.93</td> <td>102.33</td> </tr> <tr> <td>Loss || Tok || Reward || Glove sim rfreq || Vsub || -</td> <td>72.39</td> <td>31.76</td> <td>97.47</td> <td>74.01</td> <td>33.25</td> <td>102.81</td> </tr> <tr> <td>Loss || Seq || Reward || Hamming || Vsub || V</td> <td>71.76</td> <td>31.16</td> <td>96.37</td> <td>73.12</td> <td>32.71</td> <td>101.25</td> </tr> <tr> <td>Loss || Seq || Reward || Hamming || Vsub || Vbatch</td> <td>71.46</td> <td>31.15</td> <td>96.53</td> <td>73.26</td> <td>32.73</td> <td>101.90</td> </tr> <tr> <td>Loss || Seq || Reward || Hamming || Vsub || Vrefs</td> <td>71.80</td> <td>31.63</td> <td>96.22</td> <td>73.53</td> <td>32.59</td> <td>102.33</td> </tr> <tr> <td>Loss || Seq lazy || Reward || Hamming || Vsub || V</td> <td>70.81</td> <td>30.43</td> <td>94.26</td> <td>73.29</td> <td>32.81</td> <td>101.58</td> </tr> <tr> <td>Loss || Seq lazy || Reward || Hamming || Vsub || Vbatch</td> <td>71.85</td> <td>31.13</td> <td>96.65</td> <td>73.43</td> <td>32.95</td> <td>102.03</td> </tr> <tr> <td>Loss || Seq lazy || Reward || Hamming || Vsub || Vrefs</td> <td>71.96</td> <td>31.23</td> <td>95.34</td> <td>73.53</td> <td>33.09</td> <td>101.89</td> </tr> <tr> <td>Loss || Seq || Reward || CIDER || Vsub || V</td> <td>71.05</td> <td>30.46</td> <td>94.40</td> <td>73.08</td> <td>32.51</td> <td>101.84</td> </tr> <tr> <td>Loss || Seq || Reward || CIDER || Vsub || Vbatch</td> <td>71.51</td> <td>31.17</td> <td>95.78</td> <td>73.50</td> <td>33.04</td> <td>102.98</td> </tr> <tr> <td>Loss || Seq || Reward || CIDER || Vsub || Vrefs</td> <td>71.93</td> <td>31.41</td> <td>96.81</td> <td>73.42</td> <td>32.91</td> <td>102.23</td> </tr> <tr> <td>Loss || Seq lazy || Reward || CIDER || Vsub || V</td> <td>71.43</td> <td>31.18</td> <td>96.32</td> <td>73.55</td> <td>33.19</td> <td>102.94</td> </tr> <tr> <td>Loss || Seq lazy || Reward || CIDER || Vsub || Vbatch</td> <td>71.47</td> <td>31.00</td> <td>95.56</td> <td>73.18</td> <td>32.60</td> <td>101.30</td> </tr> <tr> <td>Loss || Seq lazy || Reward || CIDER || Vsub || Vrefs</td> <td>71.82</td> <td>31.06</td> <td>95.66</td> <td>73.92</td> <td>33.10</td> <td>102.64</td> </tr> <tr> <td>Loss || Tok-Seq || Reward || Hamming || Vsub || V</td> <td>70.79</td> <td>30.43</td> <td>96.34</td> <td>73.68</td> <td>32.87</td> <td>101.11</td> </tr> <tr> <td>Loss || Tok-Seq || Reward || Hamming || Vsub || Vbatch</td> <td>72.28</td> <td>31.65</td> <td>96.73</td> <td>73.86</td> <td>33.32</td> <td>102.90</td> </tr> <tr> <td>Loss || Tok-Seq || Reward || Hamming || Vsub || Vrefs</td> <td>72.69</td> <td>32.30</td> <td>98.01</td> <td>73.56</td> <td>33.00</td> <td>101.72</td> </tr> <tr> <td>Loss || Tok-Seq || Reward || CIDER || Vsub || V</td> <td>70.80</td> <td>30.55</td> <td>96.89</td> <td>73.31</td> <td>32.40</td> <td>100.33</td> </tr> <tr> 
<td>Loss || Tok-Seq || Reward || CIDER || Vsub || Vbatch</td> <td>72.13</td> <td>31.71</td> <td>96.92</td> <td>73.61</td> <td>32.67</td> <td>101.41</td> </tr> <tr> <td>Loss || Tok-Seq || Reward || CIDER || Vsub || Vrefs</td> <td>73.08</td> <td>32.82</td> <td>99.92</td> <td>74.28</td> <td>33.34</td> <td>103.81</td> </tr> </tbody></table>
Table 1
table_1
P18-1195
6
acl2018
For reference, we include in Table 1 baseline results obtained using MLE, and our implementation of MLE with entropy regularization (MLE + lambda H) (Pereyra et al., 2017), as well as the RAML approach of Norouzi et al. (2016), which corresponds to sequence-level smoothing based on the Hamming reward and sampling replacements from the full vocabulary (Seq, Hamming, V). We observe that entropy smoothing is not able to improve performance much over MLE for the model without attention, and even deteriorates for the attention model. We improve upon RAML by choosing an adequate subset of vocabulary for substitutions. We also report the performances of token-level smoothing, where the promotion of rare tokens boosted the scores in both attentive and non-attentive models. For sequence-level smoothing, choosing a task-relevant reward with importance sampling yielded better results than plain Hamming distance. Moreover, we used the two smoothing schemes (Tok-Seq) and achieved the best results with CIDER as a reward for sequence-level smoothing combined with a token-level smoothing that promotes rare tokens, improving CIDER from 93.59 (MLE) to 99.92 for the model without attention, and improving from 101.63 to 103.81 with attention.
[1, 1, 2, 1, 1, 1]
['For reference, we include in Table 1 baseline results obtained using MLE, and our implementation of MLE with entropy regularization (MLE + lambda H) (Pereyra et al., 2017), as well as the RAML approach of Norouzi et al. (2016), which corresponds to sequence-level smoothing based on the Hamming reward and sampling replacements from the full vocabulary (Seq, Hamming, V).', 'We observe that entropy smoothing is not able to improve performance much over MLE for the model without attention, and even deteriorates for the attention model.', 'We improve upon RAML by choosing an adequate subset of vocabulary for substitutions.', 'We also report the performances of token-level smoothing, where the promotion of rare tokens boosted the scores in both attentive and non-attentive models.', 'For sequence-level smoothing, choosing a task-relevant reward with importance sampling yielded better results than plain Hamming distance.', 'Moreover, we used the two smoothing schemes (Tok-Seq) and achieved the best results with CIDER as a reward for sequence-level smoothing combined with a token-level smoothing that promotes rare tokens, improving CIDER from 93.59 (MLE) to 99.92 for the model without attention, and improving from 101.63 to 103.81 with attention.']
[['MLE', 'MLE + lambda H', 'Seq', 'Hamming', 'V'], ['MLE', 'MLE + lambda H', 'Without attention', 'With attention'], None, ['Tok', 'Without attention', 'With attention'], ['Seq', 'CIDER', 'Hamming'], ['Tok-Seq', 'CIDER', 'Without attention', 'With attention']]
1
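The P18-1195 record above describes sequence-level smoothing that samples alternative targets around the reference using a Hamming reward and a substitution vocabulary Vsub. Below is a minimal sketch of that sampling step only; the function names, the uniform choice of substitution count, and the uniform token replacement are all simplifying assumptions and not the paper's implementation (which ties the sampling distribution to the reward temperature).

```python
import random

# Sketch: sample an augmented target by substituting a few tokens of the reference
# with tokens from a substitution vocabulary, and score it with a Hamming reward.
def sample_augmented_target(reference, v_sub, max_subs=3):
    n_subs = random.randint(0, min(max_subs, len(reference)))  # crude stand-in for a reward-based choice
    positions = random.sample(range(len(reference)), n_subs)
    augmented = list(reference)
    for pos in positions:
        augmented[pos] = random.choice(v_sub)
    hamming = sum(a != r for a, r in zip(augmented, reference))
    reward = -hamming  # sequences closer to the reference receive higher reward
    return augmented, reward

reference = "a man riding a wave on a surfboard".split()
v_sub = ["surfer", "ocean", "standing", "board", "young"]  # hypothetical Vsub
print(sample_augmented_target(reference, v_sub))
```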
P18-1196table_3
Test set regression evaluation for the clinical and scientific data. Mean absolute percentage error (MAPE) is scale independent and allows for comparison across data, whereas root mean square and mean absolute errors (RMSE, MAE) are scale dependent. Medians (MdAE, MdAPE) are informative of the distribution of errors.
2
[['Model', 'mean'], ['Model', 'median'], ['Model', 'softmax'], ['Model', 'softmax+rnn'], ['Model', 'h-softmax'], ['Model', 'h-softmax+rnn'], ['Model', 'd-RNN'], ['Model', 'MoG'], ['Model', 'combination']]
2
[['Clinical', 'RMSE'], ['Clinical', 'MAE'], ['Clinical', 'MdAE'], ['Clinical', 'MAPE%'], ['Clinical', 'MdAPE%'], ['Scientific', 'MdAE'], ['Scientific', 'MAPE%'], ['Scientific', 'MdAPE%']]
[['1043.68', '294.95', '245.59', '2353.11', '409.47', '~10^20', '~10^23', '~10^22'], ['1036.18', '120.24', '34.52', '425.81', '52.05', '4.20', '8039.15', '98.65'], ['997.84', '80.29', '12.70', '621.78', '22.41', '3.00', '1947.44', '80.62'], ['991.38', '74.44', '13.00', '503.57', '23.91', '3.50', '15208.37', '80.00'], ['1095.01', '167.19', '14.00', '746.50', '25.00', '3.00', '1652.21', '80.00'], ['1001.04', '83.19', '12.30', '491.85', '23.44', '3.00', '2703.49', '80.00'], ['1009.34', '70.21', '9.00', '513.81', '17.90', '3.00', '1287.27', '52.45'], ['998.78', '57.11', '6.92', '348.10', '13.64', '2.10', '590.42', '90.00'], ['989.84', '69.47', '9.00', '552.06', '17.86', '3.00', '2332.50', '88.89']]
column
['RMSE', 'MAE', 'MdAE', 'MAPE%', 'MdAPE%', 'MdAE', 'MAPE%', 'MdAPE%']
['d-RNN', 'MoG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Clinical || RMSE</th> <th>Clinical || MAE</th> <th>Clinical || MdAE</th> <th>Clinical || MAPE%</th> <th>Clinical || MdAPE%</th> <th>Scientific || MdAE</th> <th>Scientific || MAPE%</th> <th>Scientific || MdAPE%</th> </tr> </thead> <tbody> <tr> <td>Model || mean</td> <td>1043.68</td> <td>294.95</td> <td>245.59</td> <td>2353.11</td> <td>409.47</td> <td>?10 20</td> <td>?10 23</td> <td>?10 22</td> </tr> <tr> <td>Model || median</td> <td>1036.18</td> <td>120.24</td> <td>34.52</td> <td>425.81</td> <td>52.05</td> <td>4.20</td> <td>8039.15</td> <td>98.65</td> </tr> <tr> <td>Model || softmax</td> <td>997.84</td> <td>80.29</td> <td>12.70</td> <td>621.78</td> <td>22.41</td> <td>3.00</td> <td>1947.44</td> <td>80.62</td> </tr> <tr> <td>Model || softmax+rnn</td> <td>991.38</td> <td>74.44</td> <td>13.00</td> <td>503.57</td> <td>23.91</td> <td>3.50</td> <td>15208.37</td> <td>80.00</td> </tr> <tr> <td>Model || h-softmax</td> <td>1095.01</td> <td>167.19</td> <td>14.00</td> <td>746.50</td> <td>25.00</td> <td>3.00</td> <td>1652.21</td> <td>80.00</td> </tr> <tr> <td>Model || h-softmax+rnn</td> <td>1001.04</td> <td>83.19</td> <td>12.30</td> <td>491.85</td> <td>23.44</td> <td>3.00</td> <td>2703.49</td> <td>80.00</td> </tr> <tr> <td>Model || d-RNN</td> <td>1009.34</td> <td>70.21</td> <td>9.00</td> <td>513.81</td> <td>17.90</td> <td>3.00</td> <td>1287.27</td> <td>52.45</td> </tr> <tr> <td>Model || MoG</td> <td>998.78</td> <td>57.11</td> <td>6.92</td> <td>348.10</td> <td>13.64</td> <td>2.10</td> <td>590.42</td> <td>90.00</td> </tr> <tr> <td>Model || combination</td> <td>989.84</td> <td>69.47</td> <td>9.00</td> <td>552.06</td> <td>17.86</td> <td>3.00</td> <td>2332.50</td> <td>88.89</td> </tr> </tbody></table>
Table 3
table_3
P18-1196
6
acl2018
Table 3 shows evaluation results, where we also include two naive baselines that make constant predictions: the mean and the median of the training data. For both datasets, RMSE and MAE were too sensitive to extreme errors to allow drawing safe conclusions, particularly for the scientific dataset, where both metrics were on the order of 10^9. MdAE can be of some use, as 50% of the errors are smaller than it in absolute value. Among the percentage metrics, MoG achieved the best MAPE in both datasets (18% and 54% better than the second best) and was the only model to perform better than the median baseline for the clinical data. However, it had the worst MdAPE, which means that MoG mainly reduced the larger percentage errors. The d-RNN model came third and second in the clinical and scientific datasets, respectively. In the latter it achieved the best MdAPE, i.e. it was effective at reducing errors for 50% of the numbers. The combination model did not perform better than its constituents. This is possibly because MoG is the only strategy that takes into account the numerical magnitudes of the numerals.
[1, 1, 1, 1, 1, 1, 1, 1, 2]
['Table 3 shows evaluation results, where we also include two naive baselines that make constant predictions: the mean and the median of the training data.', 'For both datasets, RMSE and MAE were too sensitive to extreme errors to allow drawing safe conclusions, particularly for the scientific dataset, where both metrics were on the order of 10^9.', 'MdAE can be of some use, as 50% of the errors are smaller than it in absolute value.', 'Among the percentage metrics, MoG achieved the best MAPE in both datasets (18% and 54% better than the second best) and was the only model to perform better than the median baseline for the clinical data.', 'However, it had the worst MdAPE, which means that MoG mainly reduced the larger percentage errors.', 'The d-RNN model came third and second in the clinical and scientific datasets, respectively.', 'In the latter it achieved the best MdAPE, i.e. it was effective at reducing errors for 50% of the numbers.', 'The combination model did not perform better than its constituents.', 'This is possibly because MoG is the only strategy that takes into account the numerical magnitudes of the numerals.']
[['mean', 'median'], ['Clinical', 'Scientific', 'RMSE', 'MAE'], ['MdAE'], ['MoG', 'Clinical', 'Scientific', 'MAPE%'], ['MoG', 'Scientific', 'MdAPE%'], ['d-RNN', 'Clinical', 'Scientific'], ['d-RNN', 'Scientific', 'MdAPE%'], ['combination'], ['MoG']]
1
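The caption of the P18-1196 record above defines the regression metrics (RMSE, MAE, MdAE, MAPE, MdAPE) in words. A short sketch of their standard definitions follows; it is not the authors' evaluation script, and details such as the handling of zero-valued targets are assumptions.

```python
import numpy as np

# Standard definitions of the Table 3 regression metrics (illustrative sketch).
def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    pct = 100.0 * np.abs(err) / np.abs(y_true)  # assumes no zero targets
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),   # scale dependent
        "MAE": float(np.mean(np.abs(err))),          # scale dependent
        "MdAE": float(np.median(np.abs(err))),       # median absolute error
        "MAPE%": float(np.mean(pct)),                # scale independent
        "MdAPE%": float(np.median(pct)),
    }

print(regression_metrics([100, 250, 40], [110, 240, 80]))
```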
P18-1197table_1
Results on test dataset for SICK and MSRpar semantic relatedness task. Mean scores are presented based on 5 runs (standard deviation in parenthesis). Categories of results: (1) Previous models (2) Dependency structure (3) Constituency structure (4) Linear structure
4
[['Dataset', 'SICK', 'Model', 'Illinois-LH (2014)'], ['Dataset', 'SICK', 'Model', 'UNAL-NLP (2014)'], ['Dataset', 'SICK', 'Model', 'Meaning factory (2014)'], ['Dataset', 'SICK', 'Model', 'ECNU (2014)'], ['Dataset', 'SICK', 'Model', 'Dependency Tree-LSTM (2015)'], ['Dataset', 'SICK', 'Model', 'Decomp-Attn (Dependency)'], ['Dataset', 'SICK', 'Model', 'Progressive-Attn (Dependency)'], ['Dataset', 'SICK', 'Model', 'Constituency Tree-LSTM (2015)'], ['Dataset', 'SICK', 'Model', 'Decomp-Attn (Constituency)'], ['Dataset', 'SICK', 'Model', 'Progressive-Attn (Constituency)'], ['Dataset', 'SICK', 'Model', 'Linear Bi-LSTM'], ['Dataset', 'SICK', 'Model', 'Decomp-Attn (Linear)'], ['Dataset', 'SICK', 'Model', 'Progressive-Attn (Linear)'], ['Dataset', 'MSRpar', 'Model', 'ParagramPhrase (2015)'], ['Dataset', 'MSRpar', 'Model', 'Projection (2015)'], ['Dataset', 'MSRpar', 'Model', 'GloVe (2015)'], ['Dataset', 'MSRpar', 'Model', 'PSL (2015)'], ['Dataset', 'MSRpar', 'Model', 'ParagramPhrase-XXL (2015)'], ['Dataset', 'MSRpar', 'Model', 'Dependency Tree-LSTM'], ['Dataset', 'MSRpar', 'Model', 'Decomp-Attn (Dependency)'], ['Dataset', 'MSRpar', 'Model', 'Progressive-Attn (Dependency)'], ['Dataset', 'MSRpar', 'Model', 'Constituency Tree-LSTM'], ['Dataset', 'MSRpar', 'Model', 'Decomp-Attn (Constituency)'], ['Dataset', 'MSRpar', 'Model', 'Progressive-Attn (Constituency)'], ['Dataset', 'MSRpar', 'Model', 'Linear Bi-LSTM'], ['Dataset', 'MSRpar', 'Model', 'Decomp-Attn (Linear)'], ['Dataset', 'MSRpar', 'Model', 'Progressive-Attn (Linear)']]
1
[['Pearson r'], ['Spearman rho'], ['MSE']]
[['0.7993', '0.7538', '0.3692'], ['0.8070', '0.7489', '0.3550'], ['0.8268', '0.7721', '0.3224'], ['0.8414', '-', '-'], ['0.8676 (0.0030)', '0.8083 (0.0042)', '0.2532 (0.0052)'], ['0.8239 (0.0120)', '0.7614 (0.0103)', '0.3326 (0.0223)'], ['0.8424 (0.0042)', '0.7733 (0.0066)', '0.2963 (0.0077)'], ['0.8582 (0.0038)', '0.7966 (0.0053)', '0.2734 (0.0108)'], ['0.7790 (0.0076)', '0.7074 (0.0091)', '0.4044 (0.0152)'], ['0.8625 (0.0032)', '0.7997 (0.0035)', '0.2610 (0.0057)'], ['0.8398 (0.0020)', '0.7782 (0.0041)', '0.3024 (0.0044)'], ['0.7899 (0.0055)', '0.7173 (0.0097)', '0.3897 (0.0115)'], ['0.8550 (0.0017)', '0.7873 (0.0020)', '0.2761 (0.0038)'], ['0.426', '-', '-'], ['0.437', '-', '-'], ['0.477', '-', '-'], ['0.416', '-', '-'], ['0.448', '-', '-'], ['0.4921 (0.0112)', '0.4519 (0.0128)', '0.6611 (0.0219)'], ['0.4016 (0.0124)', '0.3310 (0.0118)', '0.7243 (0.0099)'], ['0.4727 (0.0112)', '0.4216 (0.0092)', '0.6823 (0.0159)'], ['0.3981 (0.0176)', '0.3150 (0.0204)', '0.7407 (0.0170)'], ['0.3991 (0.0147)', '0.3237 (0.0355)', '0.7220 (0.0185)'], ['0.5104 (0.0191)', '0.4764 (0.0112)', '0.6436 (0.0346)'], ['0.3270 (0.0303)', '0.2205 (0.0111)', '0.8098 (0.0579)'], ['0.3763 (0.0332)', '0.3025 (0.0587)', '0.7290 (0.0206)'], ['0.4773 (0.0206)', '0.4453 (0.0250)', '0.6758 (0.0260)']]
column
['Pearson r', 'Spearman rho', 'MSE']
['Progressive-Attn (Constituency)', 'Progressive-Attn (Linear)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pearson r</th> <th>Spearman rho</th> <th>MSE</th> </tr> </thead> <tbody> <tr> <td>Dataset || SICK || Model || Illinois-LH (2014)</td> <td>0.7993</td> <td>0.7538</td> <td>0.3692</td> </tr> <tr> <td>Dataset || SICK || Model || UNAL-NLP (2014)</td> <td>0.8070</td> <td>0.7489</td> <td>0.3550</td> </tr> <tr> <td>Dataset || SICK || Model || Meaning factory (2014)</td> <td>0.8268</td> <td>0.7721</td> <td>0.3224</td> </tr> <tr> <td>Dataset || SICK || Model || ECNU (2014)</td> <td>0.8414</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || SICK || Model || Dependency Tree-LSTM (2015)</td> <td>0.8676 (0.0030)</td> <td>0.8083 (0.0042)</td> <td>0.2532 (0.0052)</td> </tr> <tr> <td>Dataset || SICK || Model || Decomp-Attn (Dependency)</td> <td>0.8239 (0.0120)</td> <td>0.7614 (0.0103)</td> <td>0.3326 (0.0223)</td> </tr> <tr> <td>Dataset || SICK || Model || Progressive-Attn (Dependency)</td> <td>0.8424 (0.0042)</td> <td>0.7733 (0.0066)</td> <td>0.2963 (0.0077)</td> </tr> <tr> <td>Dataset || SICK || Model || Constituency Tree-LSTM (2015)</td> <td>0.8582 (0.0038)</td> <td>0.7966 (0.0053)</td> <td>0.2734 (0.0108)</td> </tr> <tr> <td>Dataset || SICK || Model || Decomp-Attn (Constituency)</td> <td>0.7790 (0.0076)</td> <td>0.7074 (0.0091)</td> <td>0.4044 (0.0152)</td> </tr> <tr> <td>Dataset || SICK || Model || Progressive-Attn (Constituency)</td> <td>0.8625 (0.0032)</td> <td>0.7997 (0.0035)</td> <td>0.2610 (0.0057)</td> </tr> <tr> <td>Dataset || SICK || Model || Linear Bi-LSTM</td> <td>0.8398 (0.0020)</td> <td>0.7782 (0.0041)</td> <td>0.3024 (0.0044)</td> </tr> <tr> <td>Dataset || SICK || Model || Decomp-Attn (Linear)</td> <td>0.7899 (0.0055)</td> <td>0.7173 (0.0097)</td> <td>0.3897 (0.0115)</td> </tr> <tr> <td>Dataset || SICK || Model || Progressive-Attn (Linear)</td> <td>0.8550 (0.0017)</td> <td>0.7873 (0.0020)</td> <td>0.2761 (0.0038)</td> </tr> <tr> <td>Dataset || MSRpar || Model || ParagramPhrase (2015)</td> <td>0.426</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || MSRpar || Model || Projection (2015)</td> <td>0.437</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || MSRpar || Model || GloVe (2015)</td> <td>0.477</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || MSRpar || Model || PSL (2015)</td> <td>0.416</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || MSRpar || Model || ParagramPhrase-XXL (2015)</td> <td>0.448</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dataset || MSRpar || Model || Dependency Tree-LSTM</td> <td>0.4921 (0.0112)</td> <td>0.4519 (0.0128)</td> <td>0.6611 (0.0219)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Decomp-Attn (Dependency)</td> <td>0.4016 (0.0124)</td> <td>0.3310 (0.0118)</td> <td>0.7243 (0.0099)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Progressive-Attn (Dependency)</td> <td>0.4727 (0.0112)</td> <td>0.4216 (0.0092)</td> <td>0.6823 (0.0159)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Constituency Tree-LSTM</td> <td>0.3981 (0.0176)</td> <td>0.3150 (0.0204)</td> <td>0.7407 (0.0170)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Decomp-Attn (Constituency)</td> <td>0.3991 (0.0147)</td> <td>0.3237 (0.0355)</td> <td>0.7220 (0.0185)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Progressive-Attn (Constituency)</td> <td>0.5104 (0.0191)</td> <td>0.4764 (0.0112)</td> <td>0.6436 (0.0346)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Linear Bi-LSTM</td> <td>0.3270 (0.0303)</td> <td>0.2205 (0.0111)</td> <td>0.8098 (0.0579)</td> </tr> <tr> <td>Dataset || MSRpar || 
Model || Decomp-Attn (Linear)</td> <td>0.3763 (0.0332)</td> <td>0.3025 (0.0587)</td> <td>0.7290 (0.0206)</td> </tr> <tr> <td>Dataset || MSRpar || Model || Progressive-Attn (Linear)</td> <td>0.4773 (0.0206)</td> <td>0.4453 (0.0250)</td> <td>0.6758 (0.0260)</td> </tr> </tbody></table>
Table 1
table_1
P18-1197
8
acl2018
Table 1 summarizes our results. Following (Marelli et al., 2014), we compute three evaluation metrics: Pearson r, Spearman rho and Mean Squared Error (MSE). We compare our attention models against the original Tree-LSTM (Tai et al., 2015), instantiated on both constituency trees and dependency trees. We also compare earlier baselines with our models, and the best results are in bold. Since Tree-LSTM is a generalization of Linear LSTM, we also implemented our attention models on Linear Bidirectional LSTM (BiLSTM). All results are averages over 5 runs. We observe that the Progressive-Attn mechanism combined with Constituency Tree-LSTM is overall the strongest contender, but PA failed to yield any performance gain on Dependency Tree-LSTM in either dataset.
[1, 1, 1, 1, 2, 2, 1]
['Table 1 summarizes our results.', 'Following (Marelli et al., 2014), we compute three evaluation metrics: Pearson r, Spearman rho and Mean Squared Error (MSE).', 'We compare our attention models against the original Tree-LSTM (Tai et al., 2015), instantiated on both constituency trees and dependency trees.', 'We also compare earlier baselines with our models, and the best results are in bold.', 'Since Tree-LSTM is a generalization of Linear LSTM, we also implemented our attention models on Linear Bidirectional LSTM (BiLSTM).', 'All results are averages over 5 runs.', 'We observe that the Progressive-Attn mechanism combined with Constituency Tree-LSTM is overall the strongest contender, but PA failed to yield any performance gain on Dependency Tree-LSTM in either dataset.']
[None, ['Pearson r', 'Spearman rho', 'MSE'], ['Dependency Tree-LSTM (2015)', 'Constituency Tree-LSTM (2015)', 'Decomp-Attn (Dependency)', 'Progressive-Attn (Dependency)', 'Decomp-Attn (Constituency)', 'Progressive-Attn (Constituency)', 'Decomp-Attn (Linear)', 'Progressive-Attn (Linear)'], ['Illinois-LH (2014)', 'UNAL-NLP (2014)', 'Meaning factory (2014)', 'ECNU (2014)', 'ParagramPhrase (2015)', 'Projection (2015)', 'GloVe (2015)', 'PSL (2015)', 'ParagramPhrase-XXL (2015)'], ['Linear Bi-LSTM'], None, ['Progressive-Attn (Constituency)', 'Dependency Tree-LSTM', 'Progressive-Attn (Dependency)', 'SICK', 'MSRpar']]
1
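The P18-1197 record above reports semantic relatedness with Pearson r, Spearman rho and MSE between predicted and gold scores. A minimal sketch of those three metrics under their standard definitions (not the authors' evaluation code):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Pearson r, Spearman rho and mean squared error between gold and predicted
# relatedness scores (illustrative data below).
def relatedness_metrics(gold, pred):
    gold, pred = np.asarray(gold, float), np.asarray(pred, float)
    r, _ = pearsonr(gold, pred)
    rho, _ = spearmanr(gold, pred)
    mse = float(np.mean((gold - pred) ** 2))
    return r, rho, mse

print(relatedness_metrics([1.0, 2.5, 4.8, 3.2], [1.2, 2.4, 4.5, 3.6]))
```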
P18-1197table_2
Results on test dataset for Quora paraphrase detection task. Mean scores are presented based on 5 runs (standard deviation in parenthesis). Categories of results: (1) Dependency structure (2) Constituency structure (3) Linear structure
2
[['Model', 'Dependency Tree-LSTM'], ['Model', 'Decomp-Attn (Dependency)'], ['Model', 'Progressive-Attn (Dependency)'], ['Model', 'Constituency Tree-LSTM'], ['Model', 'Decomp-Attn (Constituency)'], ['Model', 'Progressive-Attn (Constituency)'], ['Model', 'Linear Bi-LSTM'], ['Model', 'Decomp-Attn (Linear)'], ['Model', 'Progressive-Attn (Linear)']]
1
[['Accuracy'], ['F-1 score (class=1)'], ['Precision (class=1)'], ['Recall (class=1)']]
[['0.7897 (0.0009)', '0.7060 (0.0050)', '0.7298 (0.0055)', '0.6840 (0.0139)'], ['0.7803 (0.0026)', '0.6977 (0.0074)', '0.7095 (0.0083)', '0.6866 (0.0199)'], ['0.7896 (0.0025)', '0.7113 (0.0087)', '0.7214 (0.0117)', '0.7025 (0.0266)'], ['0.7881 (0.0042)', '0.7065 (0.0034)', '0.7192 (0.0216)', '0.6846 (0.0380)'], ['0.7776 (0.0004)', '0.6942 (0.0050)', '0.7055 (0.0069)', '0.6836 (0.0164)'], ['0.7956 (0.0020)', '0.7192 (0.0024)', '0.7300 (0.0079)', '0.7089 (0.0104)'], ['0.7859 (0.0024)', '0.7097 (0.0047)', '0.7112 (0.0129)', '0.7089 (0.0219)'], ['0.7861 (0.0034)', '0.7074 (0.0109)', '0.7151 (0.0135)', '0.7010 (0.0315)'], ['0.7949 (0.0031)', '0.7182 (0.0162)', '0.7298 (0.0115)', '0.7092 (0.0469)']]
column
['Accuracy', 'F-1 score (class=1)', 'Precision (class=1)', 'Recall (class=1)']
['Progressive-Attn (Constituency)', 'Progressive-Attn (Linear)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> <th>F-1 score (class=1)</th> <th>Precision (class=1)</th> <th>Recall (class=1)</th> </tr> </thead> <tbody> <tr> <td>Model || Dependency Tree-LSTM</td> <td>0.7897 (0.0009)</td> <td>0.7060 (0.0050)</td> <td>0.7298 (0.0055)</td> <td>0.6840 (0.0139)</td> </tr> <tr> <td>Model || Decomp-Attn (Dependency)</td> <td>0.7803 (0.0026)</td> <td>0.6977 (0.0074)</td> <td>0.7095 (0.0083)</td> <td>0.6866 (0.0199)</td> </tr> <tr> <td>Model || Progressive-Attn (Dependency)</td> <td>0.7896 (0.0025)</td> <td>0.7113 (0.0087)</td> <td>0.7214 (0.0117)</td> <td>0.7025 (0.0266)</td> </tr> <tr> <td>Model || Constituency Tree-LSTM</td> <td>0.7881 (0.0042)</td> <td>0.7065 (0.0034)</td> <td>0.7192 (0.0216)</td> <td>0.6846 (0.0380)</td> </tr> <tr> <td>Model || Decomp-Attn (Constituency)</td> <td>0.7776 (0.0004)</td> <td>0.6942 (0.0050)</td> <td>0.7055 (0.0069)</td> <td>0.6836 (0.0164)</td> </tr> <tr> <td>Model || Progressive-Attn (Constituency)</td> <td>0.7956 (0.0020)</td> <td>0.7192 (0.0024)</td> <td>0.7300 (0.0079)</td> <td>0.7089 (0.0104)</td> </tr> <tr> <td>Model || Linear Bi-LSTM</td> <td>0.7859 (0.0024)</td> <td>0.7097 (0.0047)</td> <td>0.7112 (0.0129)</td> <td>0.7089 (0.0219)</td> </tr> <tr> <td>Model || Decomp-Attn (Linear)</td> <td>0.7861 (0.0034)</td> <td>0.7074 (0.0109)</td> <td>0.7151 (0.0135)</td> <td>0.7010 (0.0315)</td> </tr> <tr> <td>Model || Progressive-Attn (Linear)</td> <td>0.7949 (0.0031)</td> <td>0.7182 (0.0162)</td> <td>0.7298 (0.0115)</td> <td>0.7092 (0.0469)</td> </tr> </tbody></table>
Table 2
table_2
P18-1197
8
acl2018
Table 2 summarizes our results where best results are highlighted in bold within each category. It should be noted that Quora is a new dataset and we have done our analysis on only 50,000 samples. Therefore, to the best of our knowledge, there is no published baseline result yet. For this task, we considered four standard evaluation metrics: Accuracy, F1-score, Precision and Recall. The Progressive-Attn + Constituency Tree-LSTM model still exhibits the best performance by a small margin, but the Progressive-Attn mechanism works surprisingly well on the linear bi-LSTM.
[1, 2, 2, 1, 1]
['Table 2 summarizes our results where best results are highlighted in bold within each category.', 'It should be noted that Quora is a new dataset and we have done our analysis on only 50,000 samples.', 'Therefore, to the best of our knowledge, there is no published baseline result yet.', 'For this task, we considered four standard evaluation metrics: Accuracy, F1-score, Precision and Recall.', 'The Progressive-Attn + Constituency Tree-LSTM model still exhibits the best performance by a small margin, but the Progressive-Attn mechanism works surprisingly well on the linear bi-LSTM.']
[None, None, None, ['Accuracy', 'F-1 score (class=1)', 'Precision (class=1)', 'Recall (class=1)'], ['Progressive-Attn (Constituency)', 'Progressive-Attn (Linear)']]
1
P18-1201table_6
Performance on Various Types Using Justice Subtypes for Training
4
[['Type', 'Justice', 'Subtype', 'Sentence'], ['Type', 'Justice', 'Subtype', 'Appeal'], ['Type', 'Justice', 'Subtype', 'Release-Parole'], ['Type', 'Conflict', 'Subtype', 'Attack'], ['Type', 'Transaction', 'Subtype', 'Transfer-Money'], ['Type', 'Business', 'Subtype', 'Start-Org'], ['Type', 'Movement', 'Subtype', 'Transport'], ['Type', 'Personnel', 'Subtype', 'End-Position'], ['Type', 'Contact', 'Subtype', 'Phone-Write'], ['Type', 'Life', 'Subtype', 'Injure']]
2
[['Hit@k Trigger Classification', '1'], ['Hit@k Trigger Classification', '3'], ['Hit@k Trigger Classification', '5']]
[['68.3', '68.3', '69.5'], ['67.5', '97.5', '97.5'], ['73.9', '73.9', '73.9'], ['26.5', '44.5', '46.7'], ['48.4', '68.9', '79.5'], ['0', '33.3', '66.7'], ['2.6', '3.7', '7.8'], ['9.1', '50.4', '53.7'], ['60.8', '88.2', '90.2'], ['87.6', '91.0', '91.0']]
column
['accuracy', 'accuracy', 'accuracy']
['Type', 'Subtype']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hit@k Trigger Classification || 1</th> <th>Hit@k Trigger Classification || 3</th> <th>Hit@k Trigger Classification || 5</th> </tr> </thead> <tbody> <tr> <td>Type || Justice || Subtype || Sentence</td> <td>68.3</td> <td>68.3</td> <td>69.5</td> </tr> <tr> <td>Type || Justice || Subtype || Appeal</td> <td>67.5</td> <td>97.5</td> <td>97.5</td> </tr> <tr> <td>Type || Justice || Subtype || Release-Parole</td> <td>73.9</td> <td>73.9</td> <td>73.9</td> </tr> <tr> <td>Type || Conflict || Subtype || Attack</td> <td>26.5</td> <td>44.5</td> <td>46.7</td> </tr> <tr> <td>Type || Transaction || Subtype || Transfer-Money</td> <td>48.4</td> <td>68.9</td> <td>79.5</td> </tr> <tr> <td>Type || Business || Subtype || Start-Org</td> <td>0</td> <td>33.3</td> <td>66.7</td> </tr> <tr> <td>Type || Movement || Subtype || Transport</td> <td>2.6</td> <td>3.7</td> <td>7.8</td> </tr> <tr> <td>Type || Personnel || Subtype || End-Position</td> <td>9.1</td> <td>50.4</td> <td>53.7</td> </tr> <tr> <td>Type || Contact || Subtype || Phone-Write</td> <td>60.8</td> <td>88.2</td> <td>90.2</td> </tr> <tr> <td>Type || Life || Subtype || Injure</td> <td>87.6</td> <td>91.0</td> <td>91.0</td> </tr> </tbody></table>
Table 6
table_6
P18-1201
7
acl2018
We further evaluated the performance of our transfer approach on similar and distinct unseen types. The 33 subtypes defined in ACE fall within 8 coarse-grained main types, such as Life and Justice. Each subtype belongs to one main type. Subtypes that belong to the same main type tend to have similar structures. For example, TrialHearing and Charge-Indict have the same set of argument roles. For training our transfer model, we selected 4 subtypes of Justice: Arrest-Jail, Convict, Charge-Indict, Execute. For testing, we selected 3 other subtypes of Justice: Sentence, Appeal, Release-Parole. Additionally, we selected one subtype from each of the other seven main types for comparison. Table 6 shows that, when testing on a new unseen type, the more similar it is to the seen types, the better performance is achieved.
[2, 2, 2, 2, 2, 2, 2, 2, 1]
['We further evaluated the performance of our transfer approach on similar and distinct unseen types.', 'The 33 subtypes defined in ACE fall within 8 coarse-grained main types, such as Life and Justice.', 'Each subtype belongs to one main type.', 'Subtypes that belong to the same main type tend to have similar structures.', 'For example, TrialHearing and Charge-Indict have the same set of argument roles.', 'For training our transfer model, we selected 4 subtypes of Justice: Arrest-Jail, Convict, Charge-Indict, Execute.', 'For testing, we selected 3 other subtypes of Justice: Sentence, Appeal, Release-Parole.', 'Additionally, we selected one subtype from each of the other seven main types for comparison.', 'Table 6 shows that, when testing on a new unseen type, the more similar it is to the seen types, the better performance is achieved.']
[None, ['Justice', 'Conflict', 'Transaction', 'Business', 'Movement', 'Personnel', 'Contact', 'Life'], ['Subtype'], ['Subtype', 'Type'], ['Subtype', 'Type'], None, ['Justice', 'Sentence', 'Appeal', 'Release-Parole'], ['Subtype', 'Type'], ['Subtype', 'Type']]
1
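The P18-1201 record above reports Hit@k trigger classification accuracy. A small sketch follows, assuming Hit@k counts a mention as correct when the gold event type appears among the model's top-k ranked types; the function name and the toy data are illustrative only.

```python
# Hit@k accuracy under the top-k-ranked-types assumption (percentages).
def hit_at_k(ranked_types, gold_types, k):
    hits = sum(gold in ranked[:k] for ranked, gold in zip(ranked_types, gold_types))
    return 100.0 * hits / len(gold_types)

ranked_types = [["Appeal", "Sentence", "Convict"],
                ["Sentence", "Appeal", "Charge-Indict"],
                ["Convict", "Charge-Indict", "Appeal"]]
gold_types = ["Appeal", "Appeal", "Appeal"]
for k in (1, 3):
    print(k, hit_at_k(ranked_types, gold_types, k))  # Hit@1 = 33.3..., Hit@3 = 100.0
```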
P18-1201table_7
Event Trigger and Argument Extraction Performance (%) on Unseen ACE Types.
2
[['Method', 'Supervised LSTM'], ['Method', 'Supervised Joint'], ['Method', 'Transfer']]
2
[['Trigger Identification', 'P'], ['Trigger Identification', 'R'], ['Trigger Identification', 'F'], ['Trigger Identification + Classification', 'P'], ['Trigger Identification + Classification', 'R'], ['Trigger Identification + Classification', 'F'], ['Arg Identification', 'P'], ['Arg Identification', 'R'], ['Arg Identification', 'F'], ['Arg Identification + Classification', 'P'], ['Arg Identification + Classification', 'R'], ['Arg Identification + Classification', 'F']]
[['94.7', '41.8', '58.0', '89.4', '39.5', '54.8', '47.8', '22.6', '30.6', '28.9', '13.7', '18.6'], ['55.8', '67.4', '61.1', '50.6', '61.2', '55.4', '36.4', '28.1', '31.7', '33.3', '25.7', '29.0'], ['85.7', '41.2', '55.6', '75.5', '36.3', '49.1', '28.2', '27.3', '27.8', '16.1', '15.6', '15.8']]
column
['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F']
['Transfer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Identification || P</th> <th>Trigger Identification || R</th> <th>Trigger Identification || F</th> <th>Trigger Identification + Classification || P</th> <th>Trigger Identification + Classification || R</th> <th>Trigger Identification + Classification || F</th> <th>Arg Identification || P</th> <th>Arg Identification || R</th> <th>Arg Identification || F</th> <th>Arg Identification + Classification || P</th> <th>Arg Identification + Classification || R</th> <th>Arg Identification + Classification || F</th> </tr> </thead> <tbody> <tr> <td>Method || Supervised LSTM</td> <td>94.7</td> <td>41.8</td> <td>58.0</td> <td>89.4</td> <td>39.5</td> <td>54.8</td> <td>47.8</td> <td>22.6</td> <td>30.6</td> <td>28.9</td> <td>13.7</td> <td>18.6</td> </tr> <tr> <td>Method || Supervised Joint</td> <td>55.8</td> <td>67.4</td> <td>61.1</td> <td>50.6</td> <td>61.2</td> <td>55.4</td> <td>36.4</td> <td>28.1</td> <td>31.7</td> <td>33.3</td> <td>25.7</td> <td>29.0</td> </tr> <tr> <td>Method || Transfer</td> <td>85.7</td> <td>41.2</td> <td>55.6</td> <td>75.5</td> <td>36.3</td> <td>49.1</td> <td>28.2</td> <td>27.3</td> <td>27.8</td> <td>16.1</td> <td>15.6</td> <td>15.8</td> </tr> </tbody></table>
Table 7
table_7
P18-1201
8
acl2018
We first identified the candidate triggers and arguments, then mapped each of these to the target event ontology. We evaluated our model on extracting event mentions that were classified into the 23 testing ACE types. Table 7 shows the performance. To further demonstrate the effectiveness of zero-shot learning in our framework and its impact on saving human annotation effort, we used the supervised LSTM approach for comparison. The training data of the LSTM contained 3,464 sentences with 905 annotated event mentions for the 23 unseen event types. We divided these event annotations into 10 subsets and successively added one subset at a time (10% of the annotations) into the training data of the LSTM. Figure 4 shows the LSTM learning curve. By contrast, without any annotated mentions of the 23 unseen test event types in its training set, our transfer learning approach achieved performance comparable to that of the LSTM, which was trained on 3,000 sentences with 500 annotated event mentions.
[2, 2, 1, 1, 2, 2, 0, 1]
['We first identified the candidate triggers and arguments, then mapped each of these to the target event ontology.', 'We evaluated our model on extracting event mentions that were classified into the 23 testing ACE types.', 'Table 7 shows the performance.', 'To further demonstrate the effectiveness of zero-shot learning in our framework and its impact on saving human annotation effort, we used the supervised LSTM approach for comparison.', 'The training data of the LSTM contained 3,464 sentences with 905 annotated event mentions for the 23 unseen event types.', 'We divided these event annotations into 10 subsets and successively added one subset at a time (10% of the annotations) into the training data of the LSTM.', 'Figure 4 shows the LSTM learning curve.', 'By contrast, without any annotated mentions of the 23 unseen test event types in its training set, our transfer learning approach achieved performance comparable to that of the LSTM, which was trained on 3,000 sentences with 500 annotated event mentions.']
[None, None, None, ['Supervised LSTM'], ['Supervised LSTM'], ['Supervised LSTM'], None, ['Transfer', 'Supervised LSTM']]
1
P18-1202table_2
Comparisons with different baselines.
2
[['Models', 'CrossCRF'], ['Models', 'CrossCRF'], ['Models', 'RAP'], ['Models', 'RAP'], ['Models', 'Hier-Joint'], ['Models', 'Hier-Joint'], ['Models', 'RNCRF'], ['Models', 'RNCRF'], ['Models', 'RNGRU'], ['Models', 'RNGRU'], ['Models', 'RNSCN-CRF'], ['Models', 'RNSCN-CRF'], ['Models', 'RNSCN-GRU'], ['Models', 'RNSCN-GRU'], ['Models', 'RNSCN+-GRU'], ['Models', 'RNSCN+-GRU']]
2
[['R→L', 'AS'], ['R→L', 'OP'], ['R→D', 'AS'], ['R→D', 'OP'], ['L→R', 'AS'], ['L→R', 'OP'], ['L→D', 'AS'], ['L→D', 'OP'], ['D→R', 'AS'], ['D→R', 'OP'], ['D→L', 'AS'], ['D→L', 'OP']]
[['19.72', '59.2', '21.07', '52.05', '28.19', '65.52', '29.96', '56.17', '6.59', '39.38', '24.22', '46.67'], ['-1.82', '-1.34', '-0.44', '-1.67', '-0.58', '-0.89', '-1.69', '-1.49', '-0.49', '-3.06', '-2.54', '-2.43'], ['25.92', '62.72', '22.63', '54.44', '46.9', '67.98', '34.54', '54.25', '45.44', '60.67', '28.22', '59.79'], ['-2.75', '-0.49', '-0.52', '-2.2', '-1.64', '-1.05', '-0.64', '-1.65', '-1.61', '-2.15', '-2.42', '-4.18'], ['33.66', '-', '33.2', '-', '48.1', '-', '31.25', '-', '47.97', '-', '34.74', '-'], ['-1.47', '-', '-0.52', '-', '-1.45', '-', '-0.49', '-', '-0.46', '-', '-2.27', ''], ['24.26', '60.86', '24.31', '51.28', '40.88', '66.5', '31.52', '55.85', '34.59', '63.89', '40.59', '60.17'], ['-3.97', '-3.35', '-2.57', '-1.78', '-2.09', '-1.48', '-1.4', '-1.09', '-1.34', '-1.59', '-0.8', '-1.2'], ['24.23', '60.65', '20.49', '52.28', '39.78', '62.99', '32.51', '52.24', '38.15', '64.21', '39.44', '60.85'], ['-2.41', '-1.04', '-2.68', '-2.69', '-0.61', '-0.95', '-1.12', '-2.37', '-2.82', '-1.11', '-2.79', '-1.25'], ['35.26', '61.67', '32', '52.81', '53.38', '67.6', '34.63', '56.22', '48.13', '65.06', '46.71', '61.88'], ['-1.31', '-1.35', '-1.48', '-1.29', '-1.49', '-0.99', '-1.38', '-1.1', '-0.71', '-0.66', '-1.16', '-1.52'], ['37.77', '62.35', '33.02', '57.54', '53.18', '71.44', '35.65', '60.02', '49.62', '69.42', '45.92', '63.85'], ['-0.45', '-1.85', '-0.58', '-1.27', '-0.75', '-0.97', '-0.77', '-0.8', '-0.34', '-2.27', '-1.14', '-1.97'], ['40.43', '65.85', '35.1', '60.17', '52.91', '72.51', '40.42', '61.15', '48.36', '73.75', '51.14', '71.18'], ['-0.96', '-1.5', '-0.62', '-0.75', '-1.82', '-1.03', '-0.7', '-0.6', '-1.14', '-1.76', '-1.68', '-1.58']]
column
['AS', 'OP', 'AS', 'OP', 'AS', 'OP', 'AS', 'OP', 'AS', 'OP', 'AS', 'OP']
['RNSCN-GRU', 'RNSCN-CRF', 'RNSCN+-GRU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R¨L || AS</th> <th>R¨L || OP</th> <th>R¨D || AS</th> <th>R¨D || OP</th> <th>L¨R || AS</th> <th>L¨R || OP</th> <th>L¨D || AS</th> <th>L¨D || OP</th> <th>D¨R || AS</th> <th>D¨R || OP</th> <th>D¨L || AS</th> <th>D¨L || OP</th> </tr> </thead> <tbody> <tr> <td>Models || CrossCRF</td> <td>19.72</td> <td>59.2</td> <td>21.07</td> <td>52.05</td> <td>28.19</td> <td>65.52</td> <td>29.96</td> <td>56.17</td> <td>6.59</td> <td>39.38</td> <td>24.22</td> <td>46.67</td> </tr> <tr> <td>Models\t || CrossCRF</td> <td>-1.82</td> <td>-1.34</td> <td>-0.44</td> <td>-1.67</td> <td>-0.58</td> <td>-0.89</td> <td>-1.69</td> <td>-1.49</td> <td>-0.49</td> <td>-3.06</td> <td>-2.54</td> <td>-2.43</td> </tr> <tr> <td>Models\t || RAP</td> <td>25.92</td> <td>62.72</td> <td>22.63</td> <td>54.44</td> <td>46.9</td> <td>67.98</td> <td>34.54</td> <td>54.25</td> <td>45.44</td> <td>60.67</td> <td>28.22</td> <td>59.79</td> </tr> <tr> <td>Models || RAP</td> <td>-2.75</td> <td>-0.49</td> <td>-0.52</td> <td>-2.2</td> <td>-1.64</td> <td>-1.05</td> <td>-0.64</td> <td>-1.65</td> <td>-1.61</td> <td>-2.15</td> <td>-2.42</td> <td>-4.18</td> </tr> <tr> <td>Models\t || Hier-Joint</td> <td>33.66</td> <td>-</td> <td>33.2</td> <td>-</td> <td>48.1</td> <td>-</td> <td>31.25</td> <td>-</td> <td>47.97</td> <td>-</td> <td>34.74</td> <td>-</td> </tr> <tr> <td>Models\t || Hier-Joint</td> <td>-1.47</td> <td>-</td> <td>-0.52</td> <td>-</td> <td>-1.45</td> <td>-</td> <td>-0.49</td> <td>-</td> <td>-0.46</td> <td>-</td> <td>-2.27</td> <td></td> </tr> <tr> <td>Models\t || RNCRF</td> <td>24.26</td> <td>60.86</td> <td>24.31</td> <td>51.28</td> <td>40.88</td> <td>66.5</td> <td>31.52</td> <td>55.85</td> <td>34.59</td> <td>63.89</td> <td>40.59</td> <td>60.17</td> </tr> <tr> <td>Models\t || RNCRF</td> <td>-3.97</td> <td>-3.35</td> <td>-2.57</td> <td>-1.78</td> <td>-2.09</td> <td>-1.48</td> <td>-1.4</td> <td>-1.09</td> <td>-1.34</td> <td>-1.59</td> <td>-0.8</td> <td>-1.2</td> </tr> <tr> <td>Models\t || RNGRU</td> <td>24.23</td> <td>60.65</td> <td>20.49</td> <td>52.28</td> <td>39.78</td> <td>62.99</td> <td>32.51</td> <td>52.24</td> <td>38.15</td> <td>64.21</td> <td>39.44</td> <td>60.85</td> </tr> <tr> <td>Models\t || RNGRU</td> <td>-2.41</td> <td>-1.04</td> <td>-2.68</td> <td>-2.69</td> <td>-0.61</td> <td>-0.95</td> <td>-1.12</td> <td>-2.37</td> <td>-2.82</td> <td>-1.11</td> <td>-2.79</td> <td>-1.25</td> </tr> <tr> <td>Models\t || RNSCN-CRF</td> <td>35.26</td> <td>61.67</td> <td>32</td> <td>52.81</td> <td>53.38</td> <td>67.6</td> <td>34.63</td> <td>56.22</td> <td>48.13</td> <td>65.06</td> <td>46.71</td> <td>61.88</td> </tr> <tr> <td>Models || RNSCN-CRF</td> <td>-1.31</td> <td>-1.35</td> <td>-1.48</td> <td>-1.29</td> <td>-1.49</td> <td>-0.99</td> <td>-1.38</td> <td>-1.1</td> <td>-0.71</td> <td>-0.66</td> <td>-1.16</td> <td>-1.52</td> </tr> <tr> <td>Models\t || RNSCN-GRU</td> <td>37.77</td> <td>62.35</td> <td>33.02</td> <td>57.54</td> <td>53.18</td> <td>71.44</td> <td>35.65</td> <td>60.02</td> <td>49.62</td> <td>69.42</td> <td>45.92</td> <td>63.85</td> </tr> <tr> <td>Models\t || RNSCN-GRU</td> <td>-0.45</td> <td>-1.85</td> <td>-0.58</td> <td>-1.27</td> <td>-0.75</td> <td>-0.97</td> <td>-0.77</td> <td>-0.8</td> <td>-0.34</td> <td>-2.27</td> <td>-1.14</td> <td>-1.97</td> </tr> <tr> <td>Models\t || RNSCN+-GRU</td> <td>40.43</td> <td>65.85</td> <td>35.1</td> <td>60.17</td> <td>52.91</td> <td>72.51</td> <td>40.42</td> <td>61.15</td> <td>48.36</td> <td>73.75</td> 
<td>51.14</td> <td>71.18</td> </tr> <tr> <td>Models\t || RNSCN+-GRU</td> <td>-0.96</td> <td>-1.5</td> <td>-0.62</td> <td>-0.75</td> <td>-1.82</td> <td>-1.03</td> <td>-0.7</td> <td>-0.6</td> <td>-1.14</td> <td>-1.76</td> <td>-1.68</td> <td>-1.58</td> </tr> </tbody></table>
Table 2
table_2
P18-1202
8
acl2018
The overall comparison results with the baselines are shown in Table 2 with average F1 scores and standard deviations over three random splits. Clearly, the results for aspect term (AS) transfer are much lower than for opinion term (OP) transfer, which indicates that aspect terms are usually quite different across domains, whereas opinion terms tend to be more common and similar. Hence the ability to adapt aspect extraction from the source domain to the target domain becomes more crucial. In this regard, our proposed model shows a clear advantage over the other baselines for this more difficult transfer problem. Specifically, we achieve 6.77%, 5.88% and 10.55% improvement over the best-performing baselines for aspect extraction in R→L, L→D and D→L, respectively. By comparing with RNCRF and RNGRU, we show that the structural correspondence network is indeed effective when integrated into an RNN.
[1, 1, 2, 1, 1, 2]
['The overall comparison results with the baselines are shown in Table 2 with average F1 scores and standard deviations over three random splits.', 'Clearly, the results for aspect term (AS) transfer are much lower than for opinion term (OP) transfer, which indicates that aspect terms are usually quite different across domains, whereas opinion terms tend to be more common and similar.', 'Hence the ability to adapt aspect extraction from the source domain to the target domain becomes more crucial.', 'In this regard, our proposed model shows a clear advantage over the other baselines for this more difficult transfer problem.', 'Specifically, we achieve 6.77%, 5.88% and 10.55% improvement over the best-performing baselines for aspect extraction in R→L, L→D and D→L, respectively.', 'By comparing with RNCRF and RNGRU, we show that the structural correspondence network is indeed effective when integrated into an RNN.']
[['CrossCRF', 'RAP', 'Hier-Joint', 'RNCRF', 'RNGRU', 'RNSCN-CRF', 'RNSCN-GRU', 'RNSCN+-GRU'], ['AS', 'OP'], None, ['RNSCN-CRF', 'RNSCN-GRU', 'RNSCN+-GRU'], ['RNSCN-CRF', 'RNSCN-GRU', 'RNSCN+-GRU', 'R→L', 'L→D', 'D→L', 'AS'], ['RNCRF', 'RNGRU']]
1
P18-1205table_4
Human Evaluation of various PERSONA-CHAT models, along with a comparison to human performance, and Twitter and OpenSubtitles based models (last 4 rows), standard deviation in parenthesis.
6
[['Method', 'Model', 'Human', '-', 'Profile', 'Self'], ['Method', 'Model', 'Generative PersonaChat Models', 'Seq2Seq', 'Profile', 'None'], ['Method', 'Model', 'Generative PersonaChat Models', 'Profile Memory', 'Profile', 'Self'], ['Method', 'Model', 'Ranking PersonaChat Models', 'KV Memory', 'Profile', 'None'], ['Method', 'Model', 'Ranking PersonaChat Models', 'KV Profile Memory', 'Profile', 'Self'], ['Method', 'Model', 'Twitter LM', '-', 'Profile', 'None'], ['Method', 'Model', 'OpenSubtitles 2018 LM', '-', 'Profile', 'None'], ['Method', 'Model', 'OpenSubtitles 2009 LM', '-', 'Profile', 'None'], ['Method', 'Model', 'OpenSubtitles 2009 KV Memory', '-', 'Profile', 'None']]
1
[['Fluency'], ['Engagingness'], ['Consistency'], ['Persona Detection']]
[['4.31(1.07)', '4.25(1.06)', '4.36(0.92)', '0.95(0.22)'], ['3.17(1.10)', '3.18(1.41)', '2.98(1.45)', '0.51(0.50)'], ['3.08(1.40)', '3.13(1.39)', '3.14(1.26)', '0.72(0.45)'], ['3.81(1.14)', '3.88(0.98)', '3.36(1.37)', '0.59(0.49)'], ['3.97(0.94)', '3.50(1.17)', '3.44(1.30)', '0.81(0.39)'], ['3.21(1.54)', '1.75(1.04)', '1.95(1.22)', '0.57(0.50)'], ['2.85(1.46)', '2.13(1.07)', '2.15(1.08)', '0.35(0.48)'], ['2.25(1.37)', '2.12(1.33)', '1.96(1.22)', '0.38(0.49)'], ['2.14(1.20)', '2.22(1.22)', '2.06(1.29)', '0.42(0.49)']]
column
['Fluency', 'Engagingness', 'Consistency', 'Persona Detection']
['KV Memory', 'KV Profile Memory']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Engagingness</th> <th>Consistency</th> <th>Persona Detection</th> </tr> </thead> <tbody> <tr> <td>Method || Model || Human || - || Profile || Self</td> <td>4.31(1.07)</td> <td>4.25(1.06)</td> <td>4.36(0.92)</td> <td>0.95(0.22)</td> </tr> <tr> <td>Method || Model || Generative PersonaChat Models || Seq2Seq || Profile || None</td> <td>3.17(1.10)</td> <td>3.18(1.41)</td> <td>2.98(1.45)</td> <td>0.51(0.50)</td> </tr> <tr> <td>Method || Model || Generative PersonaChat Models || Profile Memory || Profile || Self</td> <td>3.08(1.40)</td> <td>3.13(1.39)</td> <td>3.14(1.26)</td> <td>0.72(0.45)</td> </tr> <tr> <td>Method || Model || Ranking PersonaChat Models || KV Memory || Profile || None</td> <td>3.81(1.14)</td> <td>3.88(0.98)</td> <td>3.36(1.37)</td> <td>0.59(0.49)</td> </tr> <tr> <td>Method || Model || Ranking PersonaChat Models || KV Profile Memory || Profile || Self</td> <td>3.97(0.94)</td> <td>3.50(1.17)</td> <td>3.44(1.30)</td> <td>0.81(0.39)</td> </tr> <tr> <td>Method || Model || Twitter LM || - || Profile || None</td> <td>3.21(1.54)</td> <td>1.75(1.04)</td> <td>1.95(1.22)</td> <td>0.57(0.50)</td> </tr> <tr> <td>Method || Model || OpenSubtitles 2018 LM || - || Profile || None</td> <td>2.85(1.46)</td> <td>2.13(1.07)</td> <td>2.15(1.08)</td> <td>0.35(0.48)</td> </tr> <tr> <td>Method || Model || OpenSubtitles 2009 LM || - || Profile || None</td> <td>2.25(1.37)</td> <td>2.12(1.33)</td> <td>1.96(1.22)</td> <td>0.38(0.49)</td> </tr> <tr> <td>Method || Model || OpenSubtitles 2009 KV Memory || - || Profile || None</td> <td>2.14(1.20)</td> <td>2.22(1.22)</td> <td>2.06(1.29)</td> <td>0.42(0.49)</td> </tr> </tbody></table>
Table 4
table_4
P18-1205
7
acl2018
The results are reported in Table 4 for the best performing generative and ranking models, in both the No Persona and Self Persona categories, 100 dialogues each. We also evaluate the scores of human performance by replacing the chatbot with a human (another Turker). This effectively gives us upper bound scores which we can aim for with our models. Finally, and importantly, we compare our models trained on PERSONA-CHAT with chit-chat models trained with the Twitter and OpenSubtitles datasets (2009 and 2018 versions) instead, following Vinyals and Le (2015). Example chats from a few of the models are shown in the Appendix in Tables 7, 8, 9, 10, 11 and 12. Firstly, we see a difference in fluency, engagingness and consistency between all PERSONA-CHAT models and the models trained on OpenSubtitles and Twitter. PERSONA-CHAT is a resource that is particularly strong at providing training data for the beginning of conversations, when the two speakers do not know each other, focusing on asking and answering questions, in contrast to other resources.
[1, 1, 2, 1, 0, 1, 2]
['The results are reported in Table 4 for the best performing generative and ranking models, in both the No Persona and Self Persona categories, 100 dialogues each.', 'We also evaluate the scores of human performance by replacing the chatbot with a human (another Turker).', 'This effectively gives us upper bound scores which we can aim for with our models.', 'Finally, and importantly, we compare our models trained on PERSONA-CHAT with chit-chat models trained with the Twitter and OpenSubtitles datasets (2009 and 2018 versions) instead, following Vinyals and Le (2015).', 'Example chats from a few of the models are shown in the Appendix in Tables 7, 8, 9, 10, 11 and 12.', 'Firstly, we see a difference in fluency, engagingness and consistency between all PERSONA-CHAT models and the models trained on OpenSubtitles and Twitter.', 'PERSONA-CHAT is a resource that is particularly strong at providing training data for the beginning of conversations, when the two speakers do not know each other, focusing on asking and answering questions, in contrast to other resources.']
[['Generative PersonaChat Models', 'Ranking PersonaChat Models', 'Profile'], ['Human'], None, ['KV Memory', 'KV Profile Memory', 'Twitter LM', 'OpenSubtitles 2018 LM', 'OpenSubtitles 2009 LM', 'OpenSubtitles 2009 KV Memory'], None, ['Fluency', 'Engagingness', 'Consistency', 'Seq2Seq', 'Profile Memory', 'KV Memory', 'KV Profile Memory', 'Twitter LM', 'OpenSubtitles 2018 LM', 'OpenSubtitles 2009 LM', 'OpenSubtitles 2009 KV Memory'], ['Generative PersonaChat Models', 'Ranking PersonaChat Models']]
1
P18-1209table_3
Comparison of the training and testing speeds between TFN and LMF. The second and the third columns indicate the number of data point inferences per second (IPS) during training and testing time respectively. Both models are implemented in the same framework with equivalent running environment.
2
[['Model', 'TFN'], ['Model', 'LMF']]
1
[['Training Speed (IPS)'], ['Testing Speed (IPS)']]
[['340.74', '1177.17'], ['1134.82', '2249.90']]
column
['Training Speed (IPS)', 'Testing Speed (IPS)']
['LMF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training Speed (IPS)</th> <th>Testing Speed (IPS)</th> </tr> </thead> <tbody> <tr> <td>Model || TFN</td> <td>340.74</td> <td>1177.17</td> </tr> <tr> <td>Model || LMF</td> <td>1134.82</td> <td>2249.90</td> </tr> </tbody></table>
Table 3
table_3
P18-1209
8
acl2018
Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with TFN model. Here we set rank to be 4 since it can generally achieve fairly competent performance. Based on these results, performing a low-rank multimodal fusion with modality-specific low-rank factors significantly reduces the amount of time needed for training and testing the model. On an NVIDIA Quadro K4200 GPU, LMF trains with an average frequency of 1134.82 IPS (data point inferences per second) while the TFN model trains at an average of 340.74 IPS.
[1, 2, 2, 1]
['Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with TFN model.', 'Here we set rank to be 4 since it can generally achieve fairly competent performance.', 'Based on these results, performing a low-rank multimodal fusion with modality-specific low-rank factors significantly reduces the amount of time needed for training and testing the model.', 'On an NVIDIA Quadro K4200 GPU, LMF trains with an average frequency of 1134.82 IPS (data point inferences per second) while the TFN model trains at an average of 340.74 IPS.']
[['LMF', 'TFN', 'Training Speed (IPS)', 'Testing Speed (IPS)'], None, None, ['LMF', 'Training Speed (IPS)', 'TFN']]
1
P18-1211table_1
Performance of our approach on storycloze task from Mostafazadeh et al. (2016) compared with other unsupervised approaches (accuracy numbers as reported in Mostafazadeh et al. (2016)).
2
[['Our Method variants', 'Sequential CG + Unigram Mixture'], ['Our Method variants', 'Sequential CG + Brown clustering'], ['Our Method variants', 'Sequential CG + Sentiment'], ['Our Method variants', 'Sequential CG'], ['Our Method variants', 'Sequential CG (unnormalized)'], ['DSSM', '-'], ['GenSim', '-'], ['Skip-thoughts', '-'], ['Narrative-Chain(Stories)', '-'], ['N-grams', '-']]
1
[['Accuracy']]
[['0.602'], ['0.593'], ['0.581'], ['0.589'], ['0.531'], ['0.585'], ['0.539'], ['0.552'], ['0.494'], ['0.494']]
column
['Accuracy']
['Sequential CG + Unigram Mixture']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Our Method variants || Sequential CG + Unigram Mixture</td> <td>0.602</td> </tr> <tr> <td>Our Method variants || Sequential CG + Brown clustering</td> <td>0.593</td> </tr> <tr> <td>Our Method variants || Sequential CG + Sentiment</td> <td>0.581</td> </tr> <tr> <td>Our Method variants || Sequential CG</td> <td>0.589</td> </tr> <tr> <td>Our Method variants || Sequential CG (unnormalized)</td> <td>0.531</td> </tr> <tr> <td>DSSM || -</td> <td>0.585</td> </tr> <tr> <td>GenSim || -</td> <td>0.539</td> </tr> <tr> <td>Skip-thoughts || -</td> <td>0.552</td> </tr> <tr> <td>Narrative-Chain(Stories) || -</td> <td>0.494</td> </tr> <tr> <td>N-grams || -</td> <td>0.494</td> </tr> </tbody></table>
Table 1
table_1
P18-1211
7
acl2018
Table 1 shows the performance of variants of our approach for the task. Our baselines include previous approaches for the same task: DSSM is a deep-learning based approach, which maps the context and ending to the same space, and is the best-performing method in Mostafazadeh et al. (2016). GenSim and N-gram return the ending that is more similar to the context based on word2vec embeddings (Mikolov et al., 2013) and n-grams, respectively. Narrative-Chains computes the probability of each alternative based on event chains, following the approach of Chambers and Jurafsky (2008). We note that our method improves on the previous best unsupervised methods for the task. This is quite surprising, since our Sequential-CG model in this case is trained on bag-of-lemma representations, and only needs sentence segmentation, tokenization and lemmatization for preprocessing. On the other hand, approaches such as Narrative-Chains require parsing and event recognition, while approaches such as GenSim require learning word embeddings on large text corpora for training. Further, we note that predicting the ending without normalizing for the probability of the words in the ending results in significantly weaker performance, as expected. We train another variant of Sequential-CG with the sentence-level sentiment annotation (from Stanford CoreNLP) also added as a feature. This does not improve performance, consistent with findings in Mostafazadeh et al. (2016). We also experiment with a variant where we perform Brown clustering (Brown et al., 1992) of words in the unlabeled stories (K = 500 clusters), and include cluster-annotations as features for training the method. Doing this explicitly incorporates lexical similarity into the model, leading to a small improvement in performance. Finally, a mixture model consisting of the Sequential-CG and a unigram language model leads to a further improvement in performance.
[1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 1 shows the performance of variants of our approach for the task.', 'Our baselines include previous approaches for the same task: DSSM is a deep-learning based approach, which maps the context and ending to the same space, and is the best-performing method in Mostafazadeh et al. (2016).', 'GenSim and N-gram return the ending that is more similar to the context based on word2vec embeddings (Mikolov et al., 2013) and n-grams, respectively.', 'Narrative-Chains computes the probability of each alternative based on event chains, following the approach of Chambers and Jurafsky (2008).', 'We note that our method improves on the previous best unsupervised methods for the task.', 'This is quite surprising, since our Sequential-CG model in this case is trained on bag-of-lemma representations, and only needs sentence segmentation, tokenization and lemmatization for preprocessing.', 'On the other hand, approaches such as Narrative-Chains require parsing and event recognition, while approaches such as GenSim require learning word embeddings on large text corpora for training.', 'Further, we note that predicting the ending without normalizing for the probability of the words in the ending results in significantly weaker performance, as expected.', 'We train another variant of Sequential-CG with the sentence-level sentiment annotation (from Stanford CoreNLP) also added as a feature.', 'This does not improve performance, consistent with findings in Mostafazadeh et al. (2016).', 'We also experiment with a variant where we perform Brown clustering (Brown et al., 1992) of words in the unlabeled stories (K = 500 clusters), and include cluster-annotations as features for training the method.', 'Doing this explicitly incorporates lexical similarity into the model, leading to a small improvement in performance.', 'Finally, a mixture model consisting of the Sequential-CG and a unigram language model leads to a further improvement in performance.']
[None, ['DSSM'], ['GenSim', 'N-grams'], ['Narrative-Chain(Stories)'], ['Our Method variants'], ['Sequential CG + Unigram Mixture', 'Sequential CG + Brown clustering', 'Sequential CG + Sentiment', 'Sequential CG'], ['Narrative-Chain(Stories)', 'GenSim'], ['Sequential CG (unnormalized)'], ['Sequential CG + Sentiment'], ['Sequential CG + Sentiment'], ['Sequential CG + Brown clustering'], ['Sequential CG + Brown clustering'], ['Sequential CG + Unigram Mixture']]
1
P18-1220table_4
Results of correcting lines in the RDD newspapers and TCP books with multiple witnesses when decoding with different strategies using the same supervised model. Attention combination strategies that statistically significantly outperform single-input decoding are highlighted with * (p < 0.05, paired-permutation test). Best result for each column is in bold.
2
[['Decode', 'None'], ['Decode', 'Single'], ['Decode', 'Flat'], ['Decode', 'Weighted'], ['Decode', 'Average']]
2
[['RDD Newspapers', 'CER'], ['RDD Newspapers', 'LCER'], ['RDD Newspapers', 'WER'], ['RDD Newspapers', 'LWER'], ['TCP Books', 'CER'], ['TCP Books', 'LCER'], ['TCP Books', 'WER'], ['TCP Books', 'LWER']]
[['0.15149', '0.04717', '0.37111', '0.13799', '0.10590', '0.07666', '0.30549', '0.23495'], ['0.07199', '0.03300', '0.14906', '0.06948', '0.04508', '0.01407', '0.11283', '0.03392'], ['0.07238', '0.02904*', '0.15818', '0.06241*', '0.05554', '0.01727', '0.13487', '0.04079'], ['0.06882*', '0.02145*', '0.15221', '0.05375', '0.05516', '0.01392*', '0.1330', '0.03669'], ['0.04210*', '0.01399*', '0.09397', '0.02863*', '0.04072*', '0.01021*', '0.09786*', '0.02092*']]
column
['CER', 'LCER', 'WER', 'LWER', 'CER', 'LCER', 'WER', 'LWER']
['Average']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RDD Newspapers || CER</th> <th>RDD Newspapers || LCER</th> <th>RDD Newspapers || WER</th> <th>RDD Newspapers || LWER</th> <th>TCP Books || CER</th> <th>TCP Books || LCER</th> <th>TCP Books || WER</th> <th>TCP Books || LWER</th> </tr> </thead> <tbody> <tr> <td>Decode || None</td> <td>0.15149</td> <td>0.04717</td> <td>0.37111</td> <td>0.13799</td> <td>0.10590</td> <td>0.07666</td> <td>0.30549</td> <td>0.23495</td> </tr> <tr> <td>Decode || Single</td> <td>0.07199</td> <td>0.03300</td> <td>0.14906</td> <td>0.06948</td> <td>0.04508</td> <td>0.01407</td> <td>0.11283</td> <td>0.03392</td> </tr> <tr> <td>Decode || Flat</td> <td>0.07238</td> <td>0.02904*</td> <td>0.15818</td> <td>0.06241*</td> <td>0.05554</td> <td>0.01727</td> <td>0.13487</td> <td>0.04079</td> </tr> <tr> <td>Decode || Weighted</td> <td>0.06882*</td> <td>0.02145*</td> <td>0.15221</td> <td>0.05375</td> <td>0.05516</td> <td>0.01392*</td> <td>0.1330</td> <td>0.03669</td> </tr> <tr> <td>Decode || Average</td> <td>0.04210*</td> <td>0.01399*</td> <td>0.09397</td> <td>0.02863*</td> <td>0.04072*</td> <td>0.01021*</td> <td>0.09786*</td> <td>0.02092*</td> </tr> </tbody></table>
Table 4
table_4
P18-1220
7
acl2018
The results from Table 4 reveal that average attention combination performs best among all the decoding strategies on RDD newspapers and TCP books datasets. It reduces the CER of single input decoding by 41.5% for OCR’d lines in RDD newspapers and 9.76% for TCP books. The comparison between two hierarchical attention combination strategies shows that averaging evidence from each input works better than a weighted summation mechanism. Flat attention combination, which merges all the inputs into a long sequence when computing the strength of each encoder hidden state, obtains the worst performance in terms of both CER and WER.
[1, 1, 1, 1]
['The results from Table 4 reveal that average attention combination performs best among all the decoding strategies on RDD newspapers and TCP books datasets.', 'It reduces the CER of single input decoding by 41.5% for OCR’d lines in RDD newspapers and 9.76% for TCP books.', 'The comparison between two hierarchical attention combination strategies shows that averaging evidence from each input works better than a weighted summation mechanism.', 'Flat attention combination, which merges all the inputs into a long sequence when computing the strength of each encoder hidden state, obtains the worst performance in terms of both CER and WER.']
[['Average', 'RDD Newspapers', 'TCP Books'], ['CER', 'Single', 'RDD Newspapers', 'TCP Books'], ['Average', 'Weighted'], ['Flat', 'CER', 'WER']]
1
P18-1220table_5
Results from model trained under different settings on single-input decoding and multiple-input decoding for both the RDD newspapers and TCP books. All training is unsupervised except for supervised results in italics. Unsupervised training settings with multi-input decoding that are significantly better than other unsupervised counterparts are highlighted with * (p < 0.05, paired-permutation test). Best result among unsupervised training in each column is in bold.
4
[['Decode', '-', 'Model', 'None'], ['Decode', 'Single', 'Model', 'Seq2Seq-Super'], ['Decode', 'Single', 'Model', 'Seq2Seq-Noisy'], ['Decode', 'Single', 'Model', 'Seq2Seq-Syn'], ['Decode', 'Single', 'Model', 'Seq2Seq-Boots'], ['Decode', 'Multi', 'Model', 'LMR'], ['Decode', 'Multi', 'Model', 'Majority Vote'], ['Decode', 'Multi', 'Model', 'Seq2Seq-Super'], ['Decode', 'Multi', 'Model', 'Seq2Seq-Noisy'], ['Decode', 'Multi', 'Model', 'Seq2Seq-Syn'], ['Decode', 'Multi', 'Model', 'Seq2Seq-Boots']]
2
[['RDD Newspapers', 'CER'], ['RDD Newspapers', 'LCER'], ['RDD Newspapers', 'WER'], ['RDD Newspapers', 'LWER'], ['TCP Books', 'CER'], ['TCP Books', 'LCER'], ['TCP Books', 'WER'], ['TCP Books', 'LWER']]
[['0.18133', '0.13552', '0.41780', '0.31544', '0.10670', '0.08800', '0.31734', '0.27227'], ['0.09044', '0.04469', '0.17812', '0.09063', '0.04944', '0.01498', '0.12186', '0.03500'], ['0.10524', '0.05565', '0.20600', '0.11416', '0.08704', '0.05889', '0.25994', '0.15725'], ['0.16136', '0.11986', '0.35802', '0.26547', '0.09551', '0.06160', '0.27845', '0.18221'], ['0.11037', '0.06149', '0.22750', '0.13123', '0.07196', '0.03684', '0.21711', '0.11233'], ['0.15507', '0.13552', '0.34653', '0.31544', '0.10862', '0.08800', '0.33983', '0.27227'], ['0.16285', '0.13552', '0.40063', '0.31544', '0.11096', '0.08800', '0.34151', '0.27227'], ['0.07731', '0.03634', '0.15393', '0.07269', '0.04668', '0.01252', '0.11236', '0.02667'], ['0.09203*', '0.04554*', '0.17940', '0.09269', '0.08317', '0.05588', '0.24824', '0.14885'], ['0.12948', '0.09112', '0.28901', '0.19977', '0.08506', '0.05002', '0.24942', '0.15169'], ['0.09435', '0.04976', '0.19681', '0.10604', '0.06824*', '0.03343*', '0.20325*', '0.09995*']]
column
['CER', 'LCER', 'WER', 'LWER', 'CER', 'LCER', 'WER', 'LWER']
['Seq2Seq-Noisy', 'Seq2Seq-Boots']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RDD Newspapers || CER</th> <th>RDD Newspapers || LCER</th> <th>RDD Newspapers || WER</th> <th>RDD Newspapers || LWER</th> <th>TCP Books || CER</th> <th>TCP Books || LCER</th> <th>TCP Books || WER</th> <th>TCP Books || LWER</th> </tr> </thead> <tbody> <tr> <td>Decode || - || Model || None</td> <td>0.18133</td> <td>0.13552</td> <td>0.41780</td> <td>0.31544</td> <td>0.10670</td> <td>0.08800</td> <td>0.31734</td> <td>0.27227</td> </tr> <tr> <td>Decode || Single || Model || Seq2Seq-Super</td> <td>0.09044</td> <td>0.04469</td> <td>0.17812</td> <td>0.09063</td> <td>0.04944</td> <td>0.01498</td> <td>0.12186</td> <td>0.03500</td> </tr> <tr> <td>Decode || Single || Model || Seq2Seq-Noisy</td> <td>0.10524</td> <td>0.05565</td> <td>0.20600</td> <td>0.11416</td> <td>0.08704</td> <td>0.05889</td> <td>0.25994</td> <td>0.15725</td> </tr> <tr> <td>Decode || Single || Model || Seq2Seq-Syn</td> <td>0.16136</td> <td>0.11986</td> <td>0.35802</td> <td>0.26547</td> <td>0.09551</td> <td>0.06160</td> <td>0.27845</td> <td>0.18221</td> </tr> <tr> <td>Decode || Single || Model || Seq2Seq-Boots</td> <td>0.11037</td> <td>0.06149</td> <td>0.22750</td> <td>0.13123</td> <td>0.07196</td> <td>0.03684</td> <td>0.21711</td> <td>0.11233</td> </tr> <tr> <td>Decode || Multi || Model || LMR</td> <td>0.15507</td> <td>0.13552</td> <td>0.34653</td> <td>0.31544</td> <td>0.10862</td> <td>0.08800</td> <td>0.33983</td> <td>0.27227</td> </tr> <tr> <td>Decode || Multi || Model || Majority Vote</td> <td>0.16285</td> <td>0.13552</td> <td>0.40063</td> <td>0.31544</td> <td>0.11096</td> <td>0.08800</td> <td>0.34151</td> <td>0.27227</td> </tr> <tr> <td>Decode || Multi || Model || Seq2Seq-Super</td> <td>0.07731</td> <td>0.03634</td> <td>0.15393</td> <td>0.07269</td> <td>0.04668</td> <td>0.01252</td> <td>0.11236</td> <td>0.02667</td> </tr> <tr> <td>Decode || Multi || Model || Seq2Seq-Noisy</td> <td>0.09203*</td> <td>0.04554*</td> <td>0.17940</td> <td>0.09269</td> <td>0.08317</td> <td>0.05588</td> <td>0.24824</td> <td>0.14885</td> </tr> <tr> <td>Decode || Multi || Model || Seq2Seq-Syn</td> <td>0.12948</td> <td>0.09112</td> <td>0.28901</td> <td>0.19977</td> <td>0.08506</td> <td>0.05002</td> <td>0.24942</td> <td>0.15169</td> </tr> <tr> <td>Decode || Multi || Model || Seq2Seq-Boots</td> <td>0.09435</td> <td>0.04976</td> <td>0.19681</td> <td>0.10604</td> <td>0.06824*</td> <td>0.03343*</td> <td>0.20325*</td> <td>0.09995*</td> </tr> </tbody></table>
Table 5
table_5
P18-1220
8
acl2018
Table 5 presents the results for our model trained in different training settings as well as the baseline language model reranking (LMR) and majority vote methods. Multiple input decoding performs better than single input decoding for every training setting, and the model trained in supervised mode with multi-input decoding achieves the best performance. The majority vote baseline, which works only on more than two inputs, performs worst on both the TCP books and RDD newspapers. Our proposed unsupervised framework Seq2Seq-Noisy and Seq2Seq-Boots achieves performance comparable with the supervised model via multi-input decoding on the RDD newspaper dataset. The performance of Seq2Seq-Noisy is worse on the TCP Books than the RDD newspapers, since those old books contain the character long s, which is formerly used where s occurred in the middle or at the beginning of a word. These characters are recognized as f in all the witnesses because of similar shape. Thus, the model trained on noisy data are unable to correct them into s. Nonetheless, by removing the factor of long s, i.e., replacing the long s in the ground truth with f, Seq2Seq-Noisy could achieve a CER of 0.062 for single-input decoding and 0.058 for multi-input decoding on the TCP books. Both Seq2Seq-Syn and Seq2Seq-Boots work better on the RDD newspapers than the TCP books dataset.
[1, 1, 1, 1, 1, 2, 2, 1, 1]
['Table 5 presents the results for our model trained in different training settings as well as the baseline language model reranking (LMR) and majority vote methods.', 'Multiple input decoding performs better than single input decoding for every training setting, and the model trained in supervised mode with multi-input decoding achieves the best performance.', 'The majority vote baseline, which works only on more than two inputs, performs worst on both the TCP books and RDD newspapers.', 'Our proposed unsupervised framework Seq2Seq-Noisy and Seq2Seq-Boots achieves performance comparable with the supervised model via multi-input decoding on the RDD newspaper dataset.', 'The performance of Seq2Seq-Noisy is worse on the TCP Books than the RDD newspapers, since those old books contain the character long s, which is formerly used where s occurred in the middle or at the beginning of a word.', 'These characters are recognized as f in all the witnesses because of similar shape.', 'Thus, the model trained on noisy data are unable to correct them into s.', 'Nonetheless, by removing the factor of long s, i.e., replacing the long s in the ground truth with f, Seq2Seq-Noisy could achieve a CER of 0.062 for single-input decoding and 0.058 for multi-input decoding on the TCP books.', 'Both Seq2Seq-Syn and Seq2Seq-Boots work better on the RDD newspapers than the TCP books dataset.']
[['Seq2Seq-Super', 'Seq2Seq-Noisy', 'Seq2Seq-Syn', 'Seq2Seq-Boots', 'LMR', 'Majority Vote'], ['Multi', 'Single', 'Seq2Seq-Super'], ['Majority Vote', 'RDD Newspapers', 'TCP Books'], ['Seq2Seq-Noisy', 'Seq2Seq-Boots', 'RDD Newspapers', 'Seq2Seq-Super'], ['Seq2Seq-Noisy', 'RDD Newspapers', 'TCP Books'], None, ['Seq2Seq-Noisy'], ['Seq2Seq-Noisy', 'CER', 'Single', 'Multi', 'TCP Books'], ['Seq2Seq-Syn', 'Seq2Seq-Boots', 'RDD Newspapers', 'TCP Books']]
1
P18-1221table_1
Comparing the performance of recipe generation task. All the results are on the test set of the corresponding corpus. AWD LSTM (type model) is our type model implemented with the baseline language model AWD LSTM (Merity et al., 2017). Our second baseline is the same language model (AWD LSTM) with the type information added as an additional feature for each word.
4
[['Model', 'AWD LSTM', 'Dataset (Recipe Corpus)', 'original'], ['Model', 'AWD LSTM type model', 'Dataset (Recipe Corpus)', 'modified type'], ['Model', 'AWD LSTM with type feature', 'Dataset (Recipe Corpus)', 'original'], ['Model', 'our model', 'Dataset (Recipe Corpus)', 'original']]
1
[['Vocabulary Size'], ['Perplexity']]
[['52,472', '20.23'], ['51,675', '17.62'], ['52,472', '18.23'], ['52,472', '9.67']]
column
['Vocabulary Size', 'Perplexity']
['our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>Perplexity</th> </tr> </thead> <tbody> <tr> <td>Model || AWD LSTM || Dataset (Recipe Corpus) || original</td> <td>52,472</td> <td>20.23</td> </tr> <tr> <td>Model || AWD LSTM type model || Dataset (Recipe Corpus) || modified type</td> <td>51,675</td> <td>17.62</td> </tr> <tr> <td>Model || AWD LSTM with type feature || Dataset (Recipe Corpus) || original</td> <td>52,472</td> <td>18.23</td> </tr> <tr> <td>Model || our model || Dataset (Recipe Corpus) || original</td> <td>52,472</td> <td>9.67</td> </tr> </tbody></table>
Table 1
table_1
P18-1221
7
acl2018
We compare our model with the baselines using perplexity metric; lower perplexity means the better prediction. Table 1 summarizes the result. The 3rd row shows that adding type as a simple feature does not guarantee a significant performance improvement while our proposed method significantly outperforms both baselines and achieves 52.2% improvement with respect to baseline in terms of perplexity.
[1, 1, 1]
['We compare our model with the baselines using perplexity metric; lower perplexity means the better prediction.', 'Table 1 summarizes the result.', 'The 3rd row shows that adding type as a simple feature does not guarantee a significant performance improvement while our proposed method significantly outperforms both baselines and achieves 52.2% improvement with respect to baseline in terms of perplexity.']
[['Perplexity'], None, ['Perplexity', 'AWD LSTM with type feature', 'our model', 'AWD LSTM', 'AWD LSTM type model']]
1
P18-1221table_2
Comparing the performance of the code generation task. All the results are on the test set of the corresponding corpus. fLSTM and bLSTM denote forward and backward LSTM respectively. SLP-Core refers to (Hellendoorn and Devanbu, 2017).
4
[['Model', 'SLP-Core', 'Dataset (Code Corpus)', 'original'], ['Model', 'fLSTM', 'Dataset (Code Corpus)', 'original'], ['Model', 'fLSTM [type model]', 'Dataset (Code Corpus)', 'modified type'], ['Model', 'fLSTM with type feature', 'Dataset (Code Corpus)', 'original'], ['Model', 'our model (fLSTM)', 'Dataset (Code Corpus)', 'original'], ['Model', 'bLSTM', 'Dataset (Code Corpus)', 'original'], ['Model', 'bLSTM [type model]', 'Dataset (Code Corpus)', 'modified type'], ['Model', 'bLSTM with type feature', 'Dataset (Code Corpus)', 'original'], ['Model', 'our model (bLSTM)', 'Dataset (Code Corpus)', 'original']]
1
[['Vocabulary Size'], ['Perplexity']]
[['38,297', '3.40'], ['38,297', '21.97'], ['14,177', '7.94'], ['38,297', '20.05'], ['38,297', '12.52'], ['38,297', '7.19'], ['14,177', '2.58'], ['38,297', '6.11'], ['38,297', '2.65']]
column
['Vocabulary Size', 'Perplexity']
['our model (fLSTM)', 'our model (bLSTM)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>Perplexity</th> </tr> </thead> <tbody> <tr> <td>Model || SLP-Core || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>3.40</td> </tr> <tr> <td>Model || fLSTM || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>21.97</td> </tr> <tr> <td>Model || fLSTM [type model] || Dataset (Code Corpus) || modified type</td> <td>14,177</td> <td>7.94</td> </tr> <tr> <td>Model || fLSTM with type feature || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>20.05</td> </tr> <tr> <td>Model || our model (fLSTM) || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>12.52</td> </tr> <tr> <td>Model || bLSTM || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>7.19</td> </tr> <tr> <td>Model || bLSTM [type model] || Dataset (Code Corpus) || modified type</td> <td>14,177</td> <td>2.58</td> </tr> <tr> <td>Model || bLSTM with type feature || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>6.11</td> </tr> <tr> <td>Model || our model (bLSTM) || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>2.65</td> </tr> </tbody></table>
Table 2
table_2
P18-1221
8
acl2018
Table 2 shows that adding type as simple features does not guarantee a significant performance improvement while our proposed method significantly outperforms both forward and backward LSTM baselines. Our approach with backward LSTM has 40.3% better perplexity than original backward LSTM and forward has 63.14% lower (i.e., better) perplexity than original forward LSTM. With respect to SLP-Core performance, our model is 22.06% better in perplexity.
[1, 1, 1]
['Table 2 shows that adding type as simple features does not guarantee a significant performance improvement while our proposed method significantly outperforms both forward and backward LSTM baselines.', 'Our approach with backward LSTM has 40.3% better perplexity than original backward LSTM and forward has 63.14% lower (i.e., better) perplexity than original forward LSTM.', 'With respect to SLP-Core performance, our model is 22.06% better in perplexity.']
[['fLSTM', 'bLSTM', 'our model (fLSTM)', 'our model (bLSTM)'], ['our model (bLSTM)', 'Perplexity', 'bLSTM', 'our model (fLSTM)', 'fLSTM'], ['SLP-Core', 'Perplexity', 'our model (bLSTM)']]
1
P18-1222table_7
DBLP results evaluated on 63,342 citation contexts with newcomer ground-truth.
4
[['Model', 'w2v (I4O)', 'Newcomer Friendly', 'no'], ['Model', 'NPM', 'Newcomer Friendly', 'no'], ['Model', 'd2v-nc', 'Newcomer Friendly', 'yes'], ['Model', 'd2v-cac', 'Newcomer Friendly', 'yes'], ['Model', 'h-d2v', 'Newcomer Friendly', 'yes']]
1
[['Rec'], ['MAP'], ['MRR'], ['nDCG']]
[['3.64', '3.23', '3.41', '2.73'], ['1.37', '1.13', '1.15', '0.92'], ['6.48', '3.52', '3.54', '3.96'], ['8.16', '5.13', '5.24', '5.21'], ['6.41', '4.95', '5.21', '4.49']]
column
['Rec', 'MAP', 'MRR', 'nDCG']
['d2v-cac']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec</th> <th>MAP</th> <th>MRR</th> <th>nDCG</th> </tr> </thead> <tbody> <tr> <td>Model || w2v (I4O) || Newcomer Friendly || no</td> <td>3.64</td> <td>3.23</td> <td>3.41</td> <td>2.73</td> </tr> <tr> <td>Model || NPM || Newcomer Friendly || no</td> <td>1.37</td> <td>1.13</td> <td>1.15</td> <td>0.92</td> </tr> <tr> <td>Model || d2v-nc || Newcomer Friendly || yes</td> <td>6.48</td> <td>3.52</td> <td>3.54</td> <td>3.96</td> </tr> <tr> <td>Model || d2v-cac || Newcomer Friendly || yes</td> <td>8.16</td> <td>5.13</td> <td>5.24</td> <td>5.21</td> </tr> <tr> <td>Model || h-d2v || Newcomer Friendly || yes</td> <td>6.41</td> <td>4.95</td> <td>5.21</td> <td>4.49</td> </tr> </tbody></table>
Table 7
table_7
P18-1222
8
acl2018
Table 7 analyzes the impact of newcomer friendliness. Opposite from what is done in Section 5.2.2, we only evaluate on testing examples where at least a ground-truth paper is a newcomer. Please note that newcomer unfriendly approaches do not necessarily get zero scores. The table shows that newcomer friendly approaches are superior to unfriendly ones. Note that, like Table 5, this table is also based on controlled experiments and not intended for comparing approaches.
[1, 2, 1, 2]
['Table 7 analyzes the impact of newcomer friendliness. Opposite from what is done in Section 5.2.2, we only evaluate on testing examples where at least a ground-truth paper is a newcomer.', ' Please note that newcomer unfriendly approaches do not necessarily get zero scores.', 'The table shows that newcomer friendly approaches are superior to unfriendly ones.', 'Note that, like Table 5, this table is also based on controlled experiments and not intended for comparing approaches.']
[['Newcomer Friendly'], None, ['d2v-nc', 'd2v-cac', 'h-d2v', 'w2v (I4O)', 'NPM'], None]
1
P18-1228table_2
Evaluation results. Our method performs best on both Standard English and Twitter.
3
[['Standard English', 'Method', 'SEMAXIS'], ['Standard English', 'Method', 'DENSIFIER'], ['Standard English', 'Method', 'SENTPROP'], ['Standard English', 'Method', 'WordNet'], ['Twitter', 'Method', 'SEMAXIS'], ['Twitter', 'Method', 'DENSIFIER'], ['Twitter', 'Method', 'SENTPROP'], ['Twitter', 'Method', 'Sentiment140']]
1
[['AUC'], ['Ternary F1'], ['Tau']]
[['92.2', '61', '0.48'], ['91', '58.2', '0.46'], ['88.4', '56.1', '0.41'], ['89.5', '58.7', '0.34'], ['90', '59.2', '0.57'], ['88.5', '58.8', '0.55'], ['85', '58.2', '0.5'], ['86.2', '57.7', '0.51']]
column
['AUC', 'Ternary F1', 'Tau']
['SEMAXIS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AUC</th> <th>Ternary F1</th> <th>Tau</th> </tr> </thead> <tbody> <tr> <td>Standard English || Method || SEMAXIS</td> <td>92.2</td> <td>61</td> <td>0.48</td> </tr> <tr> <td>Standard English || Method || DENSIFIER</td> <td>91</td> <td>58.2</td> <td>0.46</td> </tr> <tr> <td>Standard English || Method || SENTPROP</td> <td>88.4</td> <td>56.1</td> <td>0.41</td> </tr> <tr> <td>Standard English || Method || WordNet</td> <td>89.5</td> <td>58.7</td> <td>0.34</td> </tr> <tr> <td>Twitter || Method || SEMAXIS</td> <td>90</td> <td>59.2</td> <td>0.57</td> </tr> <tr> <td>Twitter || Method || DENSIFIER</td> <td>88.5</td> <td>58.8</td> <td>0.55</td> </tr> <tr> <td>Twitter || Method || SENTPROP</td> <td>85</td> <td>58.2</td> <td>0.5</td> </tr> <tr> <td>Twitter || Method || Sentiment140</td> <td>86.2</td> <td>57.7</td> <td>0.51</td> </tr> </tbody></table>
Table 2
table_2
P18-1228
5
acl2018
Table 2 summarizes the performance. Surprisingly, SEMAXIS - the simplest approach - outperforms others on both Standard English and Twitter datasets across all measures.
[1, 1]
['Table 2 summarizes the performance.', 'Surprisingly, SEMAXIS - the simplest approach - outperforms others on both Standard English and Twitter datasets across all measures.']
[None, ['SEMAXIS', 'Standard English', 'Twitter', 'AUC', 'Ternary F1', 'Tau']]
1
P18-1229table_1
Results of the end-to-end taxonomy induction experiment. Our approach significantly outperforms two-phase methods (Panchenko et al., 2016; Shwartz et al., 2016; Bansal et al., 2014). Bansal et al. (2014) and TaxoRL (NR) + FG are listed separately because they use extra resources.
2
[['Model', 'TAXI'], ['Model', 'HypeNET'], ['Model', 'HypeNET+MST'], ['Model', 'TaxoRL (RE)'], ['Model', 'TaxoRL (NR)'], ['Model', 'Bansal et al. (2014)'], ['Model', 'TaxoRL (NR) + FG']]
1
[['P a'], ['R a'], ['F1 a'], ['P e'], ['R e'], ['F1 e']]
[['66.1', '13.9', '23.0', '54.8', '18.0', '27.1'], ['32.8', '26.7', '29.4', '26.1', '17.2', '20.7'], ['33.7', '41.1', '37.0', '29.2', '29.2', '29.2'], ['35.8', '47.4', '40.8', '35.4', '35.4', '35.4'], ['41.3', '49.2', '44.9', '35.6', '35.6', '35.6'], ['48.0', '55.2', '51.4', '-', '-', '-'], ['52.9', '58.6', '55.6', '43.8', '43.8', '43.8']]
column
['P a', 'R a', 'F1 a', 'P e', 'R e', 'F1 e']
['TaxoRL (NR) + FG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P a</th> <th>R a</th> <th>F1 a</th> <th>P e</th> <th>R e</th> <th>F1 e</th> </tr> </thead> <tbody> <tr> <td>Model || TAXI</td> <td>66.1</td> <td>13.9</td> <td>23.0</td> <td>54.8</td> <td>18.0</td> <td>27.1</td> </tr> <tr> <td>Model || HypeNET</td> <td>32.8</td> <td>26.7</td> <td>29.4</td> <td>26.1</td> <td>17.2</td> <td>20.7</td> </tr> <tr> <td>Model || HypeNET+MST</td> <td>33.7</td> <td>41.1</td> <td>37.0</td> <td>29.2</td> <td>29.2</td> <td>29.2</td> </tr> <tr> <td>Model || TaxoRL (RE)</td> <td>35.8</td> <td>47.4</td> <td>40.8</td> <td>35.4</td> <td>35.4</td> <td>35.4</td> </tr> <tr> <td>Model || TaxoRL (NR)</td> <td>41.3</td> <td>49.2</td> <td>44.9</td> <td>35.6</td> <td>35.6</td> <td>35.6</td> </tr> <tr> <td>Model || Bansal et al. (2014)</td> <td>48.0</td> <td>55.2</td> <td>51.4</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || TaxoRL (NR) + FG</td> <td>52.9</td> <td>58.6</td> <td>55.6</td> <td>43.8</td> <td>43.8</td> <td>43.8</td> </tr> </tbody></table>
Table 1
table_1
P18-1229
7
acl2018
Table 1 shows the results of the first experiment. HypeNET (Shwartz et al., 2016) uses additional surface features described in Section 2.2. HypeNET+MST extends HypeNET by first constructing a hypernym graph using HypeNET output as weights of edges and then finding the MST (Chu, 1965) of this graph. TaxoRL (RE) denotes our RL approach which assumes a common Root Embedding, and TaxoRL (NR) denotes its variant that allows a New Root to be added. We can see that TAXI has the lowest F1 a while HypeNET performs the worst in F1 e. Both TAXI and HypeNET F1 a and F1 e are lower than 30. HypeNET+MST outperforms HypeNET in both F1 a and F1 e, because it considers the global taxonomy structure, although the two phases are performed independently. TaxoRL (RE) uses exactly the same input as HypeNET+MST and yet achieves significantly better performance, which demonstrates the superiority of combining the phases of hypernymy detection and hypernymy organization. Also, we found that presuming a shared root embedding for all taxonomies can be inappropriate if they are from different domains, which explains why TaxoRL (NR) performs better than TaxoRL (RE). Finally, after we add the frequency and generality features (TaxoRL (NR) + FG), our approach outperforms Bansal et al. (2014), even if a much smaller corpus is used.
[1, 2, 2, 2, 1, 1, 1, 1, 1, 1]
['Table 1 shows the results of the first experiment.', 'HypeNET (Shwartz et al., 2016) uses additional surface features described in Section 2.2.', 'HypeNET+MST extends HypeNET by first constructing a hypernym graph using HypeNET output as weights of edges and then finding the MST (Chu, 1965) of this graph.', 'TaxoRL (RE) denotes our RL approach which assumes a common Root Embedding, and TaxoRL (NR) denotes its variant that allows a New Root to be added.', 'We can see that TAXI has the lowest F1 a while HypeNET performs the worst in F1 e.', 'Both TAXI and HypeNET F1 a and F1 e are lower than 30.', 'HypeNET+MST outperforms HypeNET in both F1 a and F1 e, because it considers the global taxonomy structure, although the two phases are performed independently.', 'TaxoRL (RE) uses exactly the same input as HypeNET+MST and yet achieves significantly better performance, which demonstrates the superiority of combining the phases of hypernymy detection and hypernymy organization.', 'Also, we found that presuming a shared root embedding for all taxonomies can be inappropriate if they are from different domains, which explains why TaxoRL (NR) performs better than TaxoRL (RE).', 'Finally, after we add the frequency and generality features (TaxoRL (NR) + FG), our approach outperforms Bansal et al. (2014), even if a much smaller corpus is used.']
[None, ['HypeNET'], ['HypeNET+MST'], ['TaxoRL (RE)', 'TaxoRL (NR)'], ['HypeNET', 'F1 e', 'TAXI', 'F1 a'], ['HypeNET', 'F1 e', 'TAXI', 'F1 a'], ['HypeNET+MST', 'HypeNET', 'F1 a', 'F1 e'], ['TaxoRL (RE)', 'HypeNET+MST'], ['TaxoRL (RE)', 'TaxoRL (NR)'], ['TaxoRL (NR) + FG']]
1
P18-1232table_2
Results of lexicon term sentiment classification.
1
[['EmbeddingP'], ['EmbeddingQ'], ['EmbeddingCat'], ['EmbeddingAll'], ['Yang'], ['SSWE'], ['DSE']]
2
[['B & D', 'HL'], ['B & D', 'MPQA'], ['B & E', 'HL'], ['B & E', 'MPQA'], ['B & K', 'HL'], ['B & K', 'MPQA'], ['D & E', 'HL'], ['D & E', 'MPQA'], ['D & K', 'HL'], ['D & K', 'MPQA'], ['E & K', 'HL'], ['E & K', 'MPQA']]
[['0.740', '0.733', '0.742', '0.734', '0.747', '0.735', '0.744', '0.701', '0.745', '0.709', '0.628', '0.574'], ['0.743', '0.701', '0.627', '0.573', '0.464', '0.453', '0.621', '0.577', '0.462', '0.450', '0.465', '0.453'], ['0.780', '0.772', '0.773', '0.756', '0.772', '0.751', '0.744', '0.728', '0.755', '0.702', '0.683', '0.639'], ['0.777', '0.769', '0.773', '0.730', '0.762', '0.760', '0.712', '0.707', '0.749', '0.724', '0.670', '0.658'], ['0.780', '0.775', '0.789', '0.762', '0.781', '0.770', '0.762', '0.736', '0.756', '0.713', '0.634', '0.614'], ['0.816', '0.801', '0.831', '0.817', '0.822', '0.808', '0.826', '0.785', '0.784', '0.772', '0.707', '0.659'], ['0.802', '0.788', '0.833', '0.828', '0.832', '0.799', '0.804', '0.797', '0.796', '0.786', '0.725', '0.683']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['DSE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B &amp; D || HL</th> <th>B &amp; D || MPQA</th> <th>B &amp; E || HL</th> <th>B &amp; E || MPQA</th> <th>B &amp; K || HL</th> <th>B &amp; K || MPQA</th> <th>D &amp; E || HL</th> <th>D &amp; E || MPQA</th> <th>D &amp; K || HL</th> <th>D &amp; K || MPQA</th> <th>E &amp; K || HL</th> <th>E &amp; K || MPQA</th> </tr> </thead> <tbody> <tr> <td>EmbeddingP</td> <td>0.740</td> <td>0.733</td> <td>0.742</td> <td>0.734</td> <td>0.747</td> <td>0.735</td> <td>0.744</td> <td>0.701</td> <td>0.745</td> <td>0.709</td> <td>0.628</td> <td>0.574</td> </tr> <tr> <td>EmbeddingQ</td> <td>0.743</td> <td>0.701</td> <td>0.627</td> <td>0.573</td> <td>0.464</td> <td>0.453</td> <td>0.621</td> <td>0.577</td> <td>0.462</td> <td>0.450</td> <td>0.465</td> <td>0.453</td> </tr> <tr> <td>EmbeddingCat</td> <td>0.780</td> <td>0.772</td> <td>0.773</td> <td>0.756</td> <td>0.772</td> <td>0.751</td> <td>0.744</td> <td>0.728</td> <td>0.755</td> <td>0.702</td> <td>0.683</td> <td>0.639</td> </tr> <tr> <td>EmbeddingAll</td> <td>0.777</td> <td>0.769</td> <td>0.773</td> <td>0.730</td> <td>0.762</td> <td>0.760</td> <td>0.712</td> <td>0.707</td> <td>0.749</td> <td>0.724</td> <td>0.670</td> <td>0.658</td> </tr> <tr> <td>Yang</td> <td>0.780</td> <td>0.775</td> <td>0.789</td> <td>0.762</td> <td>0.781</td> <td>0.770</td> <td>0.762</td> <td>0.736</td> <td>0.756</td> <td>0.713</td> <td>0.634</td> <td>0.614</td> </tr> <tr> <td>SSWE</td> <td>0.816</td> <td>0.801</td> <td>0.831</td> <td>0.817</td> <td>0.822</td> <td>0.808</td> <td>0.826</td> <td>0.785</td> <td>0.784</td> <td>0.772</td> <td>0.707</td> <td>0.659</td> </tr> <tr> <td>DSE</td> <td>0.802</td> <td>0.788</td> <td>0.833</td> <td>0.828</td> <td>0.832</td> <td>0.799</td> <td>0.804</td> <td>0.797</td> <td>0.796</td> <td>0.786</td> <td>0.725</td> <td>0.683</td> </tr> </tbody></table>
Table 2
table_2
P18-1232
8
acl2018
Table 2 shows the experimental results of lexicon term sentiment classification. Our DSE method can achieve competitive performance among all the methods. Compared with SSWE, our DSE is still competitive because both of them consider the sentiment information in the embeddings. Our DSE model outperforms other methods which do not consider sentiments such as Yang, EmbeddingCat and EmbeddingAll. Note that the advantage of domain-sensitive embeddings would be insufficient for this task because the sentiment lexicons are not domain-specific.
[1, 1, 1, 1, 2]
['Table 2 shows the experimental results of lexicon term sentiment classification.', 'Our DSE method can achieve competitive performance among all the methods.', 'Compared with SSWE, our DSE is still competitive because both of them consider the sentiment information in the embeddings.', 'Our DSE model outperforms other methods which do not consider sentiments such as Yang, EmbeddingCat and EmbeddingAll.', 'Note that the advantage of domain-sensitive embeddings would be insufficient for this task because the sentiment lexicons are not domain-specific.']
[None, ['DSE'], ['SSWE', 'DSE'], ['DSE', 'Yang', 'EmbeddingCat', 'EmbeddingAll'], None]
1
P18-1239table_2
Our results are consistently better than those reported by Kiela et al. (2015), averaged over Dutch, French, German, Italian, and Spanish on a similar set of 500 concrete nouns. The rightmost column shows the added challenge with our larger, more realistic dataset.
1
[['MRR'], ['Top 1'], ['Top 5'], ['Top 20']]
3
[['dataset', 'BERGSMA500 Kiela et al. (2015)', '# words 500'], ['dataset', 'BERGSMA500 (ours)', '# words 500'], ['-', 'all (ours)', '# words 8500']]
[['0.658', '0.704', '0.277'], ['0.567', '0.679', '0.229'], ['0.692', '0.763', '0.326'], ['0.774', '0.811', '0.385']]
row
['MRR', 'Top 1', 'Top 5', 'Top 20']
['BERGSMA500 (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dataset || BERGSMA500 Kiela et al. (2015) || # words 500</th> <th>dataset || BERGSMA500 (ours) || # words 500</th> <th>- || all (ours) || # words 8500</th> </tr> </thead> <tbody> <tr> <td>MRR</td> <td>0.658</td> <td>0.704</td> <td>0.277</td> </tr> <tr> <td>Top 1</td> <td>0.567</td> <td>0.679</td> <td>0.229</td> </tr> <tr> <td>Top 5</td> <td>0.692</td> <td>0.763</td> <td>0.326</td> </tr> <tr> <td>Top 20</td> <td>0.774</td> <td>0.811</td> <td>0.385</td> </tr> </tbody></table>
Table 2
table_2
P18-1239
5
acl2018
Table 2 shows the results reported by Kiela et al. (2015) on the BERGSMA500 dataset, along with results using our image crawl method (Section 3.2) on BERGSMA500’s vocabulary. On all five languages, our dataset performs better than that of Kiela et al. (2015). We attribute this to improvements in image search since they collected images. We additionally note that in the BERGSMA500 vocabularies, approximately 11% of the translation pairs are string-identical, like film ? film. In all subsequent experiments, we remove trivial translation pairs like this. We also evaluate the identical model on our full data set, which contains 8,500 words, covering all parts of speech and the full range of concreteness ratings. The top-1 accuracy of the model is 23% on our more realistic and challenging data set, versus 68% on the easier concrete nouns set.
[1, 1, 2, 2, 2, 1, 1]
['Table 2 shows the results reported by Kiela et al. (2015) on the BERGSMA500 dataset, along with results using our image crawl method (Section 3.2) on BERGSMA500’s vocabulary.', 'On all five languages, our dataset performs better than that of Kiela et al. (2015).', 'We attribute this to improvements in image search since they collected images.', 'We additionally note that in the BERGSMA500 vocabularies, approximately 11% of the translation pairs are string-identical, like film ? film.', 'In all subsequent experiments, we remove trivial translation pairs like this.', 'We also evaluate the identical model on our full data set, which contains 8,500 words, covering all parts of speech and the full range of concreteness ratings.', 'The top-1 accuracy of the model is 23% on our more realistic and challenging data set, versus 68% on the easier concrete nouns set.']
[['BERGSMA500 Kiela et al. (2015)', 'BERGSMA500 (ours)'], ['BERGSMA500 (ours)'], None, ['BERGSMA500 (ours)'], ['BERGSMA500 (ours)'], ['all (ours)'], ['all (ours)', 'Top 1', 'BERGSMA500 (ours)']]
1
P18-1246table_2
Results for XPOS tags. The first column shows the language acronym, the column named DQM shows the results of Dozat et al. (2017). Our system outperforms Dozat et al. (2017) on 32 out of 54 treebanks and Dozat et al. outperforms our model on 10 of 54 treebanks, with 13 ties. RRIE is the relative reduction in error. We excluded ties in the calculation of macro-avg since these treebanks do not contain meaningful xpos tags.
2
[['lang.', 'cs_cac'], ['lang.', 'cs'], ['lang.', 'fi'], ['lang.', 'sl'], ['lang.', 'la_ittb'], ['lang.', 'grc'], ['lang.', 'bg'], ['lang.', 'ca'], ['lang.', 'grc_proiel'], ['lang.', 'pt'], ['lang.', 'cu'], ['lang.', 'it'], ['lang.', 'fa'], ['lang.', 'ru'], ['lang.', 'sv'], ['lang.', 'ko'], ['lang.', 'sk'], ['lang.', 'nl'], ['lang.', 'fi_ftb'], ['lang.', 'de'], ['lang.', 'tr'], ['lang.', 'hi'], ['lang.', 'es_ancora'], ['lang.', 'ro'], ['lang.', 'la_proiel'], ['lang.', 'pl'], ['lang.', 'ar'], ['lang.', 'gl'], ['lang.', 'sv_lines'], ['lang.', 'cs_clt'], ['lang.', 'lv'], ['lang.', 'zh'], ['lang.', 'da'], ['lang.', 'es'], ['lang.', 'eu'], ['lang.', 'fr_sequoia'], ['lang.', 'fr'], ['lang.', 'hr'], ['lang.', 'hu'], ['lang.', 'id'], ['lang.', 'ja'], ['lang.', 'nl_lassy'], ['lang.', 'no_bok.'], ['lang.', 'no_nyn.'], ['lang.', 'ru_syn.'], ['lang.', 'en_lines'], ['lang.', 'ur'], ['lang.', 'he'], ['lang.', 'vi'], ['lang.', 'gl_treegal'], ['lang.', 'en'], ['lang.', 'en_partut'], ['lang.', 'pt_br'], ['lang.', 'et'], ['lang.', 'el'], ['lang.', 'macro-avg']]
1
[['CONLL Winner'], ['DQM'], ['ours']]
[['95.16', '95.16', '96.91'], ['95.86', '95.86', '97.28'], ['97.37', '97.37', '97.81'], ['94.74', '94.74', '95.54'], ['94.79', '94.79', '95.56'], ['84.47', '84.47', '86.51'], ['96.71', '96.71', '97.05'], ['98.58', '98.58', '98.72'], ['97.51', '97.51', '97.72'], ['83.04', '83.04', '84.39'], ['96.20', '96.20', '96.49'], ['97.93', '97.93', '98.08'], ['97.12', '97.12', '97.32'], ['96.73', '96.73', '96.95'], ['96.40', '96.40', '96.64'], ['93.02', '93.02', '93.45'], ['85.00', '85.00', '85.88'], ['90.61', '90.61', '91.10'], ['95.31', '95.31', '95.56'], ['97.29', '97.29', '97.39'], ['93.11', '93.11', '93.43'], ['97.01', '97.01', '97.13'], ['98.73', '98.73', '98.78'], ['96.98', '96.98', '97.08'], ['96.93', '96.93', '97.00'], ['91.97', '91.97', '92.12'], ['87.66', '87.66', '87.82'], ['97.50', '97.50', '97.53'], ['94.84', '94.84', '94.90'], ['89.98', '89.98', '90.09'], ['80.05', '80.05', '80.20'], ['88.40', '85.07', '85.10'], ['100.00', '99.96', '99.96'], ['99.81', '99.69', '99.69'], ['99.98', '99.96', '99.96'], ['99.49', '99.06', '99.06'], ['99.50', '98.87', '98.87'], ['99.93', '99.93', '99.93'], ['99.85', '99.82', '99.82'], ['100.00', '99.99', '99.99'], ['98.59', '89.68', '89.68'], ['99.99', '99.93', '99.93'], ['99.88', '99.75', '99.75'], ['99.93', '99.85', '99.85'], ['99.58', '99.57', '99.57'], ['95.41', '95.41', '95.39'], ['92.30', '92.30', '92.21'], ['83.24', '82.45', '82.16'], ['75.42', '73.56', '73.12'], ['91.65', '91.65', '91.40'], ['94.82', '94.82', '94.66'], ['95.08', '95.08', '94.81'], ['98.22', '98.22', '98.11'], ['95.05', '95.05', '94.72'], ['97.76', '97.76', '97.53'], ['93.18', '93.11', '93.40']]
column
['accuracy', 'accuracy', 'accuracy']
['ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CONLL Winner</th> <th>DQM</th> <th>ours</th> <th>RRIE</th> </tr> </thead> <tbody> <tr> <td>lang. || cs_cac</td> <td>95.16</td> <td>95.16</td> <td>96.91</td> <td>36.2</td> </tr> <tr> <td>lang. || cs</td> <td>95.86</td> <td>95.86</td> <td>97.28</td> <td>35.5</td> </tr> <tr> <td>lang. || fi</td> <td>97.37</td> <td>97.37</td> <td>97.81</td> <td>16.7</td> </tr> <tr> <td>lang. || sl</td> <td>94.74</td> <td>94.74</td> <td>95.54</td> <td>15.2</td> </tr> <tr> <td>lang. || la_ittb</td> <td>94.79</td> <td>94.79</td> <td>95.56</td> <td>14.8</td> </tr> <tr> <td>lang. || grc</td> <td>84.47</td> <td>84.47</td> <td>86.51</td> <td>13.1</td> </tr> <tr> <td>lang. || bg</td> <td>96.71</td> <td>96.71</td> <td>97.05</td> <td>10.3</td> </tr> <tr> <td>lang. || ca</td> <td>98.58</td> <td>98.58</td> <td>98.72</td> <td>9.9</td> </tr> <tr> <td>lang. || grc_proiel</td> <td>97.51</td> <td>97.51</td> <td>97.72</td> <td>8.4</td> </tr> <tr> <td>lang. || pt</td> <td>83.04</td> <td>83.04</td> <td>84.39</td> <td>8.0</td> </tr> <tr> <td>lang. || cu</td> <td>96.20</td> <td>96.20</td> <td>96.49</td> <td>7.6</td> </tr> <tr> <td>lang. || it</td> <td>97.93</td> <td>97.93</td> <td>98.08</td> <td>7.2</td> </tr> <tr> <td>lang. || fa</td> <td>97.12</td> <td>97.12</td> <td>97.32</td> <td>6.9</td> </tr> <tr> <td>lang. || ru</td> <td>96.73</td> <td>96.73</td> <td>96.95</td> <td>6.7</td> </tr> <tr> <td>lang. || sv</td> <td>96.40</td> <td>96.40</td> <td>96.64</td> <td>6.7</td> </tr> <tr> <td>lang. || ko</td> <td>93.02</td> <td>93.02</td> <td>93.45</td> <td>6.2</td> </tr> <tr> <td>lang. || sk</td> <td>85.00</td> <td>85.00</td> <td>85.88</td> <td>5.9</td> </tr> <tr> <td>lang. || nl</td> <td>90.61</td> <td>90.61</td> <td>91.10</td> <td>5.4</td> </tr> <tr> <td>lang. || fi_ftb</td> <td>95.31</td> <td>95.31</td> <td>95.56</td> <td>5.3</td> </tr> <tr> <td>lang. || de</td> <td>97.29</td> <td>97.29</td> <td>97.39</td> <td>4.7</td> </tr> <tr> <td>lang. || tr</td> <td>93.11</td> <td>93.11</td> <td>93.43</td> <td>4.6</td> </tr> <tr> <td>lang. || hi</td> <td>97.01</td> <td>97.01</td> <td>97.13</td> <td>4.0</td> </tr> <tr> <td>lang. || es_ancora</td> <td>98.73</td> <td>98.73</td> <td>98.78</td> <td>3.9</td> </tr> <tr> <td>lang. || ro</td> <td>96.98</td> <td>96.98</td> <td>97.08</td> <td>3.6</td> </tr> <tr> <td>lang. || la_proiel</td> <td>96.93</td> <td>96.93</td> <td>97.00</td> <td>2.3</td> </tr> <tr> <td>lang. || pl</td> <td>91.97</td> <td>91.97</td> <td>92.12</td> <td>1.9</td> </tr> <tr> <td>lang. || ar</td> <td>87.66</td> <td>87.66</td> <td>87.82</td> <td>1.3</td> </tr> <tr> <td>lang. || gl</td> <td>97.50</td> <td>97.50</td> <td>97.53</td> <td>1.2</td> </tr> <tr> <td>lang. || sv_lines</td> <td>94.84</td> <td>94.84</td> <td>94.90</td> <td>1.2</td> </tr> <tr> <td>lang. || cs_clt</td> <td>89.98</td> <td>89.98</td> <td>90.09</td> <td>1.1</td> </tr> <tr> <td>lang. || lv</td> <td>80.05</td> <td>80.05</td> <td>80.20</td> <td>0.8</td> </tr> <tr> <td>lang. || zh</td> <td>88.40</td> <td>85.07</td> <td>85.10</td> <td>0.2</td> </tr> <tr> <td>lang. || da</td> <td>100.00</td> <td>99.96</td> <td>99.96</td> <td>0.0</td> </tr> <tr> <td>lang. || es</td> <td>99.81</td> <td>99.69</td> <td>99.69</td> <td>0.0</td> </tr> <tr> <td>lang. || eu</td> <td>99.98</td> <td>99.96</td> <td>99.96</td> <td>0.0</td> </tr> <tr> <td>lang. || fr_sequoia</td> <td>99.49</td> <td>99.06</td> <td>99.06</td> <td>0.0</td> </tr> <tr> <td>lang. 
|| fr</td> <td>99.50</td> <td>98.87</td> <td>98.87</td> <td>0.0</td> </tr> <tr> <td>lang. || hr</td> <td>99.93</td> <td>99.93</td> <td>99.93</td> <td>0.0</td> </tr> <tr> <td>lang. || hu</td> <td>99.85</td> <td>99.82</td> <td>99.82</td> <td>0.0</td> </tr> <tr> <td>lang. || id</td> <td>100.00</td> <td>99.99</td> <td>99.99</td> <td>0.0</td> </tr> <tr> <td>lang. || ja</td> <td>98.59</td> <td>89.68</td> <td>89.68</td> <td>0.0</td> </tr> <tr> <td>lang. || nl_lassy</td> <td>99.99</td> <td>99.93</td> <td>99.93</td> <td>0.0</td> </tr> <tr> <td>lang. || no_bok.</td> <td>99.88</td> <td>99.75</td> <td>99.75</td> <td>0.0</td> </tr> <tr> <td>lang. || no_nyn.</td> <td>99.93</td> <td>99.85</td> <td>99.85</td> <td>0.0</td> </tr> <tr> <td>lang. || ru_syn.</td> <td>99.58</td> <td>99.57</td> <td>99.57</td> <td>0.0</td> </tr> <tr> <td>lang. || en_lines</td> <td>95.41</td> <td>95.41</td> <td>95.39</td> <td>-0.4</td> </tr> <tr> <td>lang. || ur</td> <td>92.30</td> <td>92.30</td> <td>92.21</td> <td>-1.2</td> </tr> <tr> <td>lang. || he</td> <td>83.24</td> <td>82.45</td> <td>82.16</td> <td>-1.7</td> </tr> <tr> <td>lang. || vi</td> <td>75.42</td> <td>73.56</td> <td>73.12</td> <td>-1.7</td> </tr> <tr> <td>lang. || gl_treegal</td> <td>91.65</td> <td>91.65</td> <td>91.40</td> <td>-3.0</td> </tr> <tr> <td>lang. || en</td> <td>94.82</td> <td>94.82</td> <td>94.66</td> <td>-3.1</td> </tr> <tr> <td>lang. || en_partut</td> <td>95.08</td> <td>95.08</td> <td>94.81</td> <td>-5.5</td> </tr> <tr> <td>lang. || pt_br</td> <td>98.22</td> <td>98.22</td> <td>98.11</td> <td>-6.2</td> </tr> <tr> <td>lang. || et</td> <td>95.05</td> <td>95.05</td> <td>94.72</td> <td>-6.7</td> </tr> <tr> <td>lang. || el</td> <td>97.76</td> <td>97.76</td> <td>97.53</td> <td>-10.3</td> </tr> <tr> <td>lang. || macro-avg</td> <td>93.18</td> <td>93.11</td> <td>93.40</td> <td>-</td> </tr> </tbody></table>
Table 2
table_2
P18-1246
6
acl2018
Table 2 contains the results of this task for the large treebanks. Because Dozat et al. (2017) won the challenge for the majority of the languages, we first compare our results with the performance of their system. Our model outperforms Dozat et al. (2017) in 32 of the 54 treebanks with 13 ties. These ties correspond mostly to languages where XPOS tagging anyhow obtains accuracies above 99%. Our model tends to produce better results, especially for morphologically rich languages (e.g. Slavic languages), whereas Dozat et al. (2017) showed higher performance in 10 languages in particular English, Greek, Brazilian Portuguese and Estonian.
[1, 1, 1, 2, 1]
['Table 2 contains the results of this task for the large treebanks.', 'Because Dozat et al. (2017) won the challenge for the majority of the languages, we first compare our results with the performance of their system.', 'Our model outperforms Dozat et al. (2017) in 32 of the 54 treebanks with 13 ties.', 'These ties correspond mostly to languages where XPOS tagging anyhow obtains accuracies above 99%.', 'Our model tends to produce better results, especially for morphologically rich languages (e.g. Slavic languages), whereas Dozat et al. (2017) showed higher performance in 10 languages in particular English, Greek, Brazilian Portuguese and Estonian.']
[None, ['DQM'], ['DQM', 'ours'], None, ['ours', 'sv', 'DQM', 'en', 'gl_treegal', 'pt_br', 'et']]
1
P18-1246table_3
Results on WSJ test set.
2
[['System', 'Sogaard (2011)'], ['System', 'Huang et al. (2015)'], ['System', 'Choi (2016)'], ['System', 'Andor et al. (2016)'], ['System', 'Dozat et al. (2017)'], ['System', 'ours']]
1
[['Accuracy']]
[['97.50'], ['97.55'], ['97.64'], ['97.44'], ['97.41'], ['97.96']]
column
['Accuracy']
['ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Sogaard (2011)</td> <td>97.50</td> </tr> <tr> <td>System || Huang et al. (2015)</td> <td>97.55</td> </tr> <tr> <td>System || Choi (2016)</td> <td>97.64</td> </tr> <tr> <td>System || Andor et al. (2016)</td> <td>97.44</td> </tr> <tr> <td>System || Dozat et al. (2017)</td> <td>97.41</td> </tr> <tr> <td>System || ours</td> <td>97.96</td> </tr> </tbody></table>
Table 3
table_3
P18-1246
7
acl2018
Table 3 shows the results of our model in comparison to the results reported in state-of-the-art literature. Our model significantly outperforms these systems, with an absolute difference of 0.32% in accuracy, which corresponds to a RRIE of 12%.
[1, 1]
['Table 3 shows the results of our model in comparison to the results reported in state-of-the-art literature.', 'Our model significantly outperforms these systems, with an absolute difference of 0.32% in accuracy, which corresponds to a RRIE of 12%.']
[None, ['ours', 'Accuracy', 'Sogaard (2011)', 'Huang et al. (2015)', 'Choi (2016)', 'Andor et al. (2016)', 'Dozat et al. (2017)']]
1
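The description for this record quotes an absolute accuracy gain together with a relative reduction in error (RRIE). As a hedged sketch only (reading RRIE as "relative reduction in error" and picking a reference system are our assumptions, and the helper name `rrie` is ours), the usual way to turn two accuracies into an RRIE looks like this in Python:

```python
def rrie(prev_accuracy: float, new_accuracy: float) -> float:
    """Relative reduction in error (in %), for accuracies given in %.

    The error rate is taken as (100 - accuracy); RRIE is the share of the
    previous system's error that the new system removes.
    """
    prev_error = 100.0 - prev_accuracy
    new_error = 100.0 - new_accuracy
    return 100.0 * (prev_error - new_error) / prev_error


# Illustrative only: measured against the strongest prior system in Table 3
# (Choi 2016, 97.64), the 0.32-point gain is roughly a 13.6% error reduction;
# the reference system behind the reported 12% figure is not identifiable here.
print(f"{rrie(97.64, 97.96):.1f}%")
```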
P18-1246table_5
Comparison of optimization methods: Separate optimization of the word, character and meta model is more accurate on average than full back-propagation using a single loss function.The results are statistically significant with two-tailed paired t-test for xpos with p<0.001 and for morphology with p <0.0001.
2
[['Optimization', 'separate'], ['Optimization', 'jointly']]
1
[['Avg. F1 Score morphology'], ['Avg. F1 Score xpos']]
[['94.57', '94.85'], ['94.15', '94.48']]
column
['Avg. F1 Score morphology', 'Avg. F1 Score xpos']
['separate']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. F1 Score morphology</th> <th>Avg. F1 Score xpos</th> </tr> </thead> <tbody> <tr> <td>Optimization || separate</td> <td>94.57</td> <td>94.85</td> </tr> <tr> <td>Optimization || jointly</td> <td>94.15</td> <td>94.48</td> </tr> </tbody></table>
Table 5
table_5
P18-1246
8
acl2018
Table 5 shows that separately optimized models are significantly more accurate on average than jointly optimized models.
[1]
['Table 5 shows that separately optimized models are significantly more accurate on average than jointly optimized models.']
[['separate', 'jointly']]
1
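The caption of this record reports significance via a two-tailed paired t-test over treebank-level scores. The sketch below is not the paper's actual test; it only illustrates such a test with made-up per-treebank F1 values, using SciPy's `ttest_rel` (two-sided by default):

```python
from scipy import stats

# Hypothetical per-treebank F1 scores for the two optimization regimes;
# a real run would pair one score per treebank from the actual evaluation.
separate = [94.1, 95.3, 93.8, 96.0, 94.9, 95.5]
jointly = [93.6, 95.0, 93.5, 95.4, 94.6, 95.1]

t_stat, p_value = stats.ttest_rel(separate, jointly)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```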
P18-1246table_8
F1 score of char models and their performance on the dev. set for selected languages with different gather strategies, concatenate to gi (Equation 1). DQM shows results for our reimplementation of Dozat et al. (2017) (cf. §3.2), where we feed in only the characters. The final column shows the number of xpos tags in the training set.
2
[['dev. set lang.', 'el'], ['dev. set lang.', 'grc'], ['dev. set lang.', 'la_ittb'], ['dev. set lang.', 'ru'], ['dev. set lang.', 'tr']]
1
[['Flast B1st'], ['F1st Blast'], ['Flast Blast'], ['F1st B1st'], ['DQM']]
[['96.6', '96.6', '96.2', '96.1', '95.9'], ['87.3', '87.1', '87.1', '86.8', '86.7'], ['91.1', '91.5', '91.9', '91.3', '91.0'], ['95.6', '95.4', '95.6', '95.3', '95.8'], ['93.5', '93.3', '93.2', '92.5', '93.9']]
column
['F1', 'F1', 'F1', 'F1', 'F1']
['DQM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Flast B1st</th> <th>F1st Blast</th> <th>Flast Blast</th> <th>F1st B1st</th> <th>DQM</th> <th>|xpos|</th> </tr> </thead> <tbody> <tr> <td>dev. set lang. || el</td> <td>96.6</td> <td>96.6</td> <td>96.2</td> <td>96.1</td> <td>95.9</td> <td>16</td> </tr> <tr> <td>dev. set lang. || grc</td> <td>87.3</td> <td>87.1</td> <td>87.1</td> <td>86.8</td> <td>86.7</td> <td>3130</td> </tr> <tr> <td>dev. set lang. || la_ittb</td> <td>91.1</td> <td>91.5</td> <td>91.9</td> <td>91.3</td> <td>91.0</td> <td>811</td> </tr> <tr> <td>dev. set lang. || ru</td> <td>95.6</td> <td>95.4</td> <td>95.6</td> <td>95.3</td> <td>95.8</td> <td>49</td> </tr> <tr> <td>dev. set lang. || tr</td> <td>93.5</td> <td>93.3</td> <td>93.2</td> <td>92.5</td> <td>93.9</td> <td>37</td> </tr> </tbody></table>
Table 8
table_8
P18-1246
9
acl2018
Table 8 reports, for a few morphologically rich languages, the part-of-speech tagging performance of different strategies to gather the characters when creating initial word encodings. The strategies were defined in Section 3.1. The table also contains a column with results for our reimplementation of Dozat et al. (2017). We removed, for all systems, the word model in order to assess each strategy in isolation. The performance is quite different per language. E.g., for Latin, the outputs of the forward and backward LSTMs of the last character scored highest.
[1, 2, 1, 2, 1, 1]
['Table 8 reports, for a few morphologically rich languages, the part-of-speech tagging performance of different strategies to gather the characters when creating initial word encodings.', 'The strategies were defined in Section 3.1.', 'The table also contains a column with results for our reimplementation of Dozat et al. (2017).', 'We removed, for all systems, the word model in order to assess each strategy in isolation.', 'The performance is quite different per language.', 'E.g., for Latin, the outputs of the forward and backward LSTMs of the last character scored highest.']
[['el', 'grc', 'la_ittb', 'ru', 'tr'], None, ['DQM'], None, ['el', 'grc', 'la_ittb', 'ru', 'tr'], ['la_ittb', 'Flast Blast']]
1
P18-1248table_4
Experiment results (UAS, %) on the UD 2.0 development set. Bold: best result per language.
2
[['Lan.', 'eu'], ['Lan.', 'ur'], ['Lan.', 'got'], ['Lan.', 'hu'], ['Lan.', 'cu'], ['Lan.', 'da'], ['Lan.', 'el'], ['Lan.', 'hi'], ['Lan.', 'de'], ['Lan.', 'ro'], ['-', 'Avg.']]
2
[['Global Models', 'MH 3'], ['Global Models', 'MST'], ['Global Models', 'MH 4-two'], ['Global Models', 'MH 4-hybrid'], ['Global Models', '1EC'], ['Greedy Models', 'MH 3'], ['Greedy Models', 'MH 4']]
[['82.07 ± 0.17', '83.61 ± 0.16', '82.94 ± 0.24', '84.13 ± 0.13', '84.09 ± 0.19', '81.27 ± 0.20', '81.71 ± 0.33'], ['86.89 ± 0.18', '86.78 ± 0.13', '86.84 ± 0.26', '87.06 ± 0.24', '87.11 ± 0.11', '86.40 ± 0.16', '86.05 ± 0.18'], ['83.72 ± 0.19', '84.74 ± 0.28', '83.85 ± 0.19', '84.59 ± 0.38', '84.77 ± 0.27', '82.28 ± 0.18', '81.40 ± 0.45'], ['83.05 ± 0.17', '82.81 ± 0.49', '83.69 ± 0.20', '84.59 ± 0.50', '83.48 ± 0.27', '81.75 ± 0.47', '80.75 ± 0.54'], ['86.70 ± 0.30', '88.02 ± 0.25', '87.57 ± 0.14', '88.09 ± 0.28', '88.27 ± 0.32', '86.05 ± 0.23', '86.01 ± 0.11'], ['85.09 ± 0.16', '84.68 ± 0.36', '85.45 ± 0.43', '85.77 ± 0.39', '85.77 ± 0.16', '83.90 ± 0.24', '83.59 ± 0.06'], ['87.82 ± 0.24', '87.27 ± 0.22', '87.77 ± 0.20', '87.83 ± 0.36', '87.95 ± 0.23', '87.14 ± 0.25', '86.95 ± 0.25'], ['93.75 ± 0.14', '93.91 ± 0.26', '93.99 ± 0.15', '94.27 ± 0.08', '94.24 ± 0.04', '93.44 ± 0.09', '93.02 ± 0.10'], ['86.46 ± 0.13', '86.34 ± 0.24', '86.53 ± 0.22', '86.89 ± 0.17', '86.95 ± 0.32', '84.99 ± 0.26', '85.27 ± 0.32'], ['89.34 ± 0.27', '88.79 ± 0.43', '89.25 ± 0.15', '89.53 ± 0.20', '89.52 ± 0.25', '88.76 ± 0.30', '87.97 ± 0.31'], ['86.49', '86.69', '86.79', '87.27', '87.21', '85.6', '85.27']]
column
['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS']
['MH 4-hybrid']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Global Models || MH 3</th> <th>Global Models || MST</th> <th>Global Models || MH 4-two</th> <th>Global Models || MH 4-hybrid</th> <th>Global Models || 1EC</th> <th>Greedy Models || MH 3</th> <th>Greedy Models || MH 4</th> </tr> </thead> <tbody> <tr> <td>Lan. || eu</td> <td>82.07 ± 0.17</td> <td>83.61 ± 0.16</td> <td>82.94 ± 0.24</td> <td>84.13 ± 0.13</td> <td>84.09 ± 0.19</td> <td>81.27 ± 0.20</td> <td>81.71 ± 0.33</td> </tr> <tr> <td>Lan. || ur</td> <td>86.89 ± 0.18</td> <td>86.78 ± 0.13</td> <td>86.84 ± 0.26</td> <td>87.06 ± 0.24</td> <td>87.11 ± 0.11</td> <td>86.40 ± 0.16</td> <td>86.05 ± 0.18</td> </tr> <tr> <td>Lan. || got</td> <td>83.72 ± 0.19</td> <td>84.74 ± 0.28</td> <td>83.85 ± 0.19</td> <td>84.59 ± 0.38</td> <td>84.77 ± 0.27</td> <td>82.28 ± 0.18</td> <td>81.40 ± 0.45</td> </tr> <tr> <td>Lan. || hu</td> <td>83.05 ± 0.17</td> <td>82.81 ± 0.49</td> <td>83.69 ± 0.20</td> <td>84.59 ± 0.50</td> <td>83.48 ± 0.27</td> <td>81.75 ± 0.47</td> <td>80.75 ± 0.54</td> </tr> <tr> <td>Lan. || cu</td> <td>86.70 ± 0.30</td> <td>88.02 ± 0.25</td> <td>87.57 ± 0.14</td> <td>88.09 ± 0.28</td> <td>88.27 ± 0.32</td> <td>86.05 ± 0.23</td> <td>86.01 ± 0.11</td> </tr> <tr> <td>Lan. || da</td> <td>85.09 ± 0.16</td> <td>84.68 ± 0.36</td> <td>85.45 ± 0.43</td> <td>85.77 ± 0.39</td> <td>85.77 ± 0.16</td> <td>83.90 ± 0.24</td> <td>83.59 ± 0.06</td> </tr> <tr> <td>Lan. || el</td> <td>87.82 ± 0.24</td> <td>87.27 ± 0.22</td> <td>87.77 ± 0.20</td> <td>87.83 ± 0.36</td> <td>87.95 ± 0.23</td> <td>87.14 ± 0.25</td> <td>86.95 ± 0.25</td> </tr> <tr> <td>Lan. || hi</td> <td>93.75 ± 0.14</td> <td>93.91 ± 0.26</td> <td>93.99 ± 0.15</td> <td>94.27 ± 0.08</td> <td>94.24 ± 0.04</td> <td>93.44 ± 0.09</td> <td>93.02 ± 0.10</td> </tr> <tr> <td>Lan. || de</td> <td>86.46 ± 0.13</td> <td>86.34 ± 0.24</td> <td>86.53 ± 0.22</td> <td>86.89 ± 0.17</td> <td>86.95 ± 0.32</td> <td>84.99 ± 0.26</td> <td>85.27 ± 0.32</td> </tr> <tr> <td>Lan. || ro</td> <td>89.34 ± 0.27</td> <td>88.79 ± 0.43</td> <td>89.25 ± 0.15</td> <td>89.53 ± 0.20</td> <td>89.52 ± 0.25</td> <td>88.76 ± 0.30</td> <td>87.97 ± 0.31</td> </tr> <tr> <td>- || Avg.</td> <td>86.49</td> <td>86.69</td> <td>86.79</td> <td>87.27</td> <td>87.21</td> <td>85.6</td> <td>85.27</td> </tr> </tbody></table>
Table 4
table_4
P18-1248
7
acl2018
Table 4 shows the development-set performance of our models as compared with baseline systems. MST considers non-projective structures, and thus enjoys a theoretical advantage over projective MH 3, especially for the most non-projective languages. However, it has a vastly larger output space, making the selection of correct structures difficult. Further, the scoring is edge-factored, and does not take any structural contexts into consideration. This trade-off leads to the similar performance of MST compared to MH 3. In comparison, both 1EC and MH 4 are mildly non-projective parsing algorithms, limiting the size of the output space. 1EC includes higher-order features that look at tree-structural contexts; MH 4 derives its features from parsing configurations of a transition system, hence leveraging contexts within transition sequences. These considerations explain their significant improvements over MST. We also observe that MH 4 recovers more short dependencies than 1EC, while 1EC is better at longer-distance ones. In comparison to MH 4-two, the richer feature representation of MH 4-hybrid helps in all our languages. Interestingly, MH 4 and MH 3 react differently to switching from global to greedy models. MH 4 covers more structures than MH 3, and is naturally more capable in the global case, even when the feature functions are the same (MH 4-two). However, its greedy version is outperformed by MH 3. We conjecture that this is because MH 4 explores only the same number of configurations as MH 3, despite the fact that introducing non-projectivity expands the search space dramatically.
[1, 1, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2]
['Table 4 shows the development-set performance of our models as compared with baseline systems.', 'MST considers non-projective structures, and thus enjoys a theoretical advantage over projective MH 3, especially for the most non-projective languages.', 'However, it has a vastly larger output space, making the selection of correct structures difficult.', 'Further, the scoring is edge-factored, and does not take any structural contexts into consideration.', 'This trade-off leads to the similar performance of MST compared to MH 3.', 'In comparison, both 1EC and MH 4 are mildly non-projective parsing algorithms, limiting the size of the output space.', '1EC includes higher-order features that look at tree-structural contexts; MH 4 derives its features from parsing configurations of a transition system, hence leveraging contexts within transition sequences.', 'These considerations explain their significant improvements over MST.', 'We also observe that MH 4 recovers more short dependencies than 1EC, while 1EC is better at longer-distance ones.', 'In comparison to MH 4-two, the richer feature representation of MH 4-hybrid helps in all our languages.', 'Interestingly, MH 4 and MH 3 react differently to switching from global to greedy models.', 'MH 4 covers more structures than MH 3, and is naturally more capable in the global case, even when the feature functions are the same (MH 4-two).', 'However, its greedy version is outperformed by MH 3.', 'We conjecture that this is because MH 4 explores only the same number of configurations as MH 3, despite the fact that introducing non-projectivity expands the search space dramatically.']
[['MH 4-two', 'MH 4-hybrid', 'MH 3', 'MST', '1EC'], ['MH 3', 'MST'], ['MST'], ['MST'], ['MH 3', 'MST'], ['MH 4-two', 'MH 4-hybrid', '1EC'], ['MH 4-two', 'MH 4-hybrid', '1EC'], ['MST'], ['MH 4-two', 'MH 4-hybrid', '1EC'], ['MH 4-two', 'MH 4-hybrid'], ['Greedy Models', 'MH 3', 'MH 4'], ['Global Models', 'MH 3', 'MH 4-two', 'MH 4-hybrid'], ['Greedy Models', 'MH 3', 'MH 4'], ['Greedy Models', 'MH 3', 'MH 4']]
1
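All scores in this record are UAS values. A minimal sketch of the metric (our own helper, not the shared-task evaluator): the share of tokens whose predicted head index matches the gold head.

```python
def uas(gold_heads, pred_heads):
    """Unlabeled attachment score in %, from per-token gold/predicted head
    indices (0 conventionally denoting the artificial root)."""
    assert len(gold_heads) == len(pred_heads) and gold_heads
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return 100.0 * correct / len(gold_heads)


# Toy 5-token sentence in which 4 of 5 heads are recovered correctly.
print(uas([2, 0, 2, 5, 3], [2, 0, 2, 5, 2]))  # 80.0
```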
P18-1250table_3
The overall performance of the two sequential models on development data.
1
[['Interspace'], ['Pre2'], ['Pre3'], ['Prepost']]
3
[['Linear CRF', 'Without POS', 'P'], ['Linear CRF', 'Without POS', 'R'], ['Linear CRF', 'Without POS', 'F1'], ['Linear CRF', 'With POS', 'P'], ['Linear CRF', 'With POS', 'R'], ['Linear CRF', 'With POS', 'F1'], ['LSTM-CRF', 'Without POS', 'P'], ['LSTM-CRF', 'Without POS', 'R'], ['LSTM-CRF', 'Without POS', 'F1'], ['LSTM-CRF', 'With POS', 'P'], ['LSTM-CRF', 'With POS', 'R'], ['LSTM-CRF', 'With POS', 'F1']]
[['74.6', '20.6', '32.2', '71.2', '30.3', '42.5', '67.9', '59.8', '63.6', '73.0', '61.6', '66.8'], ['72.4', '30.1', '42.5', '72.8', '32.4', '44.8', '71.1', '58.3', '64.1', '74.8', '57.4', '65.0'], ['73.1', '30.2', '42.8', '73.0', '32.5', '44.9', '71.1', '58.5', '64.2', '73.8', '57.0', '64.3'], ['70.9', '32.9', '45.0', '74.4', '30.3', '43.1', '71.0', '57.6', '63.6', '72.9', '58.6', '65.0']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['LSTM-CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Linear CRF || Without POS || P</th> <th>Linear CRF || Without POS || R</th> <th>Linear CRF || Without POS || F1</th> <th>Linear CRF || With POS || P</th> <th>Linear CRF || With POS || R</th> <th>Linear CRF || With POS || F1</th> <th>LSTM-CRF || Without POS || P</th> <th>LSTM-CRF || Without POS || R</th> <th>LSTM-CRF || Without POS || F1</th> <th>LSTM-CRF || With POS || P</th> <th>LSTM-CRF || With POS || R</th> <th>LSTM-CRF || With POS || F1</th> </tr> </thead> <tbody> <tr> <td>Interspace</td> <td>74.6</td> <td>20.6</td> <td>32.2</td> <td>71.2</td> <td>30.3</td> <td>42.5</td> <td>67.9</td> <td>59.8</td> <td>63.6</td> <td>73.0</td> <td>61.6</td> <td>66.8</td> </tr> <tr> <td>Pre2</td> <td>72.4</td> <td>30.1</td> <td>42.5</td> <td>72.8</td> <td>32.4</td> <td>44.8</td> <td>71.1</td> <td>58.3</td> <td>64.1</td> <td>74.8</td> <td>57.4</td> <td>65.0</td> </tr> <tr> <td>Pre3</td> <td>73.1</td> <td>30.2</td> <td>42.8</td> <td>73.0</td> <td>32.5</td> <td>44.9</td> <td>71.1</td> <td>58.5</td> <td>64.2</td> <td>73.8</td> <td>57.0</td> <td>64.3</td> </tr> <tr> <td>Prepost</td> <td>70.9</td> <td>32.9</td> <td>45.0</td> <td>74.4</td> <td>30.3</td> <td>43.1</td> <td>71.0</td> <td>57.6</td> <td>63.6</td> <td>72.9</td> <td>58.6</td> <td>65.0</td> </tr> </tbody></table>
Table 3
table_3
P18-1250
7
acl2018
Table 3 shows overall performances of the two sequential models on development data. From the results, we can clearly see that the introduction of neural structure pushes up the scores exceptionally. The reason is that our LSTM-CRF model not only benefits from the linear weighted combination of local characteristics like ordinary CRF models, but also has the ability to integrate more contextual information, especially long-distance information. It confirms LSTM-based models' great superiority in sequence labeling problems. Furthermore, we find that the difference among the four kinds of representations is not so obvious. The best performing one with the LSTM-CRF model is Interspace, but the advantage is narrow. Pre3 uses a larger window length to incorporate richer contextual tokens, but at the same time, the search space for decoding grows larger. It explains that the performance drops slightly with increasing window length. In general, experiments with POS tags show higher scores as more syntactic clues are incorporated.
[1, 1, 1, 1, 1, 1, 1, 2, 1]
['Table 3 shows overall performances of the two sequential models on development data.', 'From the results, we can clearly see that the introduction of neural structure pushes up the scores exceptionally.', 'The reason is that our LSTM-CRF model not only benefits from the linear weighted combination of local characteristics like ordinary CRF models, but also has the ability to integrate more contextual information, especially long-distance information.', "It confirms LSTM-based models' great superiority in sequence labeling problems.", 'Furthermore, we find that the difference among the four kinds of representations is not so obvious.', 'The best performing one with the LSTM-CRF model is Interspace, but the advantage is narrow.', 'Pre3 uses a larger window length to incorporate richer contextual tokens, but at the same time, the search space for decoding grows larger.', 'It explains that the performance drops slightly with increasing window length.', 'In general, experiments with POS tags show higher scores as more syntactic clues are incorporated.']
[['Linear CRF', 'LSTM-CRF'], ['LSTM-CRF'], ['LSTM-CRF'], ['LSTM-CRF'], ['Interspace', 'Pre2', 'Pre3', 'Prepost'], ['LSTM-CRF', 'Interspace'], ['Pre3'], None, ['With POS', 'Without POS']]
1
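The P/R/F1 columns in this record are ordinary precision, recall and F1 over predicted empty-category items. A minimal sketch (ours, not the paper's scorer), treating gold and predicted items as sets:

```python
def prf1(gold: set, pred: set):
    """Precision, recall and F1 (in %) of a predicted set against a gold set."""
    tp = len(gold & pred)
    p = 100.0 * tp / len(pred) if pred else 0.0
    r = 100.0 * tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1


# Toy example: 3 of 4 predictions are correct and cover 3 of 5 gold items.
print(prf1({"a", "b", "c", "d", "e"}, {"a", "b", "c", "x"}))  # (75.0, 60.0, ~66.7)
```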
P18-1250table_6
The performances of the firstand second-order in-parsing models on test data.
2
[['Type', 'pro'], ['Type', 'PRO'], ['Type', 'OP'], ['Type', 'T'], ['Type', 'RNR'], ['Type', '*'], ['Type', 'Overall']]
3
[['-', 'First-order', 'P'], ['-', 'First-order', 'R'], ['-', 'First-order', 'F1'], ['-', 'Second-order', 'P'], ['-', 'Second-order', 'R'], ['-', 'Second-order', 'F1'], ['Evaluation with Head', 'First-order', 'P'], ['Evaluation with Head', 'First-order', 'R'], ['Evaluation with Head', 'First-order', 'F1'], ['Evaluation with Head', 'Second-order', 'P'], ['Evaluation with Head', 'Second-order', 'R'], ['Evaluation with Head', 'Second-order', 'F1']]
[['52.5', '16.8', '25.5', '54.4', '19.7', '28.9', '50.5', '16.2', '24.5', '52.6', '19.1', '28'], ['59.7', '47.3', '52.8', '60.6', '58', '59.3', '58.4', '46.3', '51.7', '57.8', '55.3', '56.6'], ['74.5', '55.8', '63.8', '79.6', '67.8', '73.2', '72.2', '54.1', '61.8', '78.6', '67', '72.3'], ['70.6', '51.7', '59.7', '77.3', '62.8', '69.3', '68.5', '50.2', '57.9', '75.4', '61.2', '67.6'], ['70.8', '50', '58.6', '77.8', '61.8', '68.9', '70.8', '50', '58.6', '77.8', '61.8', '68.9'], ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0'], ['68.2', '45.7', '54.7', '72.6', '55.5', '62.9', '66.3', '44.4', '53.2', '70.9', '54.1', '61.4']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['Second-order']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || First-order || P</th> <th>- || First-order || R</th> <th>- || First-order || F1</th> <th>- || Second-order || P</th> <th>- || Second-order || R</th> <th>- || Second-order || F1</th> <th>Evaluation with Head || First-order || P</th> <th>Evaluation with Head || First-order || R</th> <th>Evaluation with Head || First-order || F1</th> <th>Evaluation with Head || Second-order || P</th> <th>Evaluation with Head || Second-order || R</th> <th>Evaluation with Head || Second-order || F1</th> </tr> </thead> <tbody> <tr> <td>Type || pro</td> <td>52.5</td> <td>16.8</td> <td>25.5</td> <td>54.4</td> <td>19.7</td> <td>28.9</td> <td>50.5</td> <td>16.2</td> <td>24.5</td> <td>52.6</td> <td>19.1</td> <td>28</td> </tr> <tr> <td>Type || PRO</td> <td>59.7</td> <td>47.3</td> <td>52.8</td> <td>60.6</td> <td>58</td> <td>59.3</td> <td>58.4</td> <td>46.3</td> <td>51.7</td> <td>57.8</td> <td>55.3</td> <td>56.6</td> </tr> <tr> <td>Type || OP</td> <td>74.5</td> <td>55.8</td> <td>63.8</td> <td>79.6</td> <td>67.8</td> <td>73.2</td> <td>72.2</td> <td>54.1</td> <td>61.8</td> <td>78.6</td> <td>67</td> <td>72.3</td> </tr> <tr> <td>Type || T</td> <td>70.6</td> <td>51.7</td> <td>59.7</td> <td>77.3</td> <td>62.8</td> <td>69.3</td> <td>68.5</td> <td>50.2</td> <td>57.9</td> <td>75.4</td> <td>61.2</td> <td>67.6</td> </tr> <tr> <td>Type || RNR</td> <td>70.8</td> <td>50</td> <td>58.6</td> <td>77.8</td> <td>61.8</td> <td>68.9</td> <td>70.8</td> <td>50</td> <td>58.6</td> <td>77.8</td> <td>61.8</td> <td>68.9</td> </tr> <tr> <td>Type || *</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>Type || Overall</td> <td>68.2</td> <td>45.7</td> <td>54.7</td> <td>72.6</td> <td>55.5</td> <td>62.9</td> <td>66.3</td> <td>44.4</td> <td>53.2</td> <td>70.9</td> <td>54.1</td> <td>61.4</td> </tr> </tbody></table>
Table 6
table_6
P18-1250
7
acl2018
Table 6 presents detailed results of the in-parsing models on test data. Compared with the state-of-the-art, the first-order model performs a little worse while the second-order model achieves a remarkable score. The first-order parsing model only constrains the dependencies of both the covert and overt tokens to make up a tree. Due to the loose scoring constraint of the first-order model, the prediction of empty nodes is affected little from the prediction of dependencies of overt words. The four bold numbers in the table intuitively elicit the conclusion that integrating an empty edge and its sibling overt edges is necessary to boost the performance. It makes sense because empty categories are highly related to syntactic analysis. When we conduct ECD and dependency parsing simultaneously, we can leverage more hierarchical contextual information. Comparing results regarding EC types, we can find that OP and T benefit most from the parsing information, the F1 score increasing by about ten points, more markedly than other types.
[1, 1, 2, 2, 1, 2, 2, 1]
['Table 6 presents detailed results of the in-parsing models on test data.', 'Compared with the state-of-the-art, the first-order model performs a little worse while the second-order model achieves a remarkable score.', 'The first-order parsing model only constrains the dependencies of both the covert and overt tokens to make up a tree.', 'Due to the loose scoring constraint of the first-order model, the prediction of empty nodes is affected little from the prediction of dependencies of overt words.', 'The four bold numbers in the table intuitively elicit the conclusion that integrating an empty edge and its sibling overt edges is necessary to boost the performance.', 'It makes sense because empty categories are highly related to syntactic analysis.', 'When we conduct ECD and dependency parsing simultaneously, we can leverage more hierarchical contextual information.', 'Comparing results regarding EC types, we can find that OP and T benefit most from the parsing information, the F1 score increasing by about ten points, more markedly than other types.']
[None, ['First-order', 'Second-order'], ['First-order'], ['First-order'], ['Overall'], ['pro', 'OP', 'T', 'RNR', '*'], None, ['OP', 'T', 'F1']]
1
P18-1252table_5
Parsing accuracy on test data. LAS difference between any two systems is statistically significant (p < 0:005) according to Dan Bikel’s randomized parsing evaluation comparer for significance test Noreen (1989).
3
[['Single', 'Training data', 'train'], ['Single (hetero)', 'Training data', 'train-HIT'], ['Multi-task', 'Training data', 'train & train-HIT'], ['Single (large)', 'Training data', 'converted train-HIT']]
1
[['UAS'], ['LAS']]
[['75.99', '70.95'], ['76.20', '68.43'], ['79.29', '74.51'], ['80.45', '75.83']]
column
['UAS', 'LAS']
['converted train-HIT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Single || Training data || train</td> <td>75.99</td> <td>70.95</td> </tr> <tr> <td>Single (hetero) || Training data || train-HIT</td> <td>76.20</td> <td>68.43</td> </tr> <tr> <td>Multi-task || Training data || train &amp; train-HIT</td> <td>79.29</td> <td>74.51</td> </tr> <tr> <td>Single (large) || Training data || converted train-HIT</td> <td>80.45</td> <td>75.83</td> </tr> </tbody></table>
Table 5
table_5
P18-1252
9
acl2018
Table 5 shows the empirical results. Please kindly note that the parsing accuracy looks very low, because the test data is partially annotated and only the roughly 30% most uncertain (difficult) words are manually labeled with their heads according to our guideline, as discussed in Section 2.1. The first row, "single", is the baseline target-side parser trained on the train data. The second row, "single (hetero)", refers to the source-side heterogeneous parser trained on train-HIT and evaluated on the target-side test data. Since the similarity between the two guidelines is high, as discussed in Section 2.2, the source-side parser achieves an even higher UAS, by 0.21 (76.20 − 75.99), than the baseline target-side parser trained on the small-scale train data. The LAS is obtained by mapping the HIT-CDT labels to ours (Section 2.2). In the third row, "multi-task" is the target-side parser trained on train & train-HIT with the multi-task learning approach. It significantly outperforms the baseline parser by 4.30 (74.51 − 70.21) in LAS. This shows that the multi-task learning approach can effectively utilize the large-scale train-HIT to help the target-side parsing. In the fourth row, "single (large)" is the basic parser trained on the large-scale converted train-HIT (homogeneous). We employ the treeLSTM approach to convert all sentences in train-HIT into our guideline. We can see that the single parser trained on the converted data significantly outperforms the parser in the multi-task learning approach by 1.32 (75.83 − 74.51) in LAS. In summary, we can conclude that treebank conversion is superior to multi-task learning in multi-treebank exploitation for its simplicity and better performance.
[1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 2]
['Table 5 shows the empirical results.', 'Please kindly note that the parsing accuracy looks very low, because the test data is partially annotated and only the roughly 30% most uncertain (difficult) words are manually labeled with their heads according to our guideline, as discussed in Section 2.1.', 'The first row, "single", is the baseline target-side parser trained on the train data.', 'The second row, "single (hetero)", refers to the source-side heterogeneous parser trained on train-HIT and evaluated on the target-side test data.', 'Since the similarity between the two guidelines is high, as discussed in Section 2.2, the source-side parser achieves an even higher UAS, by 0.21 (76.20 − 75.99), than the baseline target-side parser trained on the small-scale train data.', 'The LAS is obtained by mapping the HIT-CDT labels to ours (Section 2.2).', 'In the third row, "multi-task" is the target-side parser trained on train & train-HIT with the multi-task learning approach.', 'It significantly outperforms the baseline parser by 4.30 (74.51 − 70.21) in LAS.', 'This shows that the multi-task learning approach can effectively utilize the large-scale train-HIT to help the target-side parsing.', 'In the fourth row, "single (large)" is the basic parser trained on the large-scale converted train-HIT (homogeneous).', 'We employ the treeLSTM approach to convert all sentences in train-HIT into our guideline.', 'We can see that the single parser trained on the converted data significantly outperforms the parser in the multi-task learning approach by 1.32 (75.83 − 74.51) in LAS.', 'In summary, we can conclude that treebank conversion is superior to multi-task learning in multi-treebank exploitation for its simplicity and better performance.']
[None, None, ['Single', 'train'], ['Single (hetero)', 'train-HIT'], ['UAS', 'Single', 'Single (hetero)'], ['LAS'], ['Multi-task', 'train & train-HIT'], ['Multi-task', 'LAS', 'Single'], ['Multi-task', 'train & train-HIT'], ['Single (large)', 'converted train-HIT'], ['Single (large)', 'converted train-HIT'], ['Single (large)', 'Multi-task', 'LAS'], None]
1
P18-1255table_2
Model performances on 500 samples when evaluated against the union of the “best” annotations (B1 ∪ B2), intersection of the “valid” annotations (V 1 ∩ V 2) and the original question paired with the post in the dataset. The difference between the bold and the non-bold numbers is statistically significant with p < 0.05 as calculated using bootstrap test. p@k is the precision of the k questions ranked highest by the model and MAP is the mean average precision of the ranking predicted by the model.
2
[['Model', 'Random'], ['Model', 'Bag-of-ngrams'], ['Model', 'Community QA'], ['Model', 'Neural (p q)'], ['Model', 'Neural (p a)'], ['Model', 'Neural (p q a)'], ['Model', 'EVPI']]
2
[['B1 ∪ B2', 'p@1'], ['B1 ∪ B2', 'p@3'], ['B1 ∪ B2', 'p@5'], ['B1 ∪ B2', 'MAP'], ['V1 ∩ V2', 'p@1'], ['V1 ∩ V2', 'p@3'], ['V1 ∩ V2', 'p@5'], ['V1 ∩ V2', 'MAP'], ['Original', 'p@1']]
[['17.5', '17.5', '17.5', '35.2', '26.4', '26.4', '26.4', '42.1', '10.0'], ['19.4', '19.4', '18.7', '34.4', '25.6', '27.6', '27.5', '42.7', '10.7'], ['23.1', '21.2', '20.0', '40.2', '33.6', '30.8', '29.1', '47.0', '18.5'], ['21.9', '20.9', '19.5', '39.2', '31.6', '30.0', '28.9', '45.5', '15.4'], ['24.1', '23.5', '20.6', '41.4', '32.3', '31.5', '29.0', '46.5', '18.8'], ['25.2', '22.7', '21.3', '42.5', '34.4', '31.8', '30.1', '47.7', '20.5'], ['27.7', '23.4', '21.5', '43.6', '36.1', '32.2', '30.5', '49.2', '21.4']]
column
['p@1', 'p@3', 'p@5', 'MAP', 'p@1', 'p@3', 'p@5', 'MAP', 'p@1']
['EVPI']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B1 ∪ B2 || p@1</th> <th>B1 ∪ B2 || p@3</th> <th>B1 ∪ B2 || p@5</th> <th>B1 ∪ B2 || MAP</th> <th>V1 ∩ V2 || p@1</th> <th>V1 ∩ V2 || p@3</th> <th>V1 ∩ V2 || p@5</th> <th>V1 ∩ V2 || MAP</th> <th>Original || p@1</th> </tr> </thead> <tbody> <tr> <td>Model || Random</td> <td>17.5</td> <td>17.5</td> <td>17.5</td> <td>35.2</td> <td>26.4</td> <td>26.4</td> <td>26.4</td> <td>42.1</td> <td>10.0</td> </tr> <tr> <td>Model || Bag-of-ngrams</td> <td>19.4</td> <td>19.4</td> <td>18.7</td> <td>34.4</td> <td>25.6</td> <td>27.6</td> <td>27.5</td> <td>42.7</td> <td>10.7</td> </tr> <tr> <td>Model || Community QA</td> <td>23.1</td> <td>21.2</td> <td>20.0</td> <td>40.2</td> <td>33.6</td> <td>30.8</td> <td>29.1</td> <td>47.0</td> <td>18.5</td> </tr> <tr> <td>Model || Neural (p q)</td> <td>21.9</td> <td>20.9</td> <td>19.5</td> <td>39.2</td> <td>31.6</td> <td>30.0</td> <td>28.9</td> <td>45.5</td> <td>15.4</td> </tr> <tr> <td>Model || Neural (p a)</td> <td>24.1</td> <td>23.5</td> <td>20.6</td> <td>41.4</td> <td>32.3</td> <td>31.5</td> <td>29.0</td> <td>46.5</td> <td>18.8</td> </tr> <tr> <td>Model || Neural (p q a)</td> <td>25.2</td> <td>22.7</td> <td>21.3</td> <td>42.5</td> <td>34.4</td> <td>31.8</td> <td>30.1</td> <td>47.7</td> <td>20.5</td> </tr> <tr> <td>Model || EVPI</td> <td>27.7</td> <td>23.4</td> <td>21.5</td> <td>43.6</td> <td>36.1</td> <td>32.2</td> <td>30.5</td> <td>49.2</td> <td>21.4</td> </tr> </tbody></table>
Table 2
table_2
P18-1255
7
acl2018
We first describe the results of the different models when evaluated against the expert annotations we collect on 500 samples (Section 4). Since the annotators had a low agreement on a single best, we evaluate against the union of the best annotations (B1 ∪ B2 in Table 2) and against the intersection of the valid annotations (V1 ∩ V2 in Table 2). Among non-neural baselines, we find that the bag-of-ngrams baseline performs slightly better than random but worse than all the other models. The Community QA baseline, on the other hand, performs better than the neural baseline (Neural (p, q)), both of which are trained without using the answers. The neural baselines with answers (Neural(p, q, a) and Neural(p, a)) outperform the neural baseline without answers (Neural(p, q)), showing that the answer helps in selecting the right question. More importantly, EVPI outperforms the Neural (p, q, a) baseline across most metrics. Both models use the same information regarding the true question and answer and are trained using the same number of model parameters. However, the EVPI model, unlike the neural baseline, additionally makes use of alternate question and answer candidates to compute its loss function. This shows that when the candidate set consists of questions similar to the original question, summing over their utilities gives us a boost. 5.2.2 Evaluating against the original question. The last column in Table 2 shows the results when evaluated against the original question paired with the post. The bag-of-ngrams baseline performs similarly to random, unlike when evaluated against human judgments. The Community QA baseline again outperforms the Neural(p, q) model and comes very close to the Neural (p, a) model. As before, the neural baselines that make use of the answer outperform the one that does not use the answer, and the EVPI model performs significantly better than Neural(p, q, a).
[1, 1, 1, 1, 1, 1, 2, 2, 2, 0, 1, 1, 1, 1]
['We first describe the results of the different models when evaluated against the expert annotations we collect on 500 samples (Section 4).', 'Since the annotators had a low agreement on a single best, we evaluate against the union of the best annotations (B1 ∪ B2 in Table 2) and against the intersection of the valid annotations (V1 ∩ V2 in Table 2).', 'Among non-neural baselines, we find that the bag-of-ngrams baseline performs slightly better than random but worse than all the other models.', 'The Community QA baseline, on the other hand, performs better than the neural baseline (Neural (p, q)), both of which are trained without using the answers.', 'The neural baselines with answers (Neural(p, q, a) and Neural(p, a)) outperform the neural baseline without answers (Neural(p, q)), showing that the answer helps in selecting the right question.', 'More importantly, EVPI outperforms the Neural (p, q, a) baseline across most metrics.', 'Both models use the same information regarding the true question and answer and are trained using the same number of model parameters.', 'However, the EVPI model, unlike the neural baseline, additionally makes use of alternate question and answer candidates to compute its loss function.', 'This shows that when the candidate set consists of questions similar to the original question, summing over their utilities gives us a boost.', '5.2.2 Evaluating against the original question.', 'The last column in Table 2 shows the results when evaluated against the original question paired with the post.', 'The bag-of-ngrams baseline performs similarly to random, unlike when evaluated against human judgments.', 'The Community QA baseline again outperforms the Neural(p, q) model and comes very close to the Neural (p, a) model.', 'As before, the neural baselines that make use of the answer outperform the one that does not use the answer, and the EVPI model performs significantly better than Neural(p, q, a).']
[None, ['B1 ∪ B2', 'V1 ∩ V2'], ['Random', 'Bag-of-ngrams', 'Community QA'], ['Community QA', 'Neural (p q)'], ['Neural (p a)', 'Neural (p q a)', 'Neural (p q)'], ['EVPI', 'Neural (p q a)'], ['EVPI', 'Neural (p q a)'], ['EVPI'], None, None, ['p@1'], ['Bag-of-ngrams', 'Random'], ['Community QA', 'Neural (p q)', 'Neural (p a)'], ['Neural (p a)', 'Neural (p q a)', 'Neural (p q)', 'EVPI']]
1
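The caption of this record defines p@k and MAP informally. As a minimal sketch (helper names are ours), both can be computed from a ranked list of binary relevance labels; MAP is then the mean of the per-post average precision.

```python
def precision_at_k(relevance, k):
    """Precision of the top-k items; relevance is a 0/1 list in ranked order."""
    return sum(relevance[:k]) / k


def average_precision(relevance):
    """Average precision of one ranking: mean precision@i over relevant positions i."""
    hits, precisions = 0, []
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0


# Toy ranking where the 1st and 3rd candidate questions are judged relevant.
ranking = [1, 0, 1, 0, 0]
print(precision_at_k(ranking, 3))  # 0.666...
print(average_precision(ranking))  # (1/1 + 2/3) / 2 = 0.833...
```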
P18-2002table_1
Comparison of validation and test set perplexity for r-RNTNs with f mapping (K = 100 for PTB, K = 376 for text8) versus s-RNNs and m-RNN. r-RNTNs with the same H as corresponding s-RNNs significantly increase model capacity and performance with no computational cost. The RNTN was not run on text8 due to the number of parameters required.
4
[['Method', 's-RNN', 'H', '100'], ['Method', 'r-RNTN f', 'H', '100'], ['Method', 'RNTN', 'H', '100'], ['Method', 'm-RNN', 'H', '100'], ['Method', 's-RNN', 'H', '150'], ['Method', 'r-RNTN f', 'H', '150'], ['Method', 'GRU', 'H', '244'], ['Method', 'GRU', 'H', '650'], ['Method', 'r-GRU f', 'H', '244'], ['Method', 'LSTM', 'H', '254'], ['Method', 'LSTM', 'H', '650'], ['Method', 'r-LSTM f', 'H', '254']]
2
[['PTB', '# Params'], ['PTB', 'Test PPL'], ['text8', '# Params'], ['text8', 'Test PPL']]
[['2M', '146.7', '7.6M', '236.4'], ['3M', '131.2', '11.4M', '190.1'], ['103M', '128.8', '388M', '-'], ['3M', '164.2', '11.4M', '895'], ['3M', '133.7', '11.4M', '207.9'], ['5.3M', '126.4', '19.8M', '171.7'], ['9.6M', '92.2', '-', '-'], ['15.5M', '90.3', '-', '-'], ['15.5M', '87.5', '-', '-'], ['10M', '88.8', '-', '-'], ['16.4M', '84.6', '-', '-'], ['16.4M', '87.1', '-', '-']]
column
['# Params', 'Test PPL', '# Params', 'Test PPL']
['r-RNTN f']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PTB || # Params</th> <th>PTB || Test PPL</th> <th>text8 || # Params</th> <th>text8 || Test PPL</th> </tr> </thead> <tbody> <tr> <td>Method || s-RNN || H || 100</td> <td>2M</td> <td>146.7</td> <td>7.6M</td> <td>236.4</td> </tr> <tr> <td>Method || r-RNTN f || H || 100</td> <td>3M</td> <td>131.2</td> <td>11.4M</td> <td>190.1</td> </tr> <tr> <td>Method || RNTN || H || 100</td> <td>103M</td> <td>128.8</td> <td>388M</td> <td>-</td> </tr> <tr> <td>Method || m-RNN || H || 100</td> <td>3M</td> <td>164.2</td> <td>11.4M</td> <td>895</td> </tr> <tr> <td>Method || s-RNN || H || 150</td> <td>3M</td> <td>133.7</td> <td>11.4M</td> <td>207.9</td> </tr> <tr> <td>Method || r-RNTN f || H || 150</td> <td>5.3M</td> <td>126.4</td> <td>19.8M</td> <td>171.7</td> </tr> <tr> <td>Method || GRU || H || 244</td> <td>9.6M</td> <td>92.2</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || GRU || H || 650</td> <td>15.5M</td> <td>90.3</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || r-GRU f || H || 244</td> <td>15.5M</td> <td>87.5</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || LSTM || H || 254</td> <td>10M</td> <td>88.8</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || LSTM || H || 650</td> <td>16.4M</td> <td>84.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || r-LSTM f || H || 254</td> <td>16.4M</td> <td>87.1</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 1
table_1
P18-2002
4
acl2018
As shown in Table 1, with an equal number of parameters, the r-RNTN with f mapping outperforms the s-RNN with a bigger hidden layer. It appears that heuristically allocating increased model capacity as done by the f-based r-RNTN is a better way to increase performance than simply increasing hidden layer size, which also incurs a computational penalty. Although m-RNNs have been successfully employed in character-level language models with small vocabularies, they are seldom used in word-level models. The poor results shown in Table 1 could explain why. For fixed hidden layer sizes, r-RNTNs yield significant improvements over s-RNNs, GRUs, and LSTMs, confirming the advantages of distinct representations.
[1, 2, 1, 1, 1]
['As shown in Table 1, with an equal number of parameters, the r-RNTN with f mapping outperforms the s-RNN with a bigger hidden layer.', 'It appears that heuristically allocating increased model capacity as done by the f-based r-RNTN is a better way to increase performance than simply increasing hidden layer size, which also incurs a computational penalty.', 'Although m-RNNs have been successfully employed in character-level language models with small vocabularies, they are seldom used in word-level models.', 'The poor results shown in Table 1 could explain why.', 'For fixed hidden layer sizes, r-RNTNs yield significant improvements over s-RNNs, GRUs, and LSTMs, confirming the advantages of distinct representations.']
[['# Params', 'r-RNTN f', 's-RNN'], ['r-RNTN f'], ['m-RNN'], None, ['r-RNTN f', 's-RNN', 'GRU', 'LSTM']]
1
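The test metric in this record is word-level perplexity. As a minimal sketch (ours), perplexity is the exponential of the average negative log-likelihood the model assigns to the test tokens.

```python
import math


def perplexity(token_log_probs):
    """Perplexity from natural-log probabilities assigned to each test token."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)


# Toy example: assigning probability 1/100 to every token gives PPL ~ 100.
print(perplexity([math.log(0.01)] * 3))  # ~100.0
```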
P18-2005table_1
POS prediction accuracy [%] using the Trustpilot test set, stratified by SEX and AGE (higher is better), and the absolute difference (∆) within each bias group (smaller is better). The best result is indicated in bold.
1
[['BASELINE'], ['ADV']]
2
[['SEX', 'F'], ['SEX', 'M'], ['SEX', 'delta'], ['AGE', 'O45'], ['AGE', 'U35'], ['AGE', 'delta']]
[['90.9', '91.1', '0.2', '91.4', '89.9', '1.5'], ['92.2', '92.1', '0.1', '92.3', '92.0', '0.3']]
column
['F', 'M', 'delta', 'O45', 'U35', 'delta']
['ADV']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SEX || F</th> <th>SEX || M</th> <th>SEX || delta</th> <th>AGE || O45</th> <th>AGE || U35</th> <th>AGE || delta</th> </tr> </thead> <tbody> <tr> <td>BASELINE</td> <td>90.9</td> <td>91.1</td> <td>0.2</td> <td>91.4</td> <td>89.9</td> <td>1.5</td> </tr> <tr> <td>ADV</td> <td>92.2</td> <td>92.1</td> <td>0.1</td> <td>92.3</td> <td>92.0</td> <td>0.3</td> </tr> </tbody></table>
Table 1
table_1
P18-2005
5
acl2018
Table 1 shows the results for the TrustPilot dataset. Observe that the disparity in the BASELINE tagger accuracy (the delta column) for AGE is larger than for SEX, consistent with the results of Hovy and Sogaard (2015). Our ADV method leads to a sizeable reduction in the difference in accuracy across both SEX and AGE, showing our model captures the bias signal less and is more robust for the tagging task. Moreover, our method leads to a substantial improvement in accuracy across all the test cases. We speculate that this is a consequence of the regularising effect of the adversarial loss, leading to a better characterisation of the tagging problem.
[1, 1, 1, 1, 2]
['Table 1 shows the results for the TrustPilot dataset.', 'Observe that the disparity in the BASELINE tagger accuracy (the delta column) for AGE is larger than for SEX, consistent with the results of Hovy and Sogaard (2015).', 'Our ADV method leads to a sizeable reduction in the difference in accuracy across both SEX and AGE, showing our model captures the bias signal less and is more robust for the tagging task.', 'Moreover, our method leads to a substantial improvement in accuracy across all the test cases.', 'We speculate that this is a consequence of the regularising effect of the adversarial loss, leading to a better characterisation of the tagging problem.']
[None, ['BASELINE', 'delta', 'AGE', 'SEX'], ['ADV', 'SEX', 'AGE', 'delta'], ['ADV', 'F', 'M', 'O45', 'U35', 'SEX', 'AGE'], None]
1
P18-2005table_2
POS predictive accuracy [%] over the AAVE dataset, stratified over the three domains, alongside the macro-average accuracy. The best result is indicated in bold.
1
[['BASELINE'], ['ADV']]
1
[['LYRICS'], ['SUBTITLES'], ['TWEETS'], ['Average']]
[['73.7', '81.4', '59.9', '71.7'], ['80.5', '85.8', '65.4', '77.0']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['ADV']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LYRICS</th> <th>SUBTITLES</th> <th>TWEETS</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>BASELINE</td> <td>73.7</td> <td>81.4</td> <td>59.9</td> <td>71.7</td> </tr> <tr> <td>ADV</td> <td>80.5</td> <td>85.8</td> <td>65.4</td> <td>77.0</td> </tr> </tbody></table>
Table 2
table_2
P18-2005
5
acl2018
Table 2 shows the results for the AAVE held-out domain. Note that we do not have annotations for SEX or AGE, and thus we only report the overall accuracy on this dataset. Note that ADV also significantly outperforms the BASELINE across the three held-out domains.
[1, 2, 1]
['Table 2 shows the results for the AAVE held-out domain.', 'Note that we do not have annotations for SEX or AGE, and thus we only report the overall accuracy on this dataset.', 'Note that ADV also significantly outperforms the BASELINE across the three held-out domains.']
[None, None, ['ADV', 'LYRICS', 'SUBTITLES', 'TWEETS', 'BASELINE']]
1
P18-2010table_4
Evaluation results on the dataset of polysemous verb classes by Korhonen et al. (2003).
2
[['Method', 'LDA-Frames'], ['Method', 'Triframes WATSET'], ['Method', 'NOAC'], ['Method', 'HOSG'], ['Method', 'Triadic Spectral'], ['Method', 'Triadic k-Means'], ['Method', 'Triframes CW'], ['Method', 'Whole'], ['Method', 'Singletons']]
1
[['nmPU'], ['niPU'], ['F1']]
[['52.60', '45.84', '48.98'], ['40.05', '62.09', '48.69'], ['37.19', '64.09', '47.07'], ['38.22', '43.76', '40.80'], ['35.76', '38.96', '36.86'], ['52.22', '27.43', '35.96'], ['18.05', '12.72', '14.92'], ['24.14', '79.09', '36.99'], ['0.00', '27.21', '0.00']]
column
['nmPU', 'niPU', 'F1']
['LDA-Frames']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>nmPU</th> <th>niPU</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || LDA-Frames</td> <td>52.60</td> <td>45.84</td> <td>48.98</td> </tr> <tr> <td>Method || Triframes WATSET</td> <td>40.05</td> <td>62.09</td> <td>48.69</td> </tr> <tr> <td>Method || NOAC</td> <td>37.19</td> <td>64.09</td> <td>47.07</td> </tr> <tr> <td>Method || HOSG</td> <td>38.22</td> <td>43.76</td> <td>40.80</td> </tr> <tr> <td>Method || Triadic Spectral</td> <td>35.76</td> <td>38.96</td> <td>36.86</td> </tr> <tr> <td>Method || Triadic k-Means</td> <td>52.22</td> <td>27.43</td> <td>35.96</td> </tr> <tr> <td>Method || Triframes CW</td> <td>18.05</td> <td>12.72</td> <td>14.92</td> </tr> <tr> <td>Method || Whole</td> <td>24.14</td> <td>79.09</td> <td>36.99</td> </tr> <tr> <td>Method || Singletons</td> <td>0.00</td> <td>27.21</td> <td>0.00</td> </tr> </tbody></table>
Table 4
table_4
P18-2010
5
acl2018
Table 4 presents results on the second dataset for the best models identified on the first dataset. The LDA-Frames yielded the best results with our approach performing comparably in terms of the F1-score. We attribute the low performance of the Triframes method based on CW clustering to its hard partitioning output, whereas the evaluation dataset contains fuzzy clusters. Different rankings also suggest that frame induction cannot simply be treated as a verb clustering and requires a separate task.
[1, 1, 1, 2]
['Table 4 presents results on the second dataset for the best models identified on the first dataset.', 'The LDA-Frames yielded the best results with our approach performing comparably in terms of the F1-score.', 'We attribute the low performance of the Triframes method based on CW clustering to its hard partitioning output, whereas the evaluation dataset contains fuzzy clusters.', 'Different rankings also suggest that frame induction cannot simply be treated as a verb clustering and requires a separate task.']
[None, ['LDA-Frames', 'F1'], ['Triframes CW'], None]
1
P18-2012table_2
Performance as a function of the number of RNN units with a fixed unit size of 64; averaged across 5 runs apart from the 16 unit (average across 10 runs).
2
[['# RNN units', '1'], ['# RNN units', '2'], ['# RNN units', '4'], ['# RNN units', '8'], ['# RNN units', '16'], ['# RNN units', '32']]
1
[['F1']]
[['90.53 ±0.31'], ['90.79 ±0.18'], ['90.64 ±0.24'], ['91.09 ±0.28'], ['91.48 ±0.22'], ['90.68 ±0.18']]
column
['F1']
['# RNN units']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td># RNN units || 1</td> <td>90.53 ±0.31</td> </tr> <tr> <td># RNN units || 2</td> <td>90.79 ±0.18</td> </tr> <tr> <td># RNN units || 4</td> <td>90.64 ±0.24</td> </tr> <tr> <td># RNN units || 8</td> <td>91.09 ±0.28</td> </tr> <tr> <td># RNN units || 16</td> <td>91.48 ±0.22</td> </tr> <tr> <td># RNN units || 32</td> <td>90.68 ±0.18</td> </tr> </tbody></table>
Table 2
table_2
P18-2012
4
acl2018
Table 2 shows performance as a function of the number of RNN units with a fixed unit size. The number of units is clearly a hyperparameter which must be optimized for. We find good performance across the board (there is no catastrophic collapse in results); however, when using 16 units we do outperform other models substantially.
[1, 2, 1]
['Table 2 shows performance as a function of the number of RNN units with a fixed unit size.', 'The number of units is clearly a hyperparameter which must be optimized for.', 'We find good performance across the board (there is no catastrophic collapse in results); however, when using 16 units we do outperform other models substantially.']
[['# RNN units'], None, ['16', 'F1']]
1
P18-2013table_3
KBC performance for base, typed, and related formulations. Typed models outperform their base models across all datasets.
2
[['Model', 'E'], ['Model', 'DM+E'], ['Model', 'DM'], ['Model', 'TypeDM'], ['Model', 'Complex'], ['Model', 'TypeComplex']]
2
[['FB15K', 'MRR'], ['FB15K', 'HITS@1'], ['FB15K', 'HITS@10'], ['FB15K237', 'MRR'], ['FB15K237', 'HITS@1'], ['FB15K237', 'HITS@10'], ['YAGO3-10', 'MRR'], ['YAGO3-10', 'HITS@1'], ['YAGO3-10', 'HITS@10']]
[['23.40', '17.39', '35.29', '21.30', '14.51', '36.38', '7.87', '6.22', '10.00'], ['60.84', '49.53', '79.70', '38.15', '28.06', '58.02', '52.48', '38.72', '77.40'], ['67.47', '56.52', '84.86', '37.21', '27.43', '56.12', '55.31', '46.80', '70.76'], ['75.01', '66.07', '87.92', '38.70', '29.30', '57.36', '58.16', '51.36', '70.08'], ['70.50', '61.00', '86.09', '37.58', '26.97', '55.98', '54.86', '46.90', '69.08'], ['75.44', '66.32', '88.51', '38.93', '29.57', '57.50', '58.65', '51.62', '70.42']]
column
['MRR', 'HITS@1', 'HITS@10', 'MRR', 'HITS@1', 'HITS@10', 'MRR', 'HITS@1', 'HITS@10']
['TypeComplex']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FB15K || MRR</th> <th>FB15K || HITS@1</th> <th>FB15K || HITS@10</th> <th>FB15K237 || MRR</th> <th>FB15K237 || HITS@1</th> <th>FB15K237 || HITS@10</th> <th>YAGO3-10 || MRR</th> <th>YAGO3-10 || HITS@1</th> <th>YAGO3-10 || HITS@10</th> </tr> </thead> <tbody> <tr> <td>Model || E</td> <td>23.40</td> <td>17.39</td> <td>35.29</td> <td>21.30</td> <td>14.51</td> <td>36.38</td> <td>7.87</td> <td>6.22</td> <td>10.00</td> </tr> <tr> <td>Model || DM+E</td> <td>60.84</td> <td>49.53</td> <td>79.70</td> <td>38.15</td> <td>28.06</td> <td>58.02</td> <td>52.48</td> <td>38.72</td> <td>77.40</td> </tr> <tr> <td>Model || DM</td> <td>67.47</td> <td>56.52</td> <td>84.86</td> <td>37.21</td> <td>27.43</td> <td>56.12</td> <td>55.31</td> <td>46.80</td> <td>70.76</td> </tr> <tr> <td>Model || TypeDM</td> <td>75.01</td> <td>66.07</td> <td>87.92</td> <td>38.70</td> <td>29.30</td> <td>57.36</td> <td>58.16</td> <td>51.36</td> <td>70.08</td> </tr> <tr> <td>Model || Complex</td> <td>70.50</td> <td>61.00</td> <td>86.09</td> <td>37.58</td> <td>26.97</td> <td>55.98</td> <td>54.86</td> <td>46.90</td> <td>69.08</td> </tr> <tr> <td>Model || TypeComplex</td> <td>75.44</td> <td>66.32</td> <td>88.51</td> <td>38.93</td> <td>29.57</td> <td>57.50</td> <td>58.65</td> <td>51.62</td> <td>70.42</td> </tr> </tbody></table>
Table 3
table_3
P18-2013
4
acl2018
Table 3 shows that TypeDM and TypeComplex dominate across all data sets. E by itself is understandably weak, and DM+E does not lift it much. Each typed model improves upon the corresponding base model on all measures, underscoring the value of type compatibility scores. To the best of our knowledge, the results of our typed models are competitive with various reported results for models of similar sizes that do not use any additional information, e.g., soft rules (Guo et al., 2018), or textual corpora (Toutanova et al., 2015).
[1, 1, 1, 2]
['Table 3 shows that TypeDM and TypeComplex dominate across all data sets.', 'E by itself is understandably weak, and DM+E does not lift it much.', 'Each typed model improves upon the corresponding base model on all measures, underscoring the value of type compatibility scores.', 'To the best of our knowledge, the results of our typed models are competitive with various reported results for models of similar sizes that do not use any additional information, e.g., soft rules (Guo et al., 2018), or textual corpora (Toutanova et al., 2015).']
[['TypeDM', 'TypeComplex'], ['E', 'DM+E'], ['TypeDM', 'TypeComplex', 'DM', 'Complex'], ['TypeDM', 'TypeComplex']]
1
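The KBC metrics in this record, MRR and HITS@k, are both functions of the rank assigned to the gold entity for each test triple. A minimal sketch (helper names are ours; ranks are assumed to start at 1):

```python
def mrr(ranks):
    """Mean reciprocal rank in %, given the gold entity's rank for each query."""
    return 100.0 * sum(1.0 / r for r in ranks) / len(ranks)


def hits_at_k(ranks, k):
    """HITS@k in %: share of queries with the gold entity ranked in the top k."""
    return 100.0 * sum(r <= k for r in ranks) / len(ranks)


# Toy example with five test triples.
ranks = [1, 3, 2, 10, 50]
print(round(mrr(ranks), 2))  # 39.07
print(hits_at_k(ranks, 10))  # 80.0
```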
P18-2014table_1
Relation extraction performance on ACE 2005 test dataset. * denotes significance at p < 0.05 compared to SPTree, (cid:5) denotes significance at p < 0.05 compared to the Baseline.
2
[['Model', 'SPTree'], ['Model', 'Baseline'], ['Model', 'No walks l = 1'], ['Model', '+ Walks l = 2'], ['Model', '+ Walks l = 4'], ['Model', '+ Walks l = 8']]
1
[['P'], ['R'], ['F1 (%)']]
[['70.1', '61.2', '65.3'], ['72.5', '53.3', '61.4*'], ['71.9', '55.6', '62.7'], ['69.9', '58.4', '63.6◇'], ['69.7', '59.5', '64.2◇'], ['71.5', '55.3', '62.4']]
column
['P', 'R', 'F1 (%)']
['No walks l = 1', '+ Walks l = 2', '+ Walks l = 4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Model || SPTree</td> <td>70.1</td> <td>61.2</td> <td>65.3</td> </tr> <tr> <td>Model || Baseline</td> <td>72.5</td> <td>53.3</td> <td>61.4*</td> </tr> <tr> <td>Model || No walks l = 1</td> <td>71.9</td> <td>55.6</td> <td>62.7</td> </tr> <tr> <td>Model || + Walks l = 2</td> <td>69.9</td> <td>58.4</td> <td>63.6◇</td> </tr> <tr> <td>Model || + Walks l = 4</td> <td>69.7</td> <td>59.5</td> <td>64.2◇</td> </tr> <tr> <td>Model || + Walks l = 8</td> <td>71.5</td> <td>55.3</td> <td>62.4</td> </tr> </tbody></table>
Table 1
table_1
P18-2014
4
acl2018
Table 1 illustrates the performance of our proposed model in comparison with the SPTree system of Miwa and Bansal (2016) on ACE 2005. We use the same data split as SPTree to compare with their model. We retrained their model with gold entities in order to compare the performances on the relation extraction task. The Baseline corresponds to a model that classifies relations by using only the representations of entities in a target pair. As can be observed from the table, the Baseline model achieves the lowest F1 score among the proposed models. By incorporating attention we can further improve the performance by 1.3 percentage points (pp). The addition of 2-length walks further improves performance (0.9 pp). The best results among the proposed models are achieved for maximum 4-length walks. By using up to 8-length walks, the performance drops by almost 2 pp.
[1, 1, 2, 1, 1, 1, 1, 1, 1]
['Table 1 illustrates the performance of our proposed model in comparison with the SPTree system of Miwa and Bansal (2016) on ACE 2005.', 'We use the same data split as SPTree to compare with their model.', 'We retrained their model with gold entities in order to compare the performances on the relation extraction task.', 'The Baseline corresponds to a model that classifies relations by using only the representations of entities in a target pair.', 'As can be observed from the table, the Baseline model achieves the lowest F1 score among the proposed models.', 'By incorporating attention we can further improve the performance by 1.3 percentage points (pp).', 'The addition of 2-length walks further improves performance (0.9 pp).', 'The best results among the proposed models are achieved for maximum 4-length walks.', 'By using up to 8-length walks, the performance drops by almost 2 pp.']
[['SPTree', 'No walks l = 1', '+ Walks l = 2', '+ Walks l = 4', '+ Walks l = 8'], ['SPTree', 'No walks l = 1', '+ Walks l = 2', '+ Walks l = 4', '+ Walks l = 8'], None, ['Baseline'], ['Baseline', 'F1 (%)'], ['No walks l = 1', 'F1 (%)'], ['+ Walks l = 2', 'F1 (%)'], ['+ Walks l = 4'], ['+ Walks l = 8']]
1
P18-2014table_2
Relation extraction performance (F1 %) on ACE 2005 development set for different number of entities. * denotes significance at p < 0.05 compared to l = 1.
2
[['# Entities', '2'], ['# Entities', '3'], ['# Entities', '[4, 6)'], ['# Entities', '[6, 12)'], ['# Entities', '[12, 23)']]
1
[['l = 1'], ['l = 2'], ['l = 4'], ['l = 8']]
[['71.2', '69.8', '72.9', '71.0'], ['70.1', '67.5', '67.8', '63.5*'], ['56.5', '59.7', '59.3', '59.9'], ['59.2', '64.2*', '62.2', '60.4'], ['54.7', '59.3', '62.3*', '55.0']]
column
['F1', 'F1', 'F1', 'F1']
['# Entities']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>l = 1</th> <th>l = 2</th> <th>l = 4</th> <th>l = 8</th> </tr> </thead> <tbody> <tr> <td># Entities || 2</td> <td>71.2</td> <td>69.8</td> <td>72.9</td> <td>71.0</td> </tr> <tr> <td># Entities || 3</td> <td>70.1</td> <td>67.5</td> <td>67.8</td> <td>63.5*</td> </tr> <tr> <td># Entities || [4, 6)</td> <td>56.5</td> <td>59.7</td> <td>59.3</td> <td>59.9</td> </tr> <tr> <td># Entities || [6, 12)</td> <td>59.2</td> <td>64.2*</td> <td>62.2</td> <td>60.4</td> </tr> <tr> <td># Entities || [12, 23)</td> <td>54.7</td> <td>59.3</td> <td>62.3*</td> <td>55.0</td> </tr> </tbody></table>
Table 2
table_2
P18-2014
5
acl2018
Finally, we show the performance of the proposed model as a function of the number of entities in a sentence. Results in Table 2 reveal that for multi-pair sentences the model performs significantly better compared to the no-walks models, proving the effectiveness of the method. Additionally, it is observed that for more entity pairs, longer walks seem to be required. However, very long walks result in reduced performance (l = 8).
[1, 1, 1, 1]
['Finally, we show the performance of the proposed model as a function of the number of entities in a sentence.', 'Results in Table 2 reveal that for multi-pair sentences the model performs significantly better compared to the no-walks models, proving the effectiveness of the method.', 'Additionally, it is observed that for more entity pairs, longer walks seem to be required.', 'However, very long walks result in reduced performance (l = 8).']
[['# Entities'], ['2', '3'], ['l = 1', 'l = 2', 'l = 4'], ['l = 8']]
1