gem_id (string, 37–41) | paper_id (string, 3–4) | paper_title (string, 19–183) | paper_abstract (string, 168–1.38k) | paper_content (dict) | paper_headers (dict) | slide_id (string, 37–41) | slide_title (string, 2–85) | slide_content_text (string, 11–2.55k) | target (string, 11–2.55k) | references (list)
---|---|---|---|---|---|---|---|---|---|---|
GEM-SciDuet-train-85#paper-1220#slide-4
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe the NAVER machine translation system for the English-to-Japanese and Korean-to-Japanese tasks at WAT 2015. We combine traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determining the baseline model, we first performed comparative experiments with the phrasebased, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder use a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematic expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique, that ignores the suggestion word that has 3 or longer edit distance and selects one that has 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the nbest results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boosting the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator choose the Char PB model, otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that h t = [ h t ; h t ] (1) h t = f GRU (W s we x t , h t+1 ) (2) h t = f GRU (W s we x t , h t−1 ) (3) where h t is a hidden state of the encoder, x t is a one-hot encoded vector indicating one of the words in the source vocabulary, W s we is a weight matrix for the word embedding of the source language, and f GRU is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time, the decoder of NMT computes the context vector c t as a convex combination of the hidden states (h 1 ,.",
".",
".",
",h T ) with the alignment weights α 1 ,.",
".",
".",
",α T : c t = T i=1 α ti h i (4) α ti = exp(e tj ) T j=1 exp(e tj ) (5) e ti = f F F N N (z t−1 , h i , y t−1 ) (6) where f F F N N is a feedforward neural network with a single hidden layer, z t−1 is a previous hidden state of the decoder, and y t−1 is a previous generated target word (one-hot encoded vector).",
"A new hidden state z t of the decoder which uses GRU is computed based on z t−1 , y t−1 , and c t : z t = f GRU (y t−1 , z t−1 , c t ) (7) The probability of the next target word y t is then computed by (8) p(y t |y <t , x) = y T t f sof tmax {W z y z t + W zy z t + W cy c t + W yy (W t we y t−1 ) + b y } z t = f ReLU (W zz z t ) (9) where f sof tmax is a softmax function, f ReLU is a rectified linear unit (ReLU), W t we is a weight matrix for the word embedding of the target language, and b y is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string systaxbased model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in The decoding time of the rule-augmented treeto-string SMT is about 1.3 seconds per a sentence in our 12-core machine.",
"Even though it is not a terrible problem, we are required to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000best translations in Ko-Ja.",
"The final output is 1best translation selected by considering only NMT score.",
"En-Ja SMT Ko-Ja SMT NMT NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result means that NMT produces a strong effect in the language pair with long linguistic distance.",
"Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both task, even if \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, we can be clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-4
|
Tree to String Syntax based MT
|
1 million sentence pairs (train-1.txt)
3 million Japanese sentences (train-1.txt, train-2.txt)
Japanese: In-house tokenizer and POS tagger
Assign linguistic syntax label to X hole of HPB model
|
1 million sentence pairs (train-1.txt)
3 million Japanese sentences (train-1.txt, train-2.txt)
Japanese: In-house tokenizer and POS tagger
Assign linguistic syntax label to X hole of HPB model
|
[] |
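The attention step quoted in the paper text above (equations (4)–(5): a softmax over alignment scores followed by a convex combination of encoder hidden states) can be sketched in plain Python. This is an illustrative toy with made-up values and a function name of our choosing, not the authors' implementation:

```python
import math

def attention_context(hidden_states, scores):
    """Toy attention step.

    hidden_states: list of T encoder states h_1..h_T (each a list of floats)
    scores: list of T alignment scores e_t1..e_tT
    Returns the context vector c_t (eq. 4) and the weights alpha (eq. 5).
    """
    # Equation (5): alpha_ti = exp(e_ti) / sum_j exp(e_tj), computed
    # with the max subtracted for numerical stability.
    m = max(scores)
    exps = [math.exp(e - m) for e in scores]
    total = sum(exps)
    alphas = [x / total for x in exps]
    # Equation (4): c_t = sum_i alpha_ti * h_i -- a convex combination,
    # since the weights are non-negative and sum to one.
    dim = len(hidden_states[0])
    context = [sum(a * h[d] for a, h in zip(alphas, hidden_states))
               for d in range(dim)]
    return context, alphas

# Three 2-dimensional states with equal scores: each gets weight 1/3,
# so the context vector is simply the mean of the hidden states.
c, alphas = attention_context([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                              [0.0, 0.0, 0.0])
```

In the real model the scores come from the feedforward network f_FFNN of equation (6); here they are fixed constants purely to show the softmax-and-average mechanics.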
GEM-SciDuet-train-85#paper-1220#slide-5
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determining the baseline model, we first performed comparative experiments with the phrasebased, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder use a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematic expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique, that ignores the suggestion word that has 3 or longer edit distance and selects one that has 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the nbest results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boosting the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator choose the Char PB model, otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that h t = [ h t ; h t ] (1) h t = f GRU (W s we x t , h t+1 ) (2) h t = f GRU (W s we x t , h t−1 ) (3) where h t is a hidden state of the encoder, x t is a one-hot encoded vector indicating one of the words in the source vocabulary, W s we is a weight matrix for the word embedding of the source language, and f GRU is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time, the decoder of NMT computes the context vector c t as a convex combination of the hidden states (h 1 ,.",
".",
".",
",h T ) with the alignment weights α 1 ,.",
".",
".",
",α T : c t = T i=1 α ti h i (4) α ti = exp(e tj ) T j=1 exp(e tj ) (5) e ti = f F F N N (z t−1 , h i , y t−1 ) (6) where f F F N N is a feedforward neural network with a single hidden layer, z t−1 is a previous hidden state of the decoder, and y t−1 is a previous generated target word (one-hot encoded vector).",
"A new hidden state z t of the decoder which uses GRU is computed based on z t−1 , y t−1 , and c t : z t = f GRU (y t−1 , z t−1 , c t ) (7) The probability of the next target word y t is then computed by (8) p(y t |y <t , x) = y T t f sof tmax {W z y z t + W zy z t + W cy c t + W yy (W t we y t−1 ) + b y } z t = f ReLU (W zz z t ) (9) where f sof tmax is a softmax function, f ReLU is a rectified linear unit (ReLU), W t we is a weight matrix for the word embedding of the target language, and b y is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string systaxbased model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in The decoding time of the rule-augmented treeto-string SMT is about 1.3 seconds per a sentence in our 12-core machine.",
"Even though it is not a terrible problem, we are required to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a strong synergy of T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, it is clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
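The reranking combination described in this record selects the 1-best candidate from the SMT n-best list using only the NMT score. A minimal runnable sketch of that selection step, where `nmt_score` is a toy stand-in for the real attention-based model's log-probability:

```python
# Rerank T2S/PBMT n-best translations by an NMT score and keep the 1-best,
# as in the paper's combined submission (NMT score only).

def nmt_score(source, translation):
    # Toy stand-in scorer: prefer translations whose token count is
    # close to the source's. The real system uses the RNN encoder-decoder.
    return -abs(len(source.split()) - len(translation.split()))

def rerank(source, nbest):
    """Return the n-best candidate with the highest NMT score."""
    return max(nbest, key=lambda t: nmt_score(source, t))

candidates = ["a b c", "a b", "a b c d e"]
best = rerank("x y z", candidates)
print(best)  # "a b c" (length matches the 3-token source)
```

In the paper the n-best list is large (100,000-best for En-Ja, 10,000-best for Ko-Ja), so the real bottleneck is scoring, not the `max` selection.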
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-5
|
Tree to String Syntax-based MT
|
Proposed by CMU's Venugopal and Zollmann in 2006
Extract more rules by modifying parse trees
Use relax-parser in Moses toolkit (option: SAMT 2)
Baseline nodes Additional nodes
|
Proposed by CMU's Venugopal and Zollmann in 2006
Extract more rules by modifying parse trees
Use relax-parser in Moses toolkit (option: SAMT 2)
Baseline nodes Additional nodes
|
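The SAMT-style rule augmentation on this slide combines pairs of neighboring sibling nodes into additional virtual nodes such as NP+VP. A minimal sketch of that tree modification over a hypothetical `(label, children)` tuple tree, not Moses' actual relax-parse data structures:

```python
# SAMT-style augmentation: for every pair of adjacent siblings, record a
# combined label (e.g. "NP+VP") that rule extraction may then use as an
# additional node, on top of the baseline parse-tree nodes.

def neighbor_labels(children):
    """Labels of virtual nodes formed from adjacent sibling pairs."""
    labels = [c[0] for c in children]
    return [labels[i] + "+" + labels[i + 1] for i in range(len(labels) - 1)]

# A toy parse tree: S -> NP VP PU
tree = ("S", [("NP", []), ("VP", []), ("PU", [])])
print(neighbor_labels(tree[1]))  # ['NP+VP', 'VP+PU']
```

Moses' `relax-parse` tool (with the SAMT 2 option mentioned on the slide) performs this kind of node combination on the parsed corpus before rule extraction.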
[] |
GEM-SciDuet-train-85#paper-1220#slide-6
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determine the baseline model, we first performed comparative experiments with the phrase-based, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder uses a chart parsing algorithm with cube pruning proposed by Chiang (2005).",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematical expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique that ignores suggestion words with an edit distance of 3 or more and selects the one with the shortest edit distance. 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in the JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10-gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords, such as chemical compound names, which are often tokenized inaccurately and generate many out-of-vocabulary tokens.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the n-best results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boost the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator chooses the Char PB model; otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ (1), $\overleftarrow{h}_t = f_{GRU}(W^{s}_{we} x_t, \overleftarrow{h}_{t+1})$ (2), $\overrightarrow{h}_t = f_{GRU}(W^{s}_{we} x_t, \overrightarrow{h}_{t-1})$ (3), where $h_t$ is a hidden state of the encoder, $x_t$ is a one-hot encoded vector indicating one of the words in the source vocabulary, $W^{s}_{we}$ is a weight matrix for the word embedding of the source language, and $f_{GRU}$ is a gated recurrent unit (GRU) (Cho et al., 2014).",
"At each time step, the decoder of NMT computes the context vector $c_t$ as a convex combination of the hidden states $(h_1, \ldots, h_T)$ with the alignment weights $\alpha_1, \ldots, \alpha_T$: $c_t = \sum_{i=1}^{T} \alpha_{ti} h_i$ (4), $\alpha_{ti} = \exp(e_{ti}) / \sum_{j=1}^{T} \exp(e_{tj})$ (5), $e_{ti} = f_{FFNN}(z_{t-1}, h_i, y_{t-1})$ (6), where $f_{FFNN}$ is a feedforward neural network with a single hidden layer, $z_{t-1}$ is the previous hidden state of the decoder, and $y_{t-1}$ is the previously generated target word (one-hot encoded vector).",
"A new hidden state $z_t$ of the decoder, which uses a GRU, is computed from $z_{t-1}$, $y_{t-1}$, and $c_t$: $z_t = f_{GRU}(y_{t-1}, z_{t-1}, c_t)$ (7). The probability of the next target word $y_t$ is then computed by $p(y_t \mid y_{<t}, x) = y_t^{\top} f_{softmax}\{W_{\tilde{z}y} \tilde{z}_t + W_{zy} z_t + W_{cy} c_t + W_{yy} (W^{t}_{we} y_{t-1}) + b_y\}$ (8), $\tilde{z}_t = f_{ReLU}(W_{z\tilde{z}} z_t)$ (9), where $f_{softmax}$ is a softmax function, $f_{ReLU}$ is a rectified linear unit (ReLU), $W^{t}_{we}$ is a weight matrix for the word embedding of the target language, and $b_y$ is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods are described in section 2.4. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Although this is not a serious problem, we need to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010).",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a strong synergy of T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, it is clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
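The attention equations (4)-(6) quoted in the record above amount to a softmax over alignment scores followed by a convex combination of the encoder hidden states. A small pure-Python sketch, with a dot-product score standing in for the paper's single-hidden-layer feedforward scoring network $f_{FFNN}$:

```python
import math

def attention_context(decoder_state, encoder_states):
    """Compute alignment weights (softmax over scores) and the context
    vector c_t as their convex combination of encoder hidden states."""
    # Dot-product score is a stand-in for f_FFNN(z_{t-1}, h_i, y_{t-1}).
    scores = [sum(z * h for z, h in zip(decoder_state, hs))
              for hs in encoder_states]
    # Numerically stable softmax (equation 5).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # Convex combination of encoder states (equation 4).
    dim = len(encoder_states[0])
    context = [sum(a * hs[d] for a, hs in zip(alphas, encoder_states))
               for d in range(dim)]
    return alphas, context

alphas, c = attention_context([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(round(sum(alphas), 6))  # 1.0 (weights form a convex combination)
```

The first encoder state gets the larger weight here because it aligns with the decoder state under the dot-product score.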
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
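The Ko-Ja combination described in the preceding record switches between word-level and character-level phrase-based models by checking the input for OOV words. A minimal sketch of that selection rule; the vocabulary and model names are hypothetical:

```python
# Model selection for the Ko-Ja combination: use the character-level PB
# model when the input contains an OOV token, otherwise the word-level PB
# model, as described in the paper.

def choose_model(sentence, word_vocab):
    """Return which phrase-based model would translate this sentence."""
    tokens = sentence.split()
    has_oov = any(t not in word_vocab for t in tokens)
    return "char_pb" if has_oov else "word_pb"

vocab = {"나는", "학교에", "간다"}
print(choose_model("나는 학교에 간다", vocab))  # word_pb (all tokens known)
```

The character-level model handles unseen technical terms and loanwords because Korean and Japanese share similar transliteration patterns, so character sequences of such words can still be translated.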
GEM-SciDuet-train-85#paper-1220#slide-6
|
Handling OOV
|
1) Hyphen word split
Ex.) nano-laminate -> nano laminate
2) English spell correction
Use open source spell checker, Aspell
Detection Phase: based on skip rules
Skip the word containing capital, number or symbol
Based on edit distance
Because large gap causes wrong correction
Select one with shortest distance among top-3 suggestion
detection → correction: remrakable → remarkable
1. remarkable 2. remakable 3. reamarkable
|
1) Hyphen word split
Ex.) nano-laminate -> nano laminate
2) English spell correction
Use open source spell checker, Aspell
Detection Phase: based on skip rules
Skip the word containing capital, number or symbol
Based on edit distance
Because large gap causes wrong correction
Select one with shortest distance among top-3 suggestion
detection → correction: remrakable → remarkable
1. remarkable 2. remakable 3. reamarkable
|
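The detection and correction phases on this slide can be sketched as follows. The Levenshtein function stands in for Aspell's suggester distance, and the suggestion lists are supplied by the caller (in the real system they are Aspell's top-3 suggestions); all example words here are illustrative:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def should_skip(word):
    # Skip rule: words with any capital, digit, or symbol are likely
    # abbreviations or math expressions, so leave them untouched.
    return any(not c.islower() for c in word)

def correct(word, suggestions):
    """Gap thresholding: among the top-3 suggestions, drop any with edit
    distance >= 3 and pick the closest remaining one (ties break
    alphabetically in this sketch)."""
    if should_skip(word):
        return word
    close = [(edit_distance(word, s), s) for s in suggestions[:3]]
    close = [(d, s) for d, s in close if d < 3]
    return min(close)[1] if close else word

print(correct("machne", ["machine", "macho", "mane"]))  # machine
```

With no suggestion inside the distance threshold, the word is returned unchanged, mirroring the slide's "avoid wrong correction for large gaps" rule.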
[] |
GEM-SciDuet-train-85#paper-1220#slide-7
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determine the baseline model, we first performed comparative experiments with the phrase-based, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder uses a chart parsing algorithm with cube pruning proposed by Chiang (2005).",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematical expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique that ignores suggestion words with an edit distance of 3 or more and selects the one with the shortest edit distance. 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in the JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10-gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords, such as chemical compound names, which are often tokenized inaccurately and generate many out-of-vocabulary tokens.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the n-best results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boost the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator chooses the Char PB model; otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ (1), $\overleftarrow{h}_t = f_{GRU}(W^{s}_{we} x_t, \overleftarrow{h}_{t+1})$ (2), $\overrightarrow{h}_t = f_{GRU}(W^{s}_{we} x_t, \overrightarrow{h}_{t-1})$ (3), where $h_t$ is a hidden state of the encoder, $x_t$ is a one-hot encoded vector indicating one of the words in the source vocabulary, $W^{s}_{we}$ is a weight matrix for the word embedding of the source language, and $f_{GRU}$ is a gated recurrent unit (GRU) (Cho et al., 2014).",
"At each time step, the decoder of NMT computes the context vector $c_t$ as a convex combination of the hidden states $(h_1, \ldots, h_T)$ with the alignment weights $\alpha_1, \ldots, \alpha_T$: $c_t = \sum_{i=1}^{T} \alpha_{ti} h_i$ (4), $\alpha_{ti} = \exp(e_{ti}) / \sum_{j=1}^{T} \exp(e_{tj})$ (5), $e_{ti} = f_{FFNN}(z_{t-1}, h_i, y_{t-1})$ (6), where $f_{FFNN}$ is a feedforward neural network with a single hidden layer, $z_{t-1}$ is the previous hidden state of the decoder, and $y_{t-1}$ is the previously generated target word (one-hot encoded vector).",
"A new hidden state $z_t$ of the decoder, which uses a GRU, is computed from $z_{t-1}$, $y_{t-1}$, and $c_t$: $z_t = f_{GRU}(y_{t-1}, z_{t-1}, c_t)$ (7). The probability of the next target word $y_t$ is then computed by $p(y_t \mid y_{<t}, x) = y_t^{\top} f_{softmax}\{W_{\tilde{z}y} \tilde{z}_t + W_{zy} z_t + W_{cy} c_t + W_{yy} (W^{t}_{we} y_{t-1}) + b_y\}$ (8), $\tilde{z}_t = f_{ReLU}(W_{z\tilde{z}} z_t)$ (9), where $f_{softmax}$ is a softmax function, $f_{ReLU}$ is a rectified linear unit (ReLU), $W^{t}_{we}$ is a weight matrix for the word embedding of the target language, and $b_y$ is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string systaxbased model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in The decoding time of the rule-augmented treeto-string SMT is about 1.3 seconds per a sentence in our 12-core machine.",
"Even though it is not a terrible problem, we are required to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000best translations in Ko-Ja.",
"The final output is 1best translation selected by considering only NMT score.",
"En-Ja SMT Ko-Ja SMT NMT NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result means that NMT produces a strong effect in the language pair with long linguistic distance.",
"Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both task, even if \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, we can be clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-7
|
Neural Machine Translation 1 2
|
RNN with an attention mechanism [Bahdanau, 2015]
Size of recurrent unit
Optimization Stochastic gradient descent (SGD)
Time of training 10 days (4 epochs)
|
RNN with an attention mechanism [Bahdanau, 2015]
Size of recurrent unit
Optimization Stochastic gradient descent (SGD)
Time of training 10 days (4 epochs)
|
[] |
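The attention equations (4)-(6) quoted in the paper_content_text above reduce to a softmax over alignment scores followed by a convex combination of encoder states. The following is an illustrative pure-Python sketch of that step only, not the authors' implementation (their system is a GRU-based RNN trained with SGD; the inputs here are toy values):

```python
import math

def attention_context(h, e):
    # h: list of T encoder hidden states, each a list of floats (h_1, ..., h_T)
    # e: list of T alignment scores e_ti produced by the feedforward network f_FFNN
    m = max(e)
    exps = [math.exp(x - m) for x in e]   # numerically stable softmax, eq. (5)
    s = sum(exps)
    alpha = [x / s for x in exps]         # alignment weights alpha_ti
    dim = len(h[0])
    # context vector c_t as a convex combination of the hidden states, eq. (4)
    c = [sum(alpha[i] * h[i][d] for i in range(len(h))) for d in range(dim)]
    return c, alpha
```

Because the weights are a softmax output, they sum to 1 and the context vector always lies in the convex hull of the encoder states, which is exactly the "convex combination" the paper describes.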
GEM-SciDuet-train-85#paper-1220#slide-8
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determining the baseline model, we first performed comparative experiments with the phrasebased, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder use a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematic expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique, that ignores the suggestion word that has 3 or longer edit distance and selects one that has 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the nbest results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boosting the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator choose the Char PB model, otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that h t = [ h t ; h t ] (1) h t = f GRU (W s we x t , h t+1 ) (2) h t = f GRU (W s we x t , h t−1 ) (3) where h t is a hidden state of the encoder, x t is a one-hot encoded vector indicating one of the words in the source vocabulary, W s we is a weight matrix for the word embedding of the source language, and f GRU is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time, the decoder of NMT computes the context vector c t as a convex combination of the hidden states (h 1 ,.",
".",
".",
",h T ) with the alignment weights α 1 ,.",
".",
".",
",α T : c t = T i=1 α ti h i (4) α ti = exp(e tj ) T j=1 exp(e tj ) (5) e ti = f F F N N (z t−1 , h i , y t−1 ) (6) where f F F N N is a feedforward neural network with a single hidden layer, z t−1 is a previous hidden state of the decoder, and y t−1 is a previous generated target word (one-hot encoded vector).",
"A new hidden state z t of the decoder which uses GRU is computed based on z t−1 , y t−1 , and c t : z t = f GRU (y t−1 , z t−1 , c t ) (7) The probability of the next target word y t is then computed by (8) p(y t |y <t , x) = y T t f sof tmax {W z y z t + W zy z t + W cy c t + W yy (W t we y t−1 ) + b y } z t = f ReLU (W zz z t ) (9) where f sof tmax is a softmax function, f ReLU is a rectified linear unit (ReLU), W t we is a weight matrix for the word embedding of the target language, and b y is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string systaxbased model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in The decoding time of the rule-augmented treeto-string SMT is about 1.3 seconds per a sentence in our 12-core machine.",
"Even though it is not a terrible problem, we are required to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000best translations in Ko-Ja.",
"The final output is 1best translation selected by considering only NMT score.",
"En-Ja SMT Ko-Ja SMT NMT NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result means that NMT produces a strong effect in the language pair with long linguistic distance.",
"Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both task, even if \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, we can be clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-8
|
Neural Machine Translation 2 2
|
Prob. of the next target word
[ Modified RNN ]
|
Prob. of the next target word
[ Modified RNN ]
|
[] |
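The combined system described in the paper reranks the n-best translations from the T2S/PBMT system and keeps the 1-best candidate by considering only the NMT score. A minimal sketch of that selection step, where `nmt_score` is a hypothetical callable (higher is better) standing in for the NMT model's score of a candidate translation, not part of any described API:

```python
def rerank_nbest(nbest, nmt_score):
    # nbest: list of candidate translation strings from the SMT n-best list
    # nmt_score: callable mapping a candidate to a model score (higher is better)
    # Returns the single candidate with the highest NMT score.
    if not nbest:
        raise ValueError("n-best list is empty")
    return max(nbest, key=nmt_score)
```

In the paper's setup the n-best list has up to 100,000 entries (En-Ja) or 10,000 entries (Ko-Ja), so the cost of this step is dominated by scoring each candidate with the NMT model, not by the selection itself.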
GEM-SciDuet-train-85#paper-1220#slide-9
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determining the baseline model, we first performed comparative experiments with the phrasebased, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder use a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematic expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique, that ignores the suggestion word that has 3 or longer edit distance and selects one that has 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the nbest results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boosting the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator choose the Char PB model, otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that h t = [ h t ; h t ] (1) h t = f GRU (W s we x t , h t+1 ) (2) h t = f GRU (W s we x t , h t−1 ) (3) where h t is a hidden state of the encoder, x t is a one-hot encoded vector indicating one of the words in the source vocabulary, W s we is a weight matrix for the word embedding of the source language, and f GRU is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time, the decoder of NMT computes the context vector c t as a convex combination of the hidden states (h 1 ,.",
".",
".",
",h T ) with the alignment weights α 1 ,.",
".",
".",
",α T : c t = T i=1 α ti h i (4) α ti = exp(e tj ) T j=1 exp(e tj ) (5) e ti = f F F N N (z t−1 , h i , y t−1 ) (6) where f F F N N is a feedforward neural network with a single hidden layer, z t−1 is a previous hidden state of the decoder, and y t−1 is a previous generated target word (one-hot encoded vector).",
"A new hidden state z t of the decoder which uses GRU is computed based on z t−1 , y t−1 , and c t : z t = f GRU (y t−1 , z t−1 , c t ) (7) The probability of the next target word y t is then computed by (8) p(y t |y <t , x) = y T t f sof tmax {W z y z t + W zy z t + W cy c t + W yy (W t we y t−1 ) + b y } z t = f ReLU (W zz z t ) (9) where f sof tmax is a softmax function, f ReLU is a rectified linear unit (ReLU), W t we is a weight matrix for the word embedding of the target language, and b y is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in a preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in section 2.4 also improved the performance. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Even though this is not a serious problem, we need to improve the decoding speed by pruning the rule table or by using an incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, it is clear that our NMT model produces successful results.",
"Conclusion This paper described the NAVER machine translation system for the En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"As future work, we will try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-9
|
Experimental Results T2S Syntax based MT
|
+ Rule augmentation 1950M
+ Parameter modification 1950M
Rule augmentation increases both BLEU and #Rules
OOV handling improves the performance
|
+ Rule augmentation 1950M
+ Parameter modification 1950M
Rule augmentation increases both BLEU and #Rules
OOV handling improves the performance
|
[] |
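The n-best reranking described in the record above (T2S/PBMT candidates rescored by NMT, with the final output being the 1-best translation by NMT score alone) can be sketched minimally. This is an illustrative sketch, not the authors' implementation; the function name `rerank_nbest` and the toy scorer are hypothetical stand-ins for a real NMT model's scoring function.

```python
def rerank_nbest(candidates, nmt_score):
    """Pick the candidate translation with the highest NMT score.

    candidates: list of translation strings from the SMT n-best list
    nmt_score:  callable mapping a translation string to a float
                (stand-in for a real NMT model's log-probability)
    """
    return max(candidates, key=nmt_score)

# Toy scorer standing in for an NMT model: prefers shorter hypotheses.
toy_score = lambda s: -len(s.split())
print(rerank_nbest(["a b c", "a b", "a b c d"], toy_score))  # a b
```

In the paper's setup the candidate list would hold up to 100,000 (En-Ja) or 10,000 (Ko-Ja) hypotheses, and only the NMT score decides the winner; the SMT model score is not interpolated.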
GEM-SciDuet-train-85#paper-1220#slide-10
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determine the baseline model, we first performed comparative experiments with the phrase-based, hierarchical phrase-based, and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder uses a chart parsing algorithm with cube pruning, as proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting spell errors, we skip words that consist only of capitals, numbers, or symbols, because they are likely to be abbreviations or mathematical expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggested correction leads to a wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique that ignores any suggestion word with an edit distance of 3 or longer and selects the one with the smallest edit distance. 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in the JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translations in terms of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization errors or out-of-vocabulary issues.",
"The JPO corpus contains many technical terms and loanwords, such as chemical compound names, which are often tokenized inaccurately and generate many out-of-vocabulary tokens.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text; however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the n-best results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boost the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator chooses the Char PB model; otherwise, it chooses the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ (1), $\overleftarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overleftarrow{h}_{t+1})$ (2), $\overrightarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overrightarrow{h}_{t-1})$ (3), where $h_t$ is a hidden state of the encoder, $x_t$ is a one-hot encoded vector indicating one of the words in the source vocabulary, $W^s_{we}$ is a weight matrix for the word embedding of the source language, and $f_{GRU}$ is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time, the decoder of NMT computes the context vector $c_t$ as a convex combination of the hidden states $(h_1, \ldots, h_T)$ with the alignment weights $\alpha_1, \ldots, \alpha_T$: $c_t = \sum_{i=1}^{T} \alpha_{ti} h_i$ (4), $\alpha_{ti} = \exp(e_{ti}) / \sum_{j=1}^{T} \exp(e_{tj})$ (5), $e_{ti} = f_{FFNN}(z_{t-1}, h_i, y_{t-1})$ (6), where $f_{FFNN}$ is a feedforward neural network with a single hidden layer, $z_{t-1}$ is a previous hidden state of the decoder, and $y_{t-1}$ is a previously generated target word (one-hot encoded vector).",
"A new hidden state $z_t$ of the decoder, which uses GRU, is computed from $z_{t-1}$, $y_{t-1}$, and $c_t$: $z_t = f_{GRU}(y_{t-1}, z_{t-1}, c_t)$ (7). The probability of the next target word $y_t$ is then computed by $p(y_t \mid y_{<t}, x) = y_t^{\top} f_{softmax}\{W_{\tilde{z}y} \tilde{z}_t + W_{zy} z_t + W_{cy} c_t + W_{yy}(W^t_{we} y_{t-1}) + b_y\}$ (8), $\tilde{z}_t = f_{ReLU}(W_{z\tilde{z}} z_t)$ (9), where $f_{softmax}$ is a softmax function, $f_{ReLU}$ is a rectified linear unit (ReLU), $W^t_{we}$ is a weight matrix for the word embedding of the target language, and $b_y$ is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in a preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in section 2.4 also improved the performance. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Even though this is not a serious problem, we need to improve the decoding speed by pruning the rule table or by using an incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, it is clear that our NMT model produces successful results.",
"Conclusion This paper described the NAVER machine translation system for the En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"As future work, we will try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-10
|
Experimental Results Neural MT
|
Modified RNN (target char-level with BI)
Char-level of target language is better than word-level
BI representation is helpful
Modified RNN is better than original RNN
|
Modified RNN (target char-level with BI)
Char-level of target language is better than word-level
BI representation is helpful
Modified RNN is better than original RNN
|
[] |
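The BI (begin/inside) character representation quoted in the record above (e.g., 結/B, 果/I), which lets the NMT decoder use a small character vocabulary while preserving word boundaries, can be sketched as follows. This is a minimal illustration with a hypothetical function name, not the authors' preprocessing code.

```python
def to_bi_chars(words):
    """Encode target-side words as characters tagged B (word-begin) or I (word-inside)."""
    tagged = []
    for word in words:
        for i, ch in enumerate(word):
            # First character of each word gets /B, the rest get /I.
            tagged.append(ch + ("/B" if i == 0 else "/I"))
    return tagged

# e.g. the tokenized Japanese words ["結果", "を"] become a character sequence:
print(to_bi_chars(["結果", "を"]))  # ['結/B', '果/I', 'を/B']
```

Because word boundaries are recoverable from the B tags, the original word sequence can be reconstructed losslessly from the decoder's character output.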
GEM-SciDuet-train-85#paper-1220#slide-11
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determine the baseline model, we first performed comparative experiments with the phrase-based, hierarchical phrase-based, and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder uses a chart parsing algorithm with cube pruning, as proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting spell errors, we skip words that consist only of capitals, numbers, or symbols, because they are likely to be abbreviations or mathematical expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggested correction leads to a wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique that ignores any suggestion word with an edit distance of 3 or longer and selects the one with the smallest edit distance. 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in the JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translations in terms of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization errors or out-of-vocabulary issues.",
"The JPO corpus contains many technical terms and loanwords, such as chemical compound names, which are often tokenized inaccurately and generate many out-of-vocabulary tokens.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text; however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the n-best results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boost the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator chooses the Char PB model; otherwise, it chooses the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ (1), $\overleftarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overleftarrow{h}_{t+1})$ (2), $\overrightarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overrightarrow{h}_{t-1})$ (3), where $h_t$ is a hidden state of the encoder, $x_t$ is a one-hot encoded vector indicating one of the words in the source vocabulary, $W^s_{we}$ is a weight matrix for the word embedding of the source language, and $f_{GRU}$ is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time step, the decoder of NMT computes the context vector c_t as a convex combination of the hidden states (h_1, ..., h_T) with the alignment weights \\alpha_{t1}, ..., \\alpha_{tT}: c_t = \\sum_{i=1}^{T} \\alpha_{ti} h_i (4), \\alpha_{ti} = \\exp(e_{ti}) / \\sum_{j=1}^{T} \\exp(e_{tj}) (5), e_{ti} = f_{FFNN}(z_{t-1}, h_i, y_{t-1}) (6), where f_{FFNN} is a feedforward neural network with a single hidden layer, z_{t-1} is the previous hidden state of the decoder, and y_{t-1} is the previously generated target word (a one-hot encoded vector).",
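The attention computation of Eqs. (4)-(6) can be sketched as follows; this minimal NumPy illustration takes the unnormalized scores e_t1..e_tT as given, since the score network f_FFNN is just another small feedforward layer.

```python
import numpy as np

def attention_context(scores, hs):
    """Normalize the alignment scores with a softmax (Eq. 5) and return the
    context vector c_t as the convex combination of the encoder states
    h_1..h_T (Eq. 4)."""
    e = np.asarray(scores, dtype=float)
    alpha = np.exp(e - e.max())       # subtract max for numerical stability
    alpha /= alpha.sum()
    c = (alpha[:, None] * np.asarray(hs)).sum(axis=0)
    return alpha, c
```

Because the weights sum to one, c_t always lies inside the convex hull of the encoder states.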
"A new hidden state z_t of the decoder, which uses a GRU, is computed based on z_{t-1}, y_{t-1}, and c_t: z_t = f_{GRU}(y_{t-1}, z_{t-1}, c_t) (7). The probability of the next target word y_t is then computed by p(y_t | y_{<t}, x) = y_t^{T} f_{softmax}\\{W_{\\tilde{z}y} \\tilde{z}_t + W_{zy} z_t + W_{cy} c_t + W_{yy}(W^t_{we} y_{t-1}) + b_y\\} (8), \\tilde{z}_t = f_{ReLU}(W_{\\tilde{z}z} z_t) (9), where f_{softmax} is a softmax function, f_{ReLU} is a rectified linear unit (ReLU), W^t_{we} is a weight matrix for the word embedding of the target language, and b_y is a target word bias.",
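The deep output layer of Eqs. (8)-(9) can be sketched as below. The matrix names in P and their shapes are illustrative assumptions; the real parameters come from the trained model.

```python
import numpy as np

def softmax(v):
    v = v - v.max()                   # numerical stability
    w = np.exp(v)
    return w / w.sum()

def next_word_probs(z_t, c_t, y_prev_emb, P):
    """A ReLU-transformed copy of the decoder state (Eq. 9) is combined with
    the raw state, the context vector and the previous target embedding,
    then normalized over the target vocabulary with a softmax (Eq. 8)."""
    z_deep = np.maximum(0.0, P["Wz~z"] @ z_t)              # Eq. (9)
    logits = (P["Wz~y"] @ z_deep + P["Wzy"] @ z_t + P["Wcy"] @ c_t
              + P["Wyy"] @ y_prev_emb + P["by"])           # inside Eq. (8)
    return softmax(logits)
```

The result is a proper probability distribution over the (character) vocabulary, from which the decoder picks or beam-searches the next symbol.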
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in a preliminary experiment.",
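A hypothetical re-implementation of this BI character preprocessing (not the authors' code): each word is spelled out as characters, the first tagged /B and the rest /I, so the encoding is lossless and word boundaries can be recovered from decoder output.

```python
def to_bi_chars(words):
    """Encode a tokenized target sentence as a character sequence in which
    the first character of each word is tagged /B and the rest /I."""
    chars = []
    for w in words:
        for i, ch in enumerate(w):
            chars.append(ch + ("/B" if i == 0 else "/I"))
    return chars

def from_bi_chars(chars):
    """Invert the encoding: start a new word at every /B tag."""
    words = []
    for tok in chars:
        ch, tag = tok[:-2], tok[-1]
        if tag == "B" or not words:
            words.append(ch)
        else:
            words[-1] += ch
    return words
```

For example, ["結果", "は"] becomes ["結/B", "果/I", "は/B"], and decoding that sequence restores the original word segmentation.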
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
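The early-stopping schedule can be sketched generically as below. The patience value is an assumption for illustration, since the paper does not report one; train_epoch and dev_bleu stand in for one SGD pass and a BLEU evaluation on the development set.

```python
def train_with_early_stopping(train_epoch, dev_bleu, max_epochs=30, patience=3):
    """Keep training while development-set BLEU improves; stop once it has
    failed to improve for `patience` consecutive epochs. Returns the best
    BLEU seen and the epoch at which it was reached."""
    best, best_epoch, since_best = -1.0, -1, 0
    for epoch in range(max_epochs):
        train_epoch()                 # one pass of SGD over the training data
        score = dev_bleu()            # evaluate BLEU on the development set
        if score > best:
            best, best_epoch, since_best = score, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best, best_epoch
```

The model parameters saved at best_epoch would then be the ones used for decoding.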
"Experimental Results All scores in this section are reported on the official test data, test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods are described in section 2.4. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Although this is not a serious problem, we need to improve the decoding speed by pruning the rule table or by using the incremental decoding method (Huang and Mi, 2010).",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
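The combination rule described in section 3.3 (route a sentence to the character-level system when it contains an OOV word, otherwise to the word-level system) amounts to a one-line dispatcher; the system labels here are illustrative names, not the authors' identifiers.

```python
def select_system(tokens, word_vocab):
    """Choose between the word-level phrase-based system (Word PB) and the
    character-level one (Char PB): any OOV token routes the sentence to
    Char PB, which can translate unseen loanwords character by character."""
    return "char_pb" if any(t not in word_vocab for t in tokens) else "word_pb"
```

This exploits the observation that Korean and Japanese share similar transliteration rules, so the character-level system handles unseen technical terms gracefully.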
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
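The reranking step amounts to picking the candidate with the highest NMT score from the SMT n-best list; nmt_score below stands in for the NMT model's log-probability of a hypothesis.

```python
def rerank(nbest, nmt_score):
    """Select the hypothesis with the highest NMT score from an n-best list
    produced by the traditional SMT system. Ties keep the earlier
    (higher-ranked) candidate."""
    best, best_s = None, float("-inf")
    for hyp in nbest:
        s = nmt_score(hyp)
        if s > best_s:
            best, best_s = hyp, s
    return best
```

In the paper's setup the list holds up to 100,000 (En-Ja) or 10,000 (Ko-Ja) candidates, so scoring them is a batch forward pass rather than a search.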
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a strong synergy between T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, it is clear that our NMT model produces successful results.",
"Conclusion This paper described the NAVER machine translation system for the En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"In future work, we will try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-11
|
Experimental Results w Human evaluation
|
T2S SB MT* only
T2S SB MT* + NMT** re-ranking
NMT only outperform T2S SB MT
NMT re-ranking gives the best
T2S SB MT* : Rule augmentation + Parameter modification + OOV handling NMT** : Modified NMT using target char. seg. with B/I
|
T2S SB MT* only
T2S SB MT* + NMT** re-ranking
NMT only outperform T2S SB MT
NMT re-ranking gives the best
T2S SB MT* : Rule augmentation + Parameter modification + OOV handling NMT** : Modified NMT using target char. seg. with B/I
|
[] |
GEM-SciDuet-train-85#paper-1220#slide-13
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
],
"paper_content_text": [
]
}
|
{
"paper_header_number": [
],
"paper_header_content": [
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-13
|
Outline of KOR JPN MT Task
|
Korean N-best Re-ranking Decoding NMT sentence
|
Korean N-best Re-ranking Decoding NMT sentence
|
[] |
GEM-SciDuet-train-85#paper-1220#slide-14
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determining the baseline model, we first performed comparative experiments with the phrasebased, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder use a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematic expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique, that ignores the suggestion word that has 3 or longer edit distance and selects one that has 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the nbest results.",
"We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.",
"We do not use the step for final submission.",
"To boosting the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator choose the Char PB model, otherwise, the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ (1), $\overleftarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overleftarrow{h}_{t+1})$ (2), $\overrightarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overrightarrow{h}_{t-1})$ (3), where $h_t$ is a hidden state of the encoder, $x_t$ is a one-hot encoded vector indicating one of the words in the source vocabulary, $W^s_{we}$ is a weight matrix for the word embedding of the source language, and $f_{GRU}$ is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time step, the decoder of NMT computes the context vector $c_t$ as a convex combination of the hidden states $(h_1, \ldots, h_T)$ with the alignment weights $\alpha_{t1}, \ldots, \alpha_{tT}$: $c_t = \sum_{i=1}^{T} \alpha_{ti} h_i$ (4), $\alpha_{ti} = \exp(e_{ti}) / \sum_{j=1}^{T} \exp(e_{tj})$ (5), $e_{ti} = f_{FFNN}(z_{t-1}, h_i, y_{t-1})$ (6), where $f_{FFNN}$ is a feedforward neural network with a single hidden layer, $z_{t-1}$ is the previous hidden state of the decoder, and $y_{t-1}$ is the previously generated target word (one-hot encoded vector).",
"A new hidden state $z_t$ of the decoder, which uses a GRU, is computed from $z_{t-1}$, $y_{t-1}$, and $c_t$: $z_t = f_{GRU}(y_{t-1}, z_{t-1}, c_t)$ (7). The probability of the next target word $y_t$ is then computed by $p(y_t \mid y_{<t}, x) = y_t^{\top} f_{softmax}\{W_{\tilde{z}y} \tilde{z}_t + W_{zy} z_t + W_{cy} c_t + W_{yy} (W^t_{we} y_{t-1}) + b_y\}$ (8) with $\tilde{z}_t = f_{ReLU}(W_{z\tilde{z}} z_t)$ (9), where $f_{softmax}$ is a softmax function, $f_{ReLU}$ is a rectified linear unit (ReLU), $W^t_{we}$ is a weight matrix for the word embedding of the target language, and $b_y$ is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods are described in section 2.4. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Although this is not a serious problem, we need to improve the decoding speed by pruning the rule table or using an incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a strong synergy of T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"The human evaluation makes clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"In future work, we will try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-14
|
Phrase based MT system
|
Translation model & Language model
1 million sentence pairs (JPO corpus)
use Mecab-ko and Juman for tokenization
tokenize Korean and Japanese into char-level
Max-phrase length : 10
|
Translation model & Language model
1 million sentence pairs (JPO corpus)
use Mecab-ko and Juman for tokenization
tokenize Korean and Japanese into char-level
Max-phrase length : 10
|
[] |
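The character-level preprocessing described on the slide above (tokenizing Korean and Japanese into characters for the phrase-based model) can be sketched as below. This is a minimal illustration; the naive whitespace word tokenizer is only a stand-in for MeCab-ko/Juman, not the system's actual code:

```python
# Minimal sketch of the slide's preprocessing: the character-level system
# splits a sentence into single characters (whitespace dropped), while the
# word-level system relies on a real morphological tokenizer (MeCab-ko for
# Korean, Juman for Japanese); split() is a naive stand-in here.

def char_tokenize(sentence: str) -> list:
    """Split a sentence into single characters, dropping whitespace."""
    return [ch for ch in sentence if not ch.isspace()]

def word_tokenize(sentence: str) -> list:
    """Naive whitespace tokenizer standing in for MeCab-ko / Juman."""
    return sentence.split()
```

Character-level phrases of up to length 10 (the slide's max-phrase length) would then be extracted from these sequences.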
GEM-SciDuet-train-85#paper-1220#slide-15
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determine the baseline model, we first performed comparative experiments with the phrase-based, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder uses a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that consist only of capitals, numbers or symbols, because they are likely to be abbreviations or mathematical expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique that ignores any suggestion word at an edit distance of 3 or more. 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in the JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translations in terms of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10-gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text; however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the n-best results.",
"We found that this post-processing step can improve the BLEU score with low-order language models, but not with high-order language models.",
"We therefore did not use this step for the final submission.",
"To boost performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator chooses the Char PB model; otherwise, it chooses the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ (1), $\overleftarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overleftarrow{h}_{t+1})$ (2), $\overrightarrow{h}_t = f_{GRU}(W^s_{we} x_t, \overrightarrow{h}_{t-1})$ (3), where $h_t$ is a hidden state of the encoder, $x_t$ is a one-hot encoded vector indicating one of the words in the source vocabulary, $W^s_{we}$ is a weight matrix for the word embedding of the source language, and $f_{GRU}$ is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time step, the decoder of NMT computes the context vector $c_t$ as a convex combination of the hidden states $(h_1, \ldots, h_T)$ with the alignment weights $\alpha_{t1}, \ldots, \alpha_{tT}$: $c_t = \sum_{i=1}^{T} \alpha_{ti} h_i$ (4), $\alpha_{ti} = \exp(e_{ti}) / \sum_{j=1}^{T} \exp(e_{tj})$ (5), $e_{ti} = f_{FFNN}(z_{t-1}, h_i, y_{t-1})$ (6), where $f_{FFNN}$ is a feedforward neural network with a single hidden layer, $z_{t-1}$ is the previous hidden state of the decoder, and $y_{t-1}$ is the previously generated target word (one-hot encoded vector).",
"A new hidden state $z_t$ of the decoder, which uses a GRU, is computed from $z_{t-1}$, $y_{t-1}$, and $c_t$: $z_t = f_{GRU}(y_{t-1}, z_{t-1}, c_t)$ (7). The probability of the next target word $y_t$ is then computed by $p(y_t \mid y_{<t}, x) = y_t^{\top} f_{softmax}\{W_{\tilde{z}y} \tilde{z}_t + W_{zy} z_t + W_{cy} c_t + W_{yy} (W^t_{we} y_{t-1}) + b_y\}$ (8) with $\tilde{z}_t = f_{ReLU}(W_{z\tilde{z}} z_t)$ (9), where $f_{softmax}$ is a softmax function, $f_{ReLU}$ is a rectified linear unit (ReLU), $W^t_{we}$ is a weight matrix for the word embedding of the target language, and $b_y$ is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods are described in section 2.4. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Although this is not a serious problem, we need to improve the decoding speed by pruning the rule table or using an incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved a strong synergy of T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"The human evaluation makes clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"In future work, we will try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-15
|
Combination of PBMT NMT
|
Choose the result of char-based PB if there is OOV in word-level
Choose the result of word-based PB, otherwise
Re-rank simply by NMT score
|
Choose the result of char-based PB if there is OOV in word-level
Choose the result of word-based PB, otherwise
Re-rank simply by NMT score
|
[] |
GEM-SciDuet-train-85#paper-1220#slide-16
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123
],
"paper_content_text": [
"Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .",
"We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).",
"Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).",
"We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.",
"We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .",
"We give detailed explanations of each SMT system in section 2 and section 3.",
"We describe our NMT model in section 4.",
"2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.",
"We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.",
"We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We filtered out the sentences that have 100 or more tokens from training data.",
"Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.",
"We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.",
"After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.",
"Tree-to-string Syntax-based SMT To determine the baseline model, we first performed comparative experiments with the phrase-based, hierarchical phrase-based and syntax-based models.",
"As a result, we chose the tree-to-string syntax-based model.",
"The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.",
"The tree-to-string model was proposed by Huang (2006) and Liu (2006) .",
"It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.",
"The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.",
"We use synchronous context free grammar (SCFG) rules.",
"In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .",
"Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .",
"Thus it is required to augment tree-to-string translation rules.",
"The rule augmentation method allows the training system to extract more rules by modifying parse trees.",
"Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.",
"NP+VP.",
"We limit the maximum span of each rule to 40 tokens in the rule extraction process.",
"The tree-to-string decoder uses a chart parsing algorithm with cube pruning proposed by Chiang (2005) .",
"Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.",
"The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.",
"The latter is to automatically detect and correct spell errors in an input sentence.",
"We give a detailed description of spell error correction in section 2.4.1.",
"English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.",
"We discovered a lot of spell errors among OOV words that appear in English scientific text.",
"We introduce English spell correction for reducing OOV words in input sentences.",
"We developed our spell corrector by using Aspell 2 .",
"For detecting a spell error, we skip words that consist only of capitals, numbers or symbols, because they are likely to be abbreviations or mathematical expressions.",
"Then we regard words detected by Aspell as spell error words.",
"For correcting spell error, we use only top-3 suggestion words from Aspell.",
"We find that a large gap between an original word and its suggestion word makes wrong correction.",
"To avoid excessive correction, we introduce a gap thresholding technique that ignores any suggestion word at an edit distance of 3 or more. 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in the JPO corpus for training phrase tables and NMT models.",
"We also used Japanese part of the corpus for training the 5-gram language model.",
"We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.",
"We did not filter out any sentences.",
"Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.",
"We used Juman 4 for tokenizing a Japanese sentence.",
"We did not perform part-of-speech tagging for both languages.",
"Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.",
"For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.",
"We use word-level tokenization for the word-based system.",
"We found that setting the distortion limit to zero yields better translations in terms of both BLEU and human evaluation.",
"We use the 5-gram language model.",
"We use character-level tokenization for character-based system.",
"We use the 10-gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.",
"We found that the character-level system does not suffer from tokenization error and out-ofvocabulary issue.",
"The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.",
"Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.",
"It generally produces better translations than a table-based transliteration.",
"Moreover, we tested jamo-level tokenization 5 for Korean text; however, the preliminary test did not produce effective results.",
"We also investigated a parentheses imbalance problem.",
"We solved the problem by filtering out parentheses-imbalanced translations from the n-best results.",
"We found that this post-processing step can improve the BLEU score with low-order language models, but not with high-order language models.",
"We therefore did not use this step for the final submission.",
"To boost performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).",
"If there are one or more OOV words in an input sentence, our translator chooses the Char PB model; otherwise, it chooses the Word PB model.",
"Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.",
"Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).",
"An NMT system is a single neural network that reads a source sentence and generates its translation.",
"Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.",
"NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.",
"First, NMT uses minimal domain knowledge.",
"Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.",
"Third, the NMT system removes the need to store explicit phrase tables and language models.",
"Lastly, the decoder of an NMT system is easy to implement.",
"Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.",
"In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.",
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house NMT software, which uses an attention mechanism as recently proposed by Bahdanau et al. (2015).",
"The encoder of NMT is a bi-directional recurrent neural network such that h_t = [→h_t ; ←h_t] (1) ←h_t = f_GRU(W^s_we x_t, ←h_{t+1}) (2) →h_t = f_GRU(W^s_we x_t, →h_{t-1}) (3) where h_t is a hidden state of the encoder, x_t is a one-hot encoded vector indicating one of the words in the source vocabulary, W^s_we is a weight matrix for the word embedding of the source language, and f_GRU is a gated recurrent unit (GRU) (Cho et al., 2014).",
"At each time step, the decoder of NMT computes the context vector c_t as a convex combination of the hidden states (h_1, ..., h_T) with the alignment weights α_1, ..., α_T: c_t = Σ_{i=1}^{T} α_ti h_i (4) α_ti = exp(e_ti) / Σ_{j=1}^{T} exp(e_tj) (5) e_ti = f_FFNN(z_{t-1}, h_i, y_{t-1}) (6) where f_FFNN is a feedforward neural network with a single hidden layer, z_{t-1} is a previous hidden state of the decoder, and y_{t-1} is the previously generated target word (one-hot encoded vector).",
"A new hidden state z_t of the decoder, which uses GRU, is computed based on z_{t-1}, y_{t-1}, and c_t: z_t = f_GRU(y_{t-1}, z_{t-1}, c_t) (7) The probability of the next target word y_t is then computed by p(y_t|y_<t, x) = y_t^T f_softmax{W_z̃y z̃_t + W_zy z_t + W_cy c_t + W_yy (W^t_we y_{t-1}) + b_y} (8) z̃_t = f_ReLU(W_zz̃ z_t) (9) where f_softmax is a softmax function, f_ReLU is a rectified linear unit (ReLU), W^t_we is a weight matrix for the word embedding of the target language, and b_y is a target word bias.",
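Equations (4)-(5) amount to a softmax over alignment scores followed by a weighted sum of encoder states; a sketch in NumPy (the scoring network of Eq. (6) is assumed to have produced `scores` already):

```python
import numpy as np

def attention_context(scores, H):
    """Given alignment scores e_t1..e_tT and encoder states H (T x d),
    return the weights alpha_t (Eq. 5) and context vector c_t (Eq. 4)."""
    e = scores - scores.max()              # shift for numerical stability
    alpha = np.exp(e) / np.exp(e).sum()    # softmax over source positions
    return alpha, alpha @ H                # convex combination of states
```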
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in a preliminary experiment.",
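The BI character encoding can be sketched as follows (`to_bi_chars` is an illustrative name; the paper only gives the 結/B, 果/I example):

```python
def to_bi_chars(words):
    """Encode target words as characters tagged B (begin) or I (inside),
    e.g. ['結果'] -> ['結/B', '果/I'], so word boundaries survive
    character-level decoding."""
    chars = []
    for word in words:
        chars.extend(ch + ("/B" if i == 0 else "/I") for i, ch in enumerate(word))
    return chars
```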
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores in this section are reported on the official test data, test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string syntax-based model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods are described in section 2.4. The decoding time of the rule-augmented tree-to-string SMT is about 1.3 seconds per sentence on our 12-core machine.",
"Although this is not a severe problem, we need to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010).",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000-best translations in Ko-Ja.",
"The final output is the 1-best translation selected by considering only the NMT score.",
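The reranking step then reduces to an argmax over the SMT n-best list under the NMT score; a sketch, with `nmt_score` standing in for the trained model's log-probability:

```python
def rerank_nbest(nbest, nmt_score):
    """Return the single translation from the SMT n-best list that the
    NMT model scores highest (only the NMT score is considered)."""
    return max(nbest, key=nmt_score)
```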
"NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result suggests that NMT is particularly effective for language pairs with a long linguistic distance.",
"Moreover, the reranking system achieved great synergy between T2S/PBMT and NMT in both tasks, even though \"NMT only\" is not effective in Ko-Ja.",
"The human evaluation makes it clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"In future work, we will try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-16
|
Experimental Results
|
Word PB + Character PB
Character-level PB is comparable to Word-level PB
Combined system has the best result
|
Word PB + Character PB
Character-level PB is comparable to Word-level PB
Combined system has the best result
|
[] |
GEM-SciDuet-train-85#paper-1220#slide-17
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
|
GEM-SciDuet-train-85#paper-1220#slide-17
|
Experimental Results w/ human evaluation
|
Word PB + Character PB
NMT only doesn't outperform PBMT
NMT re-ranking gives the best
|
Word PB + Character PB
NMT only doesn't outperform PBMT
NMT re-ranking gives the best
|
[] |
GEM-SciDuet-train-85#paper-1220#slide-18
|
1220
|
NAVER Machine Translation System for WAT 2015
|
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
|
{
"paper_content_id": [
],
"paper_content_text": [
"The proposed approach removes the need to replace rare words with the unknown word symbol.",
"Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .",
"Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.",
"(2015) .",
"The encoder of NMT is a bi-directional recurrent neural network such that h t = [ h t ; h t ] (1) h t = f GRU (W s we x t , h t+1 ) (2) h t = f GRU (W s we x t , h t−1 ) (3) where h t is a hidden state of the encoder, x t is a one-hot encoded vector indicating one of the words in the source vocabulary, W s we is a weight matrix for the word embedding of the source language, and f GRU is a gated recurrent unit (GRU) (Cho et al., 2014) .",
"At each time, the decoder of NMT computes the context vector c t as a convex combination of the hidden states (h 1 ,.",
".",
".",
",h T ) with the alignment weights α 1 ,.",
".",
".",
",α T : c t = T i=1 α ti h i (4) α ti = exp(e tj ) T j=1 exp(e tj ) (5) e ti = f F F N N (z t−1 , h i , y t−1 ) (6) where f F F N N is a feedforward neural network with a single hidden layer, z t−1 is a previous hidden state of the decoder, and y t−1 is a previous generated target word (one-hot encoded vector).",
"A new hidden state z t of the decoder which uses GRU is computed based on z t−1 , y t−1 , and c t : z t = f GRU (y t−1 , z t−1 , c t ) (7) The probability of the next target word y t is then computed by (8) p(y t |y <t , x) = y T t f sof tmax {W z y z t + W zy z t + W cy c t + W yy (W t we y t−1 ) + b y } z t = f ReLU (W zz z t ) (9) where f sof tmax is a softmax function, f ReLU is a rectified linear unit (ReLU), W t we is a weight matrix for the word embedding of the target language, and b y is a target word bias.",
"Settings We constructed the source word vocabulary with the most common words in the source language corpora.",
"For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.",
"The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.",
"The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.",
"We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.",
"Each model was optimized using stochastic gradient descent (SGD).",
"We did not use dropout.",
"Training was early-stopped to maximize the performance on the development set measured by BLEU.",
"Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.",
"Table 1 shows the evaluation results of our En-Ja traditional SMT system.",
"The first row in the table indicates the baseline of the tree-to-string systaxbased model.",
"The second row shows the system that reflects the tree modification described in section 2.3.",
"The augmentation method drastically increased both the number of rules and the BLEU score.",
"Our OOV handling methods described in The decoding time of the rule-augmented treeto-string SMT is about 1.3 seconds per a sentence in our 12-core machine.",
"Even though it is not a terrible problem, we are required to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010) .",
"Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.",
"We obtained the best result in the combination of two phrase-based SMT systems.",
"Table 3 shows effects of our NMT model.",
"\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.",
"In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.",
"\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.",
"The last row is the combined system that reranks T2S/PBMT n-best translations by NMT.",
"Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000best translations in Ko-Ja.",
"The final output is 1best translation selected by considering only NMT score.",
"En-Ja SMT Ko-Ja SMT NMT NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.",
"This result means that NMT produces a strong effect in the language pair with long linguistic distance.",
"Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both task, even if \"NMT only\" is not effective in Ko-Ja.",
"From the human evaluation, we can be clear that our NMT model produces successful results.",
"Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.",
"We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.",
"Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.",
"For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.",
"We also plan to develop and evaluate the NMT system in other language pairs."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.4.1",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"5",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Training data",
"Language Analyzer",
"Tree-to-string Syntax-based SMT",
"Handling Out-of-Vocabulary",
"English Spell Correction",
"Training data",
"Language Analyzer",
"Phrase-based SMT",
"Neural Machine Translation",
"Model",
"Settings",
"Experimental Results",
"NMT",
"Conclusion"
]
}
|
GEM-SciDuet-train-85#paper-1220#slide-18
|
Summary
|
We apply different MT models for each task
T2S/PB SMT + NMT re-ranking is best in both tasks
Char-level tokenization of the target language is useful for NMT
It speeds up training
It gives a better BLEU score
The BI representation of char-level tokenization is also helpful for NMT
In the future, we will apply our method to other language pairs
|
We apply different MT models for each task
T2S/PB SMT + NMT re-ranking is best in both tasks
Char-level tokenization of the target language is useful for NMT
It speeds up training
It gives a better BLEU score
The BI representation of char-level tokenization is also helpful for NMT
In the future, we will apply our method to other language pairs
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-0
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < 1 |S| s i ∈S count(s i ) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ 1 , ... ,ȳ N | D) = N k=1 p(ȳ k | s 1 , ... , s N ) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTIONheadings task, we encode each heading as lowercase bag-of-words vectorz ∈ {0, 1} |Z| , such that z (i) = 1 iff the i-th word in Z is contained in the heading, for example,z k= {gene, therapy, treatment}.",
"We then use a sigmoid activation function: y k = softmax(W ye e k + W ye e k + b y ) z k = sigmoid(W ze e k + W ze e k + b z ) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = N k=1 |Z| i=1 log p(z (i) k | x 1...N ; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log 1 + exp(γ(m + − score + (x))) + log 1 + exp(γ(m − + score − (x))) We calculate the positive term of the loss by taking all scores of correct labels y + into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score + (x) = 1 |y + | y∈y + s θ (x) (y) score − (x) = arg max y∈y − s θ (x) (y) (11) Here, s θ (x) (y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e 1...N we construct a sequence of deviations d 1...N by calculating the stepwise difference using cosine distance: d k = cos(e k−1 , e k ) = e k−1 · e k e k−1 e k (12) Finally we apply the sequence d 1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d k−1 < d k > d k+1 ; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e and e and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d k = cos( e k−1 , e k ) · cos( e k , e k+1 ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: y j = 1 | S j | s i ∈S jŷ i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences y k , can be applied to the WIKISECTION task to predict coherently labeled sections T j = S j ,ŷ j .",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We 8 Excluding all documents contained in the test sets.",
"select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, Classification and segmentation on plain text C99 37.4 n/a n/a 42.7 n/a n/a 36.8 n/a n/a 38.3 n/a n/a TopicTiling 43.4 n/a n/a 45.4 n/a n/a 30.5 n/a n/a 41.3 n/a n/a TextSeg 24.3 n/a n/a 35.7 n/a n/a 19.3 n/a n/a 27.5 n/a n/a PV>T Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('.",
".",
".",
"').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model is giving nearly perfect segmentation using the bidirectional strategy, it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-0
|
Challenge understand the topics and structure of a document
|
How can we represent a document with respect to the author's emphasis? Symptoms
(e.g. semantic class labels) structural information [Ag09, Gla16]
(e.g. coherent passages) in latent vector space [Le14, Bha16]
(i.e. distributional embedding) required for TDT, QA IR
|
How can we represent a document with respect to the author's emphasis? Symptoms
(e.g. semantic class labels) structural information [Ag09, Gla16]
(e.g. coherent passages) in latent vector space [Le14, Bha16]
(i.e. distributional embedding) required for TDT, QA IR
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-1
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al. (2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al. (2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al. (2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al. (2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al. (2017), the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al. (2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al. (2017), the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al. (2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al. (2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al. (2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al. (2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al. (2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, ..., s_N] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T_1, ..., T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document Trichomoniasis (https://en.wikipedia.org/w/index.php?title=Trichomoniasis&oldid=814235024), the sequence of topic labels is y_1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with the most outgoing edges as representative label, in our example therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) · Σ_{s_i ∈ S} count(s_i) (Jiang, 2012).",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
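The head/tail division rule used for pruning can be sketched in Python; the label names and counts below are invented for illustration:

```python
from collections import Counter

def prune_to_head_labels(label_counts):
    """Head/tail division rule: keep labels whose frequency is at least
    the mean frequency over all labels; the rest fall into 'other'."""
    mean_count = sum(label_counts.values()) / len(label_counts)
    return {label for label, c in label_counts.items() if c >= mean_count}

# hypothetical cluster counts for illustration
counts = Counter({"treatment": 500, "symptom": 450, "cause": 400,
                  "gene therapy": 3, "folk remedies": 1})
head_labels = prune_to_head_labels(counts)
```

The frequent labels survive the cut, while rare tail headings are collapsed into a single other class.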
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"[Figure 2 caption: During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level. The embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.]",
"Based on the task described in Section 3, we aim to detect M sections T_1...M in a document D and assign topic labels y_j = topic(S_j), where j = 1, ..., M.",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, ..., N.",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = Π_{k=1}^{N} p(ȳ_k | s_1, ..., s_N) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task, WIKISECTION-headings, to capture ambiguity in a heading. We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as a multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, ..., s_N; Θ) and L̄(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, ..., s_N; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2: sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al. (2016), word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1}^|V| be the indicator vector, such that I(w)^(i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^|V| as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w) (3)",
"Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m: x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A_{hash_i(w)} (4)",
"We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
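A minimal sketch of the Bloom filter encoding in Eq. (4), using Python's built-in hash with per-index seeds as a stand-in for the k independent hash functions:

```python
import numpy as np

def bloom_embedding(words, m=4096, k=5):
    """Compressed Bloom embedding: every word sets k positions of an
    m-dimensional array; the sentence vector is the sum over words."""
    x = np.zeros(m, dtype=np.int64)
    for w in words:
        for i in range(k):
            # (i, w) pairs emulate k pseudo-independent hash functions
            x[hash((i, w)) % m] += 1
    return x

vec = bloom_embedding(["topic", "segmentation", "topic"])
```

Each of the 3 word tokens contributes k = 5 increments, so the entries of `vec` sum to 15 regardless of hash collisions.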
"Sentence Embeddings.",
"We use the strategy of Arora et al. (2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013).",
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^−4, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} (α/(α + p(w))) v_w ; x_emb(s) = v_s − u u^T v_s (5)",
"Topic Embedding",
"We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
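The probability-weighted sentence embedding of Eq. (5) can be sketched as follows; the toy vocabulary, random word vectors, and uniform word probabilities are invented for illustration, and the first principal component is taken from the uncentered embedding matrix via SVD:

```python
import numpy as np

def sif_embeddings(sentences, word_vecs, word_probs, alpha=1e-4):
    """Probability-weighted average of word vectors per sentence, minus
    the projection onto the first principal component u (Eq. 5)."""
    V = np.stack([
        np.mean([alpha / (alpha + word_probs[w]) * word_vecs[w] for w in s],
                axis=0)
        for s in sentences
    ])
    _, _, vt = np.linalg.svd(V, full_matrices=False)
    u = vt[0]                        # first principal component
    return V - np.outer(V @ u, u)    # x_emb(s) = v_s - u u^T v_s

rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=8) for w in "abcd"}
word_probs = {w: 0.25 for w in "abcd"}
emb = sif_embeddings([["a", "b"], ["c", "d"], ["a", "c"]],
                     word_vecs, word_probs)
```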
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) so that each direction is optimized separately: we separate network parameters Θ^fw and Θ^bw for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_1...N are calculated from the context-adjusted hidden states h_k^fw and h_k^bw of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: h_k^fw = f_LSTM(x_k, h_{k−1}^fw, Θ^fw) ; h_k^bw = f_LSTM(x_k, h_{k+1}^bw, Θ^bw) ; e_k^fw = tanh(W_eh^fw h_k^fw + b_e^fw) ; e_k^bw = tanh(W_eh^bw h_k^bw + b_e^bw) (7) Now, a simple concatenation of the embeddings e_k = e_k^fw ⊕ e_k^bw can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encoding ȳ ∈ {0, 1}^|Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as lowercase bag-of-words vector z̄ ∈ {0, 1}^|Z|, such that z̄^(i) = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W_ye^fw e_k^fw + W_ye^bw e_k^bw + b_y) ; ẑ_k = sigmoid(W_ze^fw e_k^fw + W_ze^bw e_k^bw + b_z) (8)",
"Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z̄_k^(i) | x_1...N; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al. (2015).",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m⁺ − score⁺(x)))) + log(1 + exp(γ(m⁻ + score⁻(x)))) (10)",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score⁺(x) = (1/|y⁺|) Σ_{y ∈ y⁺} s_θ(x)^(y) ; score⁻(x) = max_{y ∈ y⁻} s_θ(x)^(y) (11) Here, s_θ(x)^(y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
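A sketch of the ranking loss in Eqs. (10)-(11) for a single example, assuming a score vector over all labels and a list of positive label indices:

```python
import numpy as np

def ranking_loss(scores, positive, gamma=2.0, m_pos=2.5, m_neg=0.5):
    """Pairwise ranking loss: average score of the correct labels versus
    the score of the most offending incorrect label (Eqs. 10-11)."""
    pos = np.zeros(len(scores), dtype=bool)
    pos[positive] = True
    score_pos = scores[pos].mean()   # score+(x): mean over correct labels
    score_neg = scores[~pos].max()   # score-(x): most offending negative
    return (np.log1p(np.exp(gamma * (m_pos - score_pos)))
            + np.log1p(np.exp(gamma * (m_neg + score_neg))))

loss = ranking_loss(np.array([3.0, -1.0, 0.5, -2.0]), positive=[0])
```

Averaging the positive scores avoids a too-strong push on a single label, while only the hardest negative contributes to the second term.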
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S_j = s_k; j = k = 1, ..., N) and then merge all sections that share at least one label in the top-2 predictions.",
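The maximum label (max) baseline can be sketched as follows; for simplicity this version merges a sentence into the previous section whenever its top-2 labels overlap with those of the directly preceding sentence:

```python
def max_label_segments(top2_labels):
    """Start with one section per sentence, then merge adjacent sections
    that share at least one top-2 predicted label.
    Returns (start, end) sentence index pairs, end exclusive."""
    boundaries = [0]
    for k in range(1, len(top2_labels)):
        if not set(top2_labels[k]) & set(top2_labels[k - 1]):
            boundaries.append(k)  # no shared label -> section break
    boundaries.append(len(top2_labels))
    return list(zip(boundaries[:-1], boundaries[1:]))

segments = max_label_segments([
    ("cause", "symptom"), ("symptom", "diagnosis"),
    ("treatment", "prevention"), ("treatment", "epidemiology"),
])
```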
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e_1, ..., e_N].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = UΣW^T using singular value decomposition and then project E on the D principal components: E_D = E W_D.",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e'_1...N we construct a sequence of deviations d_1...N by calculating the stepwise difference using cosine distance: d_k = cos(e'_{k−1}, e'_k) = (e'_{k−1} · e'_k) / (‖e'_{k−1}‖ ‖e'_k‖) (12) Finally, we apply the sequence d_1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4), i.e., all k where d_{k−1} < d_k > d_{k+1}; k = 1 ... N in our discrete case.",
"We use these positions to start a new section.",
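The embedding deviation pipeline (PCA, Gaussian smoothing, stepwise cosine distance, local maxima) can be sketched with NumPy. Here 1 − cosine similarity is used as the distance, and the synthetic two-topic embedding matrix in the usage example is invented for illustration:

```python
import numpy as np

def emd_segmentation(E, D=16, sigma=2.5):
    """Embedding deviation (emd): project embeddings E (N x d) onto D
    principal components, smooth along the sentence axis with a Gaussian
    kernel, take stepwise cosine distances (Eq. 12) and start new
    sections at local maxima of the deviation sequence."""
    Ec = E - E.mean(axis=0)
    _, _, Wt = np.linalg.svd(Ec, full_matrices=False)   # PCA via SVD
    ED = Ec @ Wt[:min(D, Wt.shape[0])].T
    r = int(3 * sigma) + 1
    kernel = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    kernel /= kernel.sum()
    ED = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, ED)
    a, b = ED[:-1], ED[1:]
    sim = (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                            * np.linalg.norm(b, axis=1))
    d = 1.0 - sim  # d[k]: deviation between sentences k and k+1
    return [k + 1 for k in range(1, len(d) - 1)
            if d[k - 1] < d[k] > d[k + 1]]

# 20 synthetic sentence embeddings: two clusters with a topic shift at 10
rng = np.random.default_rng(1)
E = np.vstack([rng.normal(0, 0.05, (10, 32)) + np.eye(32)[0],
               rng.normal(0, 0.05, (10, 32)) + np.eye(32)[1]])
cuts = emd_segmentation(E, D=8, sigma=1.0)
```

The bidirectional variant (bemd, Eq. 13) replaces `d` with the geometric mean of the forward and backward deviations computed from the two directional embeddings.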
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al. (2017), who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e'^fw and e'^bw and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: d_k = sqrt( cos(e'^fw_{k−1}, e'^fw_k) · cos(e'^bw_k, e'^bw_{k+1}) ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: ȳ_j = (1/|S_j|) Σ_{s_i ∈ S_j} ŷ_i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences ȳ_k, can be applied to the WIKISECTION task to predict coherently labeled sections T_j = ⟨S_j, ŷ_j⟩.",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al. (2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al. (2009), which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer; SEC>H is the multi-label variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We select the pairs by matching their positions using maximum boundary overlap.",
"(Footnote 8: Excluding all documents contained in the test sets.)",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
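The windowed P_k error score described above can be sketched as follows, representing a segmentation by its sentence-level boundary positions:

```python
def pk_score(ref_bounds, hyp_bounds, n, k):
    """P_k (Beeferman et al., 1999): probability that a probe of two
    positions k sentences apart is classified inconsistently (same vs.
    different segment) by reference and hypothesis. Lower is better."""
    def seg_id(bounds, i):
        # segment index = number of boundaries at or before position i
        return sum(1 for b in bounds if b <= i)
    probes = n - k
    errors = sum(
        (seg_id(ref_bounds, i) == seg_id(ref_bounds, i + k))
        != (seg_id(hyp_bounds, i) == seg_id(hyp_bounds, i + k))
        for i in range(probes))
    return errors / probes

perfect = pk_score([5], [5], n=10, k=2)  # identical segmentations
missed = pk_score([5], [], n=10, k=2)    # hypothesis misses the boundary
```

Only the two probe windows straddling the missed boundary disagree, so the second call yields an error of 2/8 = 0.25.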
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"[Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings. Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.]",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-1
|
Task split a document into coherent sections with topic labels
|
We aim to detect topics in a document that are expressed by the author as a coherent sequence of sentences (e.g., a passage or book chapter).
|
We aim to detect topics in a document that are expressed by the author as a coherent sequence of sentences (e.g., a passage or book chapter).
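The task stated above, splitting a document into coherent sections and labeling each with a topic, can be sketched as a minimal data structure. All names and the toy example below are illustrative, not taken from the paper's code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Section:
    sentences: List[str]  # coherent sentence span S_j
    topic: str            # topic label y_j

# A toy segmentation of a four-sentence disease article.
doc = ["Itching is common.", "It spreads via contact.",
       "A swab test confirms it.", "Antibiotics cure it."]
sections = [Section(doc[0:1], "symptom"),
            Section(doc[1:2], "cause"),
            Section(doc[2:3], "diagnosis"),
            Section(doc[3:4], "treatment")]
```

Every sentence belongs to exactly one section, so the section spans partition the document.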
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-2
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
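The Bloom filter sentence encoding mentioned in the abstract can be sketched as follows; the parameters m = 4096 and k = 5 match the paper, while the whitespace tokenizer and the salted-MD5 hash family are illustrative assumptions, not the authors' implementation.

```python
import hashlib

def bloom_embedding(sentence: str, m: int = 4096, k: int = 5) -> list:
    """Sum of k-hash Bloom bit patterns over the words of a sentence.

    Each word sets k positions in an m-dimensional count vector via
    k independent hash functions (here derived by salting MD5 with i).
    """
    x = [0] * m
    for w in sentence.lower().split():
        for i in range(k):
            digest = hashlib.md5(f"{i}:{w}".encode("utf-8")).hexdigest()
            x[int(digest, 16) % m] += 1
    return x
```

Summing counts (rather than OR-ing bits as in a classic Bloom filter) keeps word-frequency information while compressing the vocabulary-sized bag-of-words input to a fixed m dimensions.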
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018), but we show that they are also complementary (the data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/WikiSection):",
"Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document Trichomoniasis",
"(https://en.wikipedia.org/w/index.php?title=Trichomoniasis&oldid=814235024),",
"the sequence of topic labels is y_{1...M} = [symptom, cause, diagnosis, prevention, treatment, complication, epidemiology].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label",
"(e.g., disease.cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) Σ_{s_i ∈ S} count(s_i) (Jiang, 2012).",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = ∏_{k=1}^{N} p(ȳ_k | s_1, ..., s_N) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading: we follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, ..., s_N; Θ) (section level) and L(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, ..., s_N; Θ) (sentence level) (2) Our SECTOR architecture consists of four stages, shown in Figure 2: sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0,1}^{|V|} be the indicator vector, such that I(w)^{(i)} = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^{|V|} as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0,1}^m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m: x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A_{hash_i(w)} (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^{-4}, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} (α / (α + p(w))) v_w, x_emb(s) = v_s − u u^T v_s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) accordingly. Note that we separate the network parameters for the forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_{1...N} are calculated from the context-adjusted hidden states h_k of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: forward: h_k = f_LSTM(x_k, h_{k−1}, Θ_fw), e_k^fw = tanh(W_eh h_k + b_e); backward: h_k = f_LSTM(x_k, h_{k+1}, Θ_bw), e_k^bw = tanh(W_eh h_k + b_e) (7) Now, a simple concatenation of the forward and backward embeddings e_k = e_k^fw ⊕ e_k^bw can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as lowercase bag-of-words vector z̄ ∈ {0,1}^{|Z|}, such that z̄^{(i)} = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W_ye^fw e_k^fw + W_ye^bw e_k^bw + b_y), ẑ_k = sigmoid(W_ze^fw e_k^fw + W_ze^bw e_k^bw + b_z) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z_k^{(i)} | x_{1...N}; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m⁺ − score⁺(x)))) + log(1 + exp(γ(m⁻ + score⁻(x)))) (10) We calculate the positive term of the loss by taking all scores of correct labels y⁺ into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y⁻ among all incorrect class labels.",
"score⁺(x) = (1/|y⁺|) Σ_{y∈y⁺} s_θ(x)^{(y)}, score⁻(x) = argmax_{y∈y⁻} s_θ(x)^{(y)} (11) Here, s_θ(x)^{(y)} denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = UΣW^T using singular value decomposition and then project E on the D principal components E_D = E W_D.",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e_{1...N} we construct a sequence of deviations d_{1...N} by calculating the stepwise difference using cosine distance: d_k = cos(e_{k−1}, e_k) = (e_{k−1} · e_k) / (‖e_{k−1}‖ ‖e_k‖) (12) Finally, we apply the sequence d_{1...N} with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4), i.e.",
"all k where d_{k−1} < d_k > d_{k+1}; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e^fw and e^bw and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d_k = sqrt( cos(e^fw_{k−1}, e^fw_k) · cos(e^bw_k, e^bw_{k+1}) ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: ŷ_j = (1/|S_j|) Σ_{s_i ∈ S_j} ŷ_i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences ȳ_k, can be applied to the WIKISECTION task to predict coherently labeled sections T_j = ⟨S_j, ŷ_j⟩.",
"Evaluation. We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SECTOR to common text segmentation methods as baselines: C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012), as well as the state-of-the-art TextSeg segmenter (Koshorek et al., 2018).",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer; SEC>H is the multi-label variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P_k error score (Beeferman et al., 1999), which calculates the probability of a false boundary in a window of size k; lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"(Footnote 8: Excluding all documents contained in the test sets.) We",
"select the pairs by matching their positions using maximum boundary overlap.",
"We report micro-averaged F_1 score for single-label and Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results. SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F_1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F_1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P_k, respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P_k and 2.6 points F_1 compared with the experiments with prior newline segmentation.",
"The third experiment reveals that our segmentation method in isolation almost reaches state of the art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F_1 and English models declined by 0.4 points compared with Bloom filters.",
"(Table rows, classification and segmentation on plain text: C99 P_k 37.4 / 42.7 / 36.8 / 38.3; TopicTiling 43.4 / 45.4 / 30.5 / 41.3; TextSeg 24.3 / 35.7 / 19.3 / 27.5; PV>T ...) Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared with the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P_k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('.",
".",
".",
"').",
"Discussion and Model Insights. SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work. We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F_1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-2
|
WikiSection Wiki authors provide topics as section headings
|
en_disease de_disease en_city de_city
headings headings headings headings
27 topics 25 topics 30 topics 27 topics
|
en_disease de_disease en_city de_city
headings headings headings headings
27 topics 25 topics 30 topics 27 topics
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-3
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction. Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F_1, or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018), but we show that they are also complementary.",
"(Footnote 2: The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/WikiSection.) Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work. The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s_{1...N} contain a coherent sequence of local topics e_{1...N}.",
"(3) The task is to segment the document into coherent sections S_{1...M} and (4) to classify each section with a topic label y_{1...M}.",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set. We start with a definition of the WIKISECTION machine reading task shown in Figure 1.",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, .",
".",
".",
", s_N] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s_k, we assume a distribution of local topics e_k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T_1, .",
".",
".",
", T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document (Footnote 3: https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024)",
"Trichomoniasis, the sequence of topic labels is y_{1...M} = [symptom, cause, diagnosis, prevention, treatment, complication, epidemiology].",
"WikiSection Data Set. For the evaluation of this task, we created WIKISECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing. To obtain plain document text, we used Wikiextractor, split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match each heading h ∈ H to its corresponding synsets S_h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) Σ_{s_i∈S} count(s_i) (Jiang, 2012).",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
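A minimal sketch of the head/tail pruning rule above — a synset survives only if its count reaches the mean count over all synsets (the counts below are made up):

```python
def prune_synsets(counts):
    """Keep synsets with count >= mean count; the tail is mapped to 'other'."""
    mean = sum(counts.values()) / len(counts)
    return {s for s, c in counts.items() if c >= mean}

kept = prune_synsets({"symptom": 120, "cause": 90, "treatment": 80, "folklore": 2})
```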
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model",
"We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"[Figure 2: During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level. The embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.]",
"Based on the task described in Section 3, we aim to detect M sections T_{1...M} in a document D and assign topic labels y_j = topic(S_j), where j = 1, ..., M.",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, ..., N.",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = ∏_{k=1}^{N} p(ȳ_k | s_1, ..., s_N)   (1).",
"We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading.",
"We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as a multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, ..., s_N; Θ) and L(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, ..., s_N; Θ)   (2).",
"Our SECTOR architecture consists of four stages, shown in Figure 2: sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding",
"The first stage of our SECTOR model transforms each sentence s_k from plain text into a fixed-size sentence vector x_k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1}^{|V|} be the indicator vector, such that I(w)_{(i)} = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^{|V|} as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w)   (3).",
"Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1}^m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m: x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A_{hash_i(w)}   (4).",
"We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
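A sketch of the Bloom embedding in Equation (4) with m = 4096 and k = 5; the md5-based hash family is an assumption of this sketch, since the exact hash functions are not given here:

```python
import hashlib

M, K = 4096, 5  # filter size and number of hash functions from the text

def bloom_embed(sentence, m=M, k=K):
    """Sum the per-word Bloom bit arrays A over all words of a sentence."""
    vec = [0] * m
    for word in sentence.split():
        # per-word bit array: k hash positions, duplicates collapse to one bit
        bits = {int(hashlib.md5(f"{i}:{word}".encode()).hexdigest(), 16) % m
                for i in range(k)}
        for b in bits:
            vec[b] += 1
    return vec

v = bloom_embed("cognitive behavioral therapy")
```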
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
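This composition (Equation (5): a probability-weighted word-vector average minus the projection onto the first principal component) can be sketched with numpy; the word vectors and word probabilities below are random stand-ins for the pre-trained word2vec model:

```python
import numpy as np

ALPHA = 1e-4  # alpha from the text

def sif_embeddings(sentences, word_vecs, word_prob):
    """Weighted average of word vectors per sentence, then remove the
    projection onto the first principal component u (Arora et al., 2017)."""
    V = np.stack([
        np.mean([ALPHA / (ALPHA + word_prob[w]) * word_vecs[w] for w in s], axis=0)
        for s in sentences
    ])
    u = np.linalg.svd(V, full_matrices=False)[2][0]  # first principal component
    return V - np.outer(V @ u, u)  # x_emb = v_s - u u^T v_s

rng = np.random.default_rng(0)
vocab = ["fever", "cough", "gene", "therapy"]
word_vecs = {w: rng.normal(size=8) for w in vocab}
word_prob = {w: 0.25 for w in vocab}
emb = sif_embeddings([["fever", "cough"], ["gene", "therapy"]], word_vecs, word_prob)
```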
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^{−4}, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} [α / (α + p(w))] v_w ,  x_emb(s) = v_s − u uᵀ v_s   (5).",
"Topic Embedding",
"We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a \"bottleneck\" layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) accordingly.",
"Note that we separate network parameters Θ→ and Θ← for the forward and backward directions of the LSTM, and tie the remaining parameters Θ′ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_{1...N} are calculated from the context-adjusted hidden states h_k of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: h→_k = f_LSTM(x_k, h→_{k−1}, Θ→); h←_k = f_LSTM(x_k, h←_{k+1}, Θ←); e→_k = tanh(W→_eh h→_k + b→_e); e←_k = tanh(W←_eh h←_k + b←_e)   (7).",
"Now, a simple concatenation of the embeddings e_k = e→_k ⊕ e←_k can be used as topic vector by downstream applications.",
"Topic Classification",
"The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encoding ȳ ∈ {0, 1}^{|Y|} of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as lowercase bag-of-words vector z̄ ∈ {0, 1}^{|Z|}, such that z̄_{(i)} = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W→_ye e→_k + W←_ye e←_k + b_y);  ẑ_k = sigmoid(W→_ze e→_k + W←_ze e←_k + b_z)   (8).",
"Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z^{(i)}_k | x_{1...N}; Θ)   (9).",
"For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al. (2015).",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m+ − score+(x)))) + log(1 + exp(γ(m− + score−(x))))   (10).",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score+(x) = (1/|y+|) Σ_{y∈y+} s_θ(x)_{(y)};  score−(x) = max_{y∈y−} s_θ(x)_{(y)}   (11).",
"Here, s_θ(x)_{(y)} denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2 and margins m+ = 2.5 and m− = 0.5.",
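A pure-Python sketch of the ranking loss in Equations (10) and (11) with γ = 2, m+ = 2.5, m− = 0.5; the label scores below are arbitrary illustrations:

```python
import math

GAMMA, M_POS, M_NEG = 2.0, 2.5, 0.5  # hyperparameters from the text

def ranking_loss(scores, positive):
    """Average score of the correct labels vs. the most offending
    incorrect label (dos Santos et al., 2015)."""
    score_pos = sum(scores[y] for y in positive) / len(positive)
    score_neg = max(s for y, s in enumerate(scores) if y not in positive)
    return (math.log1p(math.exp(GAMMA * (M_POS - score_pos)))
            + math.log1p(math.exp(GAMMA * (M_NEG + score_neg))))

loss = ranking_loss([3.0, -1.0, 0.5, -2.0], positive={0, 2})
```

A well-separated prediction (high scores on correct labels, low on incorrect ones) yields a lower loss than an inverted one.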
"Topic Segmentation",
"In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
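The newline (NL) baseline with same-label merging can be sketched as follows (section texts and labels are illustrative):

```python
def merge_adjacent(sections):
    """Merge adjacent sections that received the same topic label."""
    merged = []
    for text, label in sections:
        if merged and merged[-1][1] == label:
            merged[-1] = (merged[-1][0] + "\n" + text, label)  # fuse with previous
        else:
            merged.append((text, label))
    return merged

result = merge_adjacent([("s1", "symptom"), ("s2", "symptom"), ("s3", "cause")])
```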
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S_j = s_k; j = k = 1, ..., N) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e_1, ..., e_N].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = UΣWᵀ using singular value decomposition and then project E on the D principal components E_D = E W_D.",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e'_{1...N} we construct a sequence of deviations d_{1...N} by calculating the stepwise difference using cosine distance: d_k = cos(e'_{k−1}, e'_k) = (e'_{k−1} · e'_k) / (‖e'_{k−1}‖ ‖e'_k‖)   (12).",
"Finally we apply the sequence d_{1...N} with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4), i.e. all k where d_{k−1} < d_k > d_{k+1}; k = 1, ..., N in our discrete case.",
"We use these positions to start a new section.",
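The emd pipeline (PCA reduction to D = 16, Gaussian smoothing with σ = 2.5, stepwise cosine deviation, local maxima) can be sketched with numpy; in this sketch the deviation is taken as one minus cosine similarity, and the synthetic embedding matrix is an assumption for demonstration only:

```python
import numpy as np

def segment_boundaries(E, D=16, sigma=2.5):
    """Return sentence indices where a new section starts, from a topic
    embedding matrix E of shape (N, d)."""
    E = E - E.mean(axis=0)
    # PCA via SVD: project onto the first D principal components
    W = np.linalg.svd(E, full_matrices=False)[2][:min(D, E.shape[1])]
    ED = E @ W.T
    # Gaussian smoothing along the sentence axis
    r = int(3 * sigma)
    kern = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    kern /= kern.sum()
    ES = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, ED)
    # stepwise deviation d_k between neighbouring smoothed embeddings
    a, b = ES[:-1], ES[1:]
    sim = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    d = 1.0 - sim
    # local maxima of d mark the fastest topic movement
    return [k for k in range(1, len(d) - 1) if d[k - 1] < d[k] > d[k + 1]]

rng = np.random.default_rng(1)
u, v = np.eye(8)[0], np.eye(8)[1]  # two orthogonal synthetic "topics"
E = np.vstack([u + 0.01 * rng.normal(size=8) for _ in range(10)]
              + [v + 0.01 * rng.normal(size=8) for _ in range(10)])
bounds = segment_boundaries(E)
```

On this synthetic document the detected boundary falls near the topic switch at sentence 10.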
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e→ and e← and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: d_k = sqrt( cos(e→_{k−1}, e→_k) · cos(e←_k, e←_{k+1}) )   (13).",
"After segmentation, we assign each segment the mean class distribution of all contained sentences: ŷ_j = (1/|S_j|) Σ_{s_i ∈ S_j} ŷ_i   (14).",
"Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences ȳ_k, can be applied to the WIKISECTION task to predict coherently labeled sections T_j = (S_j, ŷ_j).",
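The per-section aggregation in Equation (14) — each segment receives the mean class distribution of its sentences — in a minimal sketch (the probabilities and the boundary position are illustrative):

```python
def section_labels(sentence_probs, boundaries):
    """Average the per-sentence class distributions within each segment."""
    starts, ends = [0] + boundaries, boundaries + [len(sentence_probs)]
    sections = []
    for s, e in zip(starts, ends):
        rows = sentence_probs[s:e]
        sections.append([sum(col) / len(rows) for col in zip(*rows)])
    return sections

secs = section_labels([[0.9, 0.1], [0.7, 0.3], [0.2, 0.8], [0.4, 0.6]], [2])
```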
"Evaluation",
"We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
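A sketch of the P_k computation: slide a window of size k over sentence positions and count disagreements on whether both window ends lie in the same segment. Representing segmentations as per-sentence segment IDs is an encoding chosen for this sketch:

```python
def pk_score(reference, hypothesis, k=None):
    """P_k error (Beeferman et al., 1999): probability that a window of
    size k disagrees about 'same segment or not'. Lower is better."""
    n = len(reference)
    if k is None:
        # half of the average reference segment length, as in the text
        n_segs = 1 + sum(reference[i] != reference[i + 1] for i in range(n - 1))
        k = max(1, round(n / n_segs / 2))
    errors = sum(
        (reference[i] == reference[i + k]) != (hypothesis[i] == hypothesis[i + k])
        for i in range(n - k)
    )
    return errors / (n - k)

err = pk_score([0] * 5 + [1] * 5, [0] * 5 + [1] * 5)
```

A perfect segmentation scores 0; collapsing everything into one segment is penalized only around the missed boundary.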
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"[Footnote 8: Excluding all documents contained in the test sets.]",
"We select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results",
"SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"[Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings; the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.]",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights",
"SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work",
"We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-3
|
SECTOR sequential prediction approach
|
Transform a document of N sentences s1...N into N topic distributions y1...N
Predict M sections T1...M based on coherence of the networks weights
Assign section-level topic labels y1...M
The number of sections is unknown!
|
Transform a document of N sentences s1...N into N topic distributions y1...N
Predict M sections T1...M based on coherence of the networks weights
Assign section-level topic labels y1...M
The number of sections is unknown!
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-4
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. (Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/SECTOR.)
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction",
"Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F1, or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < 1 |S| s i ∈S count(s i ) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ 1 , ... ,ȳ N | D) = N k=1 p(ȳ k | s 1 , ... , s N ) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTIONheadings task, we encode each heading as lowercase bag-of-words vectorz ∈ {0, 1} |Z| , such that z (i) = 1 iff the i-th word in Z is contained in the heading, for example,z k= {gene, therapy, treatment}.",
"We then use a sigmoid activation function: y k = softmax(W ye e k + W ye e k + b y ) z k = sigmoid(W ze e k + W ze e k + b z ) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = N k=1 |Z| i=1 log p(z (i) k | x 1...N ; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log 1 + exp(γ(m + − score + (x))) + log 1 + exp(γ(m − + score − (x))) We calculate the positive term of the loss by taking all scores of correct labels y + into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score + (x) = 1 |y + | y∈y + s θ (x) (y) score − (x) = arg max y∈y − s θ (x) (y) (11) Here, s θ (x) (y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e 1...N we construct a sequence of deviations d 1...N by calculating the stepwise difference using cosine distance: d k = cos(e k−1 , e k ) = e k−1 · e k e k−1 e k (12) Finally we apply the sequence d 1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d k−1 < d k > d k+1 ; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e and e and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d k = cos( e k−1 , e k ) · cos( e k , e k+1 ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: y j = 1 | S j | s i ∈S jŷ i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences y k , can be applied to the WIKISECTION task to predict coherently labeled sections T j = S j ,ŷ j .",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We 8 Excluding all documents contained in the test sets.",
"select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, Classification and segmentation on plain text C99 37.4 n/a n/a 42.7 n/a n/a 36.8 n/a n/a 38.3 n/a n/a TopicTiling 43.4 n/a n/a 45.4 n/a n/a 30.5 n/a n/a 41.3 n/a n/a TextSeg 24.3 n/a n/a 35.7 n/a n/a 19.3 n/a n/a 27.5 n/a n/a PV>T Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('.",
".",
".",
"').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model is giving nearly perfect segmentation using the bidirectional strategy, it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-4
|
Network architecture 0 4 Overview
|
Objective: maximize the log likelihood of model parameters per document on sentence-level
Requires the entire document as input
Focus on sharp distinction at topic shifts
|
Objective: maximize the log likelihood of model parameters per document on sentence-level
Requires the entire document as input
Focus on sharp distinction at topic shifts
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-5
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < 1 |S| s i ∈S count(s i ) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ 1 , ... ,ȳ N | D) = N k=1 p(ȳ k | s 1 , ... , s N ) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encoding $\bar{y} \in \{0, 1\}^{|Y|}$ of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as a lowercase bag-of-words vector $\bar{z} \in \{0, 1\}^{|Z|}$, such that $z^{(i)} = 1$ iff the i-th word in Z is contained in the heading, for example, $\bar{z}_k =$ {gene, therapy, treatment}.",
"We then use a sigmoid activation function: $\hat{y}_k = \mathrm{softmax}(W_{ye} \overrightarrow{e_k} + W_{ye} \overleftarrow{e_k} + b_y)$, $\hat{z}_k = \mathrm{sigmoid}(W_{ze} \overrightarrow{e_k} + W_{ze} \overleftarrow{e_k} + b_z)$ (8). Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: $L(\Theta) = \sum_{k=1}^{N} \sum_{i=1}^{|Z|} \log p(z_k^{(i)} \mid x_{1 \ldots N}; \Theta)$ (9). For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: $L = \log\bigl(1 + \exp(\gamma(m^+ - \mathrm{score}^+(x)))\bigr) + \log\bigl(1 + \exp(\gamma(m^- + \mathrm{score}^-(x)))\bigr)$ (10). We calculate the positive term of the loss by taking all scores of correct labels $y^+$ into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"$\mathrm{score}^+(x) = \frac{1}{|y^+|} \sum_{y \in y^+} s_\theta(x)^{(y)}$, $\mathrm{score}^-(x) = \max_{y \in y^-} s_\theta(x)^{(y)}$ (11). Here, $s_\theta(x)^{(y)}$ denotes the score of label $y$ for input $x$.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
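With these settings, the ranking loss of Equations (10)-(11) can be sketched as follows. This is a simplified scalar version over a dict of label scores, not the authors' implementation:

```python
import numpy as np

def ranking_loss(scores, positives, gamma=2.0, m_pos=2.5, m_neg=0.5):
    # average score over all correct labels (positive term)
    score_pos = np.mean([scores[y] for y in positives])
    # score of the most offending incorrect label (negative term)
    score_neg = max(s for y, s in scores.items() if y not in positives)
    return (np.log1p(np.exp(gamma * (m_pos - score_pos)))
            + np.log1p(np.exp(gamma * (m_neg + score_neg))))
```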
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., $S_j = s_k$; $j = k = 1, \ldots, N$) and then merge all sections that share at least one label in the top-2 predictions.",
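A minimal sketch of this max strategy, assuming a hypothetical list of top-2 label sets per sentence (the transitive union of merged labels is a simplification of the merge criterion):

```python
def merge_max_label(sentence_top2):
    # sections as (start, end, labels); merge adjacent sections that
    # share at least one label among the top-2 predictions
    sections = []
    for i, labels in enumerate(sentence_top2):
        if sections and sections[-1][2] & set(labels):
            start, _, merged = sections[-1]
            sections[-1] = (start, i + 1, merged | set(labels))
        else:
            sections.append((i, i + 1, set(labels)))
    return sections
```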
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix $E = [e_1, \ldots, e_N]$.",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of $E$ to $D$ dimensions using PCA, that is, we solve $E = U \Sigma W^T$ using singular value decomposition and then project $E$ on the $D$ principal components: $E_D = E W_D$.",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix $E'_D$ by convolution with a Gaussian kernel with variance $\sigma^2$.",
"From the reduced and smoothed embedding vectors $e'_{1 \ldots N}$ we construct a sequence of deviations $d_{1 \ldots N}$ by calculating the stepwise difference using cosine distance: $d_k = \cos(e'_{k-1}, e'_k) = \frac{e'_{k-1} \cdot e'_k}{\lVert e'_{k-1} \rVert \, \lVert e'_k \rVert}$ (12). Finally we apply the sequence $d_{1 \ldots N}$ with parameters $D = 16$ and $\sigma = 2.5$ to locate the spots of fastest movement (see Figure 4), i.e., all $k$ where $d_{k-1} < d_k > d_{k+1}$; $k = 1 \ldots N$ in our discrete case.",
"We use these positions to start a new section.",
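The full emd pipeline (PCA, Gaussian smoothing, stepwise cosine deviation, local extrema) can be sketched as follows. This reimplementation treats the deviation as cosine distance (1 − similarity) so that section starts fall on local maxima, and simplifies border handling; it is an assumption-laden sketch, not the authors' code:

```python
import numpy as np

def emd_segmentation(E, D=16, sigma=2.5):
    # PCA via SVD: project mean-centered E onto the top-D principal components
    E = E - E.mean(axis=0)
    Wt = np.linalg.svd(E, full_matrices=False)[2]
    Ed = E @ Wt[:min(D, Wt.shape[0])].T
    # Gaussian smoothing along the sentence axis
    r = int(3 * sigma)
    g = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    g /= g.sum()
    Es = np.stack([np.convolve(Ed[:, j], g, mode="same")
                   for j in range(Ed.shape[1])], axis=1)
    # stepwise cosine distance between consecutive smoothed embeddings
    a, b = Es[:-1], Es[1:]
    d = 1.0 - (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                                * np.linalg.norm(b, axis=1) + 1e-9)
    # local maxima of the deviation start a new section
    starts = [k + 1 for k in range(1, len(d) - 1)
              if d[k] > d[k - 1] and d[k] > d[k + 1]]
    return [0] + starts
```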
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings $\overrightarrow{e}'$ and $\overleftarrow{e}'$ and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: $d_k = \sqrt{\cos(\overrightarrow{e}'_{k-1}, \overrightarrow{e}'_k) \cdot \cos(\overleftarrow{e}'_k, \overleftarrow{e}'_{k+1})}$ (13). After segmentation, we assign each segment the mean class distribution of all contained sentences: $\bar{y}_j = \frac{1}{|S_j|} \sum_{s_i \in S_j} \hat{y}_i$ (14). Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences $\hat{y}_k$, can be applied to the WIKISECTION task to predict coherently labeled sections $T_j = \langle S_j, \bar{y}_j \rangle$.",
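Equation (13) reduces to a geometric mean of two cosine sequences. A sketch over precomputed smoothed forward/backward embedding matrices follows; the clipping guards against negative products, an assumption not spelled out in the text:

```python
import numpy as np

def bemd(e_fwd, e_bwd):
    # row-wise cosine between consecutive embeddings
    def cos(a, b):
        return (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                                 * np.linalg.norm(b, axis=1))
    fwd = cos(e_fwd[:-1], e_fwd[1:])   # cos(e_fwd[k-1], e_fwd[k])
    bwd = cos(e_bwd[:-1], e_bwd[1:])   # cos(e_bwd[k], e_bwd[k+1])
    # geometric mean of the forward and backward difference
    return np.sqrt(np.clip(fwd * bwd, 0.0, None))
```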
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
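As a rough idea of the (+bloom) option, a Bloom-filter bag-of-words encoding hashes each word into a few positions of a fixed-size bit vector, giving a compact input compared to a full-vocabulary one-hot encoding. Sizes and hash scheme below are illustrative guesses, not the paper's configuration:

```python
import hashlib

def bloom_encode(words, size=4096, hashes=5):
    # each word sets `hashes` bits, derived from salted MD5 digests
    bits = [0] * size
    for w in words:
        for i in range(hashes):
            h = hashlib.md5(f"{i}:{w}".encode()).hexdigest()
            bits[int(h, 16) % size] = 1
    return bits
```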
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
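The P_k computation described here can be sketched as follows, with boundaries given as sets of sentence indices that start a new segment. This is an illustrative version; published evaluations typically rely on library implementations such as NLTK's segmentation metrics:

```python
def pk_score(ref_bounds, hyp_bounds, n, k=None):
    # map each sentence index to a segment id
    def seg_ids(bounds):
        ids, cur = [], 0
        for i in range(n):
            if i in bounds:
                cur += 1
            ids.append(cur)
        return ids
    ref, hyp = seg_ids(set(ref_bounds)), seg_ids(set(hyp_bounds))
    if k is None:
        # half of the average reference segment length
        k = max(1, round(n / ((max(ref) + 1) * 2)))
    # fraction of windows of size k that disagree on same-segment status
    errors = sum((ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
                 for i in range(n - k))
    return errors / (n - k)
```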
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We select the pairs by matching their positions using maximum boundary overlap.",
"[Footnote 8: Excluding all documents contained in the test sets.]",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"[Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings. Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary; rows for classification and segmentation on plain text include C99, TopicTiling, TextSeg, and PV>T; numeric cells omitted.]",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-5
|
Network architecture 1 4 Sentence encoding
|
Input: Vector representation of a full document
Split text into sequence of sentences s1...N
Encode sentence vectors x1...N using
Bag-of-words (~56k english words)
Use sentences as time-steps
|
Input: Vector representation of a full document
Split text into sequence of sentences s1...N
Encode sentence vectors x1...N using
Bag-of-words (~56k english words)
Use sentences as time-steps
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-6
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018), but we show that they are also complementary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"[Footnote 2: The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/WikiSection.]",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al., 1999) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"A further approach tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, ..., s_N] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T_1, ..., T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
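"The clustering step described above can be sketched in plain Python; connected components stand in for the Newman community detection used in the paper, and the heading-to-synset mapping is an illustrative toy input:",

```python
from collections import defaultdict

def synset_clusters(heading_synsets):
    """Crude stand-in for the synset clustering step: link synsets that are
    matched by a common heading lemma, then take connected components.
    (The paper uses Newman community detection, which can further split
    dense clusters; components are the simplest approximation.)"""
    adj = defaultdict(set)
    for synsets in heading_synsets.values():
        for a in synsets:
            for b in synsets:
                if a != b:
                    adj[a].add(b)
            adj[a]  # make sure isolated synsets also appear as nodes
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```

"Headings that share a synset end up in the same cluster, so synonymous headings collapse onto one normalized topic.",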
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) Σ_{s_i ∈ S} count(s_i) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
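"The head/tail division rule used for pruning amounts to a mean-frequency filter over synset counts; a minimal sketch (an illustrative reading, not the authors' implementation):",

```python
def head_tail_prune(counts):
    """Head/tail division rule (Jiang, 2012): drop every synset whose
    frequency falls below the mean frequency over all synsets."""
    mean = sum(counts.values()) / len(counts)
    return {s for s, c in counts.items() if c >= mean}
```

"With the heavy-tailed heading distribution described earlier, this keeps only the small head of frequent synsets as normalized topic labels.",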
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen heading-label assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"[Figure 2 caption] During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level.",
"The embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.",
"Based on the task described in Section 3, we aim to detect M sections T_{1...M} in a document D and assign topic labels y_j = topic(S_j), where j = 1, ..., M.",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, ..., N.",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = ∏_{k=1}^{N} p(ȳ_k | s_1, ..., s_N) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task, WIKISECTION-headings, to capture ambiguity in a heading: we follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as a multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, ..., s_N; Θ) and L̄(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, ..., s_N; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1}^{|V|} be the indicator vector, such that I(w)^{(i)} = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^{|V|} as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1}^m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m : x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A_{hash_i(w)} (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
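"A minimal sketch of this compression with the stated parameters (m = 4096, k = 5); the text does not specify the hash family, so salted MD5 digests stand in for the k independent hash functions:",

```python
import hashlib

def bloom_embedding(sentence, m=4096, k=5):
    """Compressed sentence encoding (Eq. 4): sum of k-hash Bloom bit
    arrays per word, with the paper's parameters m = 4096 and k = 5."""
    vec = [0] * m
    for word in sentence.lower().split():
        for i in range(k):
            # k independent hash functions, simulated by salting one digest
            digest = hashlib.md5(f"{i}:{word}".encode()).hexdigest()
            vec[int(digest, 16) % m] += 1
    return vec
```

"Each word contributes exactly k increments, so the vector stays integer-valued and much smaller than a |V|-dimensional bag of words.",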
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^{-4}, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} (α / (α + p(w))) v_w ; x_emb(s) = v_s − u uᵀ v_s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
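"The sentence-embedding composition of Eq. (5) can be sketched with NumPy; the word vectors, unigram probabilities, and tokenized sentences here are illustrative stand-ins, not the paper's pretrained word2vec model:",

```python
import numpy as np

def sif_embeddings(sentences, vecs, probs, alpha=1e-4):
    """Arora et al. (2017): probability-weighted average of word vectors,
    minus the projection onto the first principal component (Eq. 5).
    `sentences` is a list of token lists, `vecs` maps word -> vector,
    `probs` maps word -> unigram probability p(w)."""
    V = np.stack([
        np.mean([alpha / (alpha + probs[w]) * vecs[w] for w in s], axis=0)
        for s in sentences
    ])
    # first right singular vector = first principal component u
    u = np.linalg.svd(V, full_matrices=False)[2][0]
    return V - V @ np.outer(u, u)
```

"Rare words (small p(w)) receive weights near 1 while frequent words are down-weighted, and removing the common component strips the direction shared by all sentences.",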
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) accordingly. Note that we separate network parameters Θ^{fw} and Θ^{bw} for the forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_{1...N} are calculated from the context-adjusted hidden states h_k of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: h_k^{fw} = f_LSTM(x_k, h_{k−1}^{fw}, Θ^{fw}); h_k^{bw} = f_LSTM(x_k, h_{k+1}^{bw}, Θ^{bw}); e_k^{fw} = tanh(W_eh h_k^{fw} + b_e); e_k^{bw} = tanh(W_eh h_k^{bw} + b_e) (7) Now, a simple concatenation of the embeddings e_k = e_k^{fw} ⊕ e_k^{bw} can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encoding ȳ ∈ {0, 1}^{|Y|} of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as lowercase bag-of-words vector z̄ ∈ {0, 1}^{|Z|}, such that z̄^{(i)} = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W_ye e_k^{fw} + W_ye e_k^{bw} + b_y); ẑ_k = sigmoid(W_ze e_k^{fw} + W_ze e_k^{bw} + b_z) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z_k^{(i)} | x_{1...N}; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m⁺ − score⁺(x)))) + log(1 + exp(γ(m⁻ + score⁻(x))))",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score⁺(x) = (1/|y⁺|) Σ_{y∈y⁺} s_θ(x)^{(y)} ; score⁻(x) = argmax_{y∈y⁻} s_θ(x)^{(y)} (11) Here, s_θ(x)^{(y)} denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
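"With the stated hyperparameters (γ = 2, m⁺ = 2.5, m⁻ = 0.5), the per-example loss can be sketched as follows; the label scores in the usage example are illustrative:",

```python
import math

def ranking_loss(scores, positives, gamma=2.0, m_pos=2.5, m_neg=0.5):
    """Logistic pairwise ranking loss (dos Santos et al., 2015): the
    positive term averages the scores of all correct labels, the negative
    term takes only the most offending incorrect label."""
    s_pos = sum(scores[y] for y in positives) / len(positives)
    s_neg = max(s for y, s in scores.items() if y not in positives)
    return (math.log(1 + math.exp(gamma * (m_pos - s_pos)))
            + math.log(1 + math.exp(gamma * (m_neg + s_neg))))
```

"A model that scores correct labels above m⁺ and incorrect labels below −m⁻ drives both terms toward zero; swapping the scores inflates the loss sharply.",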
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S_j = s_k; j = k = 1, ..., N) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e_1, ..., e_N].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U Σ Wᵀ using singular value decomposition and then project E on the D principal components E_D = E W_D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e'_{1...N} we construct a sequence of deviations d_{1...N} by calculating the stepwise difference using cosine distance: d_k = cos(e'_{k−1}, e'_k) = (e'_{k−1} · e'_k) / (‖e'_{k−1}‖ ‖e'_k‖) (12) Finally, we apply the sequence d_{1...N} with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d_{k−1} < d_k > d_{k+1}; k = 1 ... N in our discrete case.",
"We use these positions to start a new section.",
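"A sketch of this segmentation pipeline in NumPy; mean-centering before the SVD, the exact Gaussian kernel width, and reading the deviation as 1 − cos are assumptions where the extracted text leaves details implicit:",

```python
import numpy as np

def emd_boundaries(E, D=16, sigma=2.5):
    """Embedding deviation (emd): PCA-reduce the topic embedding matrix,
    smooth each dimension with a Gaussian kernel, then start a new section
    at every local maximum of the stepwise cosine deviation."""
    E = np.asarray(E, dtype=float)
    C = E - E.mean(axis=0)
    # project onto the first D principal components
    W = np.linalg.svd(C, full_matrices=False)[2][:min(D, C.shape[1])]
    P = C @ W.T
    # Gaussian smoothing along the sentence axis
    r = int(3 * sigma)
    g = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    S = np.apply_along_axis(lambda col: np.convolve(col, g, mode="same"), 0, P)
    # stepwise deviation between consecutive sentences (1 - cosine similarity)
    d = [1.0 - S[i - 1] @ S[i] /
         (np.linalg.norm(S[i - 1]) * np.linalg.norm(S[i]) + 1e-9)
         for i in range(1, len(S))]
    # local maxima of the deviation mark the fastest topic movement
    return [i + 1 for i in range(1, len(d) - 1) if d[i - 1] < d[i] > d[i + 1]]
```

"On a toy matrix with two homogeneous clusters of sentence embeddings, the single detected boundary falls exactly at the cluster transition.",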
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e^{fw} and e^{bw} and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: d_k = sqrt( cos(e^{fw}_{k−1}, e^{fw}_k) · cos(e^{bw}_k, e^{bw}_{k+1}) ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: ŷ_j = (1/|S_j|) Σ_{s_i ∈ S_j} ŷ_i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences ȳ_k, can be applied to the WIKISECTION task to predict coherently labeled sections T_j = ⟨S_j, ŷ_j⟩.",
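"Equation (14) amounts to averaging the sentence-level class distributions inside each predicted segment and reporting the argmax topic per segment; a minimal sketch with illustrative boundaries and probabilities:",

```python
def label_segments(boundaries, sent_probs):
    """Eq. (14): assign each section the mean class distribution of its
    sentences, then report the argmax topic per section.
    `boundaries` lists the sentence indices where a new section starts,
    `sent_probs` holds one class distribution per sentence."""
    edges = [0] + list(boundaries) + [len(sent_probs)]
    labels = []
    for start, end in zip(edges, edges[1:]):
        mean = [sum(p[c] for p in sent_probs[start:end]) / (end - start)
                for c in range(len(sent_probs[0]))]
        labels.append(max(range(len(mean)), key=mean.__getitem__))
    return labels
```

"Averaging over a whole section smooths out individual sentence misclassifications before the final label is chosen.",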
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
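"The P_k computation can be sketched as follows, representing each segmentation as a set of boundary positions; the exact window handling at the document edges is an assumption of this sketch:",

```python
def pk_score(ref, hyp, n, k):
    """P_k (Beeferman et al., 1999): probability that a probe window of
    width k disagrees on whether its two ends lie in the same segment.
    `ref` and `hyp` are sets of boundary positions (a boundary at b means
    a new segment starts at sentence b), n is the number of sentences.
    Lower is better."""
    disagreements = 0
    for i in range(n - k):
        same_ref = not any(i < b <= i + k for b in ref)
        same_hyp = not any(i < b <= i + k for b in hyp)
        disagreements += same_ref != same_hyp
    return disagreements / (n - k)
```

"A perfect hypothesis scores 0; missing or spurious boundaries raise the score in proportion to the windows they affect.",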
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"[Footnote 8: Excluding all documents contained in the test sets.]",
"We select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"[Table residue omitted: rows of segmentation scores (P_k) for C99, TopicTiling, TextSeg, and PV>T under 'Classification and segmentation on plain text'.]",
"Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-6
|
Network architecture 2 4 Topic embedding
|
Encoder: Bidirectional Long Short-Term Memory
independent fw and bw parameters helps to sharpen left/right context embedding layer captures latent topics
2x256 LSTM cells, 128 dim embedding layer,
16 docs per batch, 0.5 dropout, ADAM opt. Sebastian Arnold
|
Encoder: Bidirectional Long Short-Term Memory
independent fw and bw parameters helps to sharpen left/right context embedding layer captures latent topics
2x256 LSTM cells, 128 dim embedding layer,
16 docs per batch, 0.5 dropout, ADAM opt. Sebastian Arnold
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-7
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < 1 |S| s i ∈S count(s i ) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ 1 , ... ,ȳ N | D) = N k=1 p(ȳ k | s 1 , ... , s N ) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTIONheadings task, we encode each heading as lowercase bag-of-words vectorz ∈ {0, 1} |Z| , such that z (i) = 1 iff the i-th word in Z is contained in the heading, for example,z k= {gene, therapy, treatment}.",
"We then use a sigmoid activation function: y k = softmax(W ye e k + W ye e k + b y ) z k = sigmoid(W ze e k + W ze e k + b z ) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = N k=1 |Z| i=1 log p(z (i) k | x 1...N ; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log 1 + exp(γ(m + − score + (x))) + log 1 + exp(γ(m − + score − (x))) We calculate the positive term of the loss by taking all scores of correct labels y + into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score + (x) = 1 |y + | y∈y + s θ (x) (y) score − (x) = arg max y∈y − s θ (x) (y) (11) Here, s θ (x) (y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e 1...N we construct a sequence of deviations d 1...N by calculating the stepwise difference using cosine distance: d k = cos(e k−1 , e k ) = e k−1 · e k e k−1 e k (12) Finally we apply the sequence d 1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d k−1 < d k > d k+1 ; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e and e and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d k = cos( e k−1 , e k ) · cos( e k , e k+1 ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: y j = 1 | S j | s i ∈S jŷ i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences y k , can be applied to the WIKISECTION task to predict coherently labeled sections T j = S j ,ŷ j .",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model, which uses a softmax activation output layer; SEC>H is the multi-label variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We select the pairs by matching their positions using maximum boundary overlap.",
"(Footnote 8: Excluding all documents contained in the test sets.)",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiment reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"[Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings. Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary. Segmentation-only baselines on plain text (P_k): C99 37.4 / 42.7 / 36.8 / 38.3; TopicTiling 43.4 / 45.4 / 30.5 / 41.3; TextSeg 24.3 / 35.7 / 19.3 / 27.5.]",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
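The quality measures described in the record above include the probabilistic P_k error score (Beeferman et al., 1999), with k set to half of the average segment length. A minimal sketch of that computation, assuming boundaries are given as sentence indices where a new segment starts; the function name `pk_score` and this encoding are illustrative, not the authors' code:

```python
def pk_score(ref_bounds, hyp_bounds, n, k=None):
    """P_k: probability that a sliding window of size k disagrees with the
    reference on whether its two endpoints lie in the same segment."""
    def seg_id(bounds, i):
        # segment index of position i = number of boundaries at or before i
        return sum(1 for b in bounds if b <= i)

    if k is None:
        # half of the average reference segment length
        k = max(1, n // (2 * (len(ref_bounds) + 1)))

    errors = 0
    total = n - k
    for i in range(total):
        same_ref = seg_id(ref_bounds, i) == seg_id(ref_bounds, i + k)
        same_hyp = seg_id(hyp_bounds, i) == seg_id(hyp_bounds, i + k)
        if same_ref != same_hyp:
            errors += 1
    return errors / total
```

Lower values mean better segmentation; a perfect hypothesis scores 0.0.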
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-7
|
Network architecture 3 4 Topic classification
|
Human-readable topic labels for 2 Tasks:
|
Human-readable topic labels for 2 Tasks:
|
[] |
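The record above repeatedly refers to the embedding deviation segmentation strategy (emd/bemd): boundaries are placed where consecutive sentence-level topic embeddings change most. The following is a simplified sketch of that idea only; the Euclidean distance and the mean-plus-one-standard-deviation peak threshold are my own illustrative assumptions, not the paper's exact formulation:

```python
import math

def embedding_deviation_segments(embeddings, threshold_std=1.0):
    """Place segment boundaries at local peaks of the deviation between
    consecutive sentence embeddings (simplified sketch)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    devs = [dist(embeddings[i], embeddings[i + 1])
            for i in range(len(embeddings) - 1)]
    mean = sum(devs) / len(devs)
    std = math.sqrt(sum((d - mean) ** 2 for d in devs) / len(devs))
    cut = mean + threshold_std * std  # illustrative threshold

    # boundary index i + 1 means: a new segment starts at sentence i + 1
    return [i + 1 for i, d in enumerate(devs)
            if d > cut
            and (i == 0 or d >= devs[i - 1])
            and (i == len(devs) - 1 or d >= devs[i + 1])]
```

On a toy sequence of three identical embeddings followed by three different ones, the single boundary falls between sentences 2 and 3.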
GEM-SciDuet-train-86#paper-1223#slide-9
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
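The synset clustering described later in this record prunes topic labels with a head/tail division rule, dropping every synset whose count falls below the mean count over all synsets. A tiny illustrative sketch of that rule; the function name `head_labels` and the example counts are my own:

```python
def head_labels(counts):
    """Keep only labels whose frequency reaches the mean frequency,
    i.e., drop s where count(s) < (1/|S|) * sum of all counts."""
    mean = sum(counts.values()) / len(counts)
    return {s for s, c in counts.items() if c >= mean}
```

For example, with a frequent heading and two rare ones, only the frequent label survives pruning.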
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al., 1999) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"[Figure 1: Overview of the WIKISECTION task.] (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, ..., s_N] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T_1, ..., T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document Trichomoniasis (https://en.wikipedia.org/w/index.php?title=Trichomoniasis&oldid=814235024), the sequence of topic labels is y_1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English and German.",
"We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, split the abstract sections, and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match each heading h ∈ H to its corresponding synsets S_h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) · Σ_{s_i ∈ S} count(s_i) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"[Figure 2 caption, part B: During inference, we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level. The embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.]",
"Based on the task described in Section 3, we aim to detect M sections T_1...M in a document D and assign topic labels y_j = topic(S_j), where j = 1, ..., M.",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, ..., N.",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = ∏_{k=1}^{N} p(ȳ_k | s_1, ..., s_N) (Eq. 1).",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTIONheadings task, we encode each heading as lowercase bag-of-words vectorz ∈ {0, 1} |Z| , such that z (i) = 1 iff the i-th word in Z is contained in the heading, for example,z k= {gene, therapy, treatment}.",
"We then use a sigmoid activation function: y k = softmax(W ye e k + W ye e k + b y ) z k = sigmoid(W ze e k + W ze e k + b z ) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = N k=1 |Z| i=1 log p(z (i) k | x 1...N ; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log 1 + exp(γ(m + − score + (x))) + log 1 + exp(γ(m − + score − (x))) We calculate the positive term of the loss by taking all scores of correct labels y + into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score + (x) = 1 |y + | y∈y + s θ (x) (y) score − (x) = arg max y∈y − s θ (x) (y) (11) Here, s θ (x) (y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e 1...N we construct a sequence of deviations d 1...N by calculating the stepwise difference using cosine distance: d k = cos(e k−1 , e k ) = e k−1 · e k e k−1 e k (12) Finally we apply the sequence d 1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d k−1 < d k > d k+1 ; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e and e and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d k = cos( e k−1 , e k ) · cos( e k , e k+1 ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: y j = 1 | S j | s i ∈S jŷ i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences y k , can be applied to the WIKISECTION task to predict coherently labeled sections T j = S j ,ŷ j .",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We 8 Excluding all documents contained in the test sets.",
"select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, Classification and segmentation on plain text C99 37.4 n/a n/a 42.7 n/a n/a 36.8 n/a n/a 38.3 n/a n/a TopicTiling 43.4 n/a n/a 45.4 n/a n/a 30.5 n/a n/a 41.3 n/a n/a TextSeg 24.3 n/a n/a 35.7 n/a n/a 19.3 n/a n/a 27.5 n/a n/a PV>T Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('.",
".",
".",
"').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model is giving nearly perfect segmentation using the bidirectional strategy, it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-9
|
Coherent segmentation using edge detection
|
We use the topic embedding deviation (emd) dk to start new segments on peaks.
Idea adapted from image processing: we apply Laplacian-of-Gaussian
edge detection [Zi98] to find local maxima on the emd curve
Steps: dimensionality reduction (PCA), Gaussian smoothing, local maxima
Bidirectional deviation (bemd) on fw and bw layers allows for sharper separation
|
We use the topic embedding deviation (emd) dk to start new segments on peaks.
Idea adapted from image processing: we apply Laplacian-of-Gaussian
edge detection [Zi98] to find local maxima on the emd curve
Steps: dimensionality reduction (PCA), Gaussian smoothing, local maxima
Bidirectional deviation (bemd) on fw and bw layers allows for sharper separation
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-10
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < 1 |S| s i ∈S count(s i ) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ 1 , ... ,ȳ N | D) = N k=1 p(ȳ k | s 1 , ... , s N ) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTIONheadings task, we encode each heading as lowercase bag-of-words vectorz ∈ {0, 1} |Z| , such that z (i) = 1 iff the i-th word in Z is contained in the heading, for example,z k= {gene, therapy, treatment}.",
"We then use a sigmoid activation function: y k = softmax(W ye e k + W ye e k + b y ) z k = sigmoid(W ze e k + W ze e k + b z ) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = N k=1 |Z| i=1 log p(z (i) k | x 1...N ; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log 1 + exp(γ(m + − score + (x))) + log 1 + exp(γ(m − + score − (x))) We calculate the positive term of the loss by taking all scores of correct labels y + into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score + (x) = 1 |y + | y∈y + s θ (x) (y) score − (x) = arg max y∈y − s θ (x) (y) (11) Here, s θ (x) (y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e 1...N we construct a sequence of deviations d 1...N by calculating the stepwise difference using cosine distance: d k = cos(e k−1 , e k ) = e k−1 · e k e k−1 e k (12) Finally we apply the sequence d 1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d k−1 < d k > d k+1 ; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e and e and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d k = cos( e k−1 , e k ) · cos( e k , e k+1 ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: y j = 1 | S j | s i ∈S jŷ i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences y k , can be applied to the WIKISECTION task to predict coherently labeled sections T j = S j ,ŷ j .",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We 8 Excluding all documents contained in the test sets.",
"select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiments reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, Classification and segmentation on plain text C99 37.4 n/a n/a 42.7 n/a n/a 36.8 n/a n/a 38.3 n/a n/a TopicTiling 43.4 n/a n/a 45.4 n/a n/a 30.5 n/a n/a 41.3 n/a n/a TextSeg 24.3 n/a n/a 35.7 n/a n/a 19.3 n/a n/a 27.5 n/a n/a PV>T Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is in average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('.",
".",
".",
"').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA, a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
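The embedding-deviation segmentation discussed in the paper content above (PCA to D components, Gaussian smoothing with σ, stepwise cosine deviation, section starts at local maxima of the deviation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the uncentered PCA, the 1 − cosine-similarity form of the deviation, and the small `eps` floor are assumptions made here for a self-contained example; D = 16 and σ = 2.5 are the parameter values reported in the paper.

```python
import numpy as np

def segment_by_embedding_deviation(E, D=16, sigma=2.5, eps=1e-8):
    """Locate section starts at local maxima of topic-embedding deviation.

    E: (N, d) matrix of per-sentence topic embeddings (hypothetical input;
    in the paper these come from the SECTOR bottleneck layer).
    Returns a list of sentence indices k where a new section starts.
    """
    # 1) Reduce E to at most D principal components via SVD
    #    (PCA; centering is omitted here as a simplification).
    _, _, Wt = np.linalg.svd(E, full_matrices=False)
    Ed = E @ Wt[:D].T
    # 2) Gaussian smoothing along the sentence axis, one column at a time.
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-(xs ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    Es = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, Ed)
    # 3) Stepwise deviation between consecutive sentences
    #    (1 - cosine similarity, so larger means faster topic movement).
    a, b = Es[:-1], Es[1:]
    norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    d = 1.0 - np.sum(a * b, axis=1) / np.maximum(norms, eps)
    # 4) A new section starts at every strict local maximum of d; the eps
    #    floor on d is a numerical-stability tweak not taken from the paper.
    return [k for k in range(1, len(d) - 1)
            if d[k - 1] < d[k] > d[k + 1] and d[k] > eps]
```

On a toy document whose sentence embeddings switch between two fixed topic vectors, the detected start falls at the topic change, which matches the qualitative behavior described for Figure 5.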
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-10
|
Experiments with 20 different models on 8 datasets
|
dataset articles article type headings topics segments
German/English diseases and cities
Wiki-50 [Kosh18] 50 test English generic X X
Cities/Elements 130 test English cities and
Clinical Textbook 227 test English clinical X X
Sentence Classification Baselines: ParVec [Le14], CNN [Kim14]
|
dataset articles article type headings topics segments
German/English diseases and cities
Wiki-50 [Kosh18] 50 test English generic X X
Cities/Elements 130 test English cities and
Clinical Textbook 227 test English clinical X X
Sentence Classification Baselines: ParVec [Le14], CNN [Kim14]
|
[] |
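The segmentation scores quoted in the records above are P_k values (Beeferman et al., 1999). A minimal sketch of this metric follows; it assumes segmentations are given as one segment id per sentence with distinct ids for distinct segments, and the default probe distance of half the mean reference segment length is the common convention, not something specified in this data set.

```python
def pk(reference, hypothesis, k=None):
    """P_k segmentation error (Beeferman et al., 1999) -- a minimal sketch.

    reference, hypothesis: one segment id per sentence; ids are assumed
    to be distinct for distinct segments (e.g. [0, 0, 0, 1, 1, 2]).
    k: probe distance; defaults to half the mean reference segment length.
    Returns the fraction of probe pairs (i, i+k) on which reference and
    hypothesis disagree about the two sentences being in the same segment.
    """
    n = len(reference)
    assert len(hypothesis) == n
    if k is None:
        n_segments = 1 + sum(reference[i] != reference[i - 1]
                             for i in range(1, n))
        k = max(1, round(n / n_segments / 2))
    disagreements = sum(
        (reference[i] == reference[i + k]) != (hypothesis[i] == hypothesis[i + k])
        for i in range(n - k))
    return disagreements / (n - k)
```

Lower is better: a perfect segmentation scores 0, and collapsing a two-segment reference into one segment is penalized on exactly the probe pairs that straddle the missed boundary.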
GEM-SciDuet-train-86#paper-1223#slide-11
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/SECTOR.
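The Bloom filter embedding named in the abstract above can be sketched as follows, using the paper's reported parameters m = 4096 and k = 5. The md5-derived hash functions and the tokenized-sentence input are illustrative assumptions of this sketch; the paper does not specify its hash functions.

```python
import hashlib
import numpy as np

def bloom_embedding(tokens, m=4096, k=5):
    """Compress a bag-of-words into an m-dimensional count vector.

    Each word sets up to k positions of an m-bit array via k independent
    hash functions; the sentence vector is the sum of per-word bit arrays.
    The md5-derived hashes below are illustrative, not from the paper.
    """
    x = np.zeros(m, dtype=np.int32)
    for w in tokens:
        bits = np.zeros(m, dtype=np.int32)
        for i in range(k):
            digest = hashlib.md5(f"{i}:{w}".encode("utf-8")).hexdigest()
            bits[int(digest, 16) % m] = 1  # i-th hash position for word w
        x += bits
    return x
```

Because each word always hashes to the same positions, repeated words simply scale their bit pattern, so the encoding behaves like a compressed bag-of-words with a small, controlled collision rate.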
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories, reaching 71.6% F1, or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018), but we show that they are also complementary (the data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/WikiSection):",
"Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"Other work tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"Figure 1: Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document Trichomoniasis (https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024),",
"the sequence of topic labels is y_{1…M} = [symptom, cause, diagnosis, prevention, treatment, complication, epidemiology].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English and German.",
"We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016).",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match each heading h ∈ H to its corresponding synsets S_h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) Σ_{s_i∈S} count(s_i) (Jiang, 2012).",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level.",
"The embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.",
"Based on the task described in Section 3, we aim to detect M sections T_{1…M} in a document D and assign topic labels y_j = topic(S_j), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, …, ȳ_N | D) = ∏_{k=1}^{N} p(ȳ_k | s_1, …, s_N) (1). We approach two variations of this task: for WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task, WIKISECTION-headings, to capture ambiguity in a heading: we follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as a multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, …, s_N; Θ) and L(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, …, s_N; Θ) (2). Our SECTOR architecture consists of four stages, shown in Figure 2: sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1}^{|V|} be the indicator vector, such that I(w)^{(i)} = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^{|V|} as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w) (3). Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1}^m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m: x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A_{hash_i(w)} (4). We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^{−4}, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} (α / (α + p(w))) v_w, x_emb(s) = v_s − u uᵀ v_s (5). Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a 'bottleneck' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) accordingly. Note that we separate network parameters Θ→ and Θ← for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_{1…N} are calculated from the context-adjusted hidden states h_k of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: h→_k = f_LSTM(x_k, h→_{k−1}, Θ→), h←_k = f_LSTM(x_k, h←_{k+1}, Θ←), e→_k = tanh(W→_eh h→_k + b→_e), e←_k = tanh(W←_eh h←_k + b←_e) (7). Now, a simple concatenation of the embeddings e_k = e→_k ⊕ e←_k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encoding ȳ ∈ {0, 1}^{|Y|} of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as a lowercase bag-of-words vector z̄ ∈ {0, 1}^{|Z|}, such that z̄^{(i)} = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W→_ye e→_k + W←_ye e←_k + b_y), ẑ_k = sigmoid(W→_ze e→_k + W←_ze e←_k + b_z) (8). Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z^{(i)}_k | x_{1…N}; Θ) (9). For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m⁺ − score⁺(x)))) + log(1 + exp(γ(m⁻ + score⁻(x)))) (10). We calculate the positive term of the loss by taking all scores of correct labels y⁺ into account.",
"We average over all correct scores to avoid a too-strong positive push on the energy surface of the loss function (LeCun et al., 2006).",
"For the negative term, we only take the most offending example y⁻ among all incorrect class labels.",
"score⁺(x) = (1/|y⁺|) Σ_{y∈y⁺} s_θ(x)^{(y)}, score⁻(x) = argmax_{y∈y⁻} s_θ(x)^{(y)} (11). Here, s_θ(x)^{(y)} denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., $S_j = s_k$; $j = k = 1, \ldots, N$) and then merge all sections that share at least one label in the top-2 predictions.",
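A small sketch of the maximum-label baseline described above, under a simplifying assumption: instead of merging arbitrary section pairs, adjacent sentences are chained greedily whenever their top-2 predicted labels overlap (the label names and predictions below are hypothetical).

```python
def max_label_segments(top2_per_sentence):
    """Merge consecutive sentences whose top-2 predicted labels overlap.

    top2_per_sentence: list of 2-element sets of labels, one per sentence.
    Returns a list of (start, end) sentence index ranges, end exclusive.
    """
    segments = []
    start = 0
    for k in range(1, len(top2_per_sentence)):
        # Start a new segment when the top-2 sets share no label.
        if not (top2_per_sentence[k] & top2_per_sentence[k - 1]):
            segments.append((start, k))
            start = k
    segments.append((start, len(top2_per_sentence)))
    return segments

# Hypothetical top-2 predictions for a 5-sentence document.
preds = [{"symptom", "cause"}, {"symptom", "diagnosis"},
         {"diagnosis", "treatment"}, {"epidemiology", "history"},
         {"epidemiology", "research"}]
segs = max_label_segments(preds)
```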
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix $E = [e_1, \ldots, e_N]$.",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve $E = U \Sigma W^{T}$ using singular value decomposition and then project E on the D principal components: $E_D = E W_D$.",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors $e'_{1...N}$ we construct a sequence of deviations $d_{1...N}$ by calculating the stepwise difference using cosine distance: $d_k = \cos(e'_{k-1}, e'_k) = \frac{e'_{k-1} \cdot e'_k}{\lVert e'_{k-1} \rVert \, \lVert e'_k \rVert}$ (12) Finally we apply the sequence $d_{1...N}$ with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4), i.e. all k where $d_{k-1} < d_k > d_{k+1}$; $k = 1, \ldots, N$ in our discrete case.",
"We use these positions to start a new section.",
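The embedding-deviation segmentation can be sketched end to end in plain Python. Two simplifying assumptions: the PCA step is omitted (the toy embeddings are already low-dimensional), and the deviation is computed as 1 − cosine similarity so that peaks mark the spots of fastest movement; the toy embeddings below are hypothetical.

```python
import math

def gaussian_kernel(radius, sigma):
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(embeddings, sigma=2.5, radius=5):
    # 1-D Gaussian convolution along the sentence axis, per dimension,
    # clamping indices at the document borders.
    kernel = gaussian_kernel(radius, sigma)
    n, dim = len(embeddings), len(embeddings[0])
    out = []
    for k in range(n):
        vec = [0.0] * dim
        for j, w in zip(range(k - radius, k + radius + 1), kernel):
            j = min(max(j, 0), n - 1)
            for d in range(dim):
                vec[d] += w * embeddings[j][d]
        out.append(vec)
    return out

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def emd_boundaries(embeddings, sigma=2.5):
    e = smooth(embeddings, sigma=sigma)
    d = [0.0] + [cosine_distance(e[k - 1], e[k]) for k in range(1, len(e))]
    # Local maxima of the deviation sequence mark section starts.
    return [k for k in range(1, len(d) - 1) if d[k - 1] < d[k] > d[k + 1]]

# Toy document: 4 sentences about one topic, then 4 about another.
embs = [[1.0, 0.0]] * 4 + [[0.0, 1.0]] * 4
boundaries = emd_boundaries(embs)
```

With the topic shift between sentence 3 and 4, the smoothed deviation peaks exactly at the transition, so a single boundary is placed at sentence index 4.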
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings $\vec{e}'$ and $\overleftarrow{e}'$ and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: $d_k = \sqrt{\cos(\vec{e}'_{k-1}, \vec{e}'_k) \cdot \cos(\overleftarrow{e}'_k, \overleftarrow{e}'_{k+1})}$ (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: $\bar{y}_j = \frac{1}{|S_j|} \sum_{s_i \in S_j} \hat{y}_i$ (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences $\hat{y}_k$, can be applied to the WIKISECTION task to predict coherently labeled sections $T_j = \langle S_j, \hat{y}_j \rangle$.",
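A sketch of the bidirectional deviation in eq. (13), again using 1 − cosine similarity as the per-direction difference so that large values indicate topic shifts. For simplicity the same toy vectors stand in for both the forward and backward smoothed embeddings; all numbers are hypothetical.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(x * x for x in b)))

def bemd(e_fwd, e_bwd):
    """Bidirectional embedding deviation: geometric mean of the forward
    difference at (k-1, k) and the backward difference at (k, k+1)."""
    n = len(e_fwd)
    d = [0.0] * n
    for k in range(1, n - 1):
        fwd = 1.0 - cosine(e_fwd[k - 1], e_fwd[k])
        bwd = 1.0 - cosine(e_bwd[k], e_bwd[k + 1])
        d[k] = math.sqrt(fwd * bwd)
    return d

# Toy sequence: stable topic, a transitional sentence, stable topic.
e = [[1.0, 0.0], [1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.0, 1.0]]
d = bemd(e, e)
```

Because both directions must move for the geometric mean to be large, a shift that is visible only on one side (e.g. at the edge of a stable run) is suppressed, which is what sharpens the boundaries compared with the one-directional emd.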
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
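A minimal sketch of the P_k error score described above, under the assumption that each document is given as a per-sentence sequence of segment ids (a boundary exists wherever the id changes); the example sequences are hypothetical.

```python
def pk_score(ref_ids, hyp_ids, k=None):
    """P_k segmentation error (Beeferman et al., 1999). Lower is better.

    ref_ids, hyp_ids: one segment id per sentence; two positions are in
    the same segment iff their ids are equal.
    """
    n = len(ref_ids)
    if k is None:
        # k = half of the average reference segment length
        n_segments = len(set(ref_ids))
        k = max(1, round(n / n_segments / 2))
    errors = 0
    for i in range(n - k):
        same_ref = ref_ids[i] == ref_ids[i + k]
        same_hyp = hyp_ids[i] == hyp_ids[i + k]
        errors += same_ref != same_hyp
    return errors / (n - k)

# Reference: two segments of 4 sentences each.
ref = [0, 0, 0, 0, 1, 1, 1, 1]
pk_perfect = pk_score(ref, ref)        # identical segmentation
pk_trivial = pk_score(ref, [0] * 8)    # no boundary predicted at all
```

Sliding a window of size k over the document and counting disagreements makes P_k tolerant to near-miss boundaries, unlike exact boundary matching.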
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"(Footnote 8: Excluding all documents contained in the test sets.)",
"We select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
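One common formulation of (mean) average precision matching the description above can be sketched as follows; the ranked label lists and relevance sets below are hypothetical examples.

```python
def average_precision(ranked_labels, relevant):
    """AP of one ranking: mean precision at the rank of each true label."""
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(rankings):
    """rankings: list of (ranked_labels, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in rankings) / len(rankings)

# Hypothetical heading-word rankings for two sections.
ap_perfect = average_precision(["gene", "therapy", "city"], {"gene", "therapy"})
ap_mixed = average_precision(["gene", "city", "therapy"], {"gene", "therapy"})
```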
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiment reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"[Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings. Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.]",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared with the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-11
|
Experiment 1 segmentation and single label classification
|
Segment on sentence-level and assign one of 25-30 supervised topic labels (F1)
ParVec* ParVec* CNN* SECTOR SECTOR SECTOR SECTOR SECTOR* [Le14] +emd [Kim14] +bow +bloom +bloom+emd +bloom+bemd +word2vect+bemd 13
|
Segment on sentence-level and assign one of 25-30 supervised topic labels (F1)
ParVec* ParVec* CNN* SECTOR SECTOR SECTOR SECTOR SECTOR* [Le14] +emd [Kim14] +bow +bloom +bloom+emd +bloom+bemd +word2vect+bemd 13
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-12
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"[Figure 1: Overview of the WIKISECTION task.] (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set.",
"We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, ..., s_N] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T_1, ..., T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document Trichomoniasis (https://en.wikipedia.org/w/index.php?title=Trichomoniasis&oldid=814235024), the sequence of topic labels is y_{1...M} = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set.",
"For the evaluation of this task, we created WIKISECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English and German.",
"We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing.",
"To obtain plain document text, we used Wikiextractor, split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering.",
"In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match each heading h ∈ H to its corresponding synsets S_h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with the most outgoing edges as representative label, in our example therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) Σ_{s_i ∈ S} count(s_i) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
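The clustering step described above can be sketched as follows. This is a simplified illustration: the heading-to-synset mapping below is a hypothetical stand-in for the BabelNet API, and plain connected components replace the Newman community detection used in the paper.

```python
from collections import defaultdict

def cluster_synsets(heading_synsets):
    """Link all synsets that are matched by the same heading, then return
    connected components of the resulting undirected graph as normalized
    topic clusters (the paper uses Newman community detection instead;
    connected components are a simplification)."""
    graph = defaultdict(set)
    for synsets in heading_synsets.values():
        for a in synsets:
            # add edges between all synsets matched by this heading
            graph[a] |= {b for b in synsets if b != a}
    clusters, seen = [], set()
    for node in list(graph):
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        clusters.append(component)
    return clusters

# hypothetical heading -> BabelNet synset IDs
mapping = {
    "Treatment": {"bn:treatment"},
    "Therapy": {"bn:treatment", "bn:therapy"},
    "Symptoms": {"bn:symptom"},
}
```

In the full pipeline, the representative label of each cluster would be the synset with the most outgoing edges, and rare clusters would then be pruned with the head/tail division rule.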
"SECTOR Model.",
"We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level.",
"The embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.",
"Based on the task described in Section 3, we aim to detect M sections T_{1...M} in a document D and assign topic labels y_j = topic(S_j), where j = 1, ..., M.",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, ..., N.",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = Π_{k=1}^{N} p(ȳ_k | s_1, ..., s_N) (1).",
"We approach two variations of this task: for WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture the ambiguity in a heading.",
"We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as a multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, ..., s_N; Θ), L̄(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, ..., s_N; Θ) (2).",
"Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding.",
"The first stage of our SECTOR model transforms each sentence s_k from plain text into a fixed-size sentence vector x_k that serves as input into the neural network layers.",
"Following Hill et al. (2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0,1}^{|V|} be the indicator vector, such that I(w)_(i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^{|V|} as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w) (3).",
"Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m: x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A_{hash_i}(w) (4).",
"We set the parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
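Equation (4) can be sketched in a few lines. The k independent hash functions below are derived from salted MD5 digests, which is an assumption (the paper does not specify the hash family), and m is kept small for readability.

```python
import hashlib

def bloom_embedding(sentence, m=64, k=5):
    """Compressed Bloom embedding (cf. Eq. 4): every word sets k hashed
    positions in an m-dimensional array; the per-word arrays are summed
    over all words of the sentence."""
    vec = [0] * m
    for word in sentence.split():
        for i in range(k):
            # salted MD5 as a stand-in for k independent hash functions
            digest = hashlib.md5(f"{i}:{word}".encode()).hexdigest()
            vec[int(digest, 16) % m] += 1
    return vec
```

Because the per-word arrays are summed rather than OR-ed, the total mass of the vector is always k times the number of word tokens; collisions merely overlay counts.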
"Sentence Embeddings.",
"We use the strategy of Arora et al. (2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^{-4}, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} (α / (α + p(w))) v_w, x_emb(s) = v_s − u u^T v_s (5).",
"Topic Embedding.",
"We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
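The composition in Equation (5) can be sketched with NumPy as below; the word vectors and unigram probabilities are assumed to be given (e.g., from a pre-trained word2vec model), and the first principal component is obtained via SVD.

```python
import numpy as np

def sif_embeddings(sentences, word_vecs, word_prob, alpha=1e-4):
    """Probability-weighted sentence vectors with the first principal
    component removed (Arora et al., 2017), as in Eq. 5."""
    V = np.array([
        np.mean([alpha / (alpha + word_prob[w]) * word_vecs[w] for w in sent],
                axis=0)
        for sent in sentences
    ])
    u = np.linalg.svd(V, full_matrices=False)[2][0]  # first right singular vector
    return V - np.outer(V @ u, u)                    # x_emb = v_s - u u^T v_s
```

Removing the top singular direction strips the dominant component shared by all sentence vectors, which mostly carries frequency and syntax rather than topic information.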
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) to optimize both directions of the LSTM separately.",
"Note that we separate network parameters Θ→ and Θ← for the forward and backward directions of the LSTM, and tie the remaining parameters Θ' for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_1...N are calculated from the context-adjusted hidden states h_k of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: h→_k = f_LSTM(x_k, h→_{k−1}, Θ→), h←_k = f_LSTM(x_k, h←_{k+1}, Θ←), e→_k = tanh(W→_eh h→_k + b→_e), e←_k = tanh(W←_eh h←_k + b←_e) (7).",
"Now, a simple concatenation of the embeddings e_k = e→_k ⊕ e←_k can be used as topic vector by downstream applications.",
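Given forward and backward hidden states from any LSTM implementation, the bottleneck and concatenation of Equation (7) reduce to a few array operations. The sketch below assumes pre-computed hidden states and direction-specific weight matrices.

```python
import numpy as np

def topic_embedding(h_fwd, h_bwd, W_f, b_f, W_b, b_b):
    """Eq. 7 bottleneck: tanh applied separately per direction (weights are
    not shared across directions), then concatenated into the topic vector."""
    e_f = np.tanh(h_fwd @ W_f.T + b_f)  # forward topic embedding e->_k
    e_b = np.tanh(h_bwd @ W_b.T + b_b)  # backward topic embedding e<-_k
    return np.concatenate([e_f, e_b], axis=-1)
```

Keeping the two directions separate until concatenation is what allows the segmentation stage to compare left and right context at section boundaries.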
"Topic Classification.",
"The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as a lowercase bag-of-words vector z̄ ∈ {0,1}^{|Z|}, such that z̄_(i) = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W→_ye e→_k + W←_ye e←_k + b_y), ẑ_k = sigmoid(W→_ze e→_k + W←_ze e←_k + b_z) (8).",
"Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z̄_{(i),k} | x_1...N; Θ) (9).",
"For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al. (2015) .",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m⁺ − score⁺(x)))) + log(1 + exp(γ(m⁻ + score⁻(x)))) (10).",
"We calculate the positive term of the loss by taking all scores of correct labels y⁺ into account.",
"We average over all correct scores to avoid a too-strong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score⁺(x) = (1/|y⁺|) Σ_{y∈y⁺} s_θ(x)_(y), score⁻(x) = max_{y∈y⁻} s_θ(x)_(y) (11).",
"Here, s_θ(x)_(y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
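The loss above can be transcribed directly with the stated hyperparameters; in the sketch below, `scores` holds s_θ(x) for all labels and `positive` lists the indices of the correct labels.

```python
import numpy as np

def ranking_loss(scores, positive, gamma=2.0, m_pos=2.5, m_neg=0.5):
    """Logistic pairwise ranking loss (cf. Eqs. 10-11): average over all
    correct scores, most offending score among the incorrect labels."""
    pos_score = np.mean(scores[positive])      # average of correct labels
    neg_mask = np.ones(len(scores), dtype=bool)
    neg_mask[positive] = False
    neg_score = np.max(scores[neg_mask])       # most offending incorrect label
    return (np.log1p(np.exp(gamma * (m_pos - pos_score)))
            + np.log1p(np.exp(gamma * (m_neg + neg_score))))
```

The loss shrinks as correct labels score above the margin m⁺ and the strongest incorrect label falls below −m⁻.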
"Topic Segmentation.",
"In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S_j = s_k; j = k = 1, ..., N) and then merge all sections that share at least one label in the top-2 predictions.",
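The merge step shared by both baselines (join adjacent sections once they carry the same predicted label) can be sketched as:

```python
def merge_adjacent(sections):
    """Merge adjacent sections with identical topic labels.
    `sections` is a list of (label, sentence_list) pairs."""
    merged = []
    for label, sentences in sections:
        if merged and merged[-1][0] == label:
            merged[-1][1].extend(sentences)  # same topic: grow previous section
        else:
            merged.append((label, list(sentences)))
    return merged
```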
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e_1, ..., e_N].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA; that is, we solve E = UΣW^T using singular value decomposition and then project E on the D principal components: E_D = E W_D.",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e'_1...N we construct a sequence of deviations d_1...N by calculating the stepwise difference using cosine distance: d_k = cos(e'_{k−1}, e'_k) = (e'_{k−1} · e'_k) / (‖e'_{k−1}‖ ‖e'_k‖) (12).",
"Finally, we apply the sequence d_1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e., all k where d_{k−1} < d_k > d_{k+1}; k = 1, ..., N in our discrete case.",
"We use these positions to start a new section.",
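The deviation-based segmentation can be sketched end to end with NumPy. Two simplifying assumptions are made here: the deviation is computed as 1 − cosine similarity so that section boundaries appear as local maxima, and a fixed 7-tap Gaussian kernel approximates the smoothing step.

```python
import numpy as np

def emd_boundaries(E, D=16, sigma=2.5):
    """Embedding deviation segmentation: PCA projection, Gaussian smoothing,
    stepwise cosine deviation (cf. Eq. 12), strict local maxima as starts
    of new sections."""
    # PCA via SVD on the centered embedding matrix
    Ec = E - E.mean(axis=0)
    W = np.linalg.svd(Ec, full_matrices=False)[2]
    P = Ec @ W[:D].T
    # Gaussian smoothing per dimension (fixed 7-tap kernel)
    t = np.arange(-3, 4)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    S = np.column_stack([np.convolve(P[:, j], g, mode="same")
                         for j in range(P.shape[1])])
    # deviation: 1 - cosine similarity between consecutive smoothed vectors
    num = (S[:-1] * S[1:]).sum(axis=1)
    den = np.linalg.norm(S[:-1], axis=1) * np.linalg.norm(S[1:], axis=1) + 1e-9
    d = 1.0 - num / den
    # strict local maxima of the deviation mark section boundaries
    return [k for k in range(1, len(d) - 1) if d[k - 1] < d[k] > d[k + 1]]
```

A returned index k marks a boundary between sentences k and k+1; a synthetic document with two homogeneous topic blocks yields exactly one boundary at the transition.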
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al. (2017) , who examine the difference between the forward and backward layers of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e'→ and e'← and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: d_k = sqrt( cos(e'→_{k−1}, e'→_k) · cos(e'←_k, e'←_{k+1}) ) (13).",
"After segmentation, we assign each segment the mean class distribution of all contained sentences: ŷ_j = (1/|S_j|) Σ_{s_i ∈ S_j} ŷ_i (14).",
"Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences ȳ_k, can be applied to the WIKISECTION task to predict coherently labeled sections T_j = ⟨S_j, ŷ_j⟩.",
"Evaluation.",
"We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al. (2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al. (2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
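The P_k score can be computed from gold and predicted segment lengths; the sketch below follows the standard definition by Beeferman et al. (1999), with k defaulting to half the average gold segment length as in the paper.

```python
def pk_score(ref, hyp, k=None):
    """P_k segmentation error: fraction of windows of size k whose endpoints
    are wrongly judged to lie in the same / different segments.
    `ref` and `hyp` are segment lengths in sentences, e.g. [3, 2, 4]."""
    def boundaries(segs):
        out, pos = set(), 0
        for length in segs[:-1]:
            pos += length
            out.add(pos)
        return out

    n = sum(ref)
    if k is None:
        k = max(1, round(n / (2 * len(ref))))  # half the average segment length
    ref_b, hyp_b = boundaries(ref), boundaries(hyp)

    def same_segment(bset, i, j):
        return not any(i < b <= j for b in bset)

    errors = sum(same_segment(ref_b, i, i + k) != same_segment(hyp_b, i, i + k)
                 for i in range(n - k))
    return errors / (n - k)
```

Lower is better: identical segmentations score 0, and a segmentation that disagrees in every window scores 1.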
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"(The fullwiki dump excludes all documents contained in the test sets.)",
"We select the pairs by matching their positions using maximum boundary overlap.",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results.",
"SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiment reveals that our segmentation method in isolation almost reaches the state of the art on existing data sets and beats the unsupervised baselines, but lacks performance in cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Table 4 : Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared to the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labels ŷ_k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights.",
"SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work.",
"We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-12
|
Experiment 2 segmentation and multi label classification
|
Segment on sentence-level and rank 1.0k-2.8k noisy topic words per section (MAP)
CNN* SECTOR SECTOR SECTOR* SECTOR* SECTOR @fullwiki* [Kim14] +bloom +bloom+rank +word2vec +word2vectrank +word2vec 14
|
Segment on sentence-level and rank 1.0k-2.8k noisy topic words per section (MAP)
CNN* SECTOR SECTOR SECTOR* SECTOR* SECTOR @fullwiki* [Kim14] +bloom +bloom+rank +word2vec +word2vectrank +word2vec 14
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-14
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al.",
"1999 ) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al.",
"(2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al.",
"(2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al.",
"(2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al.",
"(2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al.",
"(2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al.",
"(2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al.",
"(2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al.",
"(2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al.",
"(2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al.",
"(2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al.",
"(2017) , the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al.",
"(2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al.",
"(2017) , the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al.",
"(2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al.",
"(2012) approached segmentation and classification in a clinical domain as supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al.",
"(2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al.",
"(2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"(1) Plain Text without headings (1) Plain Text without headings (2) Topic Distribution over sequence Figure 1 : Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al.",
"(2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = S, T consisting of N consecutive sentences S = [s 1 , .",
".",
".",
", s N ] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T 1 , .",
".",
".",
", T M ], so that each predicted section T j = S j , y j contains a sequence of coherent sentences S j ⊆ S and a topic label y j that describes the common topic in these sentences.",
"For the document 3 https://en.wikipedia.org/w/index.php?",
"title=Trichomoniasis&oldid=814235024.",
"Trichomoniasis, the sequence of topic labels is y 1...M = [ symptom, cause, diagnosis, prevention, treatment, complication, epidemiology ].",
"WikiSection Data Set For the evaluation of this task, we created WIKI-SECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1 ).",
"The documents originate from recent dumps in English 4 and German.",
"5 We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016) .",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.",
"cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match 7 each heading h ∈ H to its corresponding synsets S h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with most outgoing edges as representative label, in our example e.g.",
"therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < 1 |S| s i ∈S count(s i ) (Jiang, 2012) .",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen headinglabel assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"Based on the task During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e k on sentence level.",
"The embeddings are used to segment the document and classify headingsẑ j and normalized topic labelsŷ j .",
"described in Section 3, we aim to detect M sections T 1...M in a document D and assign topic labels y j = topic(S j ), where j = 1, .",
".",
".",
", M .",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s k a sentence topic labelȳ k = topic(s k ), where k = 1, .",
".",
".",
", N .",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ 1 , ... ,ȳ N | D) = N k=1 p(ȳ k | s 1 , ... , s N ) (1) We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task WIKISECTION-headings to capture ambiguity in a heading, We follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z j ⊂ Z as multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = M j=1 log p(y j | s 1 , ... , s N ; Θ) L(Θ) = N k=1 log p(ȳ k | s 1 , ... , s N ; Θ) (2) Our SECTOR architecture consists of four stages, shown in Figure 2 : sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al.",
"(2016) , word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1} |V| be the indicator vector, such that I(w) (i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x bow ∈ R |V| as follows: x bow (s) = w∈s tf-idf(w) · I(w) (3) Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x bloom ∈ N m : x bloom (s) = w∈s k i=1 A hash i (w) (4) We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
"Sentence Embeddings.",
"We use the strategy of Arora et al.",
"(2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013) .",
"This method composes a sentence vector v emb ∈ R d for all sentences using a probability-weighted sum of word embeddings v w ∈ R d with α = 10 −4 , and subtracts the first principal component u of the embedding matrix [ v s : s ∈ S ]: v s = 1 |S| w∈s α α + p(w) v w x emb (s) = v s − uu T v s (5) Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) Note that we separate network parameters Θ and Θ for forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e 1...N are calculated from the context-adjusted hidden states h k of the LSTM cells (here simplified as f LSTM ) through the bottleneck layer: h k = f LSTM (x k , h k−1 , Θ) h k = f LSTM (x k , h k+1 , Θ) e k = tanh(W eh h k + b e ) e k = tanh(W eh h k + b e ) (7) Now, a simple concatenation of the embeddings e k = e k ⊕ e k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTIONheadings task, we encode each heading as lowercase bag-of-words vectorz ∈ {0, 1} |Z| , such that z (i) = 1 iff the i-th word in Z is contained in the heading, for example,z k= {gene, therapy, treatment}.",
"We then use a sigmoid activation function: y k = softmax(W ye e k + W ye e k + b y ) z k = sigmoid(W ze e k + W ze e k + b z ) (8) Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = N k=1 |Z| i=1 log p(z (i) k | x 1...N ; Θ) (9) For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al.",
"(2015) .",
"It learns to maximize the distance between positive and negative labels: L = log 1 + exp(γ(m + − score + (x))) + log 1 + exp(γ(m − + score − (x))) We calculate the positive term of the loss by taking all scores of correct labels y + into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score + (x) = 1 |y + | y∈y + s θ (x) (y) score − (x) = arg max y∈y − s θ (x) (y) (11) Here, s θ (x) (y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S j = s k ; j = k = 1, .",
".",
".",
", N ) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e 1 , .",
".",
".",
", e N ].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e 1...N we construct a sequence of deviations d 1...N by calculating the stepwise difference using cosine distance: d k = cos(e k−1 , e k ) = e k−1 · e k e k−1 e k (12) Finally we apply the sequence d 1...N with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4) , i.e.",
"all k where d k−1 < d k > d k+1 ; k = 1 .",
".",
".",
"N in our discrete case.",
"We use these positions to start a new section.",
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al.",
"(2017) , who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e and e and define the bidirectional embedding deviation (bemd) as geometric mean of the forward and backward difference: d k = cos( e k−1 , e k ) · cos( e k , e k+1 ) (13) After segmentation, we assign each segment the mean class distribution of all contained sentences: y j = 1 | S j | s i ∈S jŷ i (14) Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences y k , can be applied to the WIKISECTION task to predict coherently labeled sections T j = S j ,ŷ j .",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al.",
"(2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al.",
"(2009) , which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model which uses a softmax activation output layer, SEC>H is the multilabel variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump, 8 and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
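The P_k computation described above can be sketched in a few lines of Python (an illustration, not the authors' evaluation code); the default window follows the half-average-segment-length rule:

```python
def p_k(ref, hyp, k=None):
    """Probabilistic segmentation error (Beeferman et al., 1999).

    ref, hyp: lists assigning a segment id to every sentence.
    Lower is better; 0.0 means identical boundaries.
    """
    assert len(ref) == len(hyp)
    n = len(ref)
    if k is None:
        # half of the average reference segment length
        k = max(1, round(n / len(set(ref)) / 2))
    # count windows where reference and hypothesis disagree on whether
    # positions i and i+k fall into the same segment
    errors = sum((ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
                 for i in range(n - k))
    return errors / (n - k)
```

For example, `p_k([0, 0, 1, 1], [0, 0, 1, 1])` is 0.0, while a hypothesis with spurious boundaries is penalized proportionally to the number of disagreeing windows.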
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We select the pairs by matching their positions using maximum boundary overlap.",
"(Footnote 8: Excluding all documents contained in the test sets.)",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
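The ranking measures can be sketched as follows (a simplified illustration; the exact tie-breaking and averaging of the evaluation script may differ):

```python
def precision_at_1(ranked, true_labels):
    # 1.0 iff the top-ranked label is a gold label
    return 1.0 if ranked and ranked[0] in true_labels else 0.0

def average_precision(ranked, true_labels):
    # mean of the precision values at the rank of each gold label
    hits, total = 0, 0.0
    for rank, label in enumerate(ranked, start=1):
        if label in true_labels:
            hits += 1
            total += hits / rank
    return total / max(len(true_labels), 1)

def mean_average_precision(rankings, gold_sets):
    # macro average of per-document average precision
    aps = [average_precision(r, g) for r, g in zip(rankings, gold_sets)]
    return sum(aps) / max(len(aps), 1)
```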
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task; Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiment reveals that our segmentation method in isolation almost reaches the state of the art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Table 3 excerpt (classification and segmentation on plain text, P_k per data set; F1 and MAP not applicable for the unsupervised segmenters): C99 37.4 / 42.7 / 36.8 / 38.3; TopicTiling 43.4 / 45.4 / 30.5 / 41.3; TextSeg 24.3 / 35.7 / 19.3 / 27.5.",
"Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared with the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multi-label approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-14
|
Insights SECTOR captures topic distributions coherently
|
Topic predictions on sentence level top: ParVec [Le14] bottom: SECTOR
Segmentation left: newlines in text (\n) right: embedding deviation (emd)
|
Topic predictions on sentence level top: ParVec [Le14] bottom: SECTOR
Segmentation left: newlines in text (\n) right: embedding deviation (emd)
|
[] |
GEM-SciDuet-train-86#paper-1223#slide-16
|
1223
|
SECTOR: A Neural Model for Coherent Topic Segmentation and Classification
|
When searching for information, a human reader first glances over a document, spots relevant sections, and then focuses on a few sentences for resolving her intention. However, the high variance of document structure complicates the identification of the salient topic of a given section at a glance. To tackle this challenge, we present SECTOR, a model to support machine reading systems by segmenting documents into coherent sections and assigning topic labels to each section. Our deep neural network architecture learns a latent topic embedding over the course of a document. This can be leveraged to classify local topics from plain text and segment a document at topic shifts. In addition, we contribute WikiSection, a publicly available data set with 242k labeled sections in English and German from two distinct domains: diseases and cities. From our extensive evaluation of 20 architectures, we report a highest score of 71.6% F1 for the segmentation and classification of 30 topics from the English city domain, scored by our SECTOR long short-term memory model with Bloom filter embeddings and bidirectional segmentation. This is a significant improvement of 29.5 points F1 over state-of-the-art CNN classifiers with baseline segmentation. 1 Our source code is available under the Apache License 2.0 at https://github.com/sebastianarnold/ SECTOR.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294,
295,
296,
297,
298,
299,
300,
301,
302,
303,
304,
305,
306,
307,
308,
309,
310,
311,
312,
313,
314,
315,
316,
317,
318,
319,
320,
321,
322,
323,
324,
325,
326,
327,
328
],
"paper_content_text": [
"Introduction Today's systems for natural language understanding are composed of building blocks that extract semantic information from the text, such as named entities, relations, topics, or discourse structure.",
"In traditional natural language processing (NLP), these extractors are typically applied to bags of words or full sentences (Hirschberg and Manning, 2015) .",
"Recent neural architectures build upon pretrained word or sentence embeddings (Mikolov et al., 2013; Le and Mikolov, 2014) , which focus on semantic relations that can be learned from large sets of paradigmatic examples, even from long ranges (Dieng et al., 2017) .",
"From a human perspective, however, it is mostly the authors themselves who help best to understand a text.",
"Especially in long documents, an author thoughtfully designs a readable structure and guides the reader through the text by arranging topics into coherent passages (Glavaš et al., 2016) .",
"In many cases, this structure is not formally expressed as section headings (e.g., in news articles, reviews, discussion forums) or it is structured according to domain-specific aspects (e.g., health reports, research papers, insurance documents).",
"Ideally, systems for text analytics, such as topic detection and tracking (TDT) (Allan, 2002) , text summarization (Huang et al., 2003) , information retrieval (IR) (Dias et al., 2007) , or question answering (QA) , could access a document representation that is aware of both topical (i.e., latent semantic content) and structural information (i.e., segmentation) in the text (MacAvaney et al., 2018) .",
"The challenge in building such a representation is to combine these two dimensions that are strongly interwoven in the author's mind.",
"It is therefore important to understand topic segmentation and classification as a mutual task that requires encoding both topic information and document structure coherently.",
"In this paper, we present SECTOR, 1 an end-to-end model that learns an embedding of latent topics from potentially ambiguous headings and can be applied to entire documents to predict local topics on sentence level.",
"Our model encodes topical information on a vertical dimension and structural information on a horizontal dimension.",
"We show that the resulting embedding can be leveraged in a downstream pipeline to segment a document into coherent sections and classify the sections into one of up to 30 topic categories reaching 71.6% F 1 -or alternatively, attach up to 2.8k topic labels with 71.1% mean average precision (MAP).",
"We further show that segmentation performance of our bidirectional long short-term memory (LSTM) architecture is comparable to specialized state-of-the-art segmentation methods on various real-world data sets.",
"To the best of our knowledge, the combined task of segmentation and classification has not been approached on the full document level before.",
"There exist a large number of data sets for text segmentation, but most of them do not reflect real-world topic drifts (Choi, 2000; Sehikh et al., 2017) , do not include topic labels (Eisenstein and Barzilay, 2008; Jeong and Titov, 2010; Glavaš et al., 2016) , or are heavily normalized and too small to be used for training neural networks (Chen et al., 2009) .",
"We can utilize a generic segmentation data set derived from Wikipedia that includes headings (Koshorek et al., 2018) , but there is also a need in IR and QA for supervised structural topic labels (Agarwal and Yu, 2009; MacAvaney et al., 2018) , different languages and more specific domains, such as clinical or biomedical research (Tepper et al., 2012; Tsatsaronis et al., 2012) , and news-based TDT (Kumaran and Allan, 2004; Leetaru and Schrodt, 2013) .",
"Therefore we introduce WIKISECTION, 2 a large novel data set of 38k articles from the English and German Wikipedia labeled with 242k sections, original headings, and normalized topic labels for up to 30 topics from two domains: diseases and cities.",
"We chose these subsets to cover both clinical/biomedical aspects (e.g., symptoms, treatments, complications) and news-based topics (e.g., history, politics, economy, climate).",
"Both article types are reasonably well-structured according to Wikipedia guidelines (Piccardi et al., 2018) , but we show that they are also comple-2 The data set is available under the CC BY-SA 3.0 license at https://github.com/sebastianarnold/ WikiSection.",
"mentary: Diseases is a typical scientific domain with low entropy (i.e., very narrow topics, precise language, and low word ambiguity).",
"In contrast, cities resembles a diversified domain, with high entropy (i.e., broader topics, common language, and higher word ambiguity) and will be more applicable to for example, news, risk reports, or travel reviews.",
"We compare SECTOR to existing segmentation and classification methods based on latent Dirichlet allocation (LDA), paragraph embeddings, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).",
"We show that SECTOR significantly improves these methods in a combined task by up to 29.5 points F 1 when applied to plain text with no given segmentation.",
"The rest of this paper is structured as follows: We introduce related work in Section 2.",
"Next, we describe the task and data set creation process in Section 3.",
"We formalize our model in Section 4.",
"We report results and insights from the evaluation in Section 5.",
"Finally, we conclude in Section 6.",
"Related Work The analysis of emerging topics over the course of a document is related to a large number of research areas.",
"In particular, topic modeling (Blei et al., 2003) and TDT (Jin et al., 1999) focus on representing and extracting the semantic topical content of text.",
"Text segmentation (Beeferman et al., 1999) is used to split documents into smaller coherent chunks.",
"Finally, text classification (Joachims 1998) is often applied to detect topics on text chunks.",
"Our method unifies those strongly interwoven tasks and is the first to evaluate the combined topic segmentation and classification task using a corresponding data set with long structured documents.",
"Topic modeling is commonly applied to entire documents using probabilistic models, such as LDA (Blei et al., 2003) .",
"AlSumait et al. (2008) introduced an online topic model that captures emerging topics when new documents appear.",
"Gabrilovich and Markovitch (2007) proposed the Explicit Semantic Analysis method in which concepts from Wikipedia articles are indexed and assigned to documents.",
"Later, and to overcome the vocabulary mismatch problem, Cimiano et al. (2009) introduced a method for assigning latent concepts to documents.",
"More recently, Liu et al. (2016) represented documents with vectors of closely related domain keyphrases.",
"Yeh et al. (2016) proposed a conceptual dynamic LDA model for tracking topics in conversations.",
"Bhatia et al. (2016) utilized Wikipedia document titles to learn neural topic embeddings and assign document labels.",
"Dieng et al. (2017) focused on the issue of long-range dependencies and proposed a latent topic model based on RNNs.",
"However, the authors did not apply the RNN to predict local topics.",
"Text segmentation has been approached with a wide variety of methods.",
"Early unsupervised methods utilized lexical overlap statistics (Hearst 1997; Choi 2000) , dynamic programming (Utiyama and Isahara, 2001) , Bayesian models (Eisenstein and Barzilay, 2008) , or pointwise boundary sampling (Du et al., 2013) on raw terms.",
"Later, supervised methods included topic models (Riedl and Biemann, 2012) by calculating a coherence score using dense topic vectors obtained by LDA.",
"Bayomi et al. (2015) exploited ontologies to measure semantic similarity between text blocks.",
"Alemi and Ginsparg (2015) and Naili et al. (2017) studied how word embeddings can improve classical segmentation approaches.",
"Glavaš et al. (2016) utilized semantic relatedness of word embeddings by identifying cliques in a graph.",
"More recently, Sehikh et al. (2017) utilized LSTM networks and showed that cohesion between bidirectional layers can be leveraged to predict topic changes.",
"In contrast to our method, the authors focused on segmenting speech recognition transcripts on word level without explicit topic labels.",
"The network was trained with supervised pairs of contrary examples and was mainly evaluated on artificially segmented documents.",
"Our approach extends this idea so it can be applied to dense topic embeddings which are learned from raw section headings.",
"tackled segmentation by training a CNN to learn coherence scores for text pairs.",
"Similar to Sehikh et al. (2017), the network was trained with short contrary examples and no topic objective.",
"The authors showed that their pointwise ranking model performs well on data sets by Jeong and Titov (2010) .",
"In contrast to our method, the ranking algorithm strictly requires a given ground truth number of segments for each document and no topic labels are predicted.",
"Koshorek et al. (2018) presented a large new data set for text segmentation based on Wikipedia that includes section headings.",
"The authors introduced a neural architecture for segmentation that is based on sentence embeddings and four layers of bidirectional LSTM.",
"Similar to Sehikh et al. (2017), the authors used a binary segmentation objective on the sentence level, but trained on entire documents.",
"Our work takes up this idea of end-to-end training and enriches the neural model with a layer of latent topic embeddings that can be utilized for topic classification.",
"Text classification is mostly applied at the paragraph or sentence level using machine learning methods such as support vector machines (Joachims, 1998) or, more recently, shallow and deep neural networks (Le et al., 2018; Conneau et al., 2017) .",
"Notably, paragraph vectors (Le and Mikolov, 2014) is an extension of word2vec for learning fixed-length distributed representations from texts of arbitrary length.",
"The resulting model can be utilized for classification by providing paragraph labels during training.",
"Furthermore, Kim (2014) has shown that CNNs combined with pre-trained task-specific word embeddings achieve the highest scores for various text classification tasks.",
"Combined approaches of topic segmentation and classification are rare to find.",
"Agarwal and Yu (2009) classified sections of BioMed Central articles into four structural classes (introduction, methods, results, and discussion).",
"However, their manually labeled data set only contains a sample of sentences from the documents, so they evaluated sentence classification as an isolated task.",
"Chen et al. (2009) introduced two Wikipedia-based data sets for segmentation, one about large cities, the second about chemical elements.",
"Although these data sets have been used to evaluate word-level and sentence-level segmentation (Koshorek et al., 2018) , we are not aware of any topic classification approach on this data set.",
"Tepper et al. (2012) approached segmentation and classification in a clinical domain as a supervised sequence labeling problem.",
"The documents were segmented using a maximum entropy model and then classified into 11 or 33 categories.",
"A similar approach by Ajjour et al. (2017) used sequence labeling with a small number of 3-6 classes.",
"Their model is extractive, so it does not produce a continuous segmentation over the entire document.",
"Finally, Piccardi et al. (2018) did not approach segmentation, but recommended an ordered set of section labels based on Wikipedia articles.",
"Figure 1: Overview of the WIKISECTION task: (1) The input is a plain text document D without structure information.",
"(2) We assume the sentences s 1...N contain a coherent sequence of local topics e 1...N .",
"(3) The task is to segment the document into coherent sections S 1...M and (4) to classify each section with a topic label y 1...M .",
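The input/output contract of the task sketched in steps (1)-(4) can be illustrated with a tiny data structure (hypothetical names, not from the released code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Section:
    sentences: List[str]  # coherent sentence block S_j
    topic: str            # topic label y_j

def segment_document(sentences, boundaries, topics):
    """Split a plain sentence list at the given boundary indices and
    attach one topic label per resulting section."""
    starts = [0] + list(boundaries)
    ends = list(boundaries) + [len(sentences)]
    return [Section(sentences[a:b], t)
            for a, b, t in zip(starts, ends, topics)]
```

A valid output covers every input sentence exactly once, so concatenating the sections reproduces the original document.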
"Eventually, we were inspired by passage retrieval (Liu and Croft, 2002) as an important downstream task for topic segmentation and classification.",
"For example, Hewlett et al. (2016) proposed WikiReading, a QA task to retrieve values from sections of long documents.",
"The objective of TREC Complex Answer Retrieval is to retrieve a ranking of relevant passages for a given outline of hierarchical sections (Nanni et al., 2017) .",
"Both tasks highly depend on a building block for local topic embeddings such as our proposed model.",
"Task Overview and Data set We start with a definition of the WIKISECTION machine reading task shown in Figure 1 .",
"We take a document D = ⟨S, T⟩ consisting of N consecutive sentences S = [s_1, ..., s_N] and empty segmentation T = ∅ as input.",
"In our example, this is the plain text of a Wikipedia article (e.g., about Trichomoniasis 3 ) without any section information.",
"For each sentence s k , we assume a distribution of local topics e k that gradually changes over the course of the document.",
"The task is to split D into a sequence of distinct topic sections T = [T_1, ..., T_M], so that each predicted section T_j = ⟨S_j, y_j⟩ contains a sequence of coherent sentences S_j ⊆ S and a topic label y_j that describes the common topic in these sentences.",
"For the document Trichomoniasis (footnote 3: https://en.wikipedia.org/w/index.php?title=Trichomoniasis&oldid=814235024), the sequence of topic labels is y_{1...M} = [symptom, cause, diagnosis, prevention, treatment, complication, epidemiology].",
"WikiSection Data Set For the evaluation of this task, we created WIKISECTION, a novel data set containing a gold standard of 38k full-text documents from English and German Wikipedia comprehensively annotated with sections and topic labels (see Table 1).",
"The documents originate from recent dumps in English and German.",
"We filtered the collection using SPARQL queries against Wikidata (Tanon et al., 2016).",
"We retrieved instances of Wikidata categories disease (Q12136) and their subcategories (e.g., Trichomoniasis or Pertussis) or city (Q515) (e.g., London or Madrid).",
"Our data set contains the article abstracts, plain text of the body, positions of all sections given by the Wikipedia editors with their original headings (e.g., \"Causes | Genetic sequence\") and a normalized topic label (e.g., disease.cause).",
"We randomized the order of documents and split them into 70% training, 10% validation, 20% test sets.",
"Preprocessing To obtain plain document text, we used Wikiextractor, 6 split the abstract sections and stripped all section headings and other structure tags except newline characters and lists.",
"Vocabulary Mismatch in Section Headings.",
"Table 2 shows examples of section headings from disease articles separated into head (most common), torso (frequently used), and tail (rare).",
"Initially, we expected articles to share congruent structure in naming and order.",
"Instead, we observe a high variance with 8.5k distinct headings in the diseases domain and over 23k for English cities.",
"A closer inspection reveals that Wikipedia authors utilize headings at different granularity levels, frequently copy and paste from other articles, but also introduce synonyms or hyponyms, which leads to a vocabulary mismatch problem (Furnas et al., 1987) .",
"As a result, the distribution of headings is heavy-tailed across all articles.",
"Roughly 1% of headings appear more than 25 times whereas the vast majority (88%) appear 1 or 2 times only.",
"Synset Clustering In order to use Wikipedia headlines as a source for topic labels, we contribute a normalization method to reduce the high variance of headings to a few representative labels based on the clustering of BabelNet synsets (Navigli and Ponzetto, 2012) .",
"We create a set H that contains all headings in the data set and use the BabelNet API to match each heading h ∈ H to its corresponding synsets S_h ⊂ S. For example, \"Cognitive behavioral therapy\" is assigned to synset bn:03387773n.",
"Next, we insert all matched synsets into an undirected graph G with nodes s ∈ S and edges e. We create edges between all synsets that match among each other with a lemma h ∈ H. Finally, we apply a community detection algorithm (Newman, 2006) on G to find dense clusters of synsets.",
"We use these clusters as normalized topics and assign the sense with the most outgoing edges as representative label, in our example therapy.",
"From this normalization step we obtain 598 synsets that we prune using the head/tail division rule count(s) < (1/|S|) Σ_{s_i ∈ S} count(s_i) (Jiang, 2012).",
"This method covers over 94.6% of all headings and yields 26 normalized labels and one other class in the English disease data set.",
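The head/tail division rule amounts to keeping only labels whose frequency reaches the mean frequency over all labels; a minimal sketch with hypothetical counts:

```python
def head_labels(counts):
    """Keep synset labels s with count(s) >= mean count over all labels;
    everything below the mean is pruned as tail (head/tail division rule)."""
    mean = sum(counts.values()) / len(counts)
    return {label for label, c in counts.items() if c >= mean}
```

With a heavy-tailed heading distribution such as the one reported above, this keeps the few frequent normalized topics and discards the long tail of rare headings.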
"Table 1 shows the corresponding numbers for the other data sets.",
"We verify our normalization process by manual inspection of 400 randomly chosen heading-label assignments by two independent judges and report an accuracy of 97.2% with an average observed inter-annotator agreement of 96.0%.",
"SECTOR Model We introduce SECTOR, a neural embedding model that predicts a latent topic distribution for every position in a document.",
"(Figure 2: During inference (B), we invoke SECTOR with unseen plain text to predict topic embeddings e_k on sentence level; the embeddings are used to segment the document and classify headings ẑ_j and normalized topic labels ŷ_j.)",
"Based on the task described in Section 3, we aim to detect M sections T_{1...M} in a document D and assign topic labels y_j = topic(S_j), where j = 1, ..., M.",
"Because we do not know the expected number of sections, we formulate the objective of our model on the sentence level and later segment based on the predictions.",
"Therefore, we assign each sentence s_k a sentence topic label ȳ_k = topic(s_k), where k = 1, ..., N.",
"Thus, we aim to predict coherent sections with respect to document context: p(ȳ_1, ..., ȳ_N | D) = Π_{k=1}^{N} p(ȳ_k | s_1, ..., s_N) (1).",
"We approach two variations of this task: For WIKISECTION-topics, we choose a single topic label y_j ∈ Y out of a small number of normalized topic labels.",
"However, from this simplified classification task arises an entailment problem, because topics might be hierarchically structured.",
"For example, a section with heading \"Treatment | Gene Therapy\" might describe genetics as a subtopic of treatment.",
"Therefore, we also approach an extended task, WIKISECTION-headings, to capture ambiguity in a heading: we follow the CBOW approach (Mikolov et al., 2013) and assign all words in the heading z_j ⊂ Z as a multi-label bag over the original heading vocabulary.",
"This turns our problem into a ranked retrieval task with a large number of ambiguous labels, similar to Prabhu and Varma (2014) .",
"It further eliminates the need for normalized topic labels.",
"For both tasks, we aim to maximize the log likelihood of model parameters Θ on section and sentence level: L(Θ) = Σ_{j=1}^{M} log p(y_j | s_1, ..., s_N; Θ); L̄(Θ) = Σ_{k=1}^{N} log p(ȳ_k | s_1, ..., s_N; Θ) (2).",
"Our SECTOR architecture consists of four stages, shown in Figure 2: sentence encoding, topic embedding, topic classification and topic segmentation.",
"We now discuss each stage in more detail.",
"Sentence Encoding The first stage of our SECTOR model transforms each sentence s k from plain text into a fixed-size sentence vector x k that serves as input into the neural network layers.",
"Following Hill et al. (2016), word order is not critical for document-centric evaluation settings such as our WIKISECTION task.",
"Therefore, we mainly focus on unsupervised compositional sentence representations.",
"Bag-of-Words Encoding.",
"As a baseline, we compose sentence vectors using a weighted bagof-words scheme.",
"Let I(w) ∈ {0, 1}^{|V|} be the indicator vector, such that I(w)_(i) = 1 iff w is the i-th word in the fixed vocabulary V, and let tf-idf(w) be the TF-IDF weight of w in the corpus.",
"We define the sparse bag-of-words encoding x_bow ∈ R^{|V|} as follows: x_bow(s) = Σ_{w∈s} tf-idf(w) · I(w) (3).",
"Bloom Filter Embedding.",
"For large V and long documents, input matrices grow too large to fit into GPU memory, especially with larger batch sizes.",
"Therefore we apply a compression technique for sparse sentence vectors based on Bloom filters (Serrà and Karatzoglou, 2017) .",
"A Bloom filter projects every item of a set onto a bit array A(i) ∈ {0, 1} m using k independent hash functions.",
"We use the sum of bit arrays per word as compressed Bloom embedding x_bloom ∈ N^m: x_bloom(s) = Σ_{w∈s} Σ_{i=1}^{k} A(hash_i(w)) (4). We set parameters to m = 4096 and k = 5 to achieve a compression factor of 0.2, which showed good performance in the original paper.",
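The Bloom filter embedding of Eq. (4) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `hashlib`-based construction of k independent hash functions is an assumption (the paper follows Serrà and Karatzoglou, 2017), and the toy words are not from the data sets.

```python
# Sketch of the Bloom filter sentence embedding (Eq. 4) with m = 4096
# bits and k = 5 hash positions per word (paper's settings).
import hashlib

M, K = 4096, 5  # bit-array size m and number of hash functions k

def word_bits(word):
    """k hash positions for one word; salted md5 is an illustrative choice."""
    return [int(hashlib.md5(f"{i}:{word}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def x_bloom(sentence):
    """Sum of per-word bit arrays -> compressed count vector in N^m."""
    vec = [0] * M
    for w in sentence:
        for pos in set(word_bits(w)):  # each word sets at most k bits
            vec[pos] += 1
    return vec

vec = x_bloom(["gene", "therapy", "gene"])
```

Note that repeated words contribute their bit array once per occurrence, so the result is a small count vector rather than a pure bit set.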
"Sentence Embeddings.",
"We use the strategy of Arora et al. (2017) to generate a distributional sentence representation based on pre-trained word2vec embeddings (Mikolov et al., 2013).",
"This method composes a sentence vector v_emb ∈ R^d for all sentences using a probability-weighted sum of word embeddings v_w ∈ R^d with α = 10^{−4}, and subtracts the first principal component u of the embedding matrix [v_s : s ∈ S]: v_s = (1/|S|) Σ_{w∈s} (α / (α + p(w))) v_w; x_emb(s) = v_s − u u^T v_s (5).",
"Topic Embedding We model the second stage in our architecture to produce a dense distributional representation of latent topics for each sentence in the document.",
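The sentence-embedding computation of Eq. (5) can be sketched as below. Random vectors and uniform unigram probabilities stand in for the pre-trained word2vec embeddings and corpus statistics; only the weighting and principal-component removal follow the equation.

```python
# Sketch of the probability-weighted sentence embedding with first
# principal component removal (Eq. 5, following Arora et al. 2017).
import numpy as np

rng = np.random.default_rng(0)
d, alpha = 8, 1e-4
vocab = ["gene", "therapy", "treatment", "symptoms"]
word_vec = {w: rng.normal(size=d) for w in vocab}   # stand-in for word2vec
p = {w: 0.25 for w in vocab}                         # toy unigram probabilities

def raw_sentence_vec(sentence):
    """Probability-weighted average of word vectors."""
    acc = np.zeros(d)
    for w in sentence:
        acc += alpha / (alpha + p[w]) * word_vec[w]
    return acc / len(sentence)

sentences = [["gene", "therapy"], ["treatment"], ["symptoms", "gene"]]
V = np.stack([raw_sentence_vec(s) for s in sentences])

# subtract the projection onto the first principal component u
_, _, vt = np.linalg.svd(V, full_matrices=False)
u = vt[0]                       # unit-norm principal direction
X = V - np.outer(V @ u, u)      # x_emb(s) = v_s - u u^T v_s
```

After the subtraction, every sentence vector is orthogonal to the dominant common direction u, which is the point of the method.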
"We use two layers of LSTM (Hochreiter and Schmidhuber, 1997) with forget gates (Gers et al., 2000) connected to read the document in the forward and backward direction (Graves, 2012) .",
"We feed the LSTM outputs to a ''bottleneck'' layer with tanh activation as topic embedding.",
"Figure 3 shows these layers in context of the complete architecture.",
"We can see that context from left (k − 1) and right (k + 1) affects forward and backward layers independently.",
"It is therefore important to separate these weights in the embedding layer to precisely capture the difference between sentences at section boundaries.",
"We modify our objective given in Equation (2) accordingly. Note that we separate the network parameters for the forward and backward directions of the LSTM, and tie the remaining parameters Θ for the embedding and output layers.",
"This strategy couples the optimization of both directions into the same vector space without the need for an additional loss function.",
"The embeddings e_{1...N} are calculated from the context-adjusted hidden states h_k of the LSTM cells (here simplified as f_LSTM) through the bottleneck layer: h→_k = f_LSTM(x_k, h→_{k−1}, Θ→); h←_k = f_LSTM(x_k, h←_{k+1}, Θ←); e→_k = tanh(W_eh h→_k + b_e); e←_k = tanh(W_eh h←_k + b_e) (7). Now, a simple concatenation of the embeddings e_k = e→_k ⊕ e←_k can be used as topic vector by downstream applications.",
"Topic Classification The third stage in our architecture is the output layer that decodes the class labels.",
"To learn model parameters Θ required by the embedding, we need to optimize the full model for a training target.",
"For the WIKISECTION-topics task, we use a simple one-hot encodingȳ ∈ {0, 1} |Y| of the topic labels constructed in Section 3.3 with a softmax activation output layer.",
"For the WIKISECTION-headings task, we encode each heading as lowercase bag-of-words vector z̄ ∈ {0, 1}^{|Z|}, such that z̄_(i) = 1 iff the i-th word in Z is contained in the heading, for example, z̄_k = {gene, therapy, treatment}.",
"We then use a sigmoid activation function: ŷ_k = softmax(W_ye e→_k + W_ye e←_k + b_y), ẑ_k = sigmoid(W_ze e→_k + W_ze e←_k + b_z) (8).",
"Ranking Loss for Multi-Label Optimization.",
"The multi-label objective is to maximize the likelihood of every word that appears in a heading: L(Θ) = Σ_{k=1}^{N} Σ_{i=1}^{|Z|} log p(z^(i)_k | x_{1...N}; Θ) (9). For training this model, we use a variation of the logistic pairwise ranking loss function proposed by dos Santos et al. (2015).",
"It learns to maximize the distance between positive and negative labels: L = log(1 + exp(γ(m+ − score+(x)))) + log(1 + exp(γ(m− + score−(x)))) (10). We calculate the positive term of the loss by taking all scores of correct labels y+ into account.",
"We average over all correct scores to avoid a toostrong positive push on the energy surface of the loss function (LeCun et al., 2006) .",
"For the negative term, we only take the most offending example y − among all incorrect class labels.",
"score+(x) = (1/|y+|) Σ_{y∈y+} s_θ(x)_(y); score−(x) = arg max_{y∈y−} s_θ(x)_(y) (11). Here, s_θ(x)_(y) denotes the score of label y for input x.",
"We follow the authors and set scaling factor γ = 2, margins m + = 2.5, and m − = 0.5.",
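The pairwise ranking loss of Eqs. (10)-(11) can be sketched on toy label scores. This is an illustration, not the paper's code; the score values are invented, while γ = 2, m+ = 2.5, and m− = 0.5 follow the text.

```python
# Sketch of the logistic pairwise ranking loss (Eqs. 10-11, following
# dos Santos et al. 2015): mean over correct labels for the positive
# term, most offending incorrect label for the negative term.
import math

GAMMA, M_POS, M_NEG = 2.0, 2.5, 0.5

def ranking_loss(scores, positive_labels):
    pos = [scores[y] for y in positive_labels]
    neg = [s for y, s in scores.items() if y not in positive_labels]
    score_pos = sum(pos) / len(pos)   # averaged positive scores
    score_neg = max(neg)              # most offending negative label
    return (math.log(1 + math.exp(GAMMA * (M_POS - score_pos)))
            + math.log(1 + math.exp(GAMMA * (M_NEG + score_neg))))

scores = {"gene": 3.0, "therapy": 2.8, "cause": -1.0, "history": -2.0}
loss = ranking_loss(scores, {"gene", "therapy"})
```

Pushing the most offending negative label further down (e.g., lowering the score of "cause") decreases the loss, which is the intended training signal.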
"Topic Segmentation In the final stage, we leverage the information encoded in the topic embedding and output layers to segment the document and classify each section.",
"Baseline Segmentation Methods.",
"As a simple baseline method, we use prior information from the text and split sections at newline characters (NL).",
"Additionally, we merge two adjacent sections if they are assigned the same topic label after classification.",
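The merge step of the newline (NL) baseline can be sketched as a simple post-processing pass over classified sections. The section texts and topic labels below are toy values, not from the data sets.

```python
# Sketch of the NL baseline post-processing: after splitting at newline
# boundaries and classifying each part, merge adjacent sections that
# received the same topic label.
def merge_adjacent(sections):
    """sections: list of (text, label) pairs in document order."""
    merged = []
    for text, label in sections:
        if merged and merged[-1][1] == label:
            merged[-1] = (merged[-1][0] + " " + text, label)  # extend run
        else:
            merged.append((text, label))
    return merged

sections = [("s1 s2", "symptoms"), ("s3", "symptoms"),
            ("s4 s5", "treatment"), ("s6", "epidemiology")]
out = merge_adjacent(sections)
```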
"If there is no newline information available in the text, we use a maximum label (max) approach: We first split sections at every sentence break (i.e., S_j = s_k; j = k = 1, ..., N) and then merge all sections that share at least one label in the top-2 predictions.",
"Using Deviation of Topic Embeddings for Segmentation.",
"All information required to classify each sentence in a document is contained in our dense topic embedding matrix E = [e_1, ..., e_N].",
"We are now interested in the vector space movement of this embedding over the sequence of sentences.",
"Therefore, we apply a number of transformations adapted from Laplacian-of-Gaussian edge detection on images (Ziou and Tabbone, 1998) to obtain the magnitude of embedding deviation (emd) per sentence.",
"First, we reduce the dimensionality of E to D dimensions using PCA, that is, we solve E = U ΣW T using singular value decomposition and then project E on the D principal components E D = EW D .",
"Next, we apply Gaussian smoothing to obtain a smoothed matrix E D by convolution with a Gaussian kernel with variance σ 2 .",
"From the reduced and smoothed embedding vectors e'_{1...N} we construct a sequence of deviations d_{1...N} by calculating the stepwise difference using cosine distance: d_k = cos(e'_{k−1}, e'_k) = (e'_{k−1} · e'_k) / (‖e'_{k−1}‖ ‖e'_k‖) (12). Finally, we apply this sequence with parameters D = 16 and σ = 2.5 to locate the spots of fastest movement (see Figure 4), i.e., all k where d_{k−1} < d_k > d_{k+1}; k = 1, ..., N in our discrete case.",
"We use these positions to start a new section.",
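The embedding deviation (emd) pipeline can be sketched end to end on synthetic embeddings: PCA reduction, Gaussian smoothing along the sentence axis, stepwise cosine differences, then local maxima as section starts. D and σ are scaled down from the paper's D = 16, σ = 2.5 to fit the toy size, and a cosine *distance* (1 − similarity) is used for the deviation.

```python
# Sketch of emd segmentation on a toy 10-sentence document with two
# latent "topics" (sentences 0-4 vs. 5-9).
import numpy as np
from math import exp

rng = np.random.default_rng(1)
E = np.vstack([rng.normal(0, 0.1, (5, 32)) + 1.0,
               rng.normal(0, 0.1, (5, 32)) - 1.0])

D, sigma, radius = 4, 1.0, 3
# PCA via SVD: project onto the first D principal components
_, _, vt = np.linalg.svd(E - E.mean(axis=0), full_matrices=False)
E_d = (E - E.mean(axis=0)) @ vt[:D].T

# Gaussian smoothing along the sentence sequence
kernel = np.array([exp(-0.5 * (i / sigma) ** 2)
                   for i in range(-radius, radius + 1)])
kernel /= kernel.sum()
E_s = np.stack([np.convolve(E_d[:, j], kernel, mode="same")
                for j in range(D)], axis=1)

def cos_dist(a, b):
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

d = [cos_dist(E_s[k - 1], E_s[k]) for k in range(1, len(E_s))]
# local maxima of the deviation sequence mark new sections
peaks = [k for k in range(1, len(d) - 1) if d[k - 1] < d[k] > d[k + 1]]
```

On this toy input, the deviation spikes at the step between sentences 4 and 5, exactly where the latent topic changes.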
"Improving Edge Detection with Bidirectional Layers.",
"We adopt the approach of Sehikh et al. (2017), who examine the difference between forward and backward layer of an LSTM for segmentation.",
"However, our approach focuses on the difference of left and right topic context over time steps k, which allows for a sharper distinction between sections.",
"Here, we obtain two smoothed embeddings e'→ and e'← and define the bidirectional embedding deviation (bemd) as the geometric mean of the forward and backward difference: d_k = sqrt(cos(e'→_{k−1}, e'→_k) · cos(e'←_k, e'←_{k+1})) (13). After segmentation, we assign each segment the mean class distribution of all contained sentences: ȳ_j = (1/|S_j|) Σ_{s_i∈S_j} ŷ_i (14). Finally, we show in the evaluation that our SECTOR model, which was optimized for sentences ȳ_k, can be applied to the WIKISECTION task to predict coherently labeled sections T_j = ⟨S_j, ȳ_j⟩.",
"Evaluation We conduct three experiments to evaluate the segmentation and classification task introduced in Section 3.",
"The WIKISECTION-topics experiment constitutes segmentation and classification of each section with a single topic label out of a small number of clean labels (25-30 topics).",
"The WIKISECTION-headings experiment extends the classification task to multi-label per section with a larger target vocabulary (1.0k-2.8k words).",
"This is important, because often there are no clean topic labels available for training or evaluation.",
"Finally, we conduct a third experiment to see how SECTOR performs across existing segmentation data sets.",
"Evaluation Data Sets.",
"For the first two experiments we use the WIKISECTION data sets introduced in Section 3.1, which contain documents about diseases and cities in both English and German.",
"The subsections are retained with full granularity.",
"For the third experiment, text segmentation results are often reported on artificial data sets (Choi, 2000) .",
"It was shown that this scenario is hardly applicable to topic-based segmentation (Koshorek et al., 2018) , so we restrict our evaluation to real-world data sets that are publicly available.",
"The Wiki-727k data set by Koshorek et al. (2018) contains Wikipedia articles with a broad range of topics and their top-level sections.",
"However, it is too large to compare exhaustively, so we use the smaller Wiki-50 subset.",
"We further use the Cities and Elements data sets introduced by Chen et al. (2009), which also provide headings.",
"These sets are typically used for word-level segmentation, so they don't contain any punctuation and are lowercased.",
"Finally, we use the Clinical Textbook chapters introduced by Eisenstein and Barzilay (2008) , which do not supply headings.",
"Text Segmentation Models.",
"We compare SEC-TOR to common text segmentation methods as baseline, C99 (Choi, 2000) and TopicTiling (Riedl and Biemann, 2012) and the state-of-the-art TextSeg segmenter (Koshorek et al., 2018) .",
"In the third experiment we report numbers for BayesSeg (Eisenstein and Barzilay, 2008 ) (configured to predict with unknown number of segments) and GraphSeg (Glavaš et al., 2016) .",
"Classification Models.",
"We compare SECTOR to existing models for single and multi-label sentence classification.",
"Because we are not aware of any existing method for combined segmentation and classification, we first compare all methods using given prior segmentation from newlines in the text (NL) and then additionally apply our own segmentation strategies for plain text input: maximum label (max), embedding deviation (emd) and bidirectional embedding deviation (bemd).",
"For the experiments, we train a Paragraph Vectors (PV) model (Le and Mikolov, 2014) using all sections of the training sets.",
"We utilize this model for single-label topic classification (depicted as PV>T) by assigning the given topic labels as paragraph IDs.",
"Multi-label classification is not possible with this model.",
"We use the paragraph embedding for our own segmentation strategies.",
"We set the layer size to 256, window size to 7, and trained for 10 epochs using a batch size of 512 sentences and a learning rate of 0.025.",
"We further use an implementation of CNN (Kim, 2014) with our pre-trained word vectors as input for single-label topics (CNN>T) and multi-label headings (CNN>H).",
"We configured the models using the hyperparameters given in the paper and trained the model using a batch size of 256 sentences for 20 epochs with learning rate 0.01.",
"SECTOR Configurations.",
"We evaluate the various configurations of our model discussed in prior sections.",
"SEC>T depicts the single-label topic classification model, which uses a softmax activation output layer; SEC>H is the multi-label variant with a larger output and sigmoid activations.",
"Other options are: bag-of-words sentence encoding (+bow), Bloom filter encoding (+bloom) and sentence embeddings (+emb); multi-class cross-entropy loss (as default) and ranking loss (+rank).",
"We have chosen network hyperparameters using grid search on the en disease validation set and keep them fixed over all evaluation runs.",
"For all configurations, we set LSTM layer size to 256, topic embeddings dimension to 128.",
"Models are trained on the complete train splits with a batch size of 16 documents (reduced to 8 for bag-of-words), 0.01 learning rate, 0.5 dropout, and ADAM optimization.",
"We used early stopping after 10 epochs without MAP improvement on the validation data sets.",
"We pretrained word embeddings with 256 dimensions for the specific tasks using word2vec on lowercase English and German Wikipedia documents using a window size of 7.",
"All tests are implemented in Deeplearning4j and run on a Tesla P100 GPU with 16GB memory.",
"Training a SEC+bloom model on en city takes roughly 5 hours, inference on CPU takes on average 0.36 seconds per document.",
"In addition, we trained a SEC>H@fullwiki model with raw headings from a complete English Wikipedia dump (see footnote 8), and use this model for cross-data set evaluation.",
"Quality Measures.",
"We measure text segmentation at sentence level using the probabilistic P k error score (Beeferman et al., 1999) , which calculates the probability of a false boundary in a window of size k, lower numbers mean better segmentation.",
"As relevant section boundaries we consider all section breaks where the topic label changes.",
"We set k to half of the average segment length.",
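The P_k error score described above can be sketched as follows. This is a minimal illustration under the stated definition (probability that a window of size k disagrees on whether its two ends lie in the same segment), not the paper's evaluation code; the label sequences are toy values.

```python
# Sketch of the P_k segmentation error (Beeferman et al. 1999);
# lower is better, 0.0 means perfect segmentation.
def p_k(reference, hypothesis, k=None):
    """reference/hypothesis: segment label per sentence, e.g. [0,0,1,1,2]."""
    n = len(reference)
    if k is None:  # half of the average reference segment length
        k = max(1, n // (2 * len(set(reference))))
    errors = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        errors += same_ref != same_hyp  # window disagrees on a boundary
    return errors / (n - k)

ref = [0, 0, 0, 1, 1, 1, 2, 2, 2]
shifted = [0, 0, 1, 1, 1, 2, 2, 2, 2]  # both boundaries off by one
perfect_score = p_k(ref, ref)
shifted_score = p_k(ref, shifted)
```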
"We measure classification performance on section level by comparing the topic labels of all ground truth sections with predicted sections.",
"We select the pairs by matching their positions using maximum boundary overlap.",
"(Footnote 8: Excluding all documents contained in the test sets.)",
"We report microaveraged F 1 score for single-label or Precision@1 for multi-label classification.",
"Additionally, we measure Mean Average Precision (MAP), which evaluates the average fraction of true labels ranked above a particular label (Tsoumakas et al., 2009) .",
"Table 3 shows the evaluation results of the WIKISECTION-topics single-label classification task, Table 4 contains the corresponding numbers for multi-label classification.",
"Table 5 shows results for topic segmentation across different data sets.",
"Results SECTOR Outperforms Existing Classifiers.",
"With our given segmentation baseline (NL), the best sentence classification model CNN achieves 52.1% F 1 averaged over all data sets.",
"SECTOR improves this score significantly by 12.4 points.",
"Furthermore, in the setting with plain text input, SECTOR improves the CNN score by 18.8 points using identical baseline segmentation.",
"Our model finally reaches an average of 61.8% F 1 on the classification task using sentence embeddings and bidirectional segmentation.",
"This is a total improvement of 27.8 points over the CNN model.",
"Topic Embeddings Improve Segmentation.",
"SECTOR outperforms C99 and TopicTiling significantly by 16.4 and 18.8 points P k , respectively, on average.",
"Compared to the maximum label baseline, our model gains 3.1 points by using the bidirectional embedding deviation and 1.0 points using sentence embeddings.",
"Overall, SECTOR misses only 4.2 points P k and 2.6 points F 1 compared with the experiments with prior newline segmentation.",
"The third experiment reveals that our segmentation method in isolation almost reaches state-of-the-art on existing data sets and beats the unsupervised baselines, but lacks performance on cross-data set evaluation.",
"Bloom Filters on Par with Word Embeddings.",
"Bloom filter encoding achieves high scores among all data sets and outperforms our bag-of-words baseline, possibly because of larger training batch sizes and reduced model parameters.",
"Surprisingly, word embeddings did not improve the model significantly.",
"On average, German models gained 0.7 points F 1 and English models declined by 0.4 points compared with Bloom filters.",
"Table 3 excerpt (classification and segmentation on plain text; Pk, F1, MAP per data set): C99 37.4/n/a/n/a, 42.7/n/a/n/a, 36.8/n/a/n/a, 38.3/n/a/n/a; TopicTiling 43.4/n/a/n/a, 45.4/n/a/n/a, 30.5/n/a/n/a, 41.3/n/a/n/a; TextSeg 24.3/n/a/n/a, 35.7/n/a/n/a, 19.3/n/a/n/a, 27.5/n/a/n/a; PV>T",
"Table 4: Results for segmentation and multi-label classification trained with raw Wikipedia headings.",
"Here, the task is to segment the document and predict multi-word topics from a large ambiguous target vocabulary.",
"However, model training and inference using pre-trained embeddings is faster by an average factor of 3.2.",
"Topic Embeddings Perform Well on Noisy Data.",
"In the multi-label setting with unprocessed Wikipedia headings, classification precision of SECTOR reaches up to 72.3% P@1 for 2.8k labels.",
"This score is on average 9.5 points lower compared with the models trained on the small number of 25-30 normalized labels.",
"Furthermore, segmentation performance only misses 3.8 points P k compared with the topics task.",
"Ranking loss could not improve our models significantly, but achieved better segmentation scores on the headings task.",
"Finally, the cross-domain English fullwiki model performs only on baseline level for segmentation, but still achieves better classification performance than CNN on the English cities data set.",
"Figure 5 : Heatmaps of predicted topic labelsŷ k for document Trichomoniasis from PV and SECTOR models with newline and embedding segmentation.",
"Shading denotes probability for 10 out of 27 selected topic classes on Y axis, with sentences from left to right.",
"Segmentation is shown as black lines, X axis shows expected gold labels.",
"Note that segments with same class assignments are merged in both predictions and gold standard ('...').",
"Discussion and Model Insights SECTOR Captures Latent Topics from Context.",
"We clearly see from NL predictions (left side of Figure 5 ) that SECTOR produces coherent results with sentence granularity, with topics emerging and disappearing over the course of a document.",
"In contrast, PV predictions are scattered across the document.",
"Both models successfully classify first (symptoms) and last sections (epidemiology).",
"However, only SECTOR can capture diagnosis, prevention, and treatment.",
"Furthermore, we observe additional screening predictions in the center of the document.",
"This section is actually labeled \"Prevention | Screening\" in the source document, which explains this overlap.",
"Furthermore, we observe low confidence in the second section labeled cause.",
"Our multi-class model predicts for this section {diagnosis, cause, genetics}.",
"The ground truth heading for this section is \"Causes | Genetic sequence,\" but even for a human reader this assignment is not clear.",
"This shows that the multilabel approach fills an important gap and can even serve as an indicator for low-quality article structure.",
"Finally, both models fail to segment the complication section near the end, because it consists of an enumeration.",
"The embedding deviation segmentation strategy (right side of Figure 5 ) completely solves this issue for both models.",
"Our SECTOR model gives nearly perfect segmentation using the bidirectional strategy; it only misses the discussed part of cause and is off by one sentence for the start of prevention.",
"Furthermore, averaging over sentence-level predictions reveals clearly distinguishable section class labels.",
"Conclusions and Future Work We presented SECTOR, a novel model for coherent text segmentation and classification based on latent topics.",
"We further contributed WIKISECTION, a collection of four large data sets in English and German for this task.",
"Our end-to-end method builds on a neural topic embedding which is trained using Wikipedia headings to optimize a bidirectional LSTM classifier.",
"We showed that our best performing model is based on sparse word features with Bloom filter encoding and significantly improves classification precision for 25-30 topics on comprehensive documents by up to 29.5 points F 1 compared with state-of-the-art sentence classifiers with baseline segmentation.",
"We used the bidirectional deviation in our topic embedding to segment a document into coherent sections without additional training.",
"Finally, our experiments showed that extending the task to multi-label classification of 2.8k ambiguous topic words still produces coherent results with 71.1% average precision.",
"We see an exciting future application of SECTOR as a building block to extract and retrieve topical passages from unlabeled corpora, such as medical research articles or technical papers.",
"One possible task is WikiPassageQA , a benchmark to retrieve passages as answers to non-factoid questions from long articles."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Task Overview and Data set",
"WikiSection Data Set",
"Preprocessing",
"Synset Clustering",
"SECTOR Model",
"Sentence Encoding",
"Topic Embedding",
"Topic Classification",
"Topic Segmentation",
"Evaluation",
"Results",
"Conclusions and Future Work"
]
}
|
GEM-SciDuet-train-86#paper-1223#slide-16
|
Conclusion and future work
|
SECTOR is designed as a building block for document-level knowledge representation
Reading sentences in document context is an important step to capture both topical and structural information
Training the topic embedding with distant-supervised complementary labels improves performance over self-supervised word embeddings
In future work, we aim to apply the topic embedding for unsupervised passage retrieval and QA tasks
q = therapy
|
SECTOR is designed as a building block for document-level knowledge representation
Reading sentences in document context is an important step to capture both topical and structural information
Training the topic embedding with distant-supervised complementary labels improves performance over self-supervised word embeddings
In future work, we aim to apply the topic embedding for unsupervised passage retrieval and QA tasks
q = therapy
|
[] |
GEM-SciDuet-train-87#paper-1226#slide-0
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g. encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoderdecoder architecture with the attention mecha-nism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processing its own information flow through the time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks, and let the last one to be task-specific 1 .",
"Task-specific block receives the input of the unit directly while shared blocks are fed with modulated input by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible to softly forward the input to the shared blocks conditioning on the input x_t and the previous hidden state of the unit h_{t−1} as follows: s_t = tanh(W_x · x_t + W_h · h_{t−1} + b_s), τ_t = softmax(W_τ · s_t + b_τ), where W's and b's are the parameters.",
"Then, the i-th shared block is fed with the input of the unit modulated by the corresponding output of the routing network: x̃^(i)_t = τ_t[i] x_t, where τ_t[i] is the scalar output of the routing network for the i-th block.",
"(Footnote 1: multiple recurrent units can be stacked on top of each other to form a multi-layer component.)",
"(Figure 1: High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block.)",
"The hidden state of the unit is the concatenation of the hidden state of the shared and task-specific parts: h_t = [h^(shared)_t ; h^(task)_t].",
"The state of the task-specific part is the state of the corresponding block, h^(task)_t = h^(n+1)_t, and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network: h^(shared)_t = Σ_{i=1}^{n} τ_t[i] h^(i)_t.",
"Block Architecture Each block is responsible to control its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: z^(i)_t = σ(W^(i)_z x̃^(i)_t + U^(i)_z h^(i)_{t−1} + b^(i)_z), r^(i)_t = σ(W^(i)_r x̃^(i)_t + U^(i)_r h^(i)_{t−1} + b^(i)_r), h̃^(i)_t = tanh(W^(i)_h x̃^(i)_t + U^(i)_h (r^(i)_t ⊙ h^(i)_{t−1}) + b^(i)_h), h^(i)_t = z^(i)_t ⊙ h^(i)_{t−1} + (1 − z^(i)_t) ⊙ h̃^(i)_t.",
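One forward step of the proposed recurrent unit (router plus shared and task-specific GRU blocks) can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code: all weights are random stand-ins, biases are omitted for brevity, and the layer sizes are invented.

```python
# Sketch of one time step: a softmax router modulates the input to
# n shared GRU blocks, a task-specific block sees the raw input, and
# the unit state concatenates the weighted shared sum with the task state.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_blk, n_shared = 6, 4, 3   # plus 1 task-specific block

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_step(params, x, h_prev):
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h_prev)
    r = sigmoid(Wr @ x + Ur @ h_prev)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))
    return z * h_prev + (1 - z) * h_tilde

# per-block GRU weights: Wz, Uz, Wr, Ur, Wh, Uh
blocks = [[rng.normal(size=(d_blk, d)) for d in (d_in, d_blk) * 3]
          for _ in range(n_shared + 1)]
Wx = rng.normal(size=(8, d_in))
Wh_r = rng.normal(size=(8, d_blk * 2))
Wt = rng.normal(size=(n_shared, 8))

def unit_step(x, h_prev_blocks, h_prev_unit):
    s = np.tanh(Wx @ x + Wh_r @ h_prev_unit)
    tau = np.exp(Wt @ s); tau /= tau.sum()          # router softmax
    h_new = [gru_step(blocks[i], tau[i] * x, h_prev_blocks[i])
             for i in range(n_shared)]
    h_new.append(gru_step(blocks[-1], x, h_prev_blocks[-1]))  # task block
    h_shared = sum(tau[i] * h_new[i] for i in range(n_shared))
    return h_new, np.concatenate([h_shared, h_new[-1]])

h_blocks = [np.zeros(d_blk) for _ in range(n_shared + 1)]
h_blocks, h_unit = unit_step(rng.normal(size=d_in), h_blocks,
                             np.zeros(d_blk * 2))
```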
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b), which computes the conditional probability of the target sequence given the source: P_θ(y|x) = ∏_j P_θ(y_j | y_{<j}, x).",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set D_m := {(x_i, y_i)}_{i=1}^{N_m}, the parameters of the MTL architecture Θ_mtl = {Θ_m}_{m=0}^{M} are learned by maximizing the following objective: L_mtl(Θ_mtl) := Σ_{m=0}^{M} (γ_m / |D_m|) Σ_{(x,y)∈D_m} log P_{Θ_m}(y|x), where |D_m| is the size of the training set for the m-th task, and γ_m is responsible to balance the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
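The training schedule described above can be sketched as a simple loop: every SGD step draws one mini-batch from the main translation task plus one from a randomly chosen auxiliary task. The task names mirror the paper, but the "datasets" are toy batch-id lists and `update(...)` is a hypothetical stand-in for an optimizer step.

```python
# Sketch of the MTL training schedule: main-task batch in every update
# so its training signal is never washed out by auxiliary tasks.
import random

random.seed(0)
tasks = {"translation": list(range(100)),
         "ner": list(range(40)),
         "syntactic_parsing": list(range(40)),
         "semantic_parsing": list(range(40))}
aux = [t for t in tasks if t != "translation"]

log = []
def update(task, batch):
    """Stand-in for one SGD update on (task, batch)."""
    log.append(task)

for step in range(1000):
    update("translation", random.choice(tasks["translation"]))  # main task
    t = random.choice(aux)                                       # one auxiliary
    update(t, random.choice(tasks[t]))

main_ratio = log.count("translation") / len(log)
```

By construction, exactly half of all updates come from the main task, and the remaining half is spread over the auxiliary tasks.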
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoined (30K BPE operations).",
"Further details about the corpora and their pre-processing is as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens in either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either sides.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities help the model to learn translation pattern by masking out named-entites.",
"We have used the NER data comes from the CONLL shared task.",
"4 Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"Specially, in the case of language pairs with high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For the fair comparison in terms the of number of parameters, we used 3 stacked layers in both encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"Learning 6 In preliminary experiments, we have tried different sharing scenarios and this one led to the best results.",
"rates are halved on the decrease in the performance on the dev set of corresponding task.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of MTL models are better than the baseline 1 (only MT task).",
"As seen, partial parameter sharing is more effective than fully parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieve +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing on the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, so specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-0
|
Improving NMT in low Resource scenarios
|
Bilingually low-resource scenario: large amounts of bilingual training data is not available
IDEA: Use existing resources from other tasks and train one model for all tasks using multi-task learning
This effectively injects inductive biases to help improve the generalisation of
Auxiliary tasks: Semantic Parsing, Syntactic Parsing, Named Entity Recognition
|
Bilingually low-resource scenario: large amounts of bilingual training data is not available
IDEA: Use existing resources from other tasks and train one model for all tasks using multi-task learning
This effectively injects inductive biases to help improve the generalisation of
Auxiliary tasks: Semantic Parsing, Syntactic Parsing, Named Entity Recognition
|
[] |
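The training objective and schedule described in the paper text of the record above (a weighted per-task log-likelihood, with each SGD update pairing one main-task translation mini-batch with one mini-batch from a randomly selected auxiliary task) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' DyNet code; the names `mtl_loss_term` and `make_schedule` are hypothetical.

```python
import random

def mtl_loss_term(batch_log_probs, gamma, dataset_size):
    """One task's contribution to the MTL objective:
    (gamma_m / |D_m|) * sum of log P(y | x) over the mini-batch."""
    return (gamma / dataset_size) * sum(batch_log_probs)

def make_schedule(main_batches, aux_batches, seed=0):
    """Pair every main-task (translation) mini-batch with one mini-batch
    drawn from a randomly selected auxiliary task, so that the main task's
    training signal appears in every SGD update and is never washed out."""
    rng = random.Random(seed)
    tasks = sorted(aux_batches)
    steps = []
    for mb in main_batches:
        task = rng.choice(tasks)
        steps.append((mb, task, rng.choice(aux_batches[task])))
    return steps

schedule = make_schedule(
    main_batches=["mt-0", "mt-1", "mt-2"],
    aux_batches={"ner": ["ner-0"], "syntactic": ["syn-0"], "semantic": ["amr-0"]},
)
```

With γ = 1 for all tasks (the setting the paper reports as best), the per-task weight reduces to a simple 1/|D_m| normalization, which is what `mtl_loss_term` computes.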
GEM-SciDuet-train-87#paper-1226#slide-1
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or inability to leverages commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoderdecoder architecture with the attention mecha-nism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or inability to leverages commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processing its own information flow through the time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks, and let the last one to be task-specific 1 .",
"Task-specific block receives the input of the unit directly while shared blocks are fed with modulated input by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible to softly forward the input to the shared blocks conditioning on the input x t , and the previous hidden state of the unit h t−1 as follows: s t = tanh(W x · x t + W h · h t−1 + b s ), τ t = softmax(W τ · s t + b τ ), where W 's and b's are the parameters.",
"Then, the i-th shared block is fed with the input of the 1 multiple recurrent units can be stacked on top of each other to consist a multi-layer component Figure 1 : High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 taskspecific.",
"h (2) h (1) h (3) h t-1 x t h t h (4) h (2) h (1) h (3) h (4) t t t t t-1 t-1 t-1 t-1 unit modulated by the corresponding output of the routing networkx (i) t = τ t [i]x t where τ t [i] is the scalar output of the routing network for the i-th block.",
"The hidden state of the unit is the concatenation of the hidden state of the shared and taskspecific parts h t = [h (shared) t ; h (task) t ].",
"The state of task-specific part is the state of the corresponding block h (task) t = h (n+1) t , and the state of the shared part is the sum of states of shared blocks weighted by the outputs of the routing network h (shared) t = n i=1 τ t [i]h (i) t .",
"Block Architecture Each block is responsible to control its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: z (i) t = σ(W (i) zx (i) t + U (i) z h (i) t−1 + b (i) z ), r (i) t = σ(W (i) rx (i) t + U (i) r h (i) t−1 + b (i) r ), h (i) t = tanh(W (i) hx (i) t + U (i) h h (i) t−1 + b (i) h ), h (i) t = z (i) t h (i) t−1 + (1 − z (i) t ) h (i) t .",
"Training Objective and Schedule.",
"The rest of the model is similar to attentional SEQ2SEQ model (Luong et al., 2015b) which computes the conditional probability of the target sequence given the source P θ θ θ (y|x) = j P θ θ θ (y j |y <j x).",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set D m := {(x i , y i )} Nm i=1 , the parameters of MTL architecture Θ mtl = {Θ m } M m=0 are learned by maximizing the following objective: L mtl (Θ mtl ) := M m=0 γ m |D m | (x,y)∈Dm log P Θm (y|x) where |D m | is the size of the training set for the mth task, and γ m is responsible to balance the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoined (30K BPE operations).",
"Further details about the corpora and their pre-processing is as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens in either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either sides.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities help the model to learn translation pattern by masking out named-entites.",
"We have used the NER data comes from the CONLL shared task.",
"4 Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"Specially, in the case of language pairs with high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For the fair comparison in terms the of number of parameters, we used 3 stacked layers in both encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"Learning 6 In preliminary experiments, we have tried different sharing scenarios and this one led to the best results.",
"rates are halved on the decrease in the performance on the dev set of corresponding task.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of MTL models are better than the baseline 1 (only MT task).",
"As seen, partial parameter sharing is more effective than fully parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieve +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing on the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, so specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-1
|
Encoders Decoders for Individual Tasks
|
I went home Encoder Decoder
Obama was elected and his voter celebrated
The burglar robbed the apartment
NP Encoder Decoder VP apartment the
Named-Entity Recognition NP N
DT burglar the Jim bought 300 shares of Acme Corp. in 2006 Encoder Decoder B-PER 0 0 0 0 B-ORG I-ORG 0 B-MISC
Noun Phrases (NP): the burglar, the apartment
|
I went home Encoder Decoder
Obama was elected and his voter celebrated
The burglar robbed the apartment
NP Encoder Decoder VP apartment the
Named-Entity Recognition NP N
DT burglar the Jim bought 300 shares of Acme Corp. in 2006 Encoder Decoder B-PER 0 0 0 0 B-ORG I-ORG 0 B-MISC
Noun Phrases (NP): the burglar, the apartment
|
[] |
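The routing mechanism in the paper text above (s_t = tanh(W_x·x_t + W_h·h_{t−1} + b_s), τ_t = softmax(W_τ·s_t + b_τ), with modulated block inputs x̃^(i)_t = τ_t[i]·x_t) can be sketched in plain Python for small vectors. This is a toy sketch, not the authors' implementation; all dimensions and names are illustrative.

```python
import math

def matvec(W, v):
    # Dense matrix-vector product for lists of lists.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vadd(*vs):
    # Element-wise sum of equal-length vectors.
    return [sum(xs) for xs in zip(*vs)]

def softmax(logits):
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def route(x_t, h_prev, W_x, W_h, b_s, W_tau, b_tau):
    """Router state s_t = tanh(W_x x_t + W_h h_{t-1} + b_s);
    block weights tau_t = softmax(W_tau s_t + b_tau).
    Returns tau_t and the modulated inputs x~^(i)_t = tau_t[i] * x_t
    for each of the n shared blocks."""
    s = [math.tanh(v) for v in vadd(matvec(W_x, x_t), matvec(W_h, h_prev), b_s)]
    tau = softmax(vadd(matvec(W_tau, s), b_tau))
    modulated = [[t * x for x in x_t] for t in tau]
    return tau, modulated

# Toy dimensions: input/hidden size 2, n = 3 shared blocks.
tau, mod = route(
    x_t=[1.0, -1.0], h_prev=[0.5, 0.5],
    W_x=[[0.1, 0.2], [0.0, 0.1]], W_h=[[0.3, 0.0], [0.1, 0.1]],
    b_s=[0.0, 0.0],
    W_tau=[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], b_tau=[0.0, 0.0, 0.0],
)
```

Because τ_t is a softmax, the shared-part state h^(shared)_t = Σ_i τ_t[i] h^(i)_t is a convex combination of the shared blocks' states, which is how the router softly allocates blocks to tasks.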
GEM-SciDuet-train-87#paper-1226#slide-2
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or inability to leverages commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoderdecoder architecture with the attention mecha-nism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or inability to leverages commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processing its own information flow through the time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks, and let the last one to be task-specific 1 .",
"Task-specific block receives the input of the unit directly while shared blocks are fed with modulated input by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible to softly forward the input to the shared blocks conditioning on the input x t , and the previous hidden state of the unit h t−1 as follows: s t = tanh(W x · x t + W h · h t−1 + b s ), τ t = softmax(W τ · s t + b τ ), where W 's and b's are the parameters.",
"Then, the i-th shared block is fed with the input of the 1 multiple recurrent units can be stacked on top of each other to consist a multi-layer component Figure 1 : High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 taskspecific.",
"h (2) h (1) h (3) h t-1 x t h t h (4) h (2) h (1) h (3) h (4) t t t t t-1 t-1 t-1 t-1 unit modulated by the corresponding output of the routing networkx (i) t = τ t [i]x t where τ t [i] is the scalar output of the routing network for the i-th block.",
"The hidden state of the unit is the concatenation of the hidden state of the shared and taskspecific parts h t = [h (shared) t ; h (task) t ].",
"The state of task-specific part is the state of the corresponding block h (task) t = h (n+1) t , and the state of the shared part is the sum of states of shared blocks weighted by the outputs of the routing network h (shared) t = n i=1 τ t [i]h (i) t .",
"Block Architecture Each block is responsible for controlling its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: z_t^(i) = σ(W_z^(i) x̃_t^(i) + U_z^(i) h_{t−1}^(i) + b_z^(i)), r_t^(i) = σ(W_r^(i) x̃_t^(i) + U_r^(i) h_{t−1}^(i) + b_r^(i)), h̃_t^(i) = tanh(W_h^(i) x̃_t^(i) + U_h^(i) (r_t^(i) ⊙ h_{t−1}^(i)) + b_h^(i)), h_t^(i) = z_t^(i) ⊙ h_{t−1}^(i) + (1 − z_t^(i)) ⊙ h̃_t^(i).",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b) which computes the conditional probability of the target sequence given the source: P_θ(y|x) = Π_j P_θ(y_j | y_<j, x).",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set D_m := {(x_i, y_i)}_{i=1}^{N_m}, the parameters of the MTL architecture Θ_mtl = {Θ_m}_{m=0}^{M} are learned by maximizing the following objective: L_mtl(Θ_mtl) := Σ_{m=0}^{M} (γ_m / |D_m|) Σ_{(x,y)∈D_m} log P_{Θ_m}(y|x), where |D_m| is the size of the training set for the m-th task, and γ_m balances the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoint (30K BPE operations).",
"Further details about the corpora and their pre-processing are as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens on either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese corpus has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities helps the model learn translation patterns by masking out named-entities.",
"We have used the NER data from the CoNLL shared task.",
"4 Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"Especially, in the case of language pairs with a high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"(Footnote 6: In preliminary experiments, we have tried different sharing scenarios and this one led to the best results.)",
"Learning rates are halved when the performance on the dev set of the corresponding task decreases.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of the MTL models is better than baseline 1 (only the MT task).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieves +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-2
|
Sharing Scenario
|
task tag Parse tree
Machine Translation Encoder Decoder
Semantic Parsing Encoder Decoder
Syntactic Parsing Encoder Decoder
Named-Entity Recognition Encoder Decoder
|
task tag Parse tree
Machine Translation Encoder Decoder
Semantic Parsing Encoder Decoder
Syntactic Parsing Encoder Decoder
Named-Entity Recognition Encoder Decoder
|
[] |
GEM-SciDuet-train-87#paper-1226#slide-3
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among subsets of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoderdecoder architecture with the attention mecha-nism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processes its own information flow through time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks and let the last one be task-specific 1 .",
"The task-specific block receives the input of the unit directly, while the shared blocks are fed with input modulated by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible for softly forwarding the input to the shared blocks, conditioned on the input x_t and the previous hidden state of the unit h_{t−1}, as follows: s_t = tanh(W_x · x_t + W_h · h_{t−1} + b_s), τ_t = softmax(W_τ · s_t + b_τ), where the W's and b's are the parameters.",
"Then, the i-th shared block is fed with the input of the unit modulated by the corresponding output of the routing network, x̃_t^(i) = τ_t[i] · x_t, where τ_t[i] is the scalar output of the routing network for the i-th block.",
"(Footnote 1: multiple recurrent units can be stacked on top of each other to constitute a multi-layer component.) Figure 1: High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block.",
"The hidden state of the unit is the concatenation of the hidden states of the shared and task-specific parts: h_t = [h_t^(shared); h_t^(task)].",
"The state of the task-specific part is the state of the corresponding block, h_t^(task) = h_t^(n+1), and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network: h_t^(shared) = Σ_{i=1}^{n} τ_t[i] · h_t^(i).",
"Block Architecture Each block is responsible for controlling its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: z_t^(i) = σ(W_z^(i) x̃_t^(i) + U_z^(i) h_{t−1}^(i) + b_z^(i)), r_t^(i) = σ(W_r^(i) x̃_t^(i) + U_r^(i) h_{t−1}^(i) + b_r^(i)), h̃_t^(i) = tanh(W_h^(i) x̃_t^(i) + U_h^(i) (r_t^(i) ⊙ h_{t−1}^(i)) + b_h^(i)), h_t^(i) = z_t^(i) ⊙ h_{t−1}^(i) + (1 − z_t^(i)) ⊙ h̃_t^(i).",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b) which computes the conditional probability of the target sequence given the source: P_θ(y|x) = Π_j P_θ(y_j | y_<j, x).",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set D_m := {(x_i, y_i)}_{i=1}^{N_m}, the parameters of the MTL architecture Θ_mtl = {Θ_m}_{m=0}^{M} are learned by maximizing the following objective: L_mtl(Θ_mtl) := Σ_{m=0}^{M} (γ_m / |D_m|) Σ_{(x,y)∈D_m} log P_{Θ_m}(y|x), where |D_m| is the size of the training set for the m-th task, and γ_m balances the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoint (30K BPE operations).",
"Further details about the corpora and their pre-processing are as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens on either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese corpus has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities helps the model learn translation patterns by masking out named-entities.",
"We have used the NER data from the CoNLL shared task.",
"4 Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"Especially, in the case of language pairs with a high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"(Footnote 6: In preliminary experiments, we have tried different sharing scenarios and this one led to the best results.)",
"Learning rates are halved when the performance on the dev set of the corresponding task decreases.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of the MTL models is better than baseline 1 (only the MT task).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieves +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-3
|
Partial Parameter Sharing
|
<translation> I went home Decoder Encoder
I went home <EOS> <translation>
Zaremoodi & Haffari, NAACL, 2018
|
<translation> I went home Decoder Encoder
I went home <EOS> <translation>
Zaremoodi & Haffari, NAACL, 2018
|
[] |
GEM-SciDuet-train-87#paper-1226#slide-4
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among subsets of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoderdecoder architecture with the attention mecha-nism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processes its own information flow through time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks and let the last one be task-specific.",
"The task-specific block receives the input of the unit directly, while the shared blocks are fed with input modulated by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network softly forwards the input to the shared blocks, conditioned on the input x_t and the previous hidden state of the unit h_{t-1}, as follows: s_t = tanh(W_x x_t + W_h h_{t-1} + b_s), τ_t = softmax(W_τ s_t + b_τ), where the W's and b's are parameters.",
"Then, the i-th shared block is fed with the input of the unit modulated by the corresponding output of the routing network: x̃_t^{(i)} = τ_t[i] x_t, where τ_t[i] is the scalar output of the routing network for the i-th block (footnote 1: multiple recurrent units can be stacked on top of each other to form a multi-layer component).",
"Figure 1: High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block.",
"The hidden state of the unit is the concatenation of the hidden states of the shared and task-specific parts: h_t = [h_t^{(shared)}; h_t^{(task)}].",
"The state of the task-specific part is the state of the corresponding block, h_t^{(task)} = h_t^{(n+1)}, and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network: h_t^{(shared)} = Σ_{i=1}^{n} τ_t[i] h_t^{(i)}.",
"Block Architecture Each block is responsible to control its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block, the corresponding equations are: z_t^{(i)} = σ(W_z^{(i)} x̃_t^{(i)} + U_z^{(i)} h_{t-1}^{(i)} + b_z^{(i)}), r_t^{(i)} = σ(W_r^{(i)} x̃_t^{(i)} + U_r^{(i)} h_{t-1}^{(i)} + b_r^{(i)}), h̃_t^{(i)} = tanh(W_h^{(i)} x̃_t^{(i)} + U_h^{(i)} (r_t^{(i)} ⊙ h_{t-1}^{(i)}) + b_h^{(i)}), h_t^{(i)} = z_t^{(i)} ⊙ h_{t-1}^{(i)} + (1 − z_t^{(i)}) ⊙ h̃_t^{(i)}.",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b), which computes the conditional probability of the target sequence given the source: P_θ(y|x) = Π_j P_θ(y_j | y_{<j}, x).",
"For training M + 1 SEQ2SEQ transduction tasks, each associated with a training set D_m := {(x_i, y_i)}_{i=1}^{N_m}, the parameters of the MTL architecture Θ_mtl = {Θ_m}_{m=0}^{M} are learned by maximizing the objective L_mtl(Θ_mtl) := Σ_{m=0}^{M} (γ_m / |D_m|) Σ_{(x,y) ∈ D_m} log P_{Θ_m}(y|x), where |D_m| is the size of the training set for the m-th task and γ_m balances the influence of each task in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoint (30K BPE operations).",
"Further details about the corpora and their pre-processing are as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens on either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese corpus has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named entities helps the model learn translation patterns by masking out named entities.",
"We have used the NER data from the CoNLL shared task.",
"Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"Especially in the case of language pairs with a high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both the encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"(Footnote 6: In preliminary experiments, we tried different sharing scenarios and this one led to the best results.)",
"Learning rates are halved when performance decreases on the dev set of the corresponding task.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of the MTL models is better than baseline 1 (the MT task only).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, achieving +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-4
|
Adaptive Knowledge Sharing in MTL
|
Sharing the parameters of the recurrent units among all tasks
= sharing the knowledge for controlling the information flow in the hidden states
! Task interference
! Inability to leverage commonalities among subsets of tasks
Multiple experts in handling different kinds of information
Adaptively share experts among the tasks
Extend the recurrent units with multiple blocks
each block has its own information flow through time
Routing mechanism: to softly direct the input to these blocks
|
Sharing the parameters of the recurrent units among all tasks
= sharing the knowledge for controlling the information flow in the hidden states
! Task interference
! Inability to leverage commonalities among subsets of tasks
Multiple experts in handling different kinds of information
Adaptively share experts among the tasks
Extend the recurrent units with multiple blocks
each block has its own information flow through time
Routing mechanism: to softly direct the input to these blocks
|
[] |
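The routing mechanism described in the paper content above can be sketched as a small dependency-free Python example. All dimensions, the random stand-in block states, and the zero biases are illustrative assumptions only; the paper's actual implementation uses DyNet with 400-dimensional blocks.

```python
import math
import random

random.seed(0)

d_x, d_h, n = 8, 6, 3      # toy sizes: input dim, per-block hidden dim, shared blocks
d_state = 2 * d_h          # unit state = [shared part ; task-specific part]

def mat(rows, cols):
    """Random weight matrix stand-in."""
    return [[random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

# Router parameters (W_x, W_h, W_tau in the paper's notation; biases omitted).
W_x, W_h, W_tau = mat(d_h, d_x), mat(d_h, d_state), mat(n, d_h)

def route(x_t, h_prev):
    """Soft routing weights tau_t over the n shared blocks:
    s_t = tanh(W_x x_t + W_h h_{t-1}); tau_t = softmax(W_tau s_t)."""
    s_t = [math.tanh(a + b) for a, b in zip(matvec(W_x, x_t), matvec(W_h, h_prev))]
    return softmax(matvec(W_tau, s_t))

x_t = [random.gauss(0, 1) for _ in range(d_x)]
h_prev = [random.gauss(0, 1) for _ in range(d_state)]
tau = route(x_t, h_prev)

# Block i sees the modulated input tau[i] * x_t.
x_mod = [[tau[i] * v for v in x_t] for i in range(n)]

# The shared part of the state is the tau-weighted sum of block states h_t^{(i)}
# (random stand-ins here; a real unit computes them with per-block GRUs).
block_states = [[random.gauss(0, 1) for _ in range(d_h)] for _ in range(n)]
h_shared = [sum(tau[i] * block_states[i][j] for i in range(n)) for j in range(d_h)]
```

The task-specific block would receive `x_t` unmodulated, and the full unit state is the concatenation of `h_shared` with that block's state.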
GEM-SciDuet-train-87#paper-1226#slide-5
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoder-decoder architecture with the attention mechanism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processes its own information flow through time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks and let the last one be task-specific.",
"The task-specific block receives the input of the unit directly, while the shared blocks are fed with input modulated by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network softly forwards the input to the shared blocks, conditioned on the input x_t and the previous hidden state of the unit h_{t-1}, as follows: s_t = tanh(W_x x_t + W_h h_{t-1} + b_s), τ_t = softmax(W_τ s_t + b_τ), where the W's and b's are parameters.",
"Then, the i-th shared block is fed with the input of the unit modulated by the corresponding output of the routing network: x̃_t^{(i)} = τ_t[i] x_t, where τ_t[i] is the scalar output of the routing network for the i-th block (footnote 1: multiple recurrent units can be stacked on top of each other to form a multi-layer component).",
"Figure 1: High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block.",
"The hidden state of the unit is the concatenation of the hidden states of the shared and task-specific parts: h_t = [h_t^{(shared)}; h_t^{(task)}].",
"The state of the task-specific part is the state of the corresponding block, h_t^{(task)} = h_t^{(n+1)}, and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network: h_t^{(shared)} = Σ_{i=1}^{n} τ_t[i] h_t^{(i)}.",
"Block Architecture Each block is responsible to control its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block, the corresponding equations are: z_t^{(i)} = σ(W_z^{(i)} x̃_t^{(i)} + U_z^{(i)} h_{t-1}^{(i)} + b_z^{(i)}), r_t^{(i)} = σ(W_r^{(i)} x̃_t^{(i)} + U_r^{(i)} h_{t-1}^{(i)} + b_r^{(i)}), h̃_t^{(i)} = tanh(W_h^{(i)} x̃_t^{(i)} + U_h^{(i)} (r_t^{(i)} ⊙ h_{t-1}^{(i)}) + b_h^{(i)}), h_t^{(i)} = z_t^{(i)} ⊙ h_{t-1}^{(i)} + (1 − z_t^{(i)}) ⊙ h̃_t^{(i)}.",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b), which computes the conditional probability of the target sequence given the source: P_θ(y|x) = Π_j P_θ(y_j | y_{<j}, x).",
"For training M + 1 SEQ2SEQ transduction tasks, each associated with a training set D_m := {(x_i, y_i)}_{i=1}^{N_m}, the parameters of the MTL architecture Θ_mtl = {Θ_m}_{m=0}^{M} are learned by maximizing the objective L_mtl(Θ_mtl) := Σ_{m=0}^{M} (γ_m / |D_m|) Σ_{(x,y) ∈ D_m} log P_{Θ_m}(y|x), where |D_m| is the size of the training set for the m-th task and γ_m balances the influence of each task in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoint (30K BPE operations).",
"Further details about the corpora and their pre-processing are as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens on either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese corpus has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named entities helps the model learn translation patterns by masking out named entities.",
"We have used the NER data from the CoNLL shared task.",
"Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"Especially in the case of language pairs with a high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both the encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"(Footnote 6: In preliminary experiments, we tried different sharing scenarios and this one led to the best results.)",
"Learning rates are halved when performance decreases on the dev set of the corresponding task.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of the MTL models is better than baseline 1 (the MT task only).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, achieving +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-5
|
Adaptive Knowledge Sharing
|
We use the proposed recurrent unit inside encoder and decoder.
<translation> I went home <EOS>
Task Block Task Block
|
We use the proposed recurrent unit inside encoder and decoder.
<translation> I went home <EOS>
Task Block Task Block
|
[] |
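The MTL training schedule described in the paper content above (each SGD update pairs a mini-batch from the main translation task with one from a randomly selected auxiliary task) can be sketched as follows. The task names and batch pools are hypothetical placeholders, not the paper's data structures.

```python
import random

def mtl_schedule(batches, n_updates, seed=0):
    """Yield one update's worth of data per step: a mini-batch from the main
    translation task plus a mini-batch from a randomly chosen auxiliary task,
    so the main task's training signal is never washed out."""
    rng = random.Random(seed)
    aux_tasks = [t for t in batches if t != "translation"]
    for _ in range(n_updates):
        main = rng.choice(batches["translation"])
        task = rng.choice(aux_tasks)
        yield ("translation", main), (task, rng.choice(batches[task]))

# Hypothetical batch pools standing in for the real corpora.
batches = {
    "translation": [f"mt_{i}" for i in range(5)],
    "ner": [f"ner_{i}" for i in range(3)],
    "syntactic_parsing": [f"syn_{i}" for i in range(3)],
    "semantic_parsing": [f"amr_{i}" for i in range(3)],
}
updates = list(mtl_schedule(batches, n_updates=10))
```

In a real trainer each yielded pair would drive one SGD step on the shared MTL parameters, with the auxiliary batch prefixed by its task token as in the paper.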
GEM-SciDuet-train-87#paper-1226#slide-6
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoder-decoder architecture with the attention mechanism (Luong et al., 2015b).",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processes its own information flow through time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks, and let the last one be task-specific (footnote 1).",
"Task-specific block receives the input of the unit directly while shared blocks are fed with modulated input by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible for softly forwarding the input to the shared blocks, conditioning on the input $x_t$ and the previous hidden state of the unit $h_{t-1}$, as follows: $s_t = \tanh(W_x \cdot x_t + W_h \cdot h_{t-1} + b_s)$, $\tau_t = \mathrm{softmax}(W_\tau \cdot s_t + b_\tau)$, where the $W$'s and $b$'s are the parameters.",
"Then, the i-th shared block is fed with the input of the unit modulated by the corresponding output of the routing network, $\tilde{x}^{(i)}_t = \tau_t[i] \, x_t$, where $\tau_t[i]$ is the scalar output of the routing network for the i-th block. (Footnote 1: multiple recurrent units can be stacked on top of each other to form a multi-layer component.)",
"Figure 1: High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block.",
"The hidden state of the unit is the concatenation of the hidden states of the shared and task-specific parts: $h_t = [h^{(shared)}_t; h^{(task)}_t]$.",
"The state of the task-specific part is the state of the corresponding block, $h^{(task)}_t = h^{(n+1)}_t$, and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network: $h^{(shared)}_t = \sum_{i=1}^{n} \tau_t[i] \, h^{(i)}_t$.",
"Block Architecture Each block is responsible for controlling its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: $z^{(i)}_t = \sigma(W^{(i)}_z \tilde{x}^{(i)}_t + U^{(i)}_z h^{(i)}_{t-1} + b^{(i)}_z)$, $r^{(i)}_t = \sigma(W^{(i)}_r \tilde{x}^{(i)}_t + U^{(i)}_r h^{(i)}_{t-1} + b^{(i)}_r)$, $\tilde{h}^{(i)}_t = \tanh(W^{(i)}_h \tilde{x}^{(i)}_t + U^{(i)}_h (r^{(i)}_t \odot h^{(i)}_{t-1}) + b^{(i)}_h)$, $h^{(i)}_t = z^{(i)}_t \odot h^{(i)}_{t-1} + (1 - z^{(i)}_t) \odot \tilde{h}^{(i)}_t$.",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b), which computes the conditional probability of the target sequence given the source: $P_{\theta}(y|x) = \prod_j P_{\theta}(y_j | y_{<j}, x)$.",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set $D_m := \{(x_i, y_i)\}_{i=1}^{N_m}$, the parameters of the MTL architecture $\Theta_{mtl} = \{\Theta_m\}_{m=0}^{M}$ are learned by maximizing the following objective: $\mathcal{L}_{mtl}(\Theta_{mtl}) := \sum_{m=0}^{M} \frac{\gamma_m}{|D_m|} \sum_{(x,y) \in D_m} \log P_{\Theta_m}(y|x)$, where $|D_m|$ is the size of the training set for the m-th task, and $\gamma_m$ balances the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures.",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoint (30K BPE operations).",
"Further details about the corpora and their pre-processing are as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit.",
"We have removed sentences with more than 80 tokens in either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese corpus has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities helps the model learn translation patterns by masking out named-entities.",
"We have used the NER data from the CONLL shared task.",
"Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"This is especially relevant for language pairs with a high level of syntactic divergence (e.g. English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10).",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"Learning rates are halved on a decrease in performance on the dev set of the corresponding task.",
"(Footnote 6: In preliminary experiments, we tried different sharing scenarios and this one led to the best results.)",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of the MTL models is better than that of baseline 1 (MT task only).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieves +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
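The routing computation quoted in the paper content above ($s_t = \tanh(W_x x_t + W_h h_{t-1} + b_s)$, $\tau_t = \mathrm{softmax}(W_\tau s_t + b_\tau)$, and the $\tau$-weighted aggregation of shared-block states) can be sketched as follows. This is a minimal NumPy illustration under toy dimensions, not the authors' DyNet implementation; all array names and the random parameter values are assumptions.

```python
import numpy as np

def softmax(v):
    # Numerically stable softmax over a 1-D vector.
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_shared = 4, 3  # toy input/hidden dimension and number of shared blocks

# Router parameters (randomly initialised stand-ins for trained weights).
W_x, W_h = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b_s = np.zeros(d)
W_tau, b_tau = rng.normal(size=(n_shared, d)), np.zeros(n_shared)

x_t, h_prev = rng.normal(size=d), rng.normal(size=d)

# s_t = tanh(W_x x_t + W_h h_{t-1} + b_s); tau_t = softmax(W_tau s_t + b_tau)
s_t = np.tanh(W_x @ x_t + W_h @ h_prev + b_s)
tau_t = softmax(W_tau @ s_t + b_tau)

# Shared block i receives the modulated input tau_t[i] * x_t, and the
# shared state is the tau-weighted sum of the block states.
block_states = rng.normal(size=(n_shared, d))  # stand-ins for h_t^{(i)}
h_shared = (tau_t[:, None] * block_states).sum(axis=0)
```

Because the router output is a softmax, the routing weights form a distribution over shared blocks, which is what lets different tasks softly pick different subsets of blocks.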
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
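The training schedule described in the content above (each SGD update pairs a mini-batch from the main translation task with a mini-batch from one randomly selected auxiliary task) can be sketched as below; the task names and the number of updates are illustrative assumptions.

```python
import random

def schedule(aux_tasks, n_updates, seed=0):
    """Yield a (main, auxiliary) task pair for each SGD update.

    The main translation task appears in every update, so its training
    signal is never washed out; the auxiliary task is drawn uniformly.
    """
    rng = random.Random(seed)
    for _ in range(n_updates):
        yield "translation", rng.choice(aux_tasks)

aux = ["NER", "syntactic_parsing", "semantic_parsing"]
updates = list(schedule(aux, 1000))
```

Sampling the auxiliary task per update (rather than cycling deterministically) keeps the expected proportion of each auxiliary task equal without requiring the datasets to be the same size.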
GEM-SciDuet-train-87#paper-1226#slide-6
|
Experiments
|
Language Pairs: English to Farsi/Vietnamese
English to Farsi: TED corpus & LDC2016E93
English to Vietnamese: IWSLT 2015 (TED and TEDX talks)
Semantic parsing: AMR corpus(newswire, weblogs, web discussion forums and broadcast conversations)
Syntactic parsing: Penn Treebank
NER: CONLL NER Corpus (newswire articles from the Reuters Corpus)
NMT Architecture: GRU for blocks, 400 RNN hidden states and word embedding
NMT best practice: Adam optimisation; Byte Pair Encoding (BPE) on both source/target; evaluation metrics: PPL, TER and BLEU
[Results chart: BLEU for English > Farsi and English > Vietnamese; legend: NMT, MTL (Full), MTL (Partial), MTL (Routing)]
|
Language Pairs: English to Farsi/Vietnamese
English to Farsi: TED corpus & LDC2016E93
English to Vietnamese: IWSLT 2015 (TED and TEDX talks)
Semantic parsing: AMR corpus(newswire, weblogs, web discussion forums and broadcast conversations)
Syntactic parsing: Penn Treebank
NER: CONLL NER Corpus (newswire articles from the Reuters Corpus)
NMT Architecture: GRU for blocks, 400 RNN hidden states and word embedding
NMT best practice: Adam optimisation; Byte Pair Encoding (BPE) on both source/target; evaluation metrics: PPL, TER and BLEU
[Results chart: BLEU for English > Farsi and English > Vietnamese; legend: NMT, MTL (Full), MTL (Partial), MTL (Routing)]
|
[] |
GEM-SciDuet-train-87#paper-1226#slide-7
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation on two low-resource translation tasks, English to Vietnamese and Farsi, shows +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoder-decoder architecture with the attention mechanism (Luong et al., 2015b).",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processes its own information flow through time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks, and let the last one be task-specific (footnote 1).",
"Task-specific block receives the input of the unit directly while shared blocks are fed with modulated input by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible for softly forwarding the input to the shared blocks, conditioning on the input $x_t$ and the previous hidden state of the unit $h_{t-1}$, as follows: $s_t = \tanh(W_x \cdot x_t + W_h \cdot h_{t-1} + b_s)$, $\tau_t = \mathrm{softmax}(W_\tau \cdot s_t + b_\tau)$, where the $W$'s and $b$'s are the parameters.",
"Then, the i-th shared block is fed with the input of the unit modulated by the corresponding output of the routing network, $\tilde{x}^{(i)}_t = \tau_t[i] \, x_t$, where $\tau_t[i]$ is the scalar output of the routing network for the i-th block. (Footnote 1: multiple recurrent units can be stacked on top of each other to form a multi-layer component.)",
"Figure 1: High-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block.",
"The hidden state of the unit is the concatenation of the hidden states of the shared and task-specific parts: $h_t = [h^{(shared)}_t; h^{(task)}_t]$.",
"The state of the task-specific part is the state of the corresponding block, $h^{(task)}_t = h^{(n+1)}_t$, and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network: $h^{(shared)}_t = \sum_{i=1}^{n} \tau_t[i] \, h^{(i)}_t$.",
"Block Architecture Each block is responsible for controlling its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: $z^{(i)}_t = \sigma(W^{(i)}_z \tilde{x}^{(i)}_t + U^{(i)}_z h^{(i)}_{t-1} + b^{(i)}_z)$, $r^{(i)}_t = \sigma(W^{(i)}_r \tilde{x}^{(i)}_t + U^{(i)}_r h^{(i)}_{t-1} + b^{(i)}_r)$, $\tilde{h}^{(i)}_t = \tanh(W^{(i)}_h \tilde{x}^{(i)}_t + U^{(i)}_h (r^{(i)}_t \odot h^{(i)}_{t-1}) + b^{(i)}_h)$, $h^{(i)}_t = z^{(i)}_t \odot h^{(i)}_{t-1} + (1 - z^{(i)}_t) \odot \tilde{h}^{(i)}_t$.",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b), which computes the conditional probability of the target sequence given the source: $P_{\theta}(y|x) = \prod_j P_{\theta}(y_j | y_{<j}, x)$.",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set $D_m := \{(x_i, y_i)\}_{i=1}^{N_m}$, the parameters of the MTL architecture $\Theta_{mtl} = \{\Theta_m\}_{m=0}^{M}$ are learned by maximizing the following objective: $\mathcal{L}_{mtl}(\Theta_{mtl}) := \sum_{m=0}^{M} \frac{\gamma_m}{|D_m|} \sum_{(x,y) \in D_m} \log P_{\Theta_m}(y|x)$, where $|D_m|$ is the size of the training set for the m-th task, and $\gamma_m$ balances the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures.",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoint (30K BPE operations).",
"Further details about the corpora and their pre-processing are as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit.",
"We have removed sentences with more than 80 tokens in either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese corpus has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities helps the model learn translation patterns by masking out named-entities.",
"We have used the NER data from the CONLL shared task.",
"Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"This is especially relevant for language pairs with a high level of syntactic divergence (e.g. English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10).",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"Learning rates are halved on a decrease in performance on the dev set of the corresponding task.",
"(Footnote 6: In preliminary experiments, we tried different sharing scenarios and this one led to the best results.)",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of the MTL models is better than that of baseline 1 (MT task only).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieves +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets, show +1 BLEU score improvements compared to strong baselines."
]
}
|
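The per-block gated update in the content above ($z$ and $r$ gates plus a candidate state) follows the standard GRU form. A minimal NumPy sketch, with toy dimensions and random parameters standing in for trained ones, is:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_block_step(x_mod, h_prev, P):
    """One step of a single block: a standard GRU on the routed input x_mod."""
    z = sigmoid(P["Wz"] @ x_mod + P["Uz"] @ h_prev + P["bz"])
    r = sigmoid(P["Wr"] @ x_mod + P["Ur"] @ h_prev + P["br"])
    h_cand = np.tanh(P["Wh"] @ x_mod + P["Uh"] @ (r * h_prev) + P["bh"])
    # Interpolate between the previous state and the candidate state.
    return z * h_prev + (1.0 - z) * h_cand

d = 4
rng = np.random.default_rng(1)
P = {k: rng.normal(size=(d, d)) for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
P.update({k: np.zeros(d) for k in ("bz", "br", "bh")})

h_new = gru_block_step(rng.normal(size=d), np.zeros(d), P)
```

Starting from a zero state, the new state is bounded by the tanh candidate scaled by the update gate, which keeps each block's information flow stable.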
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-7
|
Experiments English to Farsi
|
Block 1 Block 2 Block 3
MT Semantic Syntactic NER
Blocks specialisation: Block 1: MT, Semantic Parsing, Block 2: Syntactic/Semantic Parsing, Block 3: NER
|
Block 1 Block 2 Block 3
MT Semantic Syntactic NER
Blocks specialisation: Block 1: MT, Semantic Parsing, Block 2: Syntactic/Semantic Parsing, Block 3: NER
|
[] |
GEM-SciDuet-train-87#paper-1226#slide-8
|
1226
|
Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation
|
Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation of two low-resource translation tasks, English to Vietnamese and Farsi, show +1 BLEU score improvements compared to strong baselines.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction Neural Machine Translation (NMT) has shown remarkable progress in recent years.",
"However, it requires large amounts of bilingual data to learn a translation model with reasonable quality (Koehn and Knowles, 2017) .",
"This requirement can be compensated by leveraging curated monolingual linguistic resources in a multi-task learning framework.",
"Essentially, learned knowledge from auxiliary linguistic tasks serves as inductive bias for the translation task to lead to better generalizations.",
"Multi-Task Learning (MTL) is an effective approach for leveraging commonalities of related tasks to improve performance.",
"Various recent works have attempted to improve NMT by scaffolding translation task on a single auxiliary task (Domhan and Hieber, 2017; Zhang and Zong, 2016; Dalvi et al., 2017) .",
"Recently, (Niehues and Cho, 2017) have made use of several linguistic tasks to improve NMT.",
"Their method shares components of the SEQ2SEQ model among the tasks, e.g.",
"encoder, decoder or the attention mechanism.",
"However, this approach has two limitations: (i) it fully shares the components, and (ii) the shared component(s) are shared among all of the tasks.",
"The first limitation can be addressed using deep stacked layers in encoder/decoder, and sharing the layers partially (Zaremoodi and Haffari, 2018) .",
"The second limitation causes this MTL approach to suffer from task interference or an inability to leverage commonalities among a subset of tasks.",
"Recently, (Ruder et al., 2017) tried to address this issue; however, their method is restrictive for SEQ2SEQ scenarios and does not consider the input at each time step to modulate parameter sharing.",
"In this paper, we address the task interference problem by learning how to dynamically control the amount of sharing among all tasks.",
"We extended the recurrent units with multiple blocks along with a routing network to dynamically control sharing of blocks conditioning on the task at hand, the input, and model state.",
"Empirical results on two low-resource translation scenarios, English to Farsi and Vietnamese, show the effectiveness of the proposed model by achieving +1 BLEU score improvement compared to strong baselines.",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks Our MTL is based on the sequential encoderdecoder architecture with the attention mecha-nism (Luong et al., 2015b; .",
"The encoder/decoder consist of recurrent units to read/generate a sentence sequentially.",
"Sharing the parameters of the recurrent units among different tasks is indeed sharing the knowledge for controlling the information flow in the hidden states.",
"Sharing these parameters among all tasks may, however, lead to task interference or an inability to leverage commonalities among subsets of tasks.",
"We address this issue by extending the recurrent units with multiple blocks, each of which processes its own information flow through time.",
"The state of the recurrent unit at each time step is composed of the states of these blocks.",
"The recurrent unit is equipped with a routing mechanism to softly direct the input at each time step to these blocks (see Fig 1) .",
"Each block mimics an expert in handling different kinds of information, coordinated by the router.",
"In MTL, the tasks can use different subsets of these shared experts.",
"(Rosenbaum et al., 2018) uses a routing network for adaptive selection of non-linear functions for MTL.",
"However, it is for fixed-size inputs based on a feed-forward architecture, and is not applicable to SEQ2SEQ scenarios such as MT.",
"(Shazeer et al., 2017) uses Mixture-of-Experts (feed-forward sub-networks) between stacked layers of recurrent units, to adaptively gate state information vertically.",
"This is in contrast to our approach where the horizontal information flow is adaptively modulated, as we would like to minimise the task interference in MTL.",
"Assuming there are n blocks in a recurrent unit, we share n − 1 blocks among the tasks, and let the last one be task-specific 1 .",
"Task-specific block receives the input of the unit directly while shared blocks are fed with modulated input by the routing network.",
"The state of the unit at each time-step would be the aggregation of blocks' states.",
"Routing Mechanism At each time step, the routing network is responsible for softly forwarding the input to the shared blocks, conditioning on the input x_t and the previous hidden state of the unit h_{t-1}, as follows: s_t = tanh(W_x · x_t + W_h · h_{t-1} + b_s), τ_t = softmax(W_τ · s_t + b_τ), where the W's and b's are the parameters.",
"Then, the i-th shared block is fed with the input of the unit (footnote 1: multiple recurrent units can be stacked on top of each other to constitute a multi-layer component; Figure 1 shows the high-level architecture of the proposed recurrent unit with 3 shared blocks and 1 task-specific block)",
"modulated by the corresponding output of the routing network, x̃_t^(i) = τ_t[i] · x_t, where τ_t[i] is the scalar output of the routing network for the i-th block.",
"The hidden state of the unit is the concatenation of the hidden states of the shared and task-specific parts, h_t = [h_t^(shared); h_t^(task)].",
"The state of the task-specific part is the state of the corresponding block, h_t^(task) = h_t^(n+1), and the state of the shared part is the sum of the states of the shared blocks weighted by the outputs of the routing network, h_t^(shared) = Σ_{i=1}^{n} τ_t[i] h_t^(i).",
"Block Architecture Each block is responsible to control its own flow of information via a standard gating mechanism.",
"Our recurrent units are agnostic to the internal architecture of the blocks; we use the gated-recurrent unit in this paper.",
"For the i-th block the corresponding equations are as follows: z_t^(i) = σ(W_z^(i) x̃_t^(i) + U_z^(i) h_{t-1}^(i) + b_z^(i)), r_t^(i) = σ(W_r^(i) x̃_t^(i) + U_r^(i) h_{t-1}^(i) + b_r^(i)), h̃_t^(i) = tanh(W_h^(i) x̃_t^(i) + U_h^(i) (r_t^(i) ∘ h_{t-1}^(i)) + b_h^(i)), h_t^(i) = z_t^(i) ∘ h_{t-1}^(i) + (1 − z_t^(i)) ∘ h̃_t^(i).",
"Training Objective and Schedule.",
"The rest of the model is similar to the attentional SEQ2SEQ model (Luong et al., 2015b), which computes the conditional probability of the target sequence given the source, P_θ(y|x) = Π_j P_θ(y_j | y_{<j}, x).",
"For the case of training M + 1 SEQ2SEQ transduction tasks, each of which is associated with a training set D_m := {(x_i, y_i)}_{i=1}^{N_m}, the parameters of the MTL architecture Θ_mtl = {Θ_m}_{m=0}^{M} are learned by maximizing the following objective: L_mtl(Θ_mtl) := Σ_{m=0}^{M} (γ_m / |D_m|) Σ_{(x,y) ∈ D_m} log P_{Θ_m}(y|x), where |D_m| is the size of the training set for the m-th task, and γ_m is responsible for balancing the influence of tasks in the training objective.",
"We explored different values in preliminary experiments, and found that for our training schedule γ = 1 for all tasks results in the best performance.",
"Generally, γ is useful when the dataset sizes for auxiliary tasks are imbalanced (our training schedule handles the main task).",
"Variants of stochastic gradient descent (SGD) can be used to optimize the objective function.",
"In our training schedule, we randomly select a mini-batch from the main task (translation) and another mini-batch from a randomly selected auxiliary task to make the next SGD update.",
"Selecting a mini-batch from the main task in each SGD update ensures that its training signals are not washed out by auxiliary tasks.",
"Experiments Bilingual Corpora We use two language-pairs, translating from English to Farsi and Vietnamese.",
"We have chosen them to analyze the effect of multi-task learning on languages with different underlying linguistic structures 2 .",
"We apply BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-Vietnamese, and separate vocabularies for English-Farsi as the alphabets are disjoined (30K BPE operations).",
"Further details about the corpora and their pre-processing is as follows: • The English-Farsi corpus has ∼105K sentence pairs.",
"It is assembled from English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012) , accompanied by all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium.",
"The corpus has been normalized using the Hazm toolkit 3 .",
"We have removed sentences with more than 80 tokens in either side (before applying BPE).",
"3k and 4k sentence pairs were held out for the purpose of validation and test.",
"• The English-Vietnamese has ∼133K training pairs.",
"It is the preprocessed version of the IWSLT 2015 translation task provided by (Luong and Manning, 2015) .",
"It consists of subtitles and their corresponding translations of a collection of public speeches from TED and TEDX talks.",
"The \"tst2012\" and \"tst2013\" parts are used as validation and test sets, respectively.",
"We have removed sentence pairs which had more than 300 tokens after applying BPE on either side.",
"Auxiliary Tasks We have chosen the following auxiliary tasks to leverage the syntactic and semantic knowledge to improve NMT: Named-Entity Recognition (NER).",
"It is expected that learning to recognize named-entities helps the model to learn translation patterns by masking out named-entities.",
"We have used the NER data comes from the CONLL shared task.",
"4 Sentences in this dataset come from a collection of newswire articles from the Reuters Corpus.",
"These sentences are annotated with four types of named entities: persons, locations, organizations and names of miscellaneous entities.",
"Syntactic Parsing.",
"By learning the phrase structure of the input sentence, the model would be able to learn better re-ordering.",
"This is especially true in the case of language pairs with a high level of syntactic divergence (e.g.",
"English-Farsi).",
"We have used Penn Tree Bank parsing data with the standard split for training, development, and test (Marcus et al., 1993) .",
"We cast syntactic parsing to a SEQ2SEQ transduction task by linearizing constituency trees (Vinyals et al., 2015) .",
"Semantic Parsing.",
"Learning semantic parsing helps the model to abstract away the meaning from the surface in order to convey it in the target translation.",
"For this task, we have used the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10) 5 .",
"This corpus contains natural language sentences from newswire, weblogs, web discussion forums and broadcast conversations.",
"We cast this task to a SEQ2SEQ transduction task by linearizing the AMR graphs (Konstas et al., 2017) .",
"Models and Baselines We have implemented the proposed MTL architecture along with the baselines in C++ using DyNet (Neubig et al., 2017) on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ2SEQ NMT model.",
"For our MTL architecture, we used the proposed recurrent unit with 3 blocks in encoder and decoder.",
"For a fair comparison in terms of the number of parameters, we used 3 stacked layers in both the encoder and decoder components for the baselines.",
"We compare against the following baselines: • Baseline 1: The vanilla SEQ2SEQ model (Luong et al., 2015a) without any auxiliary task.",
"• Baseline 2: The MTL architecture proposed in (Niehues and Cho, 2017) which fully shares parameters in components.",
"We have used their best performing architecture with our training schedule.",
"We have extended their work with deep stacked layers for the sake of comparison.",
"• Baseline 3: The MTL architecture proposed in (Zaremoodi and Haffari, 2018) which uses deep stacked layers in the components and shares the parameters of the top two/one stacked layers among encoders/decoders of all tasks 6 .",
"For the proposed MTL, we use recurrent units with 400 hidden dimensions for each block.",
"The encoders and decoders of the baselines use GRU units with 400 hidden dimensions.",
"The attention component has 400 dimensions.",
"We use Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all the tasks.",
"6 In preliminary experiments, we have tried different sharing scenarios and this one led to the best results.",
"Learning rates are halved on a decrease in the performance on the dev set of the corresponding task.",
"Mini-batch size is set to 32, and dropout rate is 0.5.",
"All models are trained for 50 epochs and the best models are saved based on the perplexity on the dev set of the translation task.",
"For each task, we add special tokens to the beginning of source sequence (similar to (Johnson et al., 2017) ) to indicate which task the sequence pair comes from.",
"We used greedy decoding to generate translation.",
"In order to measure translation quality, we use BLEU 7 (Papineni et al., 2002) and TER (Snover et al., 2006) scores.",
"Table 1 reports the results for the baselines and our proposed method on the two aforementioned translation tasks.",
"As expected, the performance of MTL models is better than that of baseline 1 (only the MT task).",
"As seen, partial parameter sharing is more effective than full parameter sharing.",
"Furthermore, our proposed architecture with adaptive sharing performs better than the other MTL methods on all tasks, and achieves +1 BLEU score improvements on the test sets.",
"The improvements in the translation quality of NMT models trained by our MTL method may be attributed to less interference with multiple auxiliary tasks.",
"Figure 2 shows the average percentage of block usage for each task in an MTL model with 3 shared blocks, on the English-Farsi test set.",
"We have aggregated the output of the routing network for the blocks in the encoder recurrent units over all the input tokens.",
"Then, it is normalized by dividing by the total number of input tokens.",
"Based on Figure 2 , the first and third blocks are more specialized (based on their usage) for the translation and NER tasks, respectively.",
"The second block is mostly used by the semantic and syntactic parsing tasks, and is thus specialized for them.",
"This confirms that our model leverages commonalities among subsets of tasks by dedicating common blocks to them to reduce task interference.",
"Results and analysis Conclusions We have presented an effective MTL approach to improve NMT for low-resource languages, by leveraging curated linguistic resources on the source side.",
"We address the task interference issue in previous MTL models by extending the recurrent units with multiple blocks along with a trainable routing network.",
"Our experimental results on low-resource English to Farsi and Vietnamese datasets show +1 BLEU score improvements compared to strong baselines."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"SEQ2SEQ MTL Using Recurrent Unit with Adaptive Routed Blocks",
"Routing Mechanism",
"Block Architecture",
"Training Objective and Schedule.",
"Bilingual Corpora",
"Auxiliary Tasks",
"Models and Baselines",
"Conclusions"
]
}
|
GEM-SciDuet-train-87#paper-1226#slide-8
|
Conclusion
|
Address the task interference issue in MTL
|
Address the task interference issue in MTL
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-0
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in the training splits of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies: Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) we present a simple approach to select assisting language sentences based on the symmetric KL-Divergence of overlapping entities; (b) we demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL(P_p(x) || P_a(x)) + KL(P_a(x) || P_p(x)) ) / 2 (1), where P_p(x) and P_a(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi 2 , Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"2 Data is available here: http://www.cfilt.iitb.ac.in/ner/annotated_corpus/",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library 3 (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, while the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence score appears more frequently in Spanish sentences compared to English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-0
|
Problem Statement
|
Judiciously select labeled data from assisting language to improve the NER performance in the primary language for multilingual learning
|
Judiciously select labeled data from assisting language to improve the NER performance in the primary language for multilingual learning
|
[] |
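The symmetric KL-Divergence metric referenced in the record above can be sketched in Python. This is a minimal sketch, not the authors' code: the additive smoothing constant `alpha` and the helper names are my own assumptions (the paper does not say how zero tag counts are handled). The tag counts below reproduce the paper's Spanish/English "China" example.

```python
import math

def tag_distribution(counts, labels, alpha=1e-3):
    # Turn raw tag counts into a smoothed probability distribution over a
    # shared label set. Additive smoothing is an assumption of this sketch;
    # the paper does not specify how zero counts are handled.
    total = sum(counts.get(l, 0) for l in labels) + alpha * len(labels)
    return {l: (counts.get(l, 0) + alpha) / total for l in labels}

def skl(counts_primary, counts_assisting, labels):
    # Symmetric KL-Divergence of an entity's tag distributions:
    # SKL(x) = ( KL(Pp || Pa) + KL(Pa || Pp) ) / 2
    p = tag_distribution(counts_primary, labels)
    q = tag_distribution(counts_assisting, labels)
    kl_pq = sum(p[l] * math.log(p[l] / q[l]) for l in labels)
    kl_qp = sum(q[l] * math.log(q[l] / p[l]) for l in labels)
    return 0.5 * (kl_pq + kl_qp)

# The paper's "China" example: Spanish (primary) vs. English (assisting).
labels = ["Loc", "Org", "Misc"]
spanish = {"Loc": 20, "Org": 49, "Misc": 1}
english = {"Loc": 91, "Org": 7}
print(skl(spanish, english, labels))  # a large score, i.e. divergent tag usage
```

A score near zero means the two languages tag the entity the same way; the divergent "China" distributions above yield a score well above zero, which is exactly the drift the filtering strategy targets.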
GEM-SciDuet-train-88#paper-1227#slide-1
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) we present a simple approach to select assisting language sentences based on the symmetric KL-Divergence of overlapping entities; (b) we demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as SKL(x) = ( KL(P_p(x) || P_a(x)) + KL(P_a(x) || P_p(x)) ) / 2, where P_p(x) and P_a(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi 2 , Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The annotated Indian language data is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and is multiplied by 0.7 when the validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pairs involved were observed to be within a similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these common 162 entities have a combined frequency of 12893 in English, meanwhile the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with a larger SKL divergence score appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-1
|
Why need to judiciously select data from assisting language
|
Many language have less named entity annotated data
Several approaches have explored use of data from one or more languages (assisting languages) [Gillick et al., 2016; Yang et al., 2017]
However, annotated data from assisting languages might negatively influence the performance on the primary language
[Table: per-word tag frequencies (Per, Loc, Org, Misc) shown side by side for the two languages]
Religions, Languages, Nationalities, etc. uppercase in English but not in Spanish
I am going to Washington
mein (me) washington (washington) jaa raha (going to)
|
Many language have less named entity annotated data
Several approaches have explored use of data from one or more languages (assisting languages) [Gillick et al., 2016; Yang et al., 2017]
However, annotated data from assisting languages might negatively influence the performance on the primary language
[Table: per-word tag frequencies (Per, Loc, Org, Misc) shown side by side for the two languages]
Religions, Languages, Nationalities, etc. uppercase in English but not in Spanish
I am going to Washington
mein (me) washington (washington) jaa raha (going to)
|
[] |
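The sentence-selection rule described in the records above (average SKL over a sentence's overlapping entities, with a zero score, and hence automatic selection, when no entities overlap) can be sketched as follows. The corpus representation, entity lists, and toy SKL scores here are hypothetical simplifications, not the paper's data structures.

```python
def sentence_score(entities, skl_scores):
    # Average SKL over the sentence's overlapping entities; a sentence
    # with no overlapping entities scores 0.0 and is always selected.
    overlapping = [skl_scores[e] for e in entities if e in skl_scores]
    return sum(overlapping) / len(overlapping) if overlapping else 0.0

def select_assisting_sentences(corpus, skl_scores, threshold):
    # corpus: list of (sentence, entity-list) pairs -- a hypothetical
    # simplification of the assisting-language training data.
    return [sent for sent, ents in corpus
            if sentence_score(ents, skl_scores) < threshold]

# Toy demonstration with made-up SKL scores.
skl_scores = {"China": 2.4, "Paris": 0.1}
corpus = [
    ("China hosted the summit .", ["China"]),
    ("Paris is lovely .", ["Paris"]),
    ("It rained yesterday .", []),
]
kept = select_assisting_sentences(corpus, skl_scores, threshold=1.0)
print(kept)  # the high-SKL "China" sentence is filtered out
```

Note the deliberate asymmetry: sentences without overlapping entities are kept unconditionally, since they cannot contribute to tag-distribution drift on shared surface forms.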
GEM-SciDuet-train-88#paper-1227#slide-2
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) we present a simple approach to select assisting language sentences based on the symmetric KL-Divergence of overlapping entities; (b) we demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as SKL(x) = ( KL(P_p(x) || P_a(x)) + KL(P_a(x) || P_p(x)) ) / 2, where P_p(x) and P_a(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi 2 , Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The annotated Indian language data is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and is multiplied by 0.7 when the validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pairs involved were observed to be within a similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these common 162 entities have a combined frequency of 12893 in English, meanwhile the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with a larger SKL divergence score appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-2
|
What can go wrong in multilingual learning for NER
|
Religions, Languages, Nationalities, etc. uppercase in English but not in Spanish
|
Religions, Languages, Nationalities, etc. uppercase in English but not in Spanish
|
[] |
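The SKL-threshold tuning described in the records above (sweeping thresholds from 1.0 to 9.0 in steps of 1 and picking the best validation F-score) might look like the sketch below. `train_and_eval` and the stub objective are hypothetical stand-ins for actually retraining the NER model at each threshold; the inverted-parabola stub only mimics the paper's observation that F-score first rises and then drops as drift dominates.

```python
def tune_skl_threshold(thresholds, train_and_eval):
    # Sweep candidate SKL thresholds and keep the one with the best
    # validation F-score. train_and_eval is a hypothetical callable:
    # threshold -> F-score on the validation set.
    best_t, best_f = None, float("-inf")
    for t in thresholds:
        f = train_and_eval(t)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f

# Stub objective mimicking the paper's curve: adding more assisting
# sentences helps at first, then hurts once tag drift dominates.
def fake_eval(t):
    return -(t - 4.0) ** 2

best_t, best_f = tune_skl_threshold([float(t) for t in range(1, 10)], fake_eval)
print(best_t)  # -> 4.0
```

In practice each call to `train_and_eval` is a full training run, so the sweep is coarse (step size 1) and the chosen threshold is validated once on held-out data rather than tuned on the test set.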
GEM-SciDuet-train-88#paper-1227#slide-3
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on the symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( P_p(x) || P_a(x) ) + KL( P_a(x) || P_p(x) ) ) / 2 (1), where P_p(x) and P_a(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011), the remaining approaches extract sub-word features using either Convolutional Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For the low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The annotated Indian-language data is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, while the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence scores appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops as the influence of drift becomes significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-3
|
Related Work
|
Select sentences from general domain data most similar to in-domain data
Used language model to measure similarity of general domain data with the in-domain training data
Ruder and Plank [2017] Learn to weigh various data selection measures using
Zhao et al. [2018] Select assisting data for multi-task domain adaptation
Assisting language sentences with highest log likelihood value were selected
Ponti et al. [2018] Measure cross-lingual syntactic variation considering both morphological and structural properties
Selecting an assisting language with a lower degree of anisomorphism is crucial for knowledge transfer
Table 1: Literature most relevant to our work
|
Select sentences from general domain data most similar to in-domain data
Used language model to measure similarity of general domain data with the in-domain training data
Ruder and Plank [2017] Learn to weigh various data selection measures using
Zhao et al. [2018] Select assisting data for multi-task domain adaptation
Assisting language sentences with highest log likelihood value were selected
Ponti et al. [2018] Measure cross-lingual syntactic variation considering both morphological and structural properties
Selecting a assisting language with a lower degree of anisomorphism is crucial for knowledge transfer
Table 1: Literature most relevant to our work
|
[] |
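The symmetric KL-Divergence metric defined in the paper content above (Eq. 1) can be sketched in Python. This is an illustrative sketch, not the authors' code: the `skl` function name and the epsilon smoothing over the union of tags are assumptions (the paper does not say how tags with zero counts are handled), and the tag counts below are the "China" example quoted in the paper.

```python
import math

def skl(p_counts, a_counts, eps=1e-6):
    """Symmetric KL divergence between the tag distributions of one
    entity in the primary (p) and assisting (a) languages.

    Assumption: eps-smoothing over the union of tags, since the paper
    does not specify how zero counts are treated."""
    tags = set(p_counts) | set(a_counts)
    p_tot = sum(p_counts.values()) + eps * len(tags)
    a_tot = sum(a_counts.values()) + eps * len(tags)
    p = {t: (p_counts.get(t, 0) + eps) / p_tot for t in tags}
    a = {t: (a_counts.get(t, 0) + eps) / a_tot for t in tags}
    kl_pa = sum(p[t] * math.log(p[t] / a[t]) for t in tags)
    kl_ap = sum(a[t] * math.log(a[t] / p[t]) for t in tags)
    return (kl_pa + kl_ap) / 2

# The "China" example from the paper: divergent tag counts between
# Spanish (primary) and English (assisting).
spanish = {"Loc": 20, "Org": 49, "Misc": 1}
english = {"Loc": 91, "Org": 7}
print(skl(spanish, english))  # a large score, signalling tag disagreement
```

With these counts the score lands above the SKL = 1.0 mark that the paper uses when discussing highly divergent entities, so a sentence containing only this entity would be a candidate for filtering.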
GEM-SciDuet-train-88#paper-1227#slide-4
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on the symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( P_p(x) || P_a(x) ) + KL( P_a(x) || P_p(x) ) ) / 2 (1), where P_p(x) and P_a(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011), the remaining approaches extract sub-word features using either Convolutional Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For the low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The annotated Indian-language data is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, while the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence scores appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops as the influence of drift becomes significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-4
|
Proposed Approach
|
Select sentences based on the agreement in tag distribution of common entities
Goal: Improve Spanish NER performance by adding English NER annotated data
Word Per Loc Org Misc Word Per Loc Org Misc
Select English sentences containing entities with similar tag distribution
Use Symmetric Kl-Divergence to calculate the tag disagreement for common entities between English and Spanish
Word Per Loc Org Misc Per Loc Org Misc KL(EngEsp) KL(EspEng) SKL
for every sentence X in assisting language do
Score(X) = 0
for every word xi in sentence X do
if word xi appears in primary language then
Score(X) += SKL(xi) {Pp(xi), Pa(xi) are tag distributions of xi in primary and assisting languages}
end if
end for
end for
Add assisting language sentences with sentence score Score(X) less than a threshold to the primary language data
|
Select sentences based on the agreement in tag distribution of common entities
Goal: Improve Spanish NER performance by adding English NER annotated data
Word Per Loc Org Misc Word Per Loc Org Misc
Select English sentences containing entities with similar tag distribution
Use Symmetric Kl-Divergence to calculate the tag disagreement for common entities between English and Spanish
Word Per Loc Org Misc Per Loc Org Misc KL(EngEsp) KL(EspEng) SKL
for every sentence X in assisting language do
Score(X) = 0
for every word xi in sentence X do
if word xi appears in primary language then
Score(X) += SKL(xi) {Pp(xi), Pa(xi) are tag distributions of xi in primary and assisting languages}
end if
end for
end for
Add assisting language sentences with sentence score Score(X) less than a threshold to the primary language data
|
[] |
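The sentence-scoring and selection loop described on the slide above can be sketched as follows. This is a hedged sketch, not the authors' implementation: `score_sentence`, `select`, and the toy sentences are illustrative names and data, and the per-entity SKL scores are assumed to be precomputed from the two languages' tag distributions.

```python
def score_sentence(sentence_entities, skl_scores):
    """Average SKL over the overlapping entities in a sentence.
    Returns 0.0 when the sentence contains no overlapping entity,
    so such sentences are always selected, as the paper states."""
    overlap = [skl_scores[e] for e in sentence_entities if e in skl_scores]
    return sum(overlap) / len(overlap) if overlap else 0.0

def select(assisting_sentences, skl_scores, threshold):
    """Keep assisting-language sentences whose score is below threshold."""
    return [s for s, ents in assisting_sentences
            if score_sentence(ents, skl_scores) < threshold]

# Toy example: (sentence, overlapping-entities-in-sentence) pairs with
# hypothetical precomputed SKL scores per entity.
data = [("China signed a deal", ["China"]),
        ("It rained yesterday", []),
        ("Paris hosted the summit", ["Paris"])]
skl_scores = {"China": 1.2, "Paris": 0.1}
print(select(data, skl_scores, threshold=1.0))
# keeps the second and third sentences only: "China" scores above the
# threshold, the entity-free sentence scores 0.0, "Paris" scores 0.1
```

The threshold plays the role of the SKL cutoff that the paper tunes on a validation set (section 4.3); sweeping it from 0.0 upwards moves the behavior from monolingual training to using all assisting data.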
GEM-SciDuet-train-88#paper-1227#slide-5
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( Pp(x) || Pa(x) ) + KL( Pa(x) || Pp(x) ) ) / 2 (1), where Pp(x) and Pa(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011), the remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The data is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pairs involved were observed to be within a similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to a drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these common 162 entities have a combined frequency of 12893 in English, meanwhile the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence scores appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-5
|
Dataset Statistics
|
(#Tokens) Train (#Tokens) Test
English Tjong Kim Sang and
|
(#Tokens) Train (#Tokens) Test
English Tjong Kim Sang and
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-6
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( Pp(x) || Pa(x) ) + KL( Pa(x) || Pp(x) ) ) / 2 (1), where Pp(x) and Pa(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011), the remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The data is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pairs involved were observed to be within a similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to a drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these common 162 entities have a combined frequency of 12893 in English, meanwhile the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence scores appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-6
|
Network Details
|
Parameter sharing configurations considered
Sub-word feature extractors shared across languages
Neural network trained in language independent way
Figure 1: Architecture of the Neural
Network (Murthy and Bhattacharyya
|
Parameter sharing configurations considered
Sub-word feature extractors shared across languages
Neural network trained in language independent way
Figure 1: Architecture of the Neural
Network (Murthy and Bhattacharyya
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-7
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x), of a named entity x, is defined as follows, SKL(x) = KL( P p (x) || P a (x) ) + KL( P a (x) || P p (x) ) /2 (1) where P p (x) and P a (x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"Lower the KL-Divergence score, higher is the tag agreement for an entity in both the languages thereby, reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi 2 , Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP li-2 Data is available here: http://www.cfilt.iitb.",
"ac.in/ner/annotated_corpus/ brary 3 (Kunchukuttan et al., 2015) thereby, allowing sharing of sub-word features across the Indian languages.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"If we sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these common 162 entities have a combined frequency of 12893 in English, meanwhile the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence score appears more frequently in Spanish sentences compared to English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training and threshold greater than 9.0 indicates all assist-ing language sentences considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
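The sentence-selection procedure described in the paper text above (per-entity tag distributions, symmetric KL-Divergence, an average per-sentence score, and a score of zero when a sentence has no overlapping entities) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the add-epsilon smoothing of unseen tags, and the stripping of B-/I- chunk prefixes are all assumptions.

```python
import math
from collections import Counter, defaultdict

def tag_distributions(tagged_sents):
    """Aggregate tag counts per entity surface form from (token, tag) sentences."""
    counts = defaultdict(Counter)
    for sent in tagged_sents:
        for token, tag in sent:
            if tag != "O":
                counts[token][tag.split("-")[-1]] += 1  # assumption: strip B-/I- prefixes
    return counts

def skl(p, q, eps=1e-12):
    """Symmetric KL-Divergence between two tag Counters (add-eps smoothing assumed)."""
    tags = set(p) | set(q)
    zp = sum(p.values()) + eps * len(tags)
    zq = sum(q.values()) + eps * len(tags)
    P = {t: (p[t] + eps) / zp for t in tags}
    Q = {t: (q[t] + eps) / zq for t in tags}
    kl_pq = sum(P[t] * math.log(P[t] / Q[t]) for t in tags)
    kl_qp = sum(Q[t] * math.log(Q[t] / P[t]) for t in tags)
    return 0.5 * (kl_pq + kl_qp)

def select_sentences(assisting_sents, primary_dist, assisting_dist, threshold):
    """Keep assisting sentences whose average SKL over overlapping entities is
    below the threshold; a sentence with no overlapping entities scores 0 and
    is therefore always kept, as in the paper."""
    selected = []
    for sent in assisting_sents:
        overlap = [tok for tok, tag in sent
                   if tag != "O" and tok in primary_dist and tok in assisting_dist]
        score = (sum(skl(primary_dist[t], assisting_dist[t]) for t in overlap) / len(overlap)
                 if overlap else 0.0)
        if score < threshold:
            selected.append(sent)
    return selected
```

On the paper's China example (Spanish {Loc: 20, Org: 49, Misc: 1} vs English {Loc: 91, Org: 7}), this smoothed SKL comes out above 1.0, i.e. a highly divergent entity under the paper's analysis threshold.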
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
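The multilingual training recipe in the paper text above assigns a weight of 0.1 to assisting-language sentences and oversamples primary-language sentences to match the assisting-language sentence count. A minimal sketch of that data preparation step is below; the function name is hypothetical, and it assumes the per-sentence weight is later consumed as a multiplier on that sentence's loss.

```python
import random

def build_weighted_pool(primary, assisting, assist_weight=0.1, seed=0):
    """Oversample primary-language sentences (with replacement) up to the
    assisting-language count, then pair every sentence with a loss weight:
    1.0 for primary, assist_weight for assisting."""
    rng = random.Random(seed)
    oversampled = list(primary)
    while len(oversampled) < len(assisting):
        oversampled.append(rng.choice(primary))
    pool = [(s, 1.0) for s in oversampled] + [(s, assist_weight) for s in assisting]
    rng.shuffle(pool)
    return pool
```

With 3 primary and 7 assisting sentences, the pool contains 14 weighted sentences: 7 primary copies at weight 1.0 and the 7 assisting sentences at weight 0.1.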
GEM-SciDuet-train-88#paper-1227#slide-7
|
Results
|
Primary Assisting Layers Data Selection Primary Assisting Layers Data Selection
Language Language Shared All SKL Language Language Shared All SKL
Monolingual None Monolingual None
Spanish All Spanish All 76.92Sub-word Sub-word
Dutch All 77.29Sub-word Dutch All Sub-word
Table 3: F-Score for German and Italian Test data using Monolingual and Multilingual learning strategies. indicates that the SKL results are statistically significant compared to adding all assisting language data with p-value 0.05 using two-sided Welch t-test.
Hindi Marathi Bengali Malayalam Tamil
ALL SKL ALL SKL ALL SKL ALL SKL ALL SKL
Table 4: Test set F-Score from monolingual and multilingual learning on Indian languages.
Result from monolingual training on the primary language is underlined. indicates SKL results statistically significant compared to adding all assisting language data with p-value 0.05 using two-sided Welch t-test.
|
Primary Assisting Layers Data Selection Primary Assisting Layers Data Selection
Language Language Shared All SKL Language Language Shared All SKL
Monolingual None Monolingual None
Spanish All Spanish All 76.92Sub-word Sub-word
Dutch All 77.29Sub-word Dutch All Sub-word
Table 3: F-Score for German and Italian Test data using Monolingual and Multilingual learning strategies. indicates that the SKL results are statistically significant compared to adding all assisting language data with p-value 0.05 using two-sided Welch t-test.
Hindi Marathi Bengali Malayalam Tamil
ALL SKL ALL SKL ALL SKL ALL SKL ALL SKL
Table 4: Test set F-Score from monolingual and multilingual learning on Indian languages.
Result from monolingual training on the primary language is underlined. indicates SKL results statistically significant compared to adding all assisting language data with p-value 0.05 using two-sided Welch t-test.
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-8
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the train- Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x), of a named entity x, is defined as follows, SKL(x) = KL( P p (x) || P a (x) ) + KL( P a (x) || P p (x) ) /2 (1) where P p (x) and P a (x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"Lower the KL-Divergence score, higher is the tag agreement for an entity in both the languages thereby, reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets The Table 1 lists the datasets used in our experiments along with pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi 2 , Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP li-2 Data is available here: http://www.cfilt.iitb.",
"ac.in/ner/annotated_corpus/ brary 3 (Kunchukuttan et al., 2015) thereby, allowing sharing of sub-word features across the Indian languages.",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"If we sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these common 162 entities have a combined frequency of 12893 in English, meanwhile the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence score appears more frequently in Spanish sentences compared to English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training and threshold greater than 9.0 indicates all assist-ing language sentences considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
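Section 4.3 of the paper text above tunes the SKL threshold on a validation set by sweeping candidate values (1.0 to 9.0 in steps of 1) and keeping the one with the best dev F-score. A minimal grid-search sketch is below; `train_and_eval` and `sentence_scores` are injected placeholders (the real training loop and the per-sentence SKL scores), and the function name is hypothetical.

```python
def tune_skl_threshold(thresholds, train_and_eval, sentence_scores, primary, assisting):
    """Grid-search the SKL threshold: for each candidate, keep assisting
    sentences scoring below it, train on primary + subset, and track the
    best validation F-score. train_and_eval(data) is assumed to return
    the model's F-score on the held-out validation set."""
    best_t, best_f = None, float("-inf")
    for t in thresholds:
        subset = [s for s, sc in zip(assisting, sentence_scores) if sc < t]
        f = train_and_eval(primary + subset)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f
```

A threshold below every sentence score reduces to monolingual training, and a threshold above every score reduces to adding all assisting sentences, matching the two endpoints of the paper's sweep.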
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-8
|
Analysis
|
Histogram of assisting language sentences ranked by their sentence scores
Figure 2: English-Italian: Histogram of Figure 3: Spanish-Italian: Histogram
English Sentences of Spanish Sentences
Influence of SKL Threshold
Figure 4: Spanish-Italian Multilingual Learning: Influence of Sentence score
(SKL) on Italian NER
|
Histogram of assisting language sentences ranked by their sentence scores
Figure 2: English-Italian: Histogram of Figure 3: Spanish-Italian: Histogram
English Sentences of Spanish Sentences
Influence of SKL Threshold
Figure 4: Spanish-Italian Multilingual Learning: Influence of Sentence score
(SKL) on Italian NER
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-9
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the train- Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( P_p(x) || P_a(x) ) + KL( P_a(x) || P_p(x) ) ) / 2, where P_p(x) and P_a(x) are the probability distributions over tags for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
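The scoring and filtering procedure above can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; the epsilon smoothing for tags unseen in one language and the example threshold of 1.0 are assumptions):

```python
import math

def skl(p, q, eps=1e-12):
    """Symmetric KL-Divergence between two tag distributions (Eq. 1).

    p, q: dicts mapping tag -> probability. Tags missing from one
    distribution are smoothed with a tiny epsilon (an assumption;
    the paper does not state its smoothing scheme).
    """
    tags = set(p) | set(q)
    def kl(a, b):
        return sum(a.get(t, eps) * math.log(a.get(t, eps) / b.get(t, eps))
                   for t in tags)
    return (kl(p, q) + kl(q, p)) / 2

def normalize(counts):
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Tag counts for the entity "China" from the introduction's example.
primary = normalize({"Loc": 20, "Org": 49, "Misc": 1})   # Spanish
assisting = normalize({"Loc": 91, "Org": 7})             # English
skl_table = {"China": skl(primary, assisting)}

def sentence_score(entities, skl_table):
    """Average SKL of overlapping entities; 0.0 if none overlap."""
    scores = [skl_table[e] for e in entities if e in skl_table]
    return sum(scores) / len(scores) if scores else 0.0

# Keep assisting sentences whose score is below a tuned threshold.
sentences = [["China"], ["Paris"]]   # entity lists per assisting sentence
selected = [s for s in sentences if sentence_score(s, skl_table) < 1.0]
```

On these numbers the divergent tag distribution of "China" yields an SKL score above 1.0, so a sentence containing only that entity is filtered out, while a sentence with no overlapping entities scores 0.0 and is always kept.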
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning: (i) sub-word feature extractors shared across languages (Yang et al., 2017) (Sub-word); (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets Table 1 lists the datasets used in our experiments, along with the pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"(The annotated corpus is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.)",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
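The 0.1 weighting of assisting sentences and oversampling of primary sentences described here could be realized as follows (a sketch under assumed data structures; the authors' training code is not shown in this excerpt):

```python
import random

def build_training_pool(primary_sents, assisting_sents, assist_weight=0.1):
    """Return (sentence, loss_weight) pairs for one training epoch.

    Primary-language sentences are oversampled with replacement until
    they match the assisting-language sentence count; assisting
    sentences enter the loss with a down-weighted factor.
    """
    shortfall = max(0, len(assisting_sents) - len(primary_sents))
    oversampled = primary_sents + [random.choice(primary_sents)
                                   for _ in range(shortfall)]
    pool = ([(s, 1.0) for s in oversampled]
            + [(s, assist_weight) for s in assisting_sents])
    random.shuffle(pool)
    return pool
```

The per-sentence weight would multiply that sentence's loss term during training, so a mislabeled-looking assisting sentence moves the parameters only a tenth as much as a primary one.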
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to a drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, while the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence scores appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to a significant drop in Italian NER performance, which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
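The sweep and validation-based tuning described in this subsection can be sketched as below; `train_and_eval` is a hypothetical callback that trains the NER model on the given data and returns its validation F-score:

```python
def tune_skl_threshold(primary_sents, scored_assisting, train_and_eval,
                       thresholds=range(0, 11)):
    """Pick the SKL threshold that maximizes validation F-score.

    scored_assisting: list of (sentence, skl_score) pairs.
    Threshold 0 reduces to monolingual training; a threshold above
    the largest observed score adds every assisting sentence.
    """
    best_t, best_f1 = None, float("-inf")
    for t in thresholds:
        selected = [s for s, score in scored_assisting if score < t]
        f1 = train_and_eval(primary_sents + selected)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

As in Figure 2, the returned F-score typically rises while low-divergence sentences are added and falls once high-divergence ones start entering the pool.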
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-9
|
Analysis European Languages
|
Adding all Spanish/Dutch sentences to Italian data leads to a drop in Italian NER performance
Label drift from overlapping entities is one of the reasons for the poor results
We compare the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning
Similar pattern is observed in the case of Dutch sentences
|
Adding all Spanish/Dutch sentences to Italian data leads to a drop in Italian NER performance
Label drift from overlapping entities is one of the reasons for the poor results
We compare the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning
Similar pattern is observed in the case of Dutch sentences
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-10
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from an assisting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( P_p(x) || P_a(x) ) + KL( P_a(x) || P_p(x) ) ) / 2, where P_p(x) and P_a(x) are the probability distributions over tags for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning: (i) sub-word feature extractors shared across languages (Yang et al., 2017) (Sub-word); (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets Table 1 lists the datasets used in our experiments, along with the pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"(The annotated corpus is available at http://www.cfilt.iitb.ac.in/ner/annotated_corpus/.)",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to Italian data leads to a drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, while the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with larger SKL divergence scores appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to a significant drop in Italian NER performance, which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-10
|
Analysis Indian Languages
|
Bengali, Malayalam, and Tamil (low-resource languages) benefit from our data selection strategy
Hindi and Marathi NER performance improves when the other is used as the assisting language
Hindi and Marathi do not benefit from multilingual learning with Bengali, Malayalam and Tamil
|
Bengali, Malayalam, and Tamil (low-resource languages) benefit from our data selection strategy
Hindi and Marathi NER performance improves when the other is used as the assisting language
Hindi and Marathi do not benefit from multilingual learning with Bengali, Malayalam and Tamil
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-11
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from an assisting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) We present a simple approach to select assisting language sentences based on symmetric KL-Divergence of overlapping entities; (b) We demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( P_p(x) || P_a(x) ) + KL( P_a(x) || P_p(x) ) ) / 2, where P_p(x) and P_a(x) are the probability distributions over tags for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets Table 1 lists the datasets used in our experiments, along with the pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For the low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The data is available here: http://www.cfilt.iitb.ac.in/ner/annotated_corpus/",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to the Italian data leads to a drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, whereas the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with a larger SKL divergence score appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-11
|
Analysis Influence of SKL Threshold
|
Train for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages
We vary the threshold value from 0.0 to 9.0 in steps of 1
Italian test F-Score increases initially as we add more and more
Spanish sentences and then drops due to influence of drift becoming significant
|
Train for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages
We vary the threshold value from 0.0 to 9.0 in steps of 1
Italian test F-Score increases initially as we add more and more
Spanish sentences and then drops due to influence of drift becoming significant
|
[] |
GEM-SciDuet-train-88#paper-1227#slide-12
|
1227
|
Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER
|
Multilingual learning for Neural Named Entity Recognition (NNER) involves jointly training a neural network for multiple languages. Typically, the goal is improving the NER performance of one of the languages (the primary language) using the other assisting languages. We show that the divergence in the tag distributions of the common named entities between the primary and assisting languages can reduce the effectiveness of multilingual learning. To alleviate this problem, we propose a metric based on symmetric KL divergence to filter out the highly divergent training instances in the assisting language. We empirically show that our data selection strategy improves NER performance in many languages, including those with very limited training data.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"paper_content_text": [
"Existing approaches add all training sentences from the assisting language to the primary language and train the neural network on the combined data.",
"However, data from assisting languages can introduce a drift in the tag distribution for named entities, since the common named entities from the two languages may have vastly divergent tag distributions.",
"For example, the entity China appears in training split of Spanish (primary) and English (assisting) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) with the corresponding tag frequencies, Spanish = { Loc : 20, Org : 49, Misc : 1 } and English = { Loc : 91, Org : 7 }.",
"By adding English data to Spanish, the tag distribution of China is skewed towards Location entity in Spanish.",
"This leads to a drop in named entity recognition performance.",
"In this work, we address this problem of drift in tag distribution owing to adding training data from a supporting language.",
"The problem is similar to the problem of data selection for domain adaptation of various NLP tasks, except that additional complexity is introduced due to the multilingual nature of the learning task.",
"For domain adaptation in various NLP tasks, several approaches have been proposed to address drift in data distribution (Moore and Lewis, 2010; Axelrod et al., 2011; Ruder and Plank, 2017) .",
"For instance, in machine translation, sentences from out-of-domain data are selected based on a suitably defined metric (Moore and Lewis, 2010; Axelrod et al., 2011) .",
"The metric attempts to capture similarity of the out-of-domain sentences with the in-domain data.",
"Out-of-domain sentences most similar to the in-domain data are added.",
"Like the domain adaptation techniques summarized above, we propose to judiciously add sentences from the assisting language to the primary language data based on the divergence between the tag distributions of named entities in the training data. Following are the contributions of the paper: (a) we present a simple approach to select assisting language sentences based on the symmetric KL-Divergence of overlapping entities; (b) we demonstrate the benefits of multilingual Neural NER on low-resource languages.",
"We compare the proposed data selection approach with monolingual Neural NER system, and the multilingual Neural NER system trained using all assisting language sentences.",
"To the best of our knowledge, ours is the first work for judiciously selecting a subset of sentences from an assisting language for multilingual Neural NER.",
"Judicious Selection of Assisting Language Sentences For every assisting language sentence, we calculate the sentence score based on the average symmetric KL-Divergence score of overlapping entities present in that sentence.",
"By overlapping entities, we mean entities whose surface form appears in both the languages' training data.",
"The symmetric KL-Divergence SKL(x) of a named entity x is defined as follows: SKL(x) = ( KL( P_p(x) || P_a(x) ) + KL( P_a(x) || P_p(x) ) ) / 2 (1), where P_p(x) and P_a(x) are the probability distributions for entity x in the primary (p) and the assisting (a) languages respectively.",
"KL refers to the standard KL-Divergence score between the two probability distributions.",
"KL-Divergence calculates the distance between the two probability distributions.",
"The lower the KL-Divergence score, the higher the tag agreement for an entity in both languages, thereby reducing the possibility of entity drift in multilingual learning.",
"Assisting language sentences with the sentence score below a threshold value are added to the primary language data for multilingual learning.",
"If an assisting language sentence contains no overlapping entities, the corresponding sentence score is zero resulting in its selection.",
"Network Architecture Several deep learning models (Collobert et al., 2011; Ma and Hovy, 2016; Murthy and Bhattacharyya, 2016; Lample et al., 2016; Yang et al., 2017) have been proposed for monolingual NER in the literature.",
"Apart from the model by Collobert et al.",
"(2011) , remaining approaches extract sub-word features using either Convolution Neural Networks (CNNs) or Bi-LSTMs.",
"The proposed data selection strategy for multilingual Neural NER can be used with any of the existing models.",
"We choose the model by Murthy and Bhattacharyya (2016) 1 in our experiments.",
"Multilingual Learning We consider two parameter sharing configurations for multilingual learning (i) sub-word feature extractors shared across languages (Yang et al., 2017 ) (Sub-word) (ii) the entire network trained in a language independent way (All).",
"As Murthy and Bhattacharyya (2016) use CNNs to extract sub-word features, only the character-level CNNs are shared for the Sub-word configuration.",
"Experimental Setup In this section we list the datasets used and the network configurations used in our experiments.",
"Datasets Table 1 lists the datasets used in our experiments, along with the pre-trained word embeddings used and other dataset statistics.",
"For German NER, we use ep-96-04-16.conll to create train and development splits, and use ep-96-04-15.conll as test split.",
"As Italian has a different tag set compared to English, Spanish and Dutch, we do not share output layer for All configuration in multilingual experiments involving Italian.",
"Even though the languages considered are resource-rich languages, we consider German and Italian as primary languages due to their relatively lower number of train tokens.",
"The German NER data followed IO notation and for all experiments involving German, we converted other language data to IO notation.",
"Similarly, the Italian NER data followed IOBES notation and for all experiments involving Italian, we converted other language data to IOBES notation.",
"For the low-resource language setup, we consider the following Indian languages: Hindi, Marathi, Bengali, Tamil and Malayalam.",
"Except for Hindi all are low-resource languages.",
"We consider only Person, Location and Organization tags.",
"Though the scripts of these languages are different, they share the same set of phonemes making script mapping across languages easier.",
"We convert Tamil, Bengali and Malayalam data to the Devanagari script using the Indic NLP library (Kunchukuttan et al., 2015), thereby allowing sharing of sub-word features across the Indian languages.",
"The data is available here: http://www.cfilt.iitb.ac.in/ner/annotated_corpus/",
"For Indian languages, the annotated data followed the IOB format.",
"Network Hyper-parameters With the exception of English, Spanish and Dutch, remaining language datasets did not have official train and development splits provided.",
"We randomly select 70% of the train split for training the model and remaining as development split.",
"The threshold for sentence score SKL, is selected based on cross-validation for every language pair.",
"The dimensions of the Bi-LSTM hidden layer are 200 and 400 for the monolingual and multilingual experiments respectively.",
"We extract 20 features per convolution filter, with width varying from 1 to 9.",
"The initial learning rate is 0.4 and multiplied by 0.7 when validation error increases.",
"The training is stopped when the learning rate drops below 0.002.",
"We assign a weight of 0.1 to assisting language sentences and oversample primary language sentences to match the assisting language sentence count in all multilingual experiments.",
"For European languages, we have performed hyper-parameter tuning for both the monolingual and multilingual learning (with all assisting language sentences) configurations.",
"The best hyperparameter values for the language pair involved were observed to be within similar range.",
"Hence, we chose the same set of hyper-parameter values for all languages.",
"Results We now present the results on both resource-rich and resource-poor languages.",
"Table 2 presents the results for German and Italian NER.",
"We consistently observe improvements for German and Italian NER using our data selection strategy, irrespective of whether only subword features are shared (Sub-word) or the entire network (All) is shared across languages.",
"Resource-Rich Languages Adding all Spanish/Dutch sentences to the Italian data leads to a drop in Italian NER performance when all layers are shared.",
"Label drift from overlapping entities is one of the reasons for the poor results.",
"This can be observed by comparing the histograms of English and Spanish sentences ranked by the SKL scores for Italian multilingual learning (Figure 1) .",
"Most English sentences have lower SKL scores indicating higher tag agreement for overlapping entities and lower drift in tag distribution.",
"Hence, adding all English sentences improves Italian NER accuracy.",
"In contrast, most Spanish sentences have larger SKL scores and adding these sentences adversely impacts Italian NER performance.",
"By judiciously selecting assisting language sentences, we eliminate sentences which are responsible for drift occurring during multilingual learning.",
"To understand how overlapping entities impact the NER performance, we study the statistics of overlapping named entities between Italian-English and Italian-Spanish pairs.",
"911 and 916 unique entities out of 4061 unique Italian entities appear in the English and Spanish data respectively.",
"We had hypothesized that entities with divergent tag distribution are responsible for hindering the performance in multilingual learning.",
"We sort the common entities based on their SKL divergence value.",
"We observe that 484 out of 911 common entities in English and 535 out of 916 common entities in Spanish have an SKL score greater than 1.0.",
"162 out of 484 common entities in English-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the English corpus.",
"Similarly, 123 out of 535 common entities in Spanish-Italian data having SKL divergence value greater than 1.0 also appear more than 10 times in the Spanish corpus.",
"However, these 162 common entities have a combined frequency of 12893 in English, whereas the 123 common entities have a combined frequency of 34945 in Spanish.",
"To summarize, although the number of overlapping entities is comparable in English and Spanish sentences, entities with a larger SKL divergence score appear more frequently in Spanish sentences than in English sentences.",
"As a consequence, adding all Spanish sentences leads to significant drop in Italian NER performance which is not the case when all English sentences are added.",
"Table 3 : Test set F-Score from monolingual and multilingual learning on Indian languages.",
"Result from monolingual training on the primary language is underlined.",
"† indicates SKL results statistically significant compared to adding all assisting language data with p-value < 0.05 using two-sided Welch t-test.",
"Resource-Poor Languages As Indian languages exhibit high lexical overlap (Kunchukuttan and Bhattacharyya, 2016) and syntactic relatedness (V Subbãrão, 2012), we share all layers of the network across languages.",
"Influence of SKL Threshold Here, we study the influence of SKL score threshold on the NER performance.",
"We run experiments for Italian NER by adding Spanish training sentences and sharing all layers except for output layer across languages.",
"We vary the threshold value from 1.0 to 9.0 in steps of 1, and select sentences with score less than the threshold.",
"A threshold of 0.0 indicates monolingual training, and a threshold greater than 9.0 indicates that all assisting language sentences are considered.",
"The plot of Italian test F-Score against SKL score is shown in the Figure 2 .",
"Italian test F-Score increases initially as we add more and more Spanish sentences and then drops due to influence of drift becoming significant.",
"Finding the right SKL threshold is important, hence we use a validation set to tune the SKL threshold.",
"Conclusion In this paper, we address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER.",
"We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy.",
"We propose to use the symmetric KL-Divergence metric to measure the tag distribution divergence.",
"We observe consistent improvements in multilingual Neural NER performance using our data selection strategy.",
"The strategy shows benefits for extremely low resource primary languages too.",
"This problem of drift in data distribution may not be unique to multilingual NER, and we plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation, etc.",
"We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages."
]
}
|
{
"paper_header_number": [
"2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Judicious Selection of Assisting Language Sentences",
"Experimental Setup",
"Datasets",
"Network Hyper-parameters",
"Results",
"Resource-Rich Languages",
"Resource-Poor Languages",
"Influence of SKL Threshold",
"Conclusion"
]
}
|
GEM-SciDuet-train-88#paper-1227#slide-12
|
Conclusion And Future Work
|
We address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER
We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy
A more principled approach for data selection would be exploring the
We plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation
We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages
|
We address the problem of divergence in tag distribution between primary and assisting languages for multilingual Neural NER
We show that filtering out the assisting language sentences exhibiting significant divergence in the tag distribution can improve NER accuracy
A more principled approach for data selection would be exploring the
We plan to study the influence of data selection for multilingual learning on other NLP tasks like sentiment analysis, question answering, neural machine translation
We also plan to explore more metrics for multilingual learning, specifically for morphologically rich languages
|
[] |
GEM-SciDuet-train-89#paper-1229#slide-0
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each item immediately returning a state vector that corresponds to a prefix of the sequence and which can be passed as input to a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to the recognition of context-sensitive languages.",
"More importantly, computation is often performed on GPUs with 32-bit floating-point arithmetic, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetic (Hubara et al., 2016).",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32-bit floating-point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Chung et al., 2014).",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a^n b^n: LSTM on a^1000 b^1000; (b) a^n b^n c^n: LSTM on a^100 b^100 c^100; (c) a^n b^n: GRU on a^1000 b^1000; (d) a^n b^n c^n: GRU on a^100 b^100 c^100. Figure 1: Activations (c for the LSTM and h for the GRU) for networks trained on a^n b^n and a^n b^n c^n.",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finite-precision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU networks trained to recognize the languages a^n b^n and a^n b^n c^n.",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs, and not GRUs, for this purpose. The RNN Models An RNN is a parameterized function R that takes as input an input vector x_t and a state vector h_{t-1} and returns a state vector h_t: h_t = R(x_t, h_{t-1}) (1). The RNN is applied to a sequence x_1, ..., x_n by starting with an initial vector h_0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at-least finitestate.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN: The IRNN model, explored by (Le et al., 2015), replaces the tanh activation with a non-squashing ReLU: h_t = max(0, W x_t + U h_{t-1} + b) (3).",
"The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017).",
"Gated Recurrent Unit (GRU): In the GRU, the function R incorporates a gating mechanism, taking the form: z_t = σ(W^z x_t + U^z h_{t-1} + b^z) (4); r_t = σ(W^r x_t + U^r h_{t-1} + b^r) (5); h̃_t = tanh(W^h x_t + U^h(r_t • h_{t-1}) + b^h) (6); h_t = z_t • h_{t-1} + (1 − z_t) • h̃_t (7), where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short-Term Memory (LSTM): In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating configuration: f_t = σ(W^f x_t + U^f h_{t-1} + b^f) (8); i_t = σ(W^i x_t + U^i h_{t-1} + b^i) (9); o_t = σ(W^o x_t + U^o h_{t-1} + b^o) (10); c̃_t = tanh(W^c x_t + U^c h_{t-1} + b^c) (11); c_t = f_t • c_{t-1} + i_t • c̃_t (12); h_t = o_t • g(c_t) (13), where g can be either tanh or the identity.",
"Equivalences: The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z_t = 0 and r_t = 1 we obtain the SRNN computation.",
"Similarly, by setting the LSTM gates to i_t = 1, o_t = 1, and f_t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finite-state languages.",
"Power of Counting: Power beyond finite state can be obtained by introducing counters.",
"Counting languages and k-counter machines are discussed in depth in (Fischer et al., 1968).",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a^n b^n) and at least one context-sensitive one (a^n b^n c^n).",
"However, they cannot recognize the context-free language given by the grammar S → x | aSa | bSb (palindromes).",
"SKCM: For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs: In what follows, we consider the effect of the state-update equations on a single dimension, h_t[j].",
"We omit the index [j] for readability.",
"LSTM: The LSTM acts as an SKCM by designating k dimensions of the memory cell c_t as counters.",
"In non-counting steps, set i_t = 0, f_t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or −1) is set in c̃_t (equation 11) based on the input x_t and state h_{t-1}.",
"The counting itself is performed in equation (12), after setting i_t = f_t = 1.",
"The counter can be reset to 0 by setting i_t = f_t = 0.",
"Finally, the counter values are exposed through h_t = o_t • g(c_t), making it trivial to compare the counter's value to 0.",
"We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN: The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is h_t = tanh(W x_t + U h_{t-1} + b), i.e., h_t[i] = tanh(Σ_{j=1}^{d_x} W_{ij} x[j] + Σ_{j=1}^{d_h} U_{ij} h_{t-1}[j] + b[i]); by properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h_t[i] = tanh(h_{t-1}[i] + w_i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN: Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a^n b^n and a^n b^n c^n.",
"This makes the IBFP-RNN with ReLU activation more powerful than the IBFP-RNN with a squashing activation.",
"Practically, ReLU-activated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"[Footnote 3] Some further remarks on the LSTM: the LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in c_t are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to compare to 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"The LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968).",
"Relation to known architectural variants: adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (i_t = 1 − f_t) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"GRU: Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6), combined with the interpolation (tying z_t and 1 − z_t) in equation (7), restricts the range of values in h to between −1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"Moreover, simulating forms of counting behavior in equation (7) requires consistently setting the gates z_t, r_t and the proposal h̃_t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary: We show that the LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"Experimental Results: Can the LSTM indeed learn to behave as a k-counter machine when trained using backpropagation?",
"We show empirically that:",
"1. LSTMs can be trained to recognize a^n b^n and a^n b^n c^n.",
"2. These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3. The trained LSTMs learn to use the per-dimension counting mechanism.",
"4. The GRU can also be trained to recognize a^n b^n and a^n b^n c^n, but it does not have clear counting dimensions, and it generalizes to much smaller n than the LSTMs, often failing to generalize correctly even for n within its training domain.",
"5. Trained LSTM networks outperform trained GRU networks on random test sets for the languages a^n b^n and a^n b^n c^n.",
"[Footnote 4] One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z_t = 1/k and h̃_t = 0. Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"[Footnote 5] One can argue that other counting mechanisms, involving several dimensions, are also possible. Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample, as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training. We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a^n b^n and a^n b^n c^n are described also in (Gers and Schmidhuber, 2001).",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a^n b^n and a^n b^n c^n.",
"For a^n b^n the training samples went up to n = 100 and for a^n b^n c^n up to n = 50.",
"Results: On a^n b^n, the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a^n b^n but recognize a^n b^{n+1} for a while, until the deviation grows.",
"The GRU does not capture the desired concept even within its training domain: accepting a^n b^{n+1} for n > 38, and also accepting a^n b^{n+2} for n > 97.",
"It stops accepting a^n b^n for n > 198.",
"On a^n b^n c^n the LSTM recognizes well until n = 100.",
"It then starts accepting also a^n b^{n+1} c^n.",
"At n > 120 it stops accepting a^n b^n c^n and switches to accepting a^n b^{n+1} c^n, until at some point the deviation grows.",
"The GRU already accepts a^9 b^{10} c^{12}, and stops accepting a^n b^n c^n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a^n b^n-LSTM for the input a^{1000} b^{1000}.",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activations for the a^n b^n c^n-LSTM on a^{100} b^{100} c^{100}.",
"Here, again, the two counting dimensions are clearly identified, indicating the LSTM learned the canonical 2-counter solution, although the slightly imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU's behavior is much less interpretable than the LSTM's.",
"In the a^n b^n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a^n b^n c^n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a^n b^n we used words of the form a^{n+i} b^{n+j} where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a^n b^n c^n we used words of the form a^{n+i} b^{n+j} c^{n+k} where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% on a^n b^n and 98.6% on a^n b^n c^n, as opposed to the GRU's 87.0% and 86.9%, respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating-point computations.",
"Conclusions: We show that the IBFP-LSTM can model a real-time SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
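The LSTM-as-SKCM construction above (saturate i_t and f_t, count in a dedicated cell dimension via equation (12), expose the count through h_t = o_t • g(c_t)) can be sketched numerically with hand-set weights. This is a minimal illustration, not the paper's trained network; the one-hot encoding, the single counting dimension, and the saturation constant `big` are assumptions made for the sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_count(word, big=20.0):
    """Hand-set 1-dimensional LSTM cell acting as a counter over {a, b}.

    Gates are saturated (i_t ~ f_t ~ 1 via a large bias) and the
    candidate c~_t is ~+1 on 'a' and ~-1 on 'b', so the memory cell c_t
    tracks #a - #b, as in the SKCM construction (g = identity).
    """
    E = {'a': np.array([1.0, 0.0]), 'b': np.array([0.0, 1.0])}  # one-hot
    Wc = np.array([big, -big])         # tanh(+-big) ~ +-1: count direction
    c = 0.0
    for ch in word:
        i = sigmoid(big)               # input gate, saturated to ~1
        f = sigmoid(big)               # forget gate, saturated to ~1
        c_tilde = np.tanh(Wc @ E[ch])  # ~+1 for 'a', ~-1 for 'b'
        c = f * c + i * c_tilde        # equation (12): inc/dec the counter
    return c                           # exposed via h_t = o_t * g(c_t)
```

A word a^n b^m ends with c ≈ n − m, so comparing the exposed value to 0 (COMP0) separates a^n b^n from its near misses; a full acceptor would additionally need finite-state control to reject out-of-order words such as "ba".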
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
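The bounded-counting argument for the tanh SRNN can also be seen numerically. A minimal sketch, assuming an illustrative weight w = 0.05 (not a value from the paper): the state separates small counts while the running sum stays in tanh's near-linear region, then saturates at the fixed point of h = tanh(h + w), so large counts become indistinguishable:

```python
import math

def srnn_count(n_as, w=0.05):
    """One tanh-SRNN dimension trying to count n occurrences of 'a':
    h_t = tanh(h_{t-1} + w).  Counting only works while the sum stays
    in tanh's near-linear region; the state then saturates."""
    h = 0.0
    for _ in range(n_as):
        h = math.tanh(h + w)
    return h
```

Small counts yield clearly separated states, but beyond a few dozen symbols the state converges and no longer encodes n, matching the claim that squashing activations preclude unbounded counting at finite precision.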
GEM-SciDuet-train-89#paper-1229#slide-0
|
Current State
|
We don't know too much about the differences between them:
Gated RNNs are shown to train better; beyond that:
RNNs are Turing Complete?
|
[] |
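The 1000-sample near-miss test sets described in the experiments (words a^{n+i} b^{n+j} with n ∈ rand(0, 200) and i, j ∈ rand(−2, 2)) can be regenerated with a short sketch. The helper name and the seeding are assumptions for reproducibility; the sampling ranges follow the paper's description:

```python
import random

def make_anbn_test(n_samples=1000, n_max=200, jitter=2, seed=0):
    """Near-miss test words a^{n+i} b^{n+j}, with n in [0, n_max] and
    i, j in [-jitter, jitter], labeled by membership in a^n b^n."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        n = rng.randint(0, n_max)
        i = rng.randint(-jitter, jitter)
        j = rng.randint(-jitter, jitter)
        word = 'a' * max(0, n + i) + 'b' * max(0, n + j)
        samples.append((word, max(0, n + i) == max(0, n + j)))
    return samples
```

Since every generated word already has the form a*b*, membership in a^n b^n reduces to the two block lengths being equal, which is exactly what the counting mechanism must check.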
GEM-SciDuet-train-89#paper-1229#slide-1
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
GEM-SciDuet-train-89#paper-1229#slide-1
|
Turing Complete
|
Uses stack(s), maintained in certain dimension(s)
Zeros are pushed using division (using g = g/4 + 1/4)
In 32 bits, this reaches the limit after ~15 pushes
Allows processing steps beyond reading input
(Not the standard use case!)
|
Uses stack(s), maintained in certain dimension(s)
Zeros are pushed using division (using g = g/4 + 1/4)
In 32 bits, this reaches the limit after ~15 pushes
Allows processing steps beyond reading input
(Not the standard use case!)
|
[] |
GEM-SciDuet-train-89#paper-1229#slide-3
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to inputbound recurrent neural networks with finiteprecision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a n b n -LSTM on a 1000 b 1000 (b) a n b n c n -LSTM on a 100 b 100 c 100 (c) a n b n -GRU on a 1000 b 1000 (d) a n b n c n -GRU on a 100 b 100 c 100 Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a n b n and a n b n c n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at-least finitestate.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU) In the GRU , the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4) r t = σ(W r x t + U r h t−1 + b r ) (5) h t = tanh(W h x t + U h (r t • h t−1 ) + b h )(6) h t = z t • h t−1 + (1 − z t ) •h t (7) Where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997) , R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8) i t = σ(W i x t + U i h t−1 + b i ) (9) o t = σ(W o x t + U o h t−1 + b o ) (10) c t = tanh(W c x t + U c h t−1 + b c ) (11) c t = f t • c t−1 + i t •c t (12) h t = o t • g(c t ) (13) where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, 2 an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set inc t (equation 11) based on the input x t and state h t−1 .",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h t = tanh(W x + U h t−1 + b) h t [i] = tanh( dx j=1 W ij x[j] + d h j=1 U ij h t−1 [j] + b[i]) By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting zt = 1/k andht = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-3
|
Real Use
|
Gated architectures have the best performance
LSTM and GRU are most popular
Of these, the choice between them is unclear
|
Gated architectures have the best performance
LSTM and GRU are most popular
Of these, the choice between them is unclear
|
[] |
GEM-SciDuet-train-89#paper-1229#slide-4
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to inputbound recurrent neural networks with finiteprecision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a n b n -LSTM on a 1000 b 1000 (b) a n b n c n -LSTM on a 100 b 100 c 100 (c) a n b n -GRU on a 1000 b 1000 (d) a n b n c n -GRU on a 100 b 100 c 100 Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a n b n and a n b n c n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at-least finitestate.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU) In the GRU , the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4) r t = σ(W r x t + U r h t−1 + b r ) (5) h t = tanh(W h x t + U h (r t • h t−1 ) + b h )(6) h t = z t • h t−1 + (1 − z t ) •h t (7) Where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997) , R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8) i t = σ(W i x t + U i h t−1 + b i ) (9) o t = σ(W o x t + U o h t−1 + b o ) (10) c t = tanh(W c x t + U c h t−1 + b c ) (11) c t = f t • c t−1 + i t •c t (12) h t = o t • g(c t ) (13) where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, 2 an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set inc t (equation 11) based on the input x t and state h t−1 .",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h t = tanh(W x + U h t−1 + b) h t [i] = tanh( dx j=1 W ij x[j] + d h j=1 U ij h t−1 [j] + b[i]) By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting zt = 1/k andht = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
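The LSTM-as-counter construction in the paper text above (saturate i_t, f_t, o_t toward 1 and steer c̃_t to +1 on one symbol and -1 on another) can be sketched numerically. This is an illustrative reconstruction, not code from the paper: the weight magnitudes (±20) and the one-hot embedding are my own assumptions, chosen only to saturate the gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One-hot encoding for the alphabet {a, b} (an assumption for this sketch).
EMB = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}

# Hand-set weights, not learned: large magnitudes saturate the gates, so
# i_t ~ f_t ~ o_t ~ 1 on every step, while c~_t ~ +1 on 'a' and -1 on 'b'.
# The single cell dimension then behaves as a counter:
# c_t ~ c_{t-1} + 1 on 'a', c_{t-1} - 1 on 'b'.
W_i = W_f = W_o = np.zeros(2)
b_i = b_f = b_o = 20.0
W_c = np.array([20.0, -20.0])
b_c = 0.0

def lstm_count(word, g=lambda x: x):   # g = identity, as eq. (13) allows
    c = 0.0
    h = 0.0
    for ch in word:
        x = EMB[ch]
        i = sigmoid(W_i @ x + b_i)     # eq. (9), saturated near 1
        f = sigmoid(W_f @ x + b_f)     # eq. (8), saturated near 1
        o = sigmoid(W_o @ x + b_o)     # eq. (10), saturated near 1
        c_tilde = np.tanh(W_c @ x + b_c)  # eq. (11): +1 on 'a', -1 on 'b'
        c = f * c + i * c_tilde        # eq. (12): the counting step
        h = o * g(c)                   # eq. (13): counter exposed in h
    return h
```

For a balanced word such as a^50 b^50 the exposed value ends near 0, while a^50 b^49 leaves roughly 1 in the cell, so a downstream classifier only needs the COMP0 test.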
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-4
|
Main Result
|
We accept all RNN types can simulate DFAs
We show that LSTMs and IRNNs can also count
And that the GRU and SRNN cannot
|
We accept all RNN types can simulate DFAs
We show that LSTMs and IRNNs can also count
And that the GRU and SRNN cannot
|
[] |
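The "can count" claim in the slide above refers to the simplified k-counter machines (SKCMs) of the paper's Section 3: a finite-state automaton plus counters recognizes a^n b^n c^n in real time. A minimal Python sketch of such a machine with two counters (function and variable names are mine, for illustration):

```python
def recognize_anbncn(word):
    """Real-time recognition of a^n b^n c^n with a 3-phase automaton
    and two counters (a simplified k-counter machine, k = 2)."""
    state = "a"          # expected phase: a's, then b's, then c's
    ab = bc = 0          # counter 1: #a - #b; counter 2: #b - #c
    for ch in word:
        if ch == "a":
            if state != "a":
                return False          # an 'a' after b/c: wrong order
            ab += 1                   # INC counter 1
        elif ch == "b":
            if state == "c":
                return False          # a 'b' after c: wrong order
            state = "b"
            ab -= 1                   # DEC counter 1
            bc += 1                   # INC counter 2
        elif ch == "c":
            state = "c"
            bc -= 1                   # DEC counter 2
        else:
            return False
        if ab < 0 or bc < 0:          # COMP0 during the run
            return False
    return ab == 0 and bc == 0 and (state == "c" or word == "")
```

The same skeleton with one counter recognizes a^n b^n; the counters are the only unbounded state, matching the SKCM definition.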
GEM-SciDuet-train-89#paper-1229#slide-5
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to inputbound recurrent neural networks with finiteprecision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a n b n -LSTM on a 1000 b 1000 (b) a n b n c n -LSTM on a 100 b 100 c 100 (c) a n b n -GRU on a 1000 b 1000 (d) a n b n c n -GRU on a 100 b 100 c 100 Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a n b n and a n b n c n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at-least finitestate.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU) In the GRU , the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4) r t = σ(W r x t + U r h t−1 + b r ) (5) h t = tanh(W h x t + U h (r t • h t−1 ) + b h )(6) h t = z t • h t−1 + (1 − z t ) •h t (7) Where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997) , R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8) i t = σ(W i x t + U i h t−1 + b i ) (9) o t = σ(W o x t + U o h t−1 + b o ) (10) c t = tanh(W c x t + U c h t−1 + b c ) (11) c t = f t • c t−1 + i t •c t (12) h t = o t • g(c t ) (13) where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, 2 an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set inc t (equation 11) based on the input x t and state h t−1 .",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h t = tanh(W x + U h t−1 + b) h t [i] = tanh( dx j=1 W ij x[j] + d h j=1 U ij h t−1 [j] + b[i]) By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting zt = 1/k andht = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
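The SRNN discussion in the paper text above argues that counting inside a tanh is bounded and unstable: the per-dimension update h_t[i] = tanh(h_{t-1}[i] + w_i x) saturates. A tiny numeric illustration of this (my own, with an arbitrarily chosen w = 0.01, not an experiment from the paper):

```python
import math

def srnn_count_dim(word, w=0.01):
    """One SRNN dimension trying to count #a - #b inside a tanh,
    as in h_t[i] = tanh(h_{t-1}[i] + w_i * x).  The activation is
    confined to (-1, 1), so increments stop being distinguishable
    once the dimension approaches the fixed point of h = tanh(h + w)."""
    h = 0.0
    for ch in word:
        x = 1.0 if ch == "a" else -1.0   # 'a' counts up, 'b' counts down
        h = math.tanh(h + w * x)
    return h
```

For short words the dimension tracks #a − #b almost linearly, but long runs pin it near the fixed point of h = tanh(h + w) (about 0.30 for w = 0.01), after which the count is lost: a^500 b^500 does not return to 0.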
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-5
|
Power of Counting
|
LSTM better at capturing target length
Finite State Machines vs Counter Machines
|
LSTM better at capturing target length
Finite State Machines vs Counter Machines
|
[] |
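The counter-machine side of the slide's contrast can also be realized with ReLU units: as in the paper's IRNN discussion, each counter uses two non-negative dimensions, with INC/DEC incrementing one of them and COMP0 comparing their difference. A hand-weighted sketch (mine, simpler than the construction in Chen et al. (2017)'s appendix, with the required order check done in plain Python):

```python
import numpy as np

def irnn_anbn(word):
    """Two ReLU dimensions count a's and b's separately; since h stays
    non-negative and unsquashed, each dimension can grow without bound."""
    # h = [#a seen, #b seen]; recurrence h_t = relu(W x_t + U h_{t-1}),
    # matching eq. (3) with hand-set (not learned) weights.
    U = np.eye(2)                        # carry the counts forward
    W = np.array([[1.0, 0.0],            # 'a' -> +1 on dimension 0
                  [0.0, 1.0]])           # 'b' -> +1 on dimension 1
    h = np.zeros(2)
    order_ok = True                      # finite-state part: no 'a' after 'b'
    prev = ""
    for ch in word:
        if ch == "a" and prev == "b":
            order_ok = False
        prev = ch
        x = np.array([1.0, 0.0]) if ch == "a" else np.array([0.0, 1.0])
        h = np.maximum(0.0, W @ x + U @ h)   # ReLU update
    return order_ok and h[0] == h[1]     # COMP0 on the counters' difference
```

Unlike the tanh-based SRNN, nothing here saturates, so the same weights work for arbitrarily large n.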
GEM-SciDuet-train-89#paper-1229#slide-6
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive languages.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014; Chung et al., 2014).",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a^n b^n -LSTM on a^{1000} b^{1000} (b) a^n b^n c^n -LSTM on a^{100} b^{100} c^{100} (c) a^n b^n -GRU on a^{1000} b^{1000} (d) a^n b^n c^n -GRU on a^{100} b^{100} c^{100} Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a^n b^n and a^n b^n c^n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finite-precision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a^n b^n and a^n b^n c^n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs and not GRUs for this purpose; our work here suggests that this may not be a coincidence.",
"The RNN Models An RNN is a parameterized function R that takes as input an input vector x_t and a state vector h_{t−1} and returns a state vector h_t: h_t = R(x_t, h_{t−1}) (1). The RNN is applied to a sequence x_1, ..., x_n by starting with an initial vector h_0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RNN(x_1, ..., x_n) denote the state vector h resulting from the application of R to the sequence E(x_1), ..., E(x_n).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to {0, 1}.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L ⊆ Σ* if f(RNN(w)) returns 1 for all and only words w = x_1, ..., x_n ∈ L.",
"Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990), also called the Simple RNN (SRNN), the function R takes the form of an affine transform followed by a tanh nonlinearity: h_t = tanh(W x_t + U h_{t−1} + b) (2). Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015), replaces the tanh activation with a non-squashing ReLU: h_t = max(0, W x_t + U h_{t−1} + b) (3). The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017).",
"Gated Recurrent Unit (GRU) In the GRU, the function R incorporates a gating mechanism, taking the form: z_t = σ(W_z x_t + U_z h_{t−1} + b_z) (4), r_t = σ(W_r x_t + U_r h_{t−1} + b_r) (5), h̃_t = tanh(W_h x_t + U_h (r_t • h_{t−1}) + b_h) (6), h_t = z_t • h_{t−1} + (1 − z_t) • h̃_t (7), where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f_t = σ(W_f x_t + U_f h_{t−1} + b_f) (8), i_t = σ(W_i x_t + U_i h_{t−1} + b_i) (9), o_t = σ(W_o x_t + U_o h_{t−1} + b_o) (10), c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c) (11), c_t = f_t • c_{t−1} + i_t • c̃_t (12), h_t = o_t • g(c_t) (13), where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z_t = 0 and r_t = 1 we obtain the SRNN computation.",
"Similarly, by setting the LSTM gates to i_t = 1, o_t = 1, and f_t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finite-state languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and k-counter machines are discussed in depth in (Fischer et al., 1968).",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a^n b^n), and at least one context-sensitive one (a^n b^n c^n).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect of the state-update equations on a single dimension, h_t[j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c_t as counters.",
"In non-counting steps, set i_t = 0, f_t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set in c̃_t (equation 11) based on the input x_t and state h_{t−1}.",
"The counting itself is performed in equation (12), after setting i_t = f_t = 1.",
"The counter can be reset to 0 by setting i_t = f_t = 0.",
"Finally, the counter values are exposed through h_t = o_t • g(c_t), making it trivial to compare the counter's value to 0.",
"We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h_t = tanh(W x_t + U h_{t−1} + b), i.e., h_t[i] = tanh(Σ_{j=1}^{d_x} W_{ij} x_t[j] + Σ_{j=1}^{d_h} U_{ij} h_{t−1}[j] + b[i]). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h_t[i] = tanh(h_{t−1}[i] + w_i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a^n b^n and a^n b^n c^n .",
"This makes IBFP-RNN with ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLU-activated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"Footnote 3: Some further remarks on the LSTM: the LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in c_t are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to compare to 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"The LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al. (1968).",
"Relation to known architectural variants: adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (i_t = 1 − f_t) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z_t and 1 − z_t) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"Moreover, simulating forms of counting behavior in equation (7) requires consistently setting the gates z_t, r_t and the proposal h̃_t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"Experimental Results Can the LSTM indeed learn to behave as a k-counter machine when trained using backpropagation?",
"We show empirically that:",
"1. LSTMs can be trained to recognize a^n b^n and a^n b^n c^n .",
"2. These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3. The trained LSTMs learn to use the per-dimension counting mechanism.",
"4. The GRUs can also be trained to recognize a^n b^n and a^n b^n c^n , but they do not have clear counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Footnote 4: One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z_t = 1/k and h̃_t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"Footnote 5: One can argue that other counting mechanisms (involving several dimensions) are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a^n b^n and a^n b^n c^n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a^n b^n and a^n b^n c^n are described also in (Gers and Schmidhuber, 2001).",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a^n b^n and a^n b^n c^n .",
"For a^n b^n the training samples went up to n = 100 and for a^n b^n c^n up to n = 50.",
"Results On a^n b^n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a^n b^n but recognize a^n b^{n+1} for a while, until the deviation grows.",
"The GRU does not capture the desired concept even within its training domain: accepting a^n b^{n+1} for n > 38, and also accepting a^n b^{n+2} for n > 97.",
"It stops accepting a^n b^n for n > 198.",
"On a^n b^n c^n the LSTM recognizes well until n = 100.",
"It then starts accepting also a^n b^{n+1} c^n .",
"At n > 120 it stops accepting a^n b^n c^n and switches to accepting a^n b^{n+1} c^n , until at some point the deviation grows.",
"The GRU accepts already a^9 b^{10} c^{12} , and stops accepting a^n b^n c^n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a^n b^n -LSTM for the input a^{1000} b^{1000} .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a^n b^n c^n -LSTM on a^{100} b^{100} c^{100} .",
"Here, again, the two counting dimensions are clearly identified, indicating the LSTM learned the canonical 2-counter solution, although the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a^n b^n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a^n b^n c^n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a^n b^n we used words of the form a^{n+i} b^{n+j} where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a^n b^n c^n we used words of the form a^{n+i} b^{n+j} c^{n+k} where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a^n b^n and a^n b^n c^n respectively, as opposed to the GRU's 87.0% and 86.9%.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-6
|
K Counter Machines SKCMs
|
Fischer, Meyer, Rosenberg - 1968
Similar to finite automata, but also maintain k counters
A counter has 4 operations: inc/dec by one, do nothing, reset
Counters are observed by comparison to zero
|
Fischer, Meyer, Rosenberg - 1968
Similar to finite automata, but also maintain k counters
A counter has 4 operations: inc/dec by one, do nothing, reset
Counters are observed by comparison to zero
|
[] |
GEM-SciDuet-train-89#paper-1229#slide-7
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Networks (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a state vector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive languages.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014; Chung et al., 2014).",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a^n b^n -LSTM on a^{1000} b^{1000} (b) a^n b^n c^n -LSTM on a^{100} b^{100} c^{100} (c) a^n b^n -GRU on a^{1000} b^{1000} (d) a^n b^n c^n -GRU on a^{100} b^{100} c^{100} Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a^n b^n and a^n b^n c^n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finite-precision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a^n b^n and a^n b^n c^n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs and not GRUs for this purpose; our work here suggests that this may not be a coincidence.",
"The RNN Models An RNN is a parameterized function R that takes as input an input vector x_t and a state vector h_{t−1} and returns a state vector h_t: h_t = R(x_t, h_{t−1}) (1). The RNN is applied to a sequence x_1, ..., x_n by starting with an initial vector h_0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RNN(x_1, ..., x_n) denote the state vector h resulting from the application of R to the sequence E(x_1), ..., E(x_n).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to {0, 1}.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L ⊆ Σ* if f(RNN(w)) returns 1 for all and only words w = x_1, ..., x_n ∈ L.",
"Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990), also called the Simple RNN (SRNN), the function R takes the form of an affine transform followed by a tanh nonlinearity: h_t = tanh(W x_t + U h_{t−1} + b) (2). Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015), replaces the tanh activation with a non-squashing ReLU: h_t = max(0, W x_t + U h_{t−1} + b) (3). The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017).",
"Gated Recurrent Unit (GRU) In the GRU, the function R incorporates a gating mechanism, taking the form: z_t = σ(W_z x_t + U_z h_{t−1} + b_z) (4), r_t = σ(W_r x_t + U_r h_{t−1} + b_r) (5), h̃_t = tanh(W_h x_t + U_h (r_t • h_{t−1}) + b_h) (6), h_t = z_t • h_{t−1} + (1 − z_t) • h̃_t (7), where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f_t = σ(W_f x_t + U_f h_{t−1} + b_f) (8), i_t = σ(W_i x_t + U_i h_{t−1} + b_i) (9), o_t = σ(W_o x_t + U_o h_{t−1} + b_o) (10), c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c) (11), c_t = f_t • c_{t−1} + i_t • c̃_t (12), h_t = o_t • g(c_t) (13), where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z_t = 0 and r_t = 1 we obtain the SRNN computation.",
"Similarly, by setting the LSTM gates to i_t = 1, o_t = 1, and f_t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finite-state languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and k-counter machines are discussed in depth in (Fischer et al., 1968).",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a^n b^n), and at least one context-sensitive one (a^n b^n c^n).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect of the state-update equations on a single dimension, h_t[j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c_t as counters.",
"In non-counting steps, set i_t = 0, f_t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set in c̃_t (equation 11) based on the input x_t and state h_{t−1}.",
"The counting itself is performed in equation (12), after setting i_t = f_t = 1.",
"The counter can be reset to 0 by setting i_t = f_t = 0.",
"Finally, the counter values are exposed through h_t = o_t • g(c_t), making it trivial to compare the counter's value to 0.",
"We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h_t = tanh(W x_t + U h_{t−1} + b), i.e., h_t[i] = tanh(Σ_{j=1}^{d_x} W_{ij} x_t[j] + Σ_{j=1}^{d_h} U_{ij} h_{t−1}[j] + b[i]). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h_t[i] = tanh(h_{t−1}[i] + w_i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a^n b^n and a^n b^n c^n .",
"This makes IBFP-RNN with ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLU-activated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"Footnote 3: Some further remarks on the LSTM: the LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in c_t are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to compare to 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"The LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al. (1968).",
"Relation to known architectural variants: adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (i_t = 1 − f_t) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z_t and 1 − z_t) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"Moreover, simulating forms of counting behavior in equation (7) requires consistently setting the gates z_t, r_t and the proposal h̃_t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"Experimental Results: Can the LSTM indeed learn to behave as a k-counter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"GRUs can also be trained to recognize a^n b^n and a^n b^n c^n, but they do not have clear counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"[Footnote 4] One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z t = 1/k and h̃ t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"[Footnote 5] One can argue that other counting mechanisms, involving several dimensions, are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample, as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"Results: On a^n b^n, the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a^n b^n but recognize a^n b^(n+1) for a while, until the deviation grows.",
"The GRU does not capture the desired concept even within its training domain: it accepts a^n b^(n+1) for n > 38, and also accepts a^n b^(n+2) for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-7
|
Counting Machines
|
Context Free Languages (CFL)
Context Sensitive Languages (CSL)
Recursively Enumerable Languages (RE)
|
Context Free Languages (CFL)
Context Sensitive Languages (CSL)
Recursively Enumerable Languages (RE)
|
[] |
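The paper text in the rows above describes the SKCM-style LSTM counting construction: on counting steps the input and forget gates saturate to 1, the candidate cell value contributes roughly ±1 depending on the input symbol, and the counter is compared to 0 through the exposed state. The sketch below hand-sets saturated LSTM-style gate values to recognize a^n b^n in that spirit; the specific logit magnitude (20.0) and the 0.5 acceptance slack are illustrative assumptions, not learned or paper-specified values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

BIG = 20.0  # large logit drives sigmoid/tanh into saturation (~1.0)

def lstm_counter_step(c, symbol):
    """One LSTM cell update c_t = f_t * c_{t-1} + i_t * c~_t with
    hand-set, input-dependent gates: increment on 'a', decrement on 'b'.
    (Illustrative weights in the spirit of the SKCM construction.)"""
    i = sigmoid(BIG)   # input gate saturated open on every counting step
    f = sigmoid(BIG)   # forget gate saturated open: keep the running count
    c_tilde = math.tanh(BIG if symbol == 'a' else -BIG)  # ~ +1 or ~ -1
    return f * c + i * c_tilde

def recognize_anbn(word):
    """Accept words of the form a^n b^n: a finite-state check for the
    regular pattern a*b*, plus one counter compared to 0 at the end."""
    seen_b = False
    c = 0.0
    for ch in word:
        if ch not in 'ab':
            return False
        if ch == 'b':
            seen_b = True
        elif seen_b:          # an 'a' after a 'b' breaks a*b*
            return False
        c = lstm_counter_step(c, ch)
    # COMP0: with saturated gates the counter is near 0 iff #a == #b.
    return abs(c) < 0.5
```

Because the saturated gates make the per-step increment effectively ±1, the same hand-set cell keeps counting correctly far beyond any fixed range, which is the behavioral difference from squashed-state counting that the paper emphasizes.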
GEM-SciDuet-train-89#paper-1229#slide-8
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction: Recurrent Neural Networks (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"Figure 1: Activations (c for the LSTM, h for the GRU) for networks trained on a^n b^n and a^n b^n c^n. Panels: (a) a^n b^n-LSTM on a^1000 b^1000; (b) a^n b^n c^n-LSTM on a^100 b^100 c^100; (c) a^n b^n-GRU on a^1000 b^1000; (d) a^n b^n c^n-GRU on a^100 b^100 c^100.",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"[Footnote 1] Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs and not GRUs for this purpose; our work here suggests that this may not be a coincidence.",
"The RNN Models: An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1). The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RNN(x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to {0, 1}.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L ⊆ Σ* if f(RNN(w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L.",
"Elman-RNN (SRNN): In the Elman-RNN (Elman, 1990), also called the Simple RNN (SRNN), the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2). Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU): In the GRU, the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4); r t = σ(W r x t + U r h t−1 + b r ) (5); h̃ t = tanh(W h x t + U h (r t • h t−1 ) + b h ) (6); h t = z t • h t−1 + (1 − z t ) • h̃ t (7), where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM): In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8); i t = σ(W i x t + U i h t−1 + b i ) (9); o t = σ(W o x t + U o h t−1 + b o ) (10); c̃ t = tanh(W c x t + U c h t−1 + b c ) (11); c t = f t • c t−1 + i t • c̃ t (12); h t = o t • g(c t ) (13), where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set inc t (equation 11) based on the input x t and state h t−1 .",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is h t = tanh(W x t + U h t−1 + b), i.e. per dimension h t [i] = tanh( Σ_{j=1..d_x} W ij x[j] + Σ_{j=1..d_h} U ij h t−1 [j] + b[i] ). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLU-activated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"[Footnote 3] Some further remarks on the LSTM: the LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in c t are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to compare the counter to 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"The LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al. (1968).",
"Relation to known architectural variants: adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (i t = 1 − f t) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"Experimental Results: Can the LSTM indeed learn to behave as a k-counter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"GRUs can also be trained to recognize a^n b^n and a^n b^n c^n, but they do not have clear counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"[Footnote 4] One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z t = 1/k and h̃ t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"[Footnote 5] One can argue that other counting mechanisms, involving several dimensions, are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample, as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"Results: On a^n b^n, the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a^n b^n but recognize a^n b^(n+1) for a while, until the deviation grows.",
"The GRU does not capture the desired concept even within its training domain: it accepts a^n b^(n+1) for n > 38, and also accepts a^n b^(n+2) for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-8
|
Chomsky Hierarchy and SKCMs
|
Context Free Languages (CFL)
Context Sensitive Languages (CSL)
Recursively Enumerable Languages (RE)
SKCMs cross the Chomsky Hierarchy!
|
Context Free Languages (CFL)
Context Sensitive Languages (CSL)
Recursively Enumerable Languages (RE)
SKCMs cross the Chomsky Hierarchy!
|
[] |
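The paper text in these rows argues that a squashing activation precludes unbounded counting at finite precision: a tanh-wrapped accumulator saturates, so sufficiently large counts collapse onto the same state. The following numeric sketch illustrates that claim for a single SRNN-style dimension; the weight w = 1.0 is an arbitrary illustrative choice, not a value from the paper.

```python
import math

def tanh_count(n, w=1.0):
    """A single SRNN-style dimension attempting to count n symbols via
    h_t = tanh(h_{t-1} + w), as in the SRNN discussion in the rows above.
    Returns the final state after n updates."""
    h = 0.0
    for _ in range(n):
        h = math.tanh(h + w)
    return h
```

At double precision the iterates converge toward the fixed point of h = tanh(h + w) within a few dozen steps, after which different counts become numerically indistinguishable; shrinking w widens the usable counting range, but only up to some bound, matching the paper's observation that such networks count reliably only within a range like the one seen in training.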
GEM-SciDuet-train-89#paper-1229#slide-9
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction: Recurrent Neural Networks (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"Figure 1: Activations (c for the LSTM, h for the GRU) for networks trained on a^n b^n and a^n b^n c^n. Panels: (a) a^n b^n-LSTM on a^1000 b^1000; (b) a^n b^n c^n-LSTM on a^100 b^100 c^100; (c) a^n b^n-GRU on a^1000 b^1000; (d) a^n b^n c^n-GRU on a^100 b^100 c^100.",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h_t = tanh(W x_t + U h_{t−1} + b) (2). Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015), replaces the tanh activation with a non-squashing ReLU: h_t = max(0, W x_t + U h_{t−1} + b) (3). The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017).",
"Gated Recurrent Unit (GRU) In the GRU, the function R incorporates a gating mechanism, taking the form: z_t = σ(W_z x_t + U_z h_{t−1} + b_z) (4); r_t = σ(W_r x_t + U_r h_{t−1} + b_r) (5); h̃_t = tanh(W_h x_t + U_h (r_t ∘ h_{t−1}) + b_h) (6); h_t = z_t ∘ h_{t−1} + (1 − z_t) ∘ h̃_t (7), where σ is the sigmoid function and ∘ is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f_t = σ(W_f x_t + U_f h_{t−1} + b_f) (8); i_t = σ(W_i x_t + U_i h_{t−1} + b_i) (9); o_t = σ(W_o x_t + U_o h_{t−1} + b_o) (10); c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c) (11); c_t = f_t ∘ c_{t−1} + i_t ∘ c̃_t (12); h_t = o_t ∘ g(c_t) (13), where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, 2 an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or −1) is set in c̃_t (equation 11) based on the input x_t and state h_{t−1}.",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is h_t = tanh(W x_t + U h_{t−1} + b), i.e., per dimension h_t[i] = tanh(Σ_{j=1}^{d_x} W_{ij} x_t[j] + Σ_{j=1}^{d_h} U_{ij} h_{t−1}[j] + b[i]). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h_t[i] = tanh(h_{t−1}[i] + w_i x_t + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) requires consistently setting the gates z_t, r_t and the proposal h̃_t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z_t = 1/k and h̃_t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
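The simplified k-counter machine (SKCM) described in the paper content above is easy to make concrete. The sketch below is our own illustration (the function name and symbol encoding are ours, not the authors'): a 2-counter SKCM recognizing a^n b^n c^n, with a finite-state control enforcing the a*b*c* shape and COMP0 used at acceptance time.

```python
def skcm_accepts_anbncn(s: str) -> bool:
    """Minimal 2-counter SKCM sketch for a^n b^n c^n (n >= 1).

    Counter 1 tracks #a - #b, counter 2 tracks #b - #c; each step
    performs an input-dependent INC/DEC, and the accept decision
    applies COMP0 to both counters.
    """
    c1 = c2 = 0
    state = "a"                       # finite-state part: current block
    order = {"a": 0, "b": 1, "c": 2}
    for ch in s:
        if ch not in order or order[ch] < order[state]:
            return False              # out-of-order symbol: reject
        state = ch
        if ch == "a":
            c1 += 1                   # INC counter 1
        elif ch == "b":
            c1 -= 1                   # DEC counter 1
            c2 += 1                   # INC counter 2
        else:
            c2 -= 1                   # DEC counter 2
    return state == "c" and c1 == 0 and c2 == 0
```

The same scheme with a single counter recognizes a^n b^n.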
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-9
|
Summary so Far
|
Counters give additional formal power
We claimed that LSTM can count and GRU cannot
|
Counters give additional formal power
We claimed that LSTM can count and GRU cannot
|
[] |
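The claim in this row (the LSTM can count, the GRU cannot) rests on the hand-set construction in the paper body: saturated gates and a dedicated counting dimension in c_t. A minimal idealized sketch of that construction for a^n b^n (ours, not the authors' code; tanh is treated as exactly saturated at ±1, and the finite-state part is a two-state check):

```python
def lstm_count_accepts(s: str) -> bool:
    """Idealized 1-counter LSTM cell recognizing a^n b^n (n >= 1).

    Gates are held saturated (i_t = f_t = 1), and the candidate
    cell value c~_t is +1 on 'a' and -1 on 'b' (tanh pushed to its
    boundaries), so the cell dimension c behaves as a counter.
    """
    c = 0.0          # counting dimension of the cell state c_t
    state = "A"      # finite-state part: reading a's, then b's
    for ch in s:
        if ch == "a":
            if state != "A":
                return False          # 'a' after 'b': reject
            c += 1.0                  # c_t = f_t*c_{t-1} + i_t*(+1)
        elif ch == "b":
            state = "B"
            c -= 1.0                  # c_t = f_t*c_{t-1} + i_t*(-1)
        else:
            return False
    # COMP0: with g(x) = x the counter is exposed via h_t = o_t * c_t
    return state == "B" and c == 0.0
```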
GEM-SciDuet-train-89#paper-1229#slide-10
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to inputbound recurrent neural networks with finiteprecision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a n b n -LSTM on a 1000 b 1000 (b) a n b n c n -LSTM on a 100 b 100 c 100 (c) a n b n -GRU on a 1000 b 1000 (d) a n b n c n -GRU on a 100 b 100 c 100 Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a n b n and a n b n c n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h_t = tanh(W x_t + U h_{t−1} + b) (2). Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015), replaces the tanh activation with a non-squashing ReLU: h_t = max(0, W x_t + U h_{t−1} + b) (3). The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017).",
"Gated Recurrent Unit (GRU) In the GRU, the function R incorporates a gating mechanism, taking the form: z_t = σ(W_z x_t + U_z h_{t−1} + b_z) (4); r_t = σ(W_r x_t + U_r h_{t−1} + b_r) (5); h̃_t = tanh(W_h x_t + U_h (r_t ∘ h_{t−1}) + b_h) (6); h_t = z_t ∘ h_{t−1} + (1 − z_t) ∘ h̃_t (7), where σ is the sigmoid function and ∘ is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f_t = σ(W_f x_t + U_f h_{t−1} + b_f) (8); i_t = σ(W_i x_t + U_i h_{t−1} + b_i) (9); o_t = σ(W_o x_t + U_o h_{t−1} + b_o) (10); c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c) (11); c_t = f_t ∘ c_{t−1} + i_t ∘ c̃_t (12); h_t = o_t ∘ g(c_t) (13), where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, 2 an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or −1) is set in c̃_t (equation 11) based on the input x_t and state h_{t−1}.",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is h_t = tanh(W x_t + U h_{t−1} + b), i.e., per dimension h_t[i] = tanh(Σ_{j=1}^{d_x} W_{ij} x_t[j] + Σ_{j=1}^{d_h} U_{ij} h_{t−1}[j] + b[i]). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h_t[i] = tanh(h_{t−1}[i] + w_i x_t + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) requires consistently setting the gates z_t, r_t and the proposal h̃_t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z_t = 1/k and h̃_t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-10
|
Popular Architectures
|
GRU:
zt = σ(Wz xt + Uz ht−1 + bz)
rt = σ(Wr xt + Ur ht−1 + br)
h̃t = tanh(Wh xt + Uh (rt • ht−1) + bh)
ht = zt • ht−1 + (1 − zt) • h̃t (interpolation: bounded, cannot count)

LSTM:
ft = σ(Wf xt + Uf ht−1 + bf)
it = σ(Wi xt + Ui ht−1 + bi)
ot = σ(Wo xt + Uo ht−1 + bo)
c̃t = tanh(Wc xt + Uc ht−1 + bc)
ct = ft • ct−1 + it • c̃t (addition: can increase by 1, can count!)
ht = ot • g(ct)
|
GRU:
zt = σ(Wz xt + Uz ht−1 + bz)
rt = σ(Wr xt + Ur ht−1 + br)
h̃t = tanh(Wh xt + Uh (rt • ht−1) + bh)
ht = zt • ht−1 + (1 − zt) • h̃t (interpolation: bounded, cannot count)

LSTM:
ft = σ(Wf xt + Uf ht−1 + bf)
it = σ(Wi xt + Ui ht−1 + bi)
ot = σ(Wo xt + Uo ht−1 + bo)
c̃t = tanh(Wc xt + Uc ht−1 + bc)
ct = ft • ct−1 + it • c̃t (addition: can increase by 1, can count!)
ht = ot • g(ct)
|
[] |
GEM-SciDuet-train-89#paper-1229#slide-11
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a state vector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive languages.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014; Chung et al., 2014).",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"Figure 1 : Activations (c for LSTM, h for GRU) for networks trained on a n b n and a n b n c n : (a) a n b n -LSTM on a 1000 b 1000 ; (b) a n b n c n -LSTM on a 100 b 100 c 100 ; (c) a n b n -GRU on a 1000 b 1000 ; (d) a n b n c n -GRU on a 100 b 100 c 100 .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs and not GRUs for this purpose; our work here suggests that this may not be a coincidence.",
"The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RNN(x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to {0, 1}.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L ⊆ Σ * if f (RNN(w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990), also called the Simple RNN (SRNN),",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU) In the GRU, the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4) r t = σ(W r x t + U r h t−1 + b r ) (5) h̃ t = tanh(W h x t + U h (r t • h t−1 ) + b h ) (6) h t = z t • h t−1 + (1 − z t ) • h̃ t (7) where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8) i t = σ(W i x t + U i h t−1 + b i ) (9) o t = σ(W o x t + U o h t−1 + b o ) (10) c̃ t = tanh(W c x t + U c h t−1 + b c ) (11) c t = f t • c t−1 + i t • c̃ t (12) h t = o t • g(c t ) (13) where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finite-state languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and k-counter machines are discussed in depth in (Fischer et al., 1968).",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect of the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set in c̃ t (equation 11) based on the input x t and state h t−1.",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t • g(c t ), making it trivial to compare the counter's value to 0.",
"We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h t = tanh(W x t + U h t−1 + b), i.e., h t [i] = tanh( Σ_{j=1..d_x} W ij x[j] + Σ_{j=1..d_h} U ij h t−1 [j] + b[i] ). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Some further remarks on the LSTM: the LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in c t are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to compare to 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"The LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968).",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (i t = 1 − f t ) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"Practically, ReLU-activated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"Experimental Results Can the LSTM indeed learn to behave as a k-counter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTMs learn to use the per-dimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z t = 1/k and h̃ t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"One can argue that other counting mechanisms, involving several dimensions, are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU already accepts a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified, indicating the LSTM learned the canonical 2-counter solution, although the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words of the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we used words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a real-time SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-11
|
Other Architectures
|
SRNN: ht = tanh(Wh xt + Uh ht−1 + bh) (squashing activation: bounded, cannot count)
IRNN: ht = max(0, Wh xt + Uh ht−1 + bh) (ReLU: can count! decrement via a second, also-increasing dimension in parallel)
|
SRNN: ht = tanh(Wh xt + Uh ht−1 + bh) (squashing activation: bounded, cannot count)
IRNN: ht = max(0, Wh xt + Uh ht−1 + bh) (ReLU: can count! decrement via a second, also-increasing dimension in parallel)
|
[] |
GEM-SciDuet-train-89#paper-1229#slide-12
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a state vector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive languages.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014; Chung et al., 2014).",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"Figure 1 : Activations (c for LSTM, h for GRU) for networks trained on a n b n and a n b n c n : (a) a n b n -LSTM on a 1000 b 1000 ; (b) a n b n c n -LSTM on a 100 b 100 c 100 ; (c) a n b n -GRU on a 1000 b 1000 ; (d) a n b n c n -GRU on a 100 b 100 c 100 .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs and not GRUs for this purpose; our work here suggests that this may not be a coincidence.",
"The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RNN(x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to {0, 1}.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L ⊆ Σ * if f (RNN(w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990), also called the Simple RNN (SRNN),",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU) In the GRU, the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4) r t = σ(W r x t + U r h t−1 + b r ) (5) h̃ t = tanh(W h x t + U h (r t • h t−1 ) + b h ) (6) h t = z t • h t−1 + (1 − z t ) • h̃ t (7) where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8) i t = σ(W i x t + U i h t−1 + b i ) (9) o t = σ(W o x t + U o h t−1 + b o ) (10) c̃ t = tanh(W c x t + U c h t−1 + b c ) (11) c t = f t • c t−1 + i t • c̃ t (12) h t = o t • g(c t ) (13) where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finite-state languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and k-counter machines are discussed in depth in (Fischer et al., 1968).",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect of the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set inc t (equation 11) based on the input x t and state h t−1 .",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h t = tanh(W x + U h t−1 + b) h t [i] = tanh( dx j=1 W ij x[j] + d h j=1 U ij h t−1 [j] + b[i]) By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting zt = 1/k andht = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-12
|
So
|
Counting gives greater computational power
|
Counting gives greater computational power
|
[] |
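The paper text in the record above (section "RNNs as SKCMs") describes how an LSTM counts: saturate i t = f t ≈ 1 so that equation (12), c t = f t • c t−1 + i t • c̃ t, accumulates ±1 per symbol, then expose the counter through h t = o t • g(c t ) with g the identity and compare it to 0. The following is a minimal pure-Python sketch of that hand-wired construction, not code from the paper; the concrete weight values (the bias 20, the ±20 candidate weights) are illustrative assumptions chosen to saturate the activations, and the finite-state part that checks the a*b* ordering is omitted — only the counter is shown.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One-hot embedding: 'a' -> (1, 0), 'b' -> (0, 1)
EMB = {"a": (1.0, 0.0), "b": (0.0, 1.0)}

# Hand-set parameters for a single-dimension LSTM cell (equations 8-13,
# with g(x) = x). GATE_BIAS drives the sigmoid gates to ~1; W_C makes the
# candidate c~_t ~= +1 on 'a' and -1 on 'b'. Both are assumed values.
GATE_BIAS = 20.0
W_C = (20.0, -20.0)

def lstm_count(word):
    """Run the hand-wired counting LSTM; return the exposed state h_T ~= c_T."""
    c = 0.0
    for ch in word:
        xa, xb = EMB[ch]
        i = sigmoid(GATE_BIAS)                           # input gate  ~= 1
        f = sigmoid(GATE_BIAS)                           # forget gate ~= 1
        c_tilde = math.tanh(W_C[0] * xa + W_C[1] * xb)   # +1 on 'a', -1 on 'b'
        c = f * c + i * c_tilde                          # equation (12): counter update
    o = sigmoid(GATE_BIAS)                               # output gate ~= 1
    return o * c                                         # h_T = o_T * g(c_T), g = identity

def counter_is_zero(word):
    """COMP0 on the exposed state: did the counter return to (roughly) zero?"""
    return abs(lstm_count(word)) < 0.5
```

Because the gates and the tanh are saturated, the per-step numerical error is on the order of 1e-9, so the counter stays sharp for sequences far longer than those used in the paper's experiments — which is exactly why the construction is easy to reach and maintain in practice, as the text notes.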
GEM-SciDuet-train-89#paper-1229#slide-13
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction Recurrent Neural Network (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a statevector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to inputbound recurrent neural networks with finiteprecision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) Chung et al., 2014) .",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"(a) a n b n -LSTM on a 1000 b 1000 (b) a n b n c n -LSTM on a 100 b 100 c 100 (c) a n b n -GRU on a 1000 b 1000 (d) a n b n c n -GRU on a 100 b 100 c 100 Figure 1 : Activations -c for LSTM and h for GRU -for networks trained on a n b n and a n b n c n .",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10d LSTM and GRU trained to recognize the languages a n b n and a n b n c n .",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"1 1 Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs The RNN Models An RNN is a parameterized function R that takes as input an input vector x t and a state vector h t−1 and returns a state vector h t : h t = R(x t , h t−1 ) (1) The RNN is applied to a sequence x 1 , ..., x n by starting with an initial vector h 0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RN N (x 1 , ..., x n ) denote the state vector h resulting from the application of R to the sequence E(x 1 ), ..., E(x n ).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to 0, 1.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L⊆ Σ * if f (RN N (w)) returns 1 for all and only words w = x 1 , ..., x n ∈ L. Elman-RNN (SRNN) In the Elman-RNN (Elman, 1990) , also called the Simple RNN (SRNN), and not GRUs for this purpose.",
"Our work here suggests that this may not be a coincidence.",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h t = tanh(W x t + U h t−1 + b) (2) Elman-RNNs are known to be at-least finitestate.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN The IRNN model, explored by (Le et al., 2015) , replaces the tanh activation with a nonsquashing ReLU: h t = max(0, (W x t + U h t−1 + b)) (3) The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017) .",
"Gated Recurrent Unit (GRU) In the GRU , the function R incorporates a gating mechanism, taking the form: z t = σ(W z x t + U z h t−1 + b z ) (4) r t = σ(W r x t + U r h t−1 + b r ) (5) h t = tanh(W h x t + U h (r t • h t−1 ) + b h )(6) h t = z t • h t−1 + (1 − z t ) •h t (7) Where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM) In the LSTM (Hochreiter and Schmidhuber, 1997) , R uses a different gating component configuration: f t = σ(W f x t + U f h t−1 + b f ) (8) i t = σ(W i x t + U i h t−1 + b i ) (9) o t = σ(W o x t + U o h t−1 + b o ) (10) c t = tanh(W c x t + U c h t−1 + b c ) (11) c t = f t • c t−1 + i t •c t (12) h t = o t • g(c t ) (13) where g can be either tanh or the identity.",
"Equivalences The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z t = 0 and r t = 1 we obtain the SRNN computation.",
"Similarly by setting the LSTM gates to i t = 1,o t = 1, and f t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting Power beyond finite state can be obtained by introducing counters.",
"Counting languages and kcounter machines are discussed in depth in (Fischer et al., 1968) .",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a n b n ), and at least one context-sensitive one (a n b n c n ).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, 2 an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs In what follows, we consider the effect on the state-update equations on a single dimension, h t [j].",
"We omit the index [j] for readability.",
"LSTM The LSTM acts as an SKCM by designating k dimensions of the memory cell c t as counters.",
"In non-counting steps, set i t = 0, f t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set inc t (equation 11) based on the input x t and state h t−1 .",
"The counting itself is performed in equation (12) , after setting i t = f t = 1.",
"The counter can be reset to 0 by setting i t = f t = 0.",
"Finally, the counter values are exposed through h t = o t g(c t ), making it trivial to compare the counter's value to 0.",
"3 We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is: h t = tanh(W x + U h t−1 + b) h t [i] = tanh( dx j=1 W ij x[j] + d h j=1 U ij h t−1 [j] + b[i]) By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h t [i] = tanh(h t−1 [i] + w i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a n b n and a n b n c n .",
"This makes IBFP-RNN with 3 Some further remarks on the LSTM: LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in ct are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to do compare 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al.",
"(1968) .",
"Relation to known architectural variants: Adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (it = 1 − ft) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLUactivated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"GRU Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z t and 1 − z t ) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"4 Moreover, simulating forms of counting behavior in equation (7) require consistently setting the gates z t , r t and the proposalh t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"5 Experimental Results Can the LSTM indeed learn to behave as a kcounter machine when trained using backpropagation?",
"We show empirically that: 1.",
"LSTMs can be trained to recognize a n b n and a n b n c n .",
"2.",
"These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3.",
"The trained LSTM learn to use the perdimension counting mechanism.",
"4.",
"The GRU can also be trained to recognize a n b n and a n b n c n , but they do not have clear 4 One such mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting zt = 1/k andht = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"5 One can argue that other counting mechanismsinvolving several dimensions-are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a n b n and a n b n c n .",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a n b n and a n b n c n are described also in (Gers and Schmidhuber, 2001) .",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a n b n and a n b n c n .",
"For a n b n the training samples went up to n = 100 and for a n b n c n up to n = 50.",
"6 Results On a n b n , the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a n b n but recognize a n b n+1 for a while, until the deviation grows.",
"7 The GRU does not capture the desired concept even within its training domain: accepting a n b n+1 for n > 38, and also accepting a n b n+2 for n > 97.",
"It stops accepting a n b n for n > 198.",
"On a n b n c n the LSTM recognizes well until n = 100.",
"It then starts accepting also a n b n+1 c n .",
"At n > 120 it stops accepting a n b n c n and switches to accepting a n b n+1 c n , until at some point the deviation grows.",
"The GRU accepts already a 9 b 10 c 12 , and stops accepting a n b n c n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a n b n -LSTM for the input a 1000 b 1000 .",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activation for the a n b n c n LSTM on a 100 b 100 c 100 .",
"Here, again, the two counting dimensions are clearly identified-indicating the LSTM learned the canonical 2-counter solutionalthough the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a n b n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a n b n c n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a n b n we used words with the form a n+i b n+j where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a n b n c n we use words of the form a n+i b n+j c n+k where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a n b n and a n b n c n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions We show that the IBFP-LSTM can model a realtime SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-13
|
Empirically
|
ACL 2018 Submission ***. Confidential Review Copy. DO NOT DISTRIBUTE.
Activations on a^n b^n (on positive examples up to length 100)
Figure 1: Activations for LSTM and GRU networks for a^n b^n and a^n b^n c^n. The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.
Did not generalise even within training domain: begins failing at n=39 (vs 257 for LSTM)
Did not generalise well: begins failing at n=9 (vs 101 for LSTM)
Did not learn any discernible counting mechanism
Activations on a^100 b^100 c^100
|
Activations on a^n b^n (on positive examples up to length 100)
Figure 1: Activations for LSTM and GRU networks for a^n b^n and a^n b^n c^n. The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.
Did not generalise even within training domain: begins failing at n=39 (vs 257 for LSTM)
Did not generalise well: begins failing at n=9 (vs 101 for LSTM)
Did not learn any discernible counting mechanism
Activations on a^100 b^100 c^100
|
[] |
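The experimental section in the record above describes building 1000-sample random test sets of the form a^(n+i) b^(n+j) with n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), where a word is in the language exactly when the two exponents match. A short sketch of that construction follows; the function names and the clipping of exponents to be non-negative are our own assumptions, not details from the paper.

```python
import random

def sample_anbn_case(rng, n_max=200, jitter=2):
    # One test word in the style of the paper's random a^n b^n test set:
    # a^(n+i) b^(n+j), with n ~ U[0, n_max] and i, j ~ U[-jitter, jitter].
    n = rng.randint(0, n_max)
    i = rng.randint(-jitter, jitter)
    j = rng.randint(-jitter, jitter)
    na, nb = max(0, n + i), max(0, n + j)   # clip: exponents cannot go negative
    word = "a" * na + "b" * nb
    return word, na == nb                   # gold label: is the word in a^n b^n?

def make_test_set(size=1000, seed=0):
    rng = random.Random(seed)
    return [sample_anbn_case(rng) for _ in range(size)]
```

Because i and j are drawn independently, roughly one in five sampled words is positive, giving the near-miss negatives (off by one or two symbols) on which the paper reports the LSTM's 100% versus the GRU's 87.0% accuracy.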
GEM-SciDuet-train-89#paper-1229#slide-15
|
1229
|
On the Practical Computational Power of Finite Precision RNNs for Language Recognition
|
While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"paper_content_text": [
"Introduction: Recurrent Neural Networks (RNNs) emerge as very strong learners of sequential data.",
"A famous result by Siegelmann and Sontag (1992; 1994) , and its extension in (Siegelmann, 1999) , demonstrates that an Elman-RNN (Elman, 1990 ) with a sigmoid activation function, rational weights and infinite precision states can simulate a Turing-machine in real-time, making RNNs Turing-complete.",
"Recently, Chen et al (2017) extended the result to the ReLU activation function.",
"However, these constructions (a) assume reading the entire input into the RNN state and only then performing the computation, using unbounded time; and (b) rely on having infinite precision in the network states.",
"As argued by Chen et al (2017) , this is not the model of RNN computation used in NLP applications.",
"Instead, RNNs are often used by feeding an input sequence into the RNN one item at a time, each immediately returning a state vector that corresponds to a prefix of the sequence and which can be passed as input for a subsequent feed-forward prediction network operating in constant time.",
"The amount of tape used by a Turing machine under this restriction is linear in the input length, reducing its power to recognition of context-sensitive language.",
"More importantly, computation is often performed on GPUs with 32bit floating point computation, and there is increasing evidence that competitive performance can be achieved also for quantized networks with 4-bit weights or fixed-point arithmetics (Hubara et al., 2016) .",
"The construction of (Siegelmann, 1999) implements pushing 0 into a binary stack by the operation g ← g/4 + 1/4.",
"This allows pushing roughly 15 zeros before reaching the limit of the 32bit floating point precision.",
"Finally, RNN solutions that rely on carefully orchestrated mathematical constructions are unlikely to be found using backpropagation-based training.",
"In this work we restrict ourselves to input-bound recurrent neural networks with finite-precision states (IBFP-RNN), trained using backpropagation.",
"This class of networks is likely to coincide with the networks one can expect to obtain when training RNNs for NLP applications.",
"An IBFP Elman-RNN is finite state.",
"But what about other RNN variants?",
"In particular, we consider the Elman RNN (SRNN) (Elman, 1990) with squashing and with ReLU activations, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Chung et al., 2014).",
"The common wisdom is that the LSTM and GRU introduce additional gating components that handle the vanishing gradients problem of training SRNNs, thus stabilizing training and making it more robust.",
"The LSTM and GRU are often considered as almost equivalent variants of each other.",
"Figure 1: Activations (c for LSTM and h for GRU) for networks trained on a^n b^n and a^n b^n c^n: (a) a^n b^n-LSTM on a^1000 b^1000; (b) a^n b^n c^n-LSTM on a^100 b^100 c^100; (c) a^n b^n-GRU on a^1000 b^1000; (d) a^n b^n c^n-GRU on a^100 b^100 c^100.",
"The LSTM has clearly learned to use an explicit counting mechanism, in contrast with the GRU.",
"We show that in the input-bound, finiteprecision case, there is a real difference between the computational capacities of the LSTM and the GRU: the LSTM can easily perform unbounded counting, while the GRU (and the SRNN) cannot.",
"This makes the LSTM a variant of a k-counter machine (Fischer et al., 1968) , while the GRU remains finite-state.",
"Interestingly, the SRNN with ReLU activation followed by an MLP classifier also has power similar to a k-counter machine.",
"These results suggest there is a class of formal languages that can be recognized by LSTMs but not by GRUs.",
"In section 5, we demonstrate that for at least two such languages, the LSTM manages to learn the desired concept classes using backpropagation, while using the hypothesized control structure.",
"Figure 1 shows the activations of 10-d LSTM and GRU trained to recognize the languages a^n b^n and a^n b^n c^n.",
"It is clear that the LSTM learned to dedicate specific dimensions for counting, in contrast to the GRU.",
"Is the ability to perform unbounded counting relevant to \"real world\" NLP tasks?",
"In some cases it might be.",
"For example, processing linearized parse trees (Vinyals et al., 2015; Choe and Charniak, 2016; Aharoni and Goldberg, 2017) requires counting brackets and nesting levels.",
"Indeed, previous works that process linearized parse trees report using LSTMs and not GRUs for this purpose; our work here suggests that this may not be a coincidence.",
"The RNN Models: An RNN is a parameterized function R that takes as input an input vector x_t and a state vector h_{t-1} and returns a state vector h_t: h_t = R(x_t, h_{t-1}) (1). The RNN is applied to a sequence x_1, ..., x_n by starting with an initial vector h_0 (often the 0 vector) and applying R repeatedly according to equation (1).",
"Let Σ be an input vocabulary (alphabet), and assume a mapping E from every vocabulary item to a vector x (achieved through a 1-hot encoding, an embedding layer, or some other means).",
"Let RNN(x_1, ..., x_n) denote the state vector h resulting from the application of R to the sequence E(x_1), ..., E(x_n).",
"An RNN recognizer (or RNN acceptor) has an additional function f mapping states h to {0, 1}.",
"Typically, f is a log-linear classifier or multi-layer perceptron.",
"We say that an RNN recognizes a language L ⊆ Σ* if f(RNN(w)) returns 1 for all and only words w = x_1, ..., x_n ∈ L. Elman-RNN (SRNN): In the Elman-RNN (Elman, 1990), also called the Simple RNN (SRNN),",
"the function R takes the form of an affine transform followed by a tanh nonlinearity: h_t = tanh(W x_t + U h_{t-1} + b) (2). Elman-RNNs are known to be at least finite-state.",
"Siegelmann (1996) proved that the tanh can be replaced by any other squashing function without sacrificing computational power.",
"IRNN: The IRNN model, explored by (Le et al., 2015), replaces the tanh activation with a non-squashing ReLU: h_t = max(0, W x_t + U h_{t-1} + b) (3). The computational power of such RNNs (given infinite precision) is explored in (Chen et al., 2017).",
"Gated Recurrent Unit (GRU): In the GRU, the function R incorporates a gating mechanism, taking the form: z_t = σ(W_z x_t + U_z h_{t-1} + b_z) (4); r_t = σ(W_r x_t + U_r h_{t-1} + b_r) (5); h̃_t = tanh(W_h x_t + U_h (r_t • h_{t-1}) + b_h) (6); h_t = z_t • h_{t-1} + (1 − z_t) • h̃_t (7), where σ is the sigmoid function and • is the Hadamard product (element-wise product).",
"Long Short Term Memory (LSTM): In the LSTM (Hochreiter and Schmidhuber, 1997), R uses a different gating component configuration: f_t = σ(W_f x_t + U_f h_{t-1} + b_f) (8); i_t = σ(W_i x_t + U_i h_{t-1} + b_i) (9); o_t = σ(W_o x_t + U_o h_{t-1} + b_o) (10); c̃_t = tanh(W_c x_t + U_c h_{t-1} + b_c) (11); c_t = f_t • c_{t-1} + i_t • c̃_t (12); h_t = o_t • g(c_t) (13), where g can be either tanh or the identity.",
"Equivalences: The GRU and LSTM are at least as strong as the SRNN: by setting the gates of the GRU to z_t = 0 and r_t = 1 we obtain the SRNN computation.",
"Similarly, by setting the LSTM gates to i_t = 1, o_t = 1, and f_t = 0.",
"This is easily achieved by setting the matrices W and U to 0, and the biases b to the (constant) desired gate values.",
"Thus, all the above RNNs can recognize finitestate languages.",
"Power of Counting: Power beyond finite state can be obtained by introducing counters.",
"Counting languages and k-counter machines are discussed in depth in (Fischer et al., 1968).",
"When unbounded computation is allowed, a 2-counter machine has Turing power.",
"However, for computation bound by input length (real-time) there is a more interesting hierarchy.",
"In particular, real-time counting languages cut across the traditional Chomsky hierarchy: real-time k-counter machines can recognize at least one context-free language (a^n b^n), and at least one context-sensitive one (a^n b^n c^n).",
"However, they cannot recognize the context free language given by the grammar S → x|aSa|bSb (palindromes).",
"SKCM: For our purposes, we consider a simplified variant of k-counter machines (SKCM).",
"A counter is a device which can be incremented by a fixed amount (INC), decremented by a fixed amount (DEC) or compared to 0 (COMP0).",
"Informally, an SKCM is a finite-state automaton extended with k counters, where at each step of the computation each counter can be incremented, decremented or ignored in an input-dependent way, and state-transitions and accept/reject decisions can inspect the counters' states using COMP0.",
"The results for the three languages discussed above hold for the SKCM variant as well, with proofs provided in the supplementary material.",
"RNNs as SKCMs: In what follows, we consider the effect of the state-update equations on a single dimension, h_t[j].",
"We omit the index [j] for readability.",
"LSTM: The LSTM acts as an SKCM by designating k dimensions of the memory cell c_t as counters.",
"In non-counting steps, set i_t = 0, f_t = 1 through equations (8-9).",
"In counting steps, the counter direction (+1 or -1) is set in c̃_t (equation 11) based on the input x_t and state h_{t-1}.",
"The counting itself is performed in equation (12), after setting i_t = f_t = 1.",
"The counter can be reset to 0 by setting i_t = f_t = 0.",
"Finally, the counter values are exposed through h_t = o_t • g(c_t), making it trivial to compare the counter's value to 0.",
"We note that this implementation of the SKCM operations is achieved by saturating the activations to their boundaries, making it relatively easy to reach and maintain in practice.",
"SRNN: The finite-precision SRNN cannot designate unbounded counting dimensions.",
"The SRNN update equation is h_t = tanh(W x_t + U h_{t-1} + b), i.e., h_t[i] = tanh( Σ_{j=1}^{d_x} W_{ij} x[j] + Σ_{j=1}^{d_h} U_{ij} h_{t-1}[j] + b[i] ). By properly setting U and W, one can get certain dimensions of h to update according to the value of x, by h_t[i] = tanh(h_{t-1}[i] + w_i x + b[i]).",
"However, this counting behavior is within a tanh activation.",
"Theoretically, this means unbounded counting cannot be achieved without infinite precision.",
"Practically, this makes the counting behavior inherently unstable, and bounded to a relatively narrow region.",
"While the network could adapt to set w to be small enough such that counting works for the needed range seen in training without overflowing the tanh, attempting to count to larger n will quickly leave this safe region and diverge.",
"IRNN: Finite-precision IRNNs can perform unbounded counting conditioned on input symbols.",
"This requires representing each counter as two dimensions, and implementing INC as incrementing one dimension, DEC as incrementing the other, and COMP0 as comparing their difference to 0.",
"Indeed, Appendix A in (Chen et al., 2017) provides concrete IRNNs for recognizing the languages a^n b^n and a^n b^n c^n.",
"This makes IBFP-RNN with ReLU activation more powerful than IBFP-RNN with a squashing activation.",
"Practically, ReLU-activated RNNs are known to be notoriously hard to train because of the exploding gradient problem.",
"Some further remarks on the LSTM: the LSTM supports both increment and decrement in a single dimension.",
"The counting dimensions in c_t are exposed through a function g. For both g(x) = x and g(x) = tanh(x), it is trivial to compare to 0.",
"Another operation of interest is comparing two counters (for example, checking the difference between them).",
"This cannot be reliably achieved with g(x) = tanh(x), due to the non-linearity and saturation properties of the tanh function, but is possible in the g(x) = x case.",
"The LSTM can also easily set the value of a counter to 0 in one step.",
"The ability to set the counter to 0 gives slightly more power for real-time recognition, as discussed by Fischer et al. (1968).",
"Relation to known architectural variants: adding peephole connections (Gers and Schmidhuber, 2000) essentially sets g(x) = x and allows comparing counters in a stable way.",
"Coupling the input and the forget gates (i_t = 1 − f_t) (Greff et al., 2017) removes the single-dimension unbounded counting ability, as discussed for the GRU.",
"GRU: Finite-precision GRUs cannot implement unbounded counting on a given dimension.",
"The tanh in equation (6) combined with the interpolation (tying z_t and 1 − z_t) in equation (7) restricts the range of values in h to between -1 and 1, precluding unbounded counting with finite precision.",
"Practically, the GRU can learn to count up to some bound m seen in training, but will not generalize well beyond that.",
"Moreover, simulating forms of counting behavior in equation (7) requires consistently setting the gates z_t, r_t and the proposal h̃_t to precise, non-saturated values, making it much harder to find and maintain stable solutions.",
"Summary: We show that LSTM and IRNN can implement unbounded counting in dedicated counting dimensions, while the GRU and SRNN cannot.",
"This makes the LSTM and IRNN at least as strong as SKCMs, and strictly stronger than the SRNN and the GRU.",
"Experimental Results: Can the LSTM indeed learn to behave as a k-counter machine when trained using backpropagation?",
"We show empirically that:",
"1. LSTMs can be trained to recognize a^n b^n and a^n b^n c^n.",
"2. These LSTMs generalize to much higher n than seen in the training set (though not infinitely so).",
"3. The trained LSTMs learn to use the per-dimension counting mechanism.",
"4. The GRU can also be trained to recognize a^n b^n and a^n b^n c^n, but they do not have clear counting dimensions, and they generalize to much smaller n than the LSTMs, often failing to generalize correctly even for n within their training domain.",
"One such counting mechanism could be to divide a given dimension by k > 1 at each symbol encounter, by setting z_t = 1/k and h̃_t = 0.",
"Note that the inverse operation would not be implementable, and counting down would have to be realized with a second counter.",
"One can argue that other counting mechanisms, involving several dimensions, are also possible.",
"Intuitively, such mechanisms cannot be trained to perform unbounded counting based on a finite sample as the model has no means of generalizing the counting behavior to dimensions beyond those seen in training.",
"We discuss this more in depth in the supplementary material, where we also prove that an SRNN cannot represent a binary counter.",
"Trained LSTM networks outperform trained GRU networks on random test sets for the languages a^n b^n and a^n b^n c^n.",
"Similar empirical observations regarding the ability of the LSTM to learn to recognize a^n b^n and a^n b^n c^n are described also in (Gers and Schmidhuber, 2001).",
"We train 10-dimension, 1-layer LSTM and GRU networks to recognize a^n b^n and a^n b^n c^n.",
"For a^n b^n the training samples went up to n = 100 and for a^n b^n c^n up to n = 50.",
"Results: On a^n b^n, the LSTM generalizes well up to n = 256, after which it accumulates a deviation making it reject a^n b^n but recognize a^n b^{n+1} for a while, until the deviation grows.",
"The GRU does not capture the desired concept even within its training domain: accepting a^n b^{n+1} for n > 38, and also accepting a^n b^{n+2} for n > 97.",
"It stops accepting a^n b^n for n > 198.",
"On a^n b^n c^n the LSTM recognizes well until n = 100.",
"It then starts accepting also a^n b^{n+1} c^n.",
"At n > 120 it stops accepting a^n b^n c^n and switches to accepting a^n b^{n+1} c^n, until at some point the deviation grows.",
"The GRU already accepts a^9 b^10 c^12, and stops accepting a^n b^n c^n for n > 63.",
"Figure 1a plots the activations of the 10 dimensions of the a^n b^n-LSTM for the input a^1000 b^1000.",
"While the LSTM misclassifies this example, the use of the counting mechanism is clear.",
"Figure 1b plots the activations for the a^n b^n c^n-LSTM on a^100 b^100 c^100.",
"Here, again, the two counting dimensions are clearly identified, indicating the LSTM learned the canonical 2-counter solution, although the slightly-imprecise counting also starts to show.",
"In contrast, Figures 1c and 1d show the state values of the GRU-networks.",
"The GRU behavior is much less interpretable than the LSTM.",
"In the a^n b^n case, some dimensions may be performing counting within a bounded range, but move to erratic behavior at around t = 1750 (the network starts to misclassify on sequences much shorter than that).",
"The a^n b^n c^n state dynamics are even less interpretable.",
"Finally, we created 1000-sample test sets for each of the languages.",
"For a^n b^n we used words with the form a^{n+i} b^{n+j} where n ∈ rand(0, 200) and i, j ∈ rand(−2, 2), and for a^n b^n c^n we use words of the form a^{n+i} b^{n+j} c^{n+k} where n ∈ rand(0, 150) and i, j, k ∈ rand(−2, 2).",
"The LSTM's accuracy was 100% and 98.6% on a^n b^n and a^n b^n c^n respectively, as opposed to the GRU's 87.0% and 86.9%, also respectively.",
"All of this empirically supports our result, showing that IBFP-LSTMs can not only theoretically implement \"unbounded\" counters, but also learn to do so in practice (although not perfectly), while IBFP-GRUs do not manage to learn proper counting behavior, even when allowing floating point computations.",
"Conclusions: We show that the IBFP-LSTM can model a real-time SKCM, both in theory and in practice.",
"This makes it more powerful than the IBFP-SRNN and the IBFP-GRU, which cannot implement unbounded counting and are hence restricted to recognizing regular languages.",
"The IBFP-IRNN can also perform input-dependent counting, and is thus more powerful than the IBFP-SRNN.",
"We note that in addition to theoretical distinctions between architectures, it is important to consider also the practicality of different solutions: how easy it is for a given architecture to discover and maintain a stable behavior in practice.",
"We leave further exploration of this question for future work."
]
}
|
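The SKCM formalism in the record above (a finite-state automaton plus counters supporting only INC, DEC and COMP0) can be made concrete with a small sketch. This is an illustration of mine, not code from the paper: the function name, the difference-counter layout, and the restriction to block languages a1^n ... ak^n are my own assumptions.

```python
def make_skcm_acceptor(alphabet_order):
    """Real-time SKCM-style acceptor for a1^n a2^n ... ak^n.

    Uses k-1 counters; counter i tracks the count difference between
    block i and block i+1. Each input symbol triggers at most one INC
    and one DEC, and acceptance applies COMP0 to every counter.
    """
    k = len(alphabet_order)
    index = {ch: i for i, ch in enumerate(alphabet_order)}

    def accept(word):
        counters = [0] * (k - 1)
        phase = 0  # which block of the word we are currently inside
        for ch in word:
            if ch not in index:
                return False
            i = index[ch]
            if i < phase:              # blocks out of order -> reject
                return False
            phase = i
            if i > 0:
                counters[i - 1] -= 1   # DEC: match one unit of the previous block
            if i < k - 1:
                counters[i] += 1       # INC: one unit awaiting the next block
        return all(c == 0 for c in counters)  # COMP0 on all counters

    return accept
```

With `make_skcm_acceptor("ab")` this recognizes the context-free a^n b^n, and with `"abc"` the context-sensitive a^n b^n c^n, matching the paper's point that real-time counter machines cut across the Chomsky hierarchy.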
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"5.",
"6"
],
"paper_header_content": [
"Introduction",
"The RNN Models",
"Power of Counting",
"RNNs as SKCMs",
"Experimental Results",
"Trained LSTM networks outperform trained",
"Conclusions"
]
}
|
GEM-SciDuet-train-89#paper-1229#slide-15
|
Take Home Message
|
and result in actual differences in expressive power
Don't fall in the Turing Tarpit!
|
[] |
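The record above argues (in its "RNNs as SKCMs" part) that an LSTM counts by saturating its gates: with i_t = f_t = 1, equation (12) becomes c_t = c_{t-1} + c̃_t, and the ±1 direction is chosen by c̃_t. The 1-dimensional cell below follows that recipe by hand; the specific weights (gate bias 20, candidate weights ±20) are my own illustrative choices, not values from the paper.

```python
import math

def lstm_counter(word):
    """A 1-dimensional LSTM cell with hand-set weights that tracks #a - #b.

    All gates are saturated (large positive bias), so i_t = f_t = o_t ~ 1,
    and the candidate c~_t = tanh(w_c[x_t]) is ~ +1 on 'a' and ~ -1 on 'b';
    hence c_t = f_t*c_{t-1} + i_t*c~_t increments or decrements the counter.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    w_c = {"a": 20.0, "b": -20.0}  # tanh saturates to +1 on 'a', -1 on 'b'
    b_gate = 20.0                  # saturates every sigmoid gate to ~1
    c, h = 0.0, 0.0
    for ch in word:
        i = f = o = sigmoid(b_gate)      # i_t = f_t = o_t ~ 1
        c_tilde = math.tanh(w_c[ch])     # counting direction from the input
        c = f * c + i * c_tilde          # equation (12): the counter itself
        h = o * c                        # h_t = o_t * g(c_t), with g = identity
    return h
```

Running it on a^n b^m leaves the exposed state near n − m, mimicking the dedicated counting dimensions visible in the paper's Figure 1.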
GEM-SciDuet-train-90#paper-1230#slide-0
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests.
|
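The record below (in its "News Categories" part) assigns a word to a news category when its document frequency in that category exceeds its average category document frequency by a tunable threshold δ. The sketch below is one plausible reading of that rule; the multiplicative use of δ, the δ=1.5 default, and the toy counts are my assumptions, not the paper's.

```python
from collections import defaultdict

def categorize_words(doc_freq, delta=1.5):
    """Assign each word to news categories by document frequency.

    doc_freq[category][word] = number of documents in `category` that
    contain `word`. A word is assigned to every category where its
    document frequency exceeds delta times its average frequency across
    all categories, so a word may belong to several categories, as in
    the paper's scheme.
    """
    categories = list(doc_freq)
    words = set().union(*(doc_freq[c] for c in categories))
    assignments = defaultdict(list)
    for w in sorted(words):
        freqs = [doc_freq[c].get(w, 0) for c in categories]
        avg = sum(freqs) / len(categories)
        for c, f in zip(categories, freqs):
            if avg > 0 and f > delta * avg:
                assignments[w].append(c)
    return dict(assignments)
```

For example, a word like "investment" that is frequent in Finance documents but rare elsewhere would be assigned only to Finance under this rule.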
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both of which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction: Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo is a popular instruction-driven system that teaches through structured lessons.",
"Instruction-driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate or Amazon Kindle's Vocabulary Builder to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension: Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, user-driven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficult-to-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all possible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number of times (t = 3), WordNews generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories: As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g., CNN).",
"We use a seed list of words that are matched against a target webpage's URL.",
"Table 1 aligns the English and Chinese news category schemes, with example seed words, e.g., Finance/Finance (\"investment\", \"财富\"), Sports/Sports (\"score\", \"比赛\"), Fashion/Beauty & Health (\"jewelry\", \"时髦\"), Technology/Technology (\"cyber\", \"互联网\") and Travel (\"natural\").",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews). However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (a tunable threshold δ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component: Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline: WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary are based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C.",
"Table 2 gives example sentences with their candidate Chinese senses, e.g., (2) \"... kids can't stop singing ...\" (verb: 停止, 站, 阻止, 停), (4) \"... why Obama's trip to my homeland is meaningful ...\" (noun: 旅, 旅程, 旅游, 旅行), (5) \"... winning more points in the match ...\" (noun: 匹配, 比赛, 敌手, 对手, 火柴), and (6) \"... state department spokeswoman Jen Psaki said that the allies ...\" (noun: 态, 国, 州; verb: 声明, 陈述, 申明, 发言; adj: 国家的).",
"This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category. Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007).",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech. Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003).",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two different dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Stanford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tags for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English e's Chinese prospective translations which come from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As relaxed approach always picks up the last candidate, \"发 言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"7 https://msdn.microsoft.com/enus/library/dn198370.aspx Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN 8 news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly .",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assesing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context, however, the generation of distractors -based on syntactic and semantic homogeneityis not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its partof-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995) : sim(c1, c2) = 2 * logP (lso(c1, c2)) logP (c1) + logP (c2) (1) where P (c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based learners' knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they only have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the orig-inal English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate Word-News' real-world effectiveness."
]
}
|
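The translation-selection heuristics quoted in the paper content above (the most-frequent-sense baseline and the substring match against a machine-translated sentence) can be sketched as below. The toy dictionary, its sense ordering, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of two WSD heuristics from the paper content above (illustrative).
# TOY_DICT maps an English word to Chinese senses ordered most- to
# least-frequent, mimicking the paper's relative-frequency lexicon.
TOY_DICT = {
    "match": ["匹配", "比赛", "火柴"],
    "trip": ["旅", "旅程"],
}

def baseline_sense(word):
    """Most-frequent-sense baseline: always return the first sense."""
    senses = TOY_DICT.get(word)
    return senses[0] if senses else None

def substring_match(word, sentence_translation):
    """Approach 3 sketch: among dictionary senses that appear verbatim
    in the machine-translated sentence, pick the longest match."""
    candidates = [s for s in TOY_DICT.get(word, ())
                  if s in sentence_translation]
    return max(candidates, key=len) if candidates else None

# The context-free baseline picks the globally frequent (wrong) sense,
# while substring matching recovers the in-context sense.
print(baseline_sense("match"))                        # 匹配
print(substring_match("match", "在比赛中赢得更多分数"))  # 比赛
```

This mirrors Example 5 from the paper, where the frequency baseline mistranslates "match" in a sports context but the sentence-level heuristic recovers "比赛".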
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
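The Lin distance in Equation (1) of the distractor section above can be computed directly once concept probabilities and a lowest common subsumer are known. The probability values below are made-up toy numbers, not WordNet counts.

```python
import math

def lin_similarity(p_c1, p_c2, p_lso):
    """Lin (1998) similarity as in Equation (1) of the text above:
    2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2)).
    Ranges from 0 (completely dissimilar) to 1 (semantically equivalent)."""
    return 2.0 * math.log(p_lso) / (math.log(p_c1) + math.log(p_c2))

# Identical concepts are their own lowest common subsumer: similarity 1.
print(lin_similarity(0.01, 0.01, 0.01))  # 1.0

# A pair whose subsumer is much more frequent (i.e., more general) scores
# lower; the paper keeps candidates scoring above a 0.1 threshold.
sim = lin_similarity(0.001, 0.002, 0.2)
print(sim > 0.1)  # True
```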
GEM-SciDuet-train-90#paper-1230#slide-0
|
News Context
|
Identify the news category by URL pattern
7 categories: Entertainment, World, Finance, Sports, Fashion, Technology, Travel
Classify words based on category document frequency
E.g., superstar belongs to Entertainment
For both English and Chinese news and words
|
Identify the news category by URL pattern
7 categories: Entertainment, World, Finance, Sports, Fashion, Technology, Travel
Classify words based on category document frequency
E.g., superstar belongs to Entertainment
For both English and Chinese news and words
|
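The "identify the news category by URL pattern" step on the slide above can be sketched as a seed-word match against the page URL, as the paper describes. The seed lists here are illustrative assumptions, not the paper's actual seed list.

```python
# A page is assigned the first category whose seed word appears in its
# URL; unmatched pages get no category. Seed lists are illustrative.
SEEDS = {
    "Sports": ("football", "sport"),
    "Finance": ("finance", "money"),
    "Entertainment": ("entertainment", "showbiz"),
}

def categorize_url(url):
    url = url.lower()
    for category, seeds in SEEDS.items():
        if any(seed in url for seed in seeds):
            return category
    return None  # uncategorized pages are skipped

print(categorize_url("http://edition.cnn.com/football/match-report"))  # Sports
```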
[] |
GEM-SciDuet-train-90#paper-1230#slide-1
|
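The Easy/Hard test strategy described in the paper content (after t = 3 exposures the learner gets an Easy test with two random same-category words plus one generated distractor; after passing Easy x = 6 times, all three distractors are generated) can be sketched as follows. The word pool is illustrative, and the "generated" part is a random stand-in for the real Lin-distance generator.

```python
import random

# Illustrative same-category word pool; not the paper's actual data.
CATEGORY_WORDS = {"Sports": ["referee", "stadium", "league", "trophy"]}

def make_distractors(target, category, level):
    """Easy: two random same-category words + one 'generated' distractor.
    Hard: all three 'generated'. The generator here is a random stand-in
    for the similarity-based algorithm described in the paper."""
    pool = [w for w in CATEGORY_WORDS[category] if w != target]
    k_generated = 1 if level == "easy" else 3
    random_part = random.sample(pool, 3 - k_generated)
    remaining = [w for w in pool if w not in random_part]
    return random_part + random.sample(remaining, k_generated)

print(make_distractors("referee", "Sports", "easy"))  # 3 distinct words
```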
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests.
|
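The "classify words based on category document frequency" step summarized on the slide above follows the paper's rule: a word is assigned to every category where its document frequency exceeds a tunable threshold times its average across categories. The counts and threshold value below are illustrative assumptions.

```python
# Toy document frequencies: word -> {category: number of docs containing it}.
DF = {
    "superstar": {"Entertainment": 40, "World": 5, "Finance": 5},
    "today": {"Entertainment": 50, "World": 50, "Finance": 50},
}

def categories_for(word, delta=1.5):
    """Assign `word` to each category where its document frequency
    exceeds `delta` times its average across categories (the paper's
    rule; the delta value here is an illustrative choice)."""
    counts = DF[word]
    avg = sum(counts.values()) / len(counts)
    return sorted(c for c, n in counts.items() if n > delta * avg)

# A topical word lands in one category; a general word lands in none.
print(categories_for("superstar"))  # ['Entertainment']
print(categories_for("today"))      # []
```

Note that a word can still be assigned to multiple categories under this rule, matching the paper's observation.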
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, userdriven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficultto-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all pos-sible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, Word-News generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched 3.",
"Finance Finance \"investment\", \"财富\" 4.",
"Sports Sports \"score\", \"比 赛\" Fashion Beauty & Health \"jewelry\", \"时髦\" 6.",
"Technology Technology \"cyber\", \"互联网\" 7.",
"Travel \"natural\" against a target webpage's URL.",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews 5 ) However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1 ), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (a tunable threshold δ 6 ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary is based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. 关闭 密切 亲 亲 亲密 密 密 亲 亲 亲密 密 密 亲 亲 亲密 密 密 (2) ... kids can't stop singing ... verb: 停止, 站, 阻止, 停 ... 停 停 停止 止 止 阻止 停 停 停止 止 止 停 停 停止 止 止 停 停 停止 止 止 ( 免费 免费 自 自 自由 由 由 自 自 自由 由 由 自 自 自由 由 由 (4) ... why Obama's trip to my homeland is meaningful ... noun: 旅, 旅程 ... 旅游 ... 旅 旅 旅 旅 旅 旅行 行 行 旅 旅 旅行 行 行 (5) ... winning more points in the match ... noun: 匹 配, 比 赛, 赛, 敌手, 对手, 火柴 ... 匹配 匹配 比 比 比赛 赛 赛 比 比 比赛 赛 赛 比 比 比赛 赛 赛 (6) ... state department spokeswoman Jen Psaki said that the allies ... noun: 态, 国, 州, ... verb: 声明, 陈述, 述, 申 明 ... 发言 ... adj: 国家的 ... 态 态 发言 发言 人 国 国 国家 家 家 This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two differ-ent dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Standford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tag for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English e's Chinese prospective translations which come from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As the relaxed approach always picks the last candidate, \"发言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
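One way to sketch how the alignment output can override the relaxed match (an assumption of ours about the decision rule; the real system calls the Bing Word Alignment API, which is not reproduced here):

```python
def align_disambiguate(aligned, candidates, fallback):
    """Approach 5 sketch: prefer the word-alignment output when it
    overlaps a dictionary candidate (in either direction, to allow the
    relaxed matching above); otherwise keep the relaxed-match result."""
    for cand in candidates:
        if cand in aligned or aligned in cand:
            return aligned
    return fallback
```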
"Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN news articles, manually annotating the ground truth translation for each target English word. (Footnote 7, Bing Word Alignment API: https://msdn.microsoft.com/enus/library/dn198370.aspx)",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
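The two reported metrics can be computed as below; we assume here that accuracy is measured over the covered words only, which is one plausible reading of the setup:

```python
def coverage_accuracy(outputs, gold):
    """Evaluation sketch: coverage is the fraction of target words for
    which the system returns any translation; accuracy is the fraction
    of returned translations matching the annotated ground truth."""
    answered = [(o, g) for o, g in zip(outputs, gold) if o is not None]
    coverage = len(answered) / len(outputs)
    accuracy = (sum(o == g for o, g in answered) / len(answered)
                if answered else 0.0)
    return coverage, accuracy
```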
"Table 3 shows the experimental results for the six approaches.",
"As expected, the frequency-based baseline achieves 100% coverage, but low accuracy (57.3%); the POS approach performs similarly.",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Most promising is our use of web-based translation APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assessing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context; however, its generation of distractors, based on syntactic and semantic homogeneity, is not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its partof-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995): sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2)) (1), where P(c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
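Equation (1) and the 0.1 threshold can be sketched directly from the concept probabilities (the helper names are ours; a full system would obtain P(c) and the lowest common subsumer from WordNet):

```python
import math

def lin_similarity(p_c1, p_c2, p_lso):
    """Equation (1) sketch: Lin similarity from concept probabilities,
    where p_lso is P(lso(c1, c2)), the probability of the lowest common
    subsumer. Yields 1.0 for identical concepts and approaches 0 for
    unrelated ones."""
    return (2 * math.log(p_lso)) / (math.log(p_c1) + math.log(p_c2))

def is_distractor(sim, threshold=0.1):
    # empirically chosen threshold from the paper
    return sim > threshold
```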
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based on the learner's knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
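The Easy/Hard strategy with the t = 3 and x = 6 thresholds can be sketched as follows (function and argument names are our own; `algorithm_distractors` stands in for the output of the Lin-distance algorithm above):

```python
import random

def make_distractors(target, category_words, algorithm_distractors,
                     exposures, easy_passes, t=3, x=6):
    """Leveled-strategy sketch: Hard (after x=6 Easy passes) draws all
    three distractors from the algorithm; Easy (after t=3 exposures)
    mixes two random same-category words with one algorithmic
    distractor; otherwise the word is not yet testable."""
    if easy_passes >= x:                      # Hard level
        return algorithm_distractors[:3]
    if exposures >= t:                        # Easy level
        pool = [w for w in category_words if w != target]
        return random.sample(pool, 2) + algorithm_distractors[:1]
    return None                               # not enough exposure yet
```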
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average), validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
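The win-counting rule can be sketched as below, assuming each question supplies the three average ratings per system (ties, if any, award a win to neither; this tie handling is our assumption):

```python
def count_wins(ratings_a, ratings_b):
    """Per question, the system whose three distractors have the lower
    summed average rating (i.e., judged more plausible) wins."""
    wins_a = wins_b = 0
    for qa, qb in zip(ratings_a, ratings_b):
        if sum(qa) < sum(qb):
            wins_a += 1
        elif sum(qb) < sum(qa):
            wins_b += 1
    return wins_a, wins_b
```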
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the original English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate WordNews's real-world effectiveness."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
GEM-SciDuet-train-90#paper-1230#slide-1
|
Word Sense Disambiguation
|
Expanded College English Test 4 Dictionary
English, Chinese (relative frequency), part-of-speech
33,664 English-Chinese pairs and
~4k unique English words
Baseline: always choose the most frequent relative
of coverage as it always has a translation
Low accuracy as it lacks context modeling
Approach 1: News Category
Pick the Chinese translation with the same category as the news article
E.g., =>interest in Finance news
Approach 2: Part-of-Speech (POS)
Pick up the Chinese translation with the same POS as the target English word
|
Expanded College English Test 4 Dictionary
English, Chinese (relative frequency), part-of-speech
33,664 English-Chinese pairs and
~4k unique English words
Baseline: always choose the most frequent relative
of coverage as it always has a translation
Low accuracy as it lacks context modeling
Approach 1: News Category
Pick the Chinese translation with the same category as the news article
E.g., =>interest in Finance news
Approach 2: Part-of-Speech (POS)
Pick up the Chinese translation with the same POS as the target English word
|
[] |
GEM-SciDuet-train-90#paper-1230#slide-2
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests. *
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, userdriven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficultto-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all pos-sible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, Word-News generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched against a target webpage's URL.",
"(Table 1, aligned category examples: Finance: \"investment\", \"财富\"; Sports: \"score\", \"比赛\"; Fashion / Beauty & Health: \"jewelry\", \"时髦\"; Technology: \"cyber\", \"互联网\"; Travel: \"natural\".)",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews 5 ). However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1 ), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (by a tunable threshold δ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
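The document-frequency rule for assigning words to news categories can be sketched as follows (a minimal sketch; the function name and the default δ = 1.0 are our assumptions):

```python
def categorize_word(doc_freq_by_category, delta=1.0):
    """Sketch: a word is assigned to every category where its document
    frequency exceeds delta times its average frequency across
    categories; a word can therefore carry multiple categories."""
    freqs = list(doc_freq_by_category.values())
    avg = sum(freqs) / len(freqs)
    return [c for c, f in doc_freq_by_category.items() if f > delta * avg]
```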
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary is based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. (Table 2 shows example sentences with candidate senses and gold translations, e.g., \"stop\" → \"停止\", \"trip\" → \"旅行\", \"match\" → \"比赛\", \"state\" → \"国家\".) This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
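The most-frequent-sense baseline amounts to the following (a sketch; the pair representation of the lexicon entries is our assumption):

```python
def baseline_translate(senses):
    """Baseline sketch: senses is a list of (chinese_sense,
    relative_frequency) pairs for one English word; context is ignored
    and the most frequent sense always wins."""
    if not senses:
        return None
    return max(senses, key=lambda s: s[1])[0]
```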
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two different dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Stanford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tags for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English word e's prospective Chinese translations from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As the relaxed approach always picks the last candidate, \"发言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN news articles, manually annotating the ground truth translation for each target English word. (Footnote 7, Bing Word Alignment API: https://msdn.microsoft.com/enus/library/dn198370.aspx)",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, the frequency-based baseline achieves 100% coverage, but low accuracy (57.3%); the POS approach performs similarly.",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Most promising is our use of web-based translation APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assessing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on a real-life MCQ corpus, and validated that there is syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014), a Swedish language learning system, generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context; however, the generation of distractors, based on syntactic and semantic homogeneity, is not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its part-of-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995): sim(c1, c2) = (2 * log P(lso(c1, c2))) / (log P(c1) + log P(c2)) (1), where P(c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
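The Lin similarity and the 0.1 threshold above can be sketched directly from concept probabilities. This is a minimal illustration of Equation 1, not the WordNet-backed implementation (which would also require an information-content corpus); the helper names are ours:

```python
import math

def lin_similarity(p_c1, p_c2, p_lso):
    """Lin (1998) similarity from concept probabilities: p_c1 and p_c2 are the
    probabilities of encountering each concept, p_lso that of their lowest
    common subsumer. Returns 0 (dissimilar) to 1 (equivalent)."""
    return (2 * math.log(p_lso)) / (math.log(p_c1) + math.log(p_c2))

def plausible_distractors(candidates, threshold=0.1):
    """Keep candidate (word, similarity) pairs above the empirical threshold."""
    return [w for w, sim in candidates if sim > threshold]

# Identical concepts share their lowest common subsumer, so similarity is 1.
print(lin_similarity(0.01, 0.01, 0.01))  # -> 1.0
```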
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based on the learner's knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
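A sketch of this two-level policy follows. The function name and the candidate pools (`same_category_words`, `algo_distractors`) are illustrative, the latter standing in for the WordNet/Lin-distance generator:

```python
import random

def make_distractors(exposures, easy_passes, same_category_words,
                     algo_distractors, t=3, x=6):
    """Return three distractors per the Easy/Hard policy, or [] if the
    learner has not yet seen the word t times."""
    if easy_passes >= x:
        # Hard: all three distractors come from the similarity-based algorithm.
        return random.sample(algo_distractors, 3)
    if exposures >= t:
        # Easy: two random same-category words plus one algorithmic distractor.
        return random.sample(same_category_words, 2) + [random.choice(algo_distractors)]
    return []
```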
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average), validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
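The win-counting rule just described can be expressed compactly (a sketch; the ratings are the per-distractor averages defined above):

```python
def count_wins(questions):
    """questions: list of (ours, baseline) pairs, each a list of three average
    ratings (lower = more plausible). An algorithm wins a question when the
    sum of its three ratings is lower than its competitor's; ties count for
    neither side."""
    ours = sum(1 for a, b in questions if sum(a) < sum(b))
    baseline = sum(1 for a, b in questions if sum(b) < sum(a))
    return ours, baseline
```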
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the orig-inal English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate WordNews's real-world effectiveness."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
GEM-SciDuet-train-90#paper-1230#slide-2
|
WSD Bing Translator Based Methods
|
Approach 3: Substring Match
2. Look up dictionary state department spokeswomen said
into the worlds top 40 clubs
(Bing) is a substring of
Limited by dictionary coverage!
into the worlds top clubs
3. No output using substring match
Approach 4: Relaxed Match
Chinese Segmentation 3. Relaxed Match:
(Bing) is superset of
Chinese Segmentation 3. Two relaxed matches, both wrong
Approach 5: Bing Alignment
Better - No output if the alignment is phrase to phrase
|
Approach 3: Substring Match
2. Look up dictionary state department spokeswomen said
into the worlds top 40 clubs
(Bing) is a substring of
Limited by dictionary coverage!
into the worlds top clubs
3. No output using substring match
Approach 4: Relaxed Match
Chinese Segmentation 3. Relaxed Match:
(Bing) is superset of
Chinese Segmentation 3. Two relaxed matches, both wrong
Approach 5: Bing Alignment
Better - No output if the alignment is phrase to phrase
|
[] |
GEM-SciDuet-train-90#paper-1230#slide-3
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests. *
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hasten the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction-driven systems demand dedicated learner time on a daily basis and are limited by learning materials, as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, user-driven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficult-to-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all possible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number of times (t = 3), WordNews generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched against a target webpage's URL.",
"[Table 1: aligned English and Chinese news categories with example seed words, e.g., Finance: \"investment\", \"财富\"; Sports: \"score\", \"比赛\"; Fashion / Beauty & Health: \"jewelry\", \"时髦\"; Technology: \"cyber\", \"互联网\"; Travel: \"natural\".]",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews). However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (by a tunable threshold δ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
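A minimal sketch of this document-frequency rule (the dictionary layout and the δ value here are illustrative, not taken from the paper):

```python
def categorize_words(doc_freq, delta=1.0):
    """doc_freq maps word -> {category: document frequency}. A word joins
    every category where its frequency exceeds delta times its average
    across categories, so multiple assignments are possible."""
    out = {}
    for word, freqs in doc_freq.items():
        avg = sum(freqs.values()) / len(freqs)
        out[word] = [c for c, f in freqs.items() if f > delta * avg]
    return out

df = {"investment": {"Finance": 40, "Sports": 2, "Travel": 3},
      "score": {"Finance": 5, "Sports": 50, "Travel": 5}}
print(categorize_words(df))  # -> {'investment': ['Finance'], 'score': ['Sports']}
```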
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary are based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. [Table 2: example sentences with their candidate and chosen Chinese senses.] This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
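The most-frequent-sense baseline reduces to an argmax over the stored relative frequencies (a sketch; the frequency table for "book" is illustrative):

```python
def baseline_translation(senses):
    """Pick the Chinese sense with the highest relative frequency; this
    always yields a translation (100% coverage) but ignores context."""
    return max(senses, key=senses.get) if senses else None

# For "book", the noun sense dominates in this illustrative frequency table.
print(baseline_translation({"书": 0.7, "预定": 0.3}))  # -> 书
```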
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two differ-ent dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Stanford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tags for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many cases.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the explicit alignment needed to locate the target English word's translation in the output sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
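Approaches 3 and 4 can be sketched as two string-matching passes (segmentation itself would come from the Stanford segmenter; the helper names are ours):

```python
def substring_match(bing_translation, candidates):
    """Approach 3: keep dictionary senses appearing verbatim in the Bing
    sentence translation; break ties by the longest match, else None."""
    matches = [c for c in candidates if c in bing_translation]
    return max(matches, key=len) if matches else None

def relaxed_match(segmented_translation, candidates):
    """Approach 4: allow a segmented Bing word to be a superset of a
    dictionary sense; heuristically keep the last match."""
    last = None
    for word in segmented_translation:
        for c in candidates:
            if c in word:
                last = c
    return last

# Example 4: Bing outputs 旅行, which relaxes to the dictionary sense 旅.
print(relaxed_match(["旅行"], ["旅", "旅程"]))  # -> 旅
```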
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English word e's prospective Chinese translations from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As the relaxed approach always picks the last candidate, \"发言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"7 https://msdn.microsoft.com/enus/library/dn198370.aspx Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN 8 news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, the frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly.",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assessing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on a real-life MCQ corpus, and validated that there is syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014), a Swedish language learning system, generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context; however, the generation of distractors, based on syntactic and semantic homogeneity, is not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its part-of-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995): sim(c1, c2) = (2 * log P(lso(c1, c2))) / (log P(c1) + log P(c2)) (1), where P(c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based on the learner's knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the original English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate WordNews' real-world effectiveness."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
GEM-SciDuet-train-90#paper-1230#slide-3
|
WSD Evaluation
|
Baseline 1. News Category 2. POS 3. Bing - Substring 4. Bing - Relaxed 5. Bing - Align
|
Baseline 1. News Category 2. POS 3. Bing - Substring 4. Bing - Relaxed 5. Bing - Align
|
[] |
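The record above defines distractor selection via Lin similarity (Equation 1) with an empirically chosen 0.1 threshold. A minimal self-contained sketch of that filtering step, using a toy hand-coded hypernym hierarchy and made-up concept probabilities in place of WordNet synsets and corpus counts (all names and numbers here are illustrative assumptions, not the paper's data):

```python
import math

# Toy IS-A hierarchy and concept probabilities. These stand in for WordNet
# synsets and corpus-estimated frequencies; the values are made up.
PARENT = {"lark": "bird", "sparrow": "bird", "bird": "animal",
          "dog": "animal", "animal": "entity", "entity": None}
P = {"lark": 0.001, "sparrow": 0.002, "bird": 0.05,
     "dog": 0.01, "animal": 0.3, "entity": 1.0}

def ancestors(c):
    # Walk up the hierarchy, including the concept itself.
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def lso(c1, c2):
    # Lowest common subsumer: the first ancestor of c1 that also subsumes c2.
    anc2 = set(ancestors(c2))
    for a in ancestors(c1):
        if a in anc2:
            return a
    return "entity"

def lin_sim(c1, c2):
    # Equation 1: sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2))
    return 2 * math.log(P[lso(c1, c2)]) / (math.log(P[c1]) + math.log(P[c2]))

def candidate_distractors(target, candidates, threshold=0.1):
    # Keep candidates deemed more similar than the empirical 0.1 threshold.
    return [c for c in candidates
            if c != target and lin_sim(target, c) > threshold]
```

A completely dissimilar pair (lowest common subsumer at the root, P = 1) scores 0 and is filtered out, mirroring the record's observation that Lin distance often returns 0 for many pairs.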
GEM-SciDuet-train-90#paper-1230#slide-4
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests.
|
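This record's paper content below describes Approach 3 for word sense disambiguation: recover the target word's translation by checking which dictionary senses occur as substrings of Bing's sentence translation, breaking ties with the longest match. A minimal sketch of that heuristic (the function name and example strings are our own illustrations, not the system's code):

```python
def pick_translation(sentence_translation, candidate_senses):
    # Approach 3 (substring match): keep the dictionary's candidate Chinese
    # senses that appear inside the machine-translated sentence.
    matches = [c for c in candidate_senses if c in sentence_translation]
    if not matches:
        # Out-of-lexicon case: the system shows no translation for the word.
        return None
    # Longest-string-match heuristic to break ties.
    return max(matches, key=len)
```

For the record's "trip" example, both 旅 and 旅行 match the translated sentence, and the longest match 旅行 wins.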
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, user-driven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficult-to-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all possible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, WordNews generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched against a target webpage's URL.",
"(Table 1 residue, aligned English / Chinese news categories with example seed words: 3. Finance / Finance, \"investment\", \"财富\";",
"4. Sports / Sports, \"score\", \"比赛\"; 5. Fashion / Beauty & Health, \"jewelry\", \"时髦\";",
"6. Technology / Technology, \"cyber\", \"互联网\";",
"7. Travel, \"natural\".)",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews). However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (by a tunable threshold δ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary are based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. (Table 2 residue: example sentences with candidate and chosen senses, e.g., 关闭/密切/亲密 for \"close\", 停止/阻止 for \"stop\" in \"... kids can't stop singing ...\", 免费/自由 for \"free\", 旅/旅行 for \"trip\" in \"... why Obama's trip to my homeland is meaningful ...\", 匹配/比赛 for \"match\" in \"... winning more points in the match ...\", and 国家/发言人 for \"state\" in \"... state department spokeswoman Jen Psaki said that the allies ...\".) This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two different dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Stanford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tags for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the explicit word-level alignment needed to locate the translation of the target English word within the translated sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English word e's prospective Chinese translations from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As the relaxed approach always picks the last candidate, \"发言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"(Footnote 7: https://msdn.microsoft.com/enus/library/dn198370.aspx) Evaluation: To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, the frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly.",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assessing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context; however, its generation of distractors -based on syntactic and semantic homogeneity -is not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its part-of-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995): sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2)) (1), where P(c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based on the learner's knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they have only used language learning software infrequently (less than once per week), yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the original English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate WordNews' real-world effectiveness."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
GEM-SciDuet-train-90#paper-1230#slide-4
|
What is a set of suitable distractors
|
Have the same form as the target word
Fit the sentence context
Have proper difficulty level according to users level of mastery
Difficult distractors are more semantically similar to the target words
|
Have the same form as the target word
Fit the sentence context
Have proper difficulty level according to users level of mastery
Difficult distractors are more semantically similar to the target words
|
[] |
GEM-SciDuet-train-90#paper-1230#slide-5
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests. *
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both of which stem from the need to model context.",
"These two issues are performing real-world word sense disambiguation (WSD) to aid translation quality, and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of open-domain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, user-driven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficult-to-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all possible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, Word-News generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched against a target webpage's URL.",
"(Table 1 residue: aligned English/Chinese news categories with example seed words, e.g., Finance/Finance \"investment\", \"财富\"; Sports/Sports \"score\", \"比赛\"; Fashion/Beauty & Health \"jewelry\", \"时髦\"; Technology/Technology \"cyber\", \"互联网\"; Travel.)",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews 5 ). However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1 ), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (a tunable threshold δ 6 ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary are based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. (Table 2 residue: example sentences with candidate Chinese senses; garbled extraction text omitted.) This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two different dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Stanford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tag for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English e's Chinese prospective translations which come from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As relaxed approach always picks up the last candidate, \"发 言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"7 https://msdn.microsoft.com/enus/library/dn198370.aspx Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN 8 news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, the frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly.",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assessing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context; however, the generation of distractors, based on syntactic and semantic homogeneity, is not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its partof-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995): sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2)) (1), where P(c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based on the learner's knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they have only used language learning software infrequently (less than once per week), yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the orig-inal English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate Word-News' real-world effectiveness."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
GEM-SciDuet-train-90#paper-1230#slide-5
|
Generating proper distractors
|
The difficulty level is measured by Lin distance between the target word and candidate distractor in WordNet
Lowest common subsumer synset
A distractor is deemed hard when its similarity to target word is above threshold (e.g., 0.1)
|
The difficulty level is measured by Lin distance between the target word and candidate distractor in WordNet
Lowest common subsumer synset
A distractor is deemed hard when its similarity to target word is above threshold (e.g., 0.1)
|
[] |
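The Lin similarity behind the difficulty threshold in the slide above can be illustrated with a minimal, self-contained sketch. The concept probabilities and the choice of "bird" as the lowest common subsumer are toy values for illustration, not the paper's actual WordNet corpus counts:

```python
import math

# Toy probabilities P(c) for two candidate concepts and their assumed
# lowest common subsumer (lso).  Lin similarity:
#   sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2))
P = {"lark": 0.001, "sparrow": 0.002, "bird": 0.05}

def lin_similarity(p1, p2, p_lso):
    # Returns a score in (0, 1]: 1 means semantically equivalent.
    return 2 * math.log(p_lso) / (math.log(p1) + math.log(p2))

sim = lin_similarity(P["lark"], P["sparrow"], P["bird"])
# A distractor is deemed "hard" when similarity exceeds the 0.1 threshold.
print(round(sim, 3), sim > 0.1)
```

With these toy values the pair scores around 0.46, well above the 0.1 cutoff, so "sparrow" would qualify as a hard distractor for "lark".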
GEM-SciDuet-train-90#paper-1230#slide-6
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, userdriven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficultto-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all pos-sible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, Word-News generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched 3.",
"Finance Finance \"investment\", \"财富\" 4.",
"Sports Sports \"score\", \"比 赛\" Fashion Beauty & Health \"jewelry\", \"时髦\" 6.",
"Technology Technology \"cyber\", \"互联网\" 7.",
"Travel \"natural\" against a target webpage's URL.",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews 5 ) However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1 ), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (a tunable threshold δ 6 ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary is based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. 关闭 密切 亲 亲 亲密 密 密 亲 亲 亲密 密 密 亲 亲 亲密 密 密 (2) ... kids can't stop singing ... verb: 停止, 站, 阻止, 停 ... 停 停 停止 止 止 阻止 停 停 停止 止 止 停 停 停止 止 止 停 停 停止 止 止 ( 免费 免费 自 自 自由 由 由 自 自 自由 由 由 自 自 自由 由 由 (4) ... why Obama's trip to my homeland is meaningful ... noun: 旅, 旅程 ... 旅游 ... 旅 旅 旅 旅 旅 旅行 行 行 旅 旅 旅行 行 行 (5) ... winning more points in the match ... noun: 匹 配, 比 赛, 赛, 敌手, 对手, 火柴 ... 匹配 匹配 比 比 比赛 赛 赛 比 比 比赛 赛 赛 比 比 比赛 赛 赛 (6) ... state department spokeswoman Jen Psaki said that the allies ... noun: 态, 国, 州, ... verb: 声明, 陈述, 述, 申 明 ... 发言 ... adj: 国家的 ... 态 态 发言 发言 人 国 国 国家 家 家 This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two differ-ent dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Standford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tag for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English e's Chinese prospective translations which come from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As relaxed approach always picks up the last candidate, \"发 言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"7 https://msdn.microsoft.com/enus/library/dn198370.aspx Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN 8 news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly .",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assesing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context, however, the generation of distractors -based on syntactic and semantic homogeneityis not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its partof-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995) : sim(c1, c2) = 2 * logP (lso(c1, c2)) logP (c1) + logP (c2) (1) where P (c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based learners' knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they only have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the orig-inal English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate Word-News' real-world effectiveness."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
GEM-SciDuet-train-90#paper-1230#slide-6
|
Distractor Generation
|
1. WordNews Hard: Same word form +
2. Random News: Same word form +
Vary the number of hard distractors based on the user's knowledge level
Beginner: two random + one hard
|
1. WordNews Hard: Same word form +
2. Random News: Same word form +
Vary the number of hard distractors based on the user's knowledge level
Beginner: two random + one hard
|
[] |
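The knowledge-level mixing described in the slide content above (beginners get two random same-category distractors plus one hard, algorithm-generated one; higher levels get all hard distractors) could be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function name `build_options` and the pool arguments are assumptions.

```python
import random

def build_options(target, hard_pool, random_pool, level, rng=random):
    """Assemble a 4-option MCQ, mixing distractor difficulty by learner
    level as the slide above describes: "Easy" (beginner) questions use
    two random same-category words plus one hard distractor, while
    "Hard" questions use three hard distractors."""
    if level == "Easy":
        distractors = rng.sample(random_pool, 2) + rng.sample(hard_pool, 1)
    else:
        distractors = rng.sample(hard_pool, 3)
    options = distractors + [target]
    rng.shuffle(options)  # hide the position of the correct answer
    return options
```

A seeded `random.Random` instance can be passed as `rng` to make the option order reproducible in tests.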
GEM-SciDuet-train-90#paper-1230#slide-7
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, userdriven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficultto-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all pos-sible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, Word-News generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched 3.",
"Finance Finance \"investment\", \"财富\" 4.",
"Sports Sports \"score\", \"比 赛\" Fashion Beauty & Health \"jewelry\", \"时髦\" 6.",
"Technology Technology \"cyber\", \"互联网\" 7.",
"Travel \"natural\" against a target webpage's URL.",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews 5 ) However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1 ), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (a tunable threshold δ 6 ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary is based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. 关闭 密切 亲 亲 亲密 密 密 亲 亲 亲密 密 密 亲 亲 亲密 密 密 (2) ... kids can't stop singing ... verb: 停止, 站, 阻止, 停 ... 停 停 停止 止 止 阻止 停 停 停止 止 止 停 停 停止 止 止 停 停 停止 止 止 ( 免费 免费 自 自 自由 由 由 自 自 自由 由 由 自 自 自由 由 由 (4) ... why Obama's trip to my homeland is meaningful ... noun: 旅, 旅程 ... 旅游 ... 旅 旅 旅 旅 旅 旅行 行 行 旅 旅 旅行 行 行 (5) ... winning more points in the match ... noun: 匹 配, 比 赛, 赛, 敌手, 对手, 火柴 ... 匹配 匹配 比 比 比赛 赛 赛 比 比 比赛 赛 赛 比 比 比赛 赛 赛 (6) ... state department spokeswoman Jen Psaki said that the allies ... noun: 态, 国, 州, ... verb: 声明, 陈述, 述, 申 明 ... 发言 ... adj: 国家的 ... 态 态 发言 发言 人 国 国 国家 家 家 This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two differ-ent dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Standford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tag for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English e's Chinese prospective translations which come from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As relaxed approach always picks up the last candidate, \"发 言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"7 https://msdn.microsoft.com/enus/library/dn198370.aspx Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN 8 news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly .",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assesing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context, however, the generation of distractors -based on syntactic and semantic homogeneityis not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its partof-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995) : sim(c1, c2) = 2 * logP (lso(c1, c2)) logP (c1) + logP (c2) (1) where P (c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based learners' knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they only have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the orig-inal English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate Word-News' real-world effectiveness."
]
}
|
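Equation (1) in the paper content above (Lin similarity over WordNet concepts, with the empirical 0.1 distractor threshold) can be reproduced numerically. This is a minimal sketch: the function names `lin_similarity` and `is_candidate_distractor` are hypothetical, and the concept probabilities passed in are assumed to come from a corpus, as with WordNet information-content files.

```python
import math

def lin_similarity(p_c1, p_c2, p_lso):
    """Lin (1998) similarity, Equation (1) above:
    sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2)).
    Returns 1.0 for identical concepts and tends toward 0 as the
    lowest common subsumer becomes more generic (P(lso) -> 1)."""
    return 2 * math.log(p_lso) / (math.log(p_c1) + math.log(p_c2))

def is_candidate_distractor(p_c1, p_c2, p_lso, threshold=0.1):
    """The paper's empirical rule: keep word pairs whose Lin
    similarity exceeds 0.1 as possible distractors."""
    return lin_similarity(p_c1, p_c2, p_lso) > threshold
```

For example, two concepts each with probability 0.1 whose lowest common subsumer has probability 0.5 score about 0.30, comfortably above the 0.1 cutoff.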
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
|
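Approaches 3 and 4 from the paper content above (substring and relaxed matching of dictionary senses against the Bing sentence translation) could be sketched like this. The helper names are assumptions, and segmentation is taken as a pre-computed token list rather than calling the Stanford segmenter.

```python
def substring_match(sentence_translation, candidate_senses):
    """Approach 3 above: a dictionary sense is accepted if it occurs
    as a substring of the machine-translated sentence; the longest
    match wins, and None is returned when nothing matches."""
    matches = [c for c in candidate_senses if c in sentence_translation]
    return max(matches, key=len) if matches else None

def relaxed_match(segmented_translation, candidate_senses):
    """Approach 4 above: a segmented translation token may be a
    superset of a dictionary sense; per the paper, the last matching
    token is kept."""
    result = None
    for token in segmented_translation:
        if any(sense in token for sense in candidate_senses):
            result = token
    return result
```

The relaxed variant is what lets the system output "旅行" even when the lexicon only contains "旅", matching Example 4 in the paper.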
GEM-SciDuet-train-90#paper-1230#slide-7
|
Human Evaluation
|
WordGap System (Knoop and Wilske, 2013)
Distractor: targets synonyms of synonyms in WordNet
Evaluation 1: WordGap vs. Random News
Evaluation 2: WordGap vs. WordNews Hard
22. Most sex workers that Hail-Jares encounters through street-based outreach
are not in it for a ___, or because they lack the drive to succeed, she says.
One is the target word, three are from WordGap, and the other three are from
WordNews Hard or Random News
# of wins Avg. Score
Lower scores are better
|
WordGap System (Knoop and Wilske, 2013)
Distractor: targets synonyms of synonyms in WordNet
Evaluation 1: WordGap vs. Random News
Evaluation 2: WordGap vs. WordNews Hard
22. Most sex workers that Hail-Jares encounters through street-based outreach
are not in it for a ___, or because they lack the drive to succeed, she says.
One is the target word, three are from WordGap, and the other three are from
WordNews Hard or Random News
# of wins Avg. Score
Lower scores are better
|
[] |
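The win-counting rule used in the evaluation above (a system wins a question when the sum of its three distractors' average plausibility ratings is lower, since lower ratings mean more plausible distractors) can be sketched as follows; `count_wins` is a hypothetical name for illustration.

```python
def count_wins(ratings_a, ratings_b):
    """Per the paper's evaluation: for each question, sum the average
    plausibility ratings of each system's three distractors; the
    system with the lower sum wins that question. Ties award no win.
    ratings_a, ratings_b: lists of (r1, r2, r3) per question."""
    wins_a = wins_b = 0
    for a, b in zip(ratings_a, ratings_b):
        if sum(a) < sum(b):
            wins_a += 1
        elif sum(b) < sum(a):
            wins_b += 1
    return wins_a, wins_b
```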
GEM-SciDuet-train-90#paper-1230#slide-8
|
1230
|
Interactive Second Language Learning from News Websites
|
We propose WordNews, a web browser extension that allows readers to learn a second language vocabulary while reading news online. Injected tooltips allow readers to look up selected vocabulary and take simple interactive tests.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"We discover that two key system components needed improvement, both which stem from the need to model context.",
"These two issues are real-world word sense disambiguation (WSD) to aid translation quality and constructing interactive tests.",
"For the first, we start with Microsoft's Bing translation API but employ additional dictionary-based heuristics that significantly improve translation in both coverage and accuracy.",
"For the second, we propose techniques for generating appropriate distractors for multiple-choice word mastery tests.",
"Our preliminary user survey confirms the need and viability of such a language learning platform.",
"Introduction Learning a new language from language learning websites is time consuming.",
"Research shows that regular practice, guessing, memorization (Rubin, 1975) as well as immersion into real scenarios (Naiman, 1978) hastens the language learning process.",
"To make second language learning attractive and efficient, we interleave language learning with the daily activity of online news reading.",
"Most existing language learning software are either instruction-driven or user-driven.",
"Duolingo 1 is a popular instruction-driven system that teaches through structured lessons.",
"Instruction driven systems demand dedicated learner time on a daily basis and are limited by learning materials as lesson curation is often labor-intensive.",
"In contrast, many people informally use Google Translate 2 or Amazon Kindle's Vocabulary Builder 3 to learn vocabulary, making these prominent examples of user-driven systems.",
"These systems, however, lack the rigor of a learning platform as they omit tests to allow learners to demonstrate mastery.",
"In our work, we merge learning and assessment within the single activity of news reading.",
"Our system also adapts to the learner's skill during assessment.",
"We propose a system to enable online news readers to efficiently learn a new language's vocabulary.",
"Our prototype targets Chinese language learning while reading English language news.",
"Learners are provided translations of opendomain words for learning from an English news page.",
"In the same environment -for words that the system deems mastered by the learner -learners are assessed by replacing the original English text in the article with their Chinese translations and asked to translate them back given a choice of possible translations.",
"The system, WordNews, deployed as a Chrome web browser extension, is triggered when readers visit a preconfigured list of news websites (e.g., CNN, BBC).",
"A key design property of our WordNews web browser extension is that it is only active on certain news websites.",
"This is important as news articles typically are classified with respect to a news category, such as finance, world news, and sports.",
"If we know which category of news the learner is viewing, we can leverage this contextual knowledge to improve the learning experience.",
"In the development of the system, we discovered two key components that can be affected by this context modeling.",
"We report on these developments here.",
"In specific, we propose improved algorithms for two components: (i) for translating English words to Chinese from news articles, (ii) for generating distractors for learner assessment.",
"The WordNews Chrome Extension Our method to directly enhance the web browser is inspired by earlier work in the computer-aided language learning community that also uses the web browser as the delivery vehicle for language learning.",
"WERTi (Metcalf and Meurers, 2006; Meurers et al., 2010) was a monolingual, userdriven system that modified web pages in the target language to highlight or remove certain words from specific syntactic patterns to teach difficultto-learn English grammar.",
"Our focus is to help build Chinese vocabulary for second language learners fluent in English.",
"We give a running scenario to illustrate the use of WordNews.",
"When a learner browses to an English webpage on a news website, our extension either selectively replaces certain original English words with their Chinese translation or underlines the English words, based on user configuration (Figure 1, middle) .",
"While the meaning of the Chinese word is often apparent in context, the learner can choose to learn more about the replaced/underlined word, by mousing over the word to reveal a definition tooltip (Figure 1, left) to aid mastery of the Chinese word.",
"Once the learner has encountered the target word a few times, WordNews assesses learner's mastery by generating a multiple choice translation test on the target word (Figure 1 , right).",
"Our learning platform thus can be viewed as three logical use cases: translating, learning and testing.",
"Translating.",
"We pass the main content of the webpage from the extension client to our server for candidate selection and translation.",
"As certain words are polysemous, the server must select the most appropriate translation among all pos-sible meanings.",
"Our initial selection method replaces any instance of words stored in our dictionary.",
"For translation, we check the word's stored meanings against the machine translation of each sentence obtained from the Microsoft Bing Translation API 4 (hereafter, \"Bing\").",
"Matches are deemed as correct translations and are pushed back to the Chrome client for rendering.",
"Learning.",
"Hovering the mouse over the replacement Chinese word causes a tooltip to appear, which gives the translation, pronunciation, and simplified written form, and a More link that loads additional contextual example sentences (that were previously translated by the backend) for the learner to study.",
"The More link must be clicked for activation, as we find this two-click architecture helps to minimize latency and the loading of unnecessary data.",
"The server keeps record of the learning tooltip activations, logging the enclosing webpage URL, the target word and the user identity.",
"Testing.",
"After the learner encounters the same word a pre-defined number t = 3 times, Word-News generates a multiple choice question (MCQ) test to assess mastery.",
"When the learner hovers over the replaced word, the test is shown for the learner to select the correct answer.",
"When an option is clicked, the server logs the selection and updates the user's test history, and the client reveals the correct answer.",
"News Categories As our learning platform is active only on certain news websites, we can model the news category (for individual words and whole webpages) as additional evidence to help with tasks.",
"Of particular importance to WordNews is the association of words to a news category, which is used downstream in both word sense disambiguation (Section 3) and the generation of distractors in the interactive tests (Section 4).",
"Here, our goal is to automatically find highly relevant words to a particular news category -e.g., \"what are typical finance words?\"",
"We first obtain a large sample of categorized English news webpages, by creating custom crawlers for specific news websites (e.g.",
"CNN).",
"We use a seed list of words that are matched 3.",
"Finance Finance \"investment\", \"财富\" 4.",
"Sports Sports \"score\", \"比 赛\" Fashion Beauty & Health \"jewelry\", \"时髦\" 6.",
"Technology Technology \"cyber\", \"互联网\" 7.",
"Travel \"natural\" against a target webpage's URL.",
"If any match, the webpage is deemed to be of that category.",
"For example, a webpage that has the seed word \"football\" in its URL is deemed of category \"Sports\".",
"Since the news category is also required for Chinese words for word sense disambiguation, we must perform a similar procedure to crawl Chinese news (e.g., BaiduNews 5 ) However, Chinese news sites follow a different categorization scheme, so we first manually align the categories based on observation (see Table 1 ), creating seven bilingual categories: namely, \"World\", \"Technology\", \"Sports\", \"Entertainment\", \"Finance\", \"Fashion\" and \"Travel\".",
"We tokenize and part-of-speech tag the main body text of the categorized articles, discarding punctuation and stopwords.",
"For Chinese, we segment words using the Stanford Chinese word segmenter (Chang et al., 2008) .",
"The remaining words are classified to a news category based on document frequency.",
"A word w is classified to a category c if it appears more often (a tunable threshold δ 6 ) than its average category document frequency.",
"Note that a word can be categorized to multiple categories under this scheme.",
"Word Sense Disambiguation (WSD) Component Our extension needs to show the most appropriate translation sense based on the context.",
"Such a translation selection task -cross-lingual word sense disambiguation -is a common problem in machine translation.",
"In this section, we describe how we improved WordNews' WSD capabilities through a series of six approaches.",
"The context evidence that we leverage for WSD comes in two forms: the news category of the target word and its enclosing sentence.",
"Bilingual Dictionary and Baseline WordNews's server component includes a bilingual lexicon of English words with possible Chinese senses.",
"The English words in our dictionary is based on the publicly-available College English Test (CET 4) list, which has a breadth of about 4,000 words.",
"We augment the list to include the relative frequency among Chinese senses, with their part-of-speech, per English word.",
"Our baseline translation uses the most frequent sense: for an English word to be translated, it chooses the most frequent relative Chinese translation sense c from the possible set of senses C. 关闭 密切 亲 亲 亲密 密 密 亲 亲 亲密 密 密 亲 亲 亲密 密 密 (2) ... kids can't stop singing ... verb: 停止, 站, 阻止, 停 ... 停 停 停止 止 止 阻止 停 停 停止 止 止 停 停 停止 止 止 停 停 停止 止 止 ( 免费 免费 自 自 自由 由 由 自 自 自由 由 由 自 自 自由 由 由 (4) ... why Obama's trip to my homeland is meaningful ... noun: 旅, 旅程 ... 旅游 ... 旅 旅 旅 旅 旅 旅行 行 行 旅 旅 旅行 行 行 (5) ... winning more points in the match ... noun: 匹 配, 比 赛, 赛, 敌手, 对手, 火柴 ... 匹配 匹配 比 比 比赛 赛 赛 比 比 比赛 赛 赛 比 比 比赛 赛 赛 (6) ... state department spokeswoman Jen Psaki said that the allies ... noun: 态, 国, 州, ... verb: 声明, 陈述, 述, 申 明 ... 发言 ... adj: 国家的 ... 态 态 发言 发言 人 国 国 国家 家 家 This method has complete coverage over the CET 4 list (as the word frequency rule always yields a prospective translation), but as it lacks any context model, it is the least accurate.",
"Approach 1: News Category Topic information has been shown to be useful in WSD (Boyd-Graber et al., 2007) .",
"For example, consider the English word interest.",
"In finance related articles, \"interest\" is more likely to carry the sense of \"a share, right, or title in the ownership of property\" (\"利息\" in Chinese), over other senses.",
"Therefore, analysing the topic of the original article and selecting the translation with the same topic label might help disambiguate the word sense.",
"For a target English word e, for each prospective Chinese sense c ∈ C, choose the first (in terms of relative frequency) sense that has the same news category as the containing webpage.",
"Approach 2: Part-of-Speech Part-of-Speech (POS) tags are also useful for word sense disambiguation (Wilks and Stevenson, 1998 ) and machine translation (Toutanova et al., 2002; Ueffing and Ney, 2003) .",
"For example, the English word \"book\" can function as a verb or a noun, which gives rise to two differ-ent dominant senses: \"reserve\" (\"预定\" in Chinese) and \"printed work\" (\"书\"), respectively.",
"As senses often correspond cross-lingually, knowledge of the English word's POS can assist disambiguation.",
"We employ the Standford log-linear Part-of-Speech tagger (Toutanova et al., 2003) to obtain the POS tag for the English word, whereas the POS tag for target Chinese senses are provided in our dictionary.",
"In cases where multiple candidate Chinese translations fit the same sense, we again break ties using relative frequency of the prospective candidates.",
"Approaches 3-5: Machine Translation Neighbouring words provide the necessary context to perform WSD in many contexts.",
"In our work, we consider the sentence in which the target word appears as our context.",
"We then acquire its translation from Microsoft Bing Translator using its API.",
"As we access the translation as a third party, the Chinese translation comes as-is, without the needed explicit word to locate the target English word to translate in the original input sentence.",
"We need to perform alignment of the Chinese and English sentences in order to recover the target word's translation from the sentence translation.",
"Approach 3 -Substring Match.",
"As potential Chinese translations are available in our dictionary, a straightforward use of substring matching recovers a Chinese translation; i.e., check whether the candidate Bing translation is a substring of the Chinese translation.",
"If more than one candidate matches, we use the longest string match heuristic and pick the one with the longest match as the final output.",
"If none matches, the system does not output a translation for the word.",
"Approach 4 -Relaxed Match.",
"The final rule in the substring match method unfortunately fires often, as the coverage of WordNews's lexicon is limited.",
"As we wish to offer correct translations that are not limited by our lexicon, we relax our substring condition, allowing the Bing translation to be a superset of a candidate translation in our dictionary (see Example 4 in Table 2 , where the Bing translation \"旅行\" is allowed to be relaxed to match the dictionary \"旅\").",
"To this end, we must know the extent of the words in the translation.",
"We first segment the obtained Bing translation with the Stanford Chinese Word Segmenter, and then use string matching to find a Chinese translation c. If more than one candidate matches, we heuristically use the last matched candidate.",
"This technique significantly augments the translation range of our extension beyond the reach of our lexicon.",
"Approach 5 -Word Alignment.",
"The relaxed method runs into difficulties when the target English e's Chinese prospective translations which come from our lexicon generate several possible matches.",
"Consider Example 6 in Table 2 .",
"The target English word \"state\" has corresponding Chinese entries \"发言\" and \"国家的\" in our dictionary.",
"For this reason, both \"国家\" (\"country\", correct) and \"发言人\" (\"spokeswoman\", incorrect) are relaxed matches.",
"As relaxed approach always picks up the last candidate, \"发 言人\" is the final output, which is incorrect.",
"To address this, we use the Bing Word Alignment API 7 to provide a possibly different prospective Chinese sense c. In this example, \"state\" matches \"国家\" (\"country\", correct) from word alignment, and the final algorithm chooses \"国家\" as the output.",
"7 https://msdn.microsoft.com/enus/library/dn198370.aspx Evaluation To evaluate the effectiveness of our proposed methods, we randomly sampled 707 words and their sentences from recent CNN 8 news articles, manually annotating the ground truth translation for each target English word.",
"We report both the coverage (i.e., the ability of the system to return a translation) and accuracy (i.e., whether the translation is contextually accurate).",
"Table 3 shows the experimental results for the six approaches.",
"As expected, frequency-based baseline achieves 100% coverage, but a low accuracy (57.3%); POS also performs similarly .",
"The category-based approach performs the worst, due to low coverage.",
"This is because news category only provides a high-level context and many of the Chinese word senses do not have a strong topic tendency.",
"Of most promise is our use of web based translation related APIs.",
"The three Bing methods iteratively improve the accuracy and have reasonable coverage.",
"Among all the methods, the additional step of word alignment is the best in terms of accuracy (97.4%), significantly bettering the others.",
"This validates previous work that sentence-level context is helpful in WSD.",
"Distractor Generation Component Assesing mastery over vocabulary is the other key functionality of our prototype learning platform.",
"The generation of the multiple choice selection test requires the selection of alternative choices aside from the correct answer of the target word.",
"In this section, we investigate a way to automatically generate such choices (called distractors in the literature) in English, given a target word.",
"Related Work Multiple choice question (MCQ) is widely used in vocabulary learning.",
"Semantic and syntactic properties of the target word need to be considered while generating their distractors.",
"In particular, (Pho et al., 2014) did an analysis on real-life MCQ corpus, and validated there are syntactic and semantic homogeneity among distractors.",
"Based on this, automatic distractor generation algorithms have been proposed.",
"For instance, (Lee and Seneff, 2007) generate distractors for English prepositions based on collocations, and idiosyncratic incorrect usage learned from non-native English corpora.",
"Lärka (Volodina et al., 2014 ) -a Swedish language learning system -generates vocabulary assessment exercises using a corpus.",
"They also have different modes of exercise generation to allow learning and testing via the same interface.",
"(Susanti et al., 2015) generate distractors for TOEFL vocabulary test using WordNet and word sense disambiguation given a target word.",
"While these approaches serve in testing mastery, they do not provide the capability for learning new vocabulary in context.",
"The most related prior work is WordGap system (Knoop and Wilske, 2013) , a mobile application that generates MCQ tests based on the text selected by users.",
"WordGap customizes the reading context, however, the generation of distractors -based on syntactic and semantic homogeneityis not contextualized.",
"Approach WordNews postulates \"a set of suitable distractors\" as: 1) having the same form as the target word, 2) fitting the sentence's context, and 3) having proper difficulty level according to user's level of mastery.",
"As input to the distractor generation algorithm, we provide the target word, its partof-speech (obtained by tagging the input sentence first) and the enclosing webpage's news category.",
"We restrict the algorithm to produce distractors matching the input POS, and which match the news category of the page.",
"We can design the test to be more difficult by choosing distractors that are more similar to the target word.",
"By varying the semantic distance, we can generate tests at varying difficulty levels.",
"We quantify similarity by using the Lin distance (Lin, 1998) between two input candidate concepts in WordNet (Miller, 1995) : sim(c1, c2) = 2 * logP (lso(c1, c2)) logP (c1) + logP (c2) (1) where P (c) denotes the probability of encountering concept c, and lso(c1, c2) denotes the lowest common subsumer synset, which is the lowest node in the WordNet hierarchy that is a hypernym of both c1 and c2.",
"This returns a score from 0 (completely dissimilar) to 1 (semantically equivalent).",
"If we use a target word e as the starting point, we can use WordNet to retrieve related words using WordNet relations (hypernyms/hyponyms, synonyms/antonyms) and determine their similarity using Lin distance.",
"We empirically set 0.1 as the similarity threshold -words that are deemed more similar than 0.1 are returned as possible distractors for our algorithm.",
"We note that Lin distance often returns a score of 0 for many pairs and the threshold of 0.1 allows us to have a large set of distractors to choose from, while remaining fairly efficient in run-time distractor generation.",
"We discretize a learner's knowledge of the word based on their prior exposure to it.",
"We then adopt a strategy to generate distractors for the input word based learners' knowledge level: Easy: The learner has been exposed to the word at least t = 3 times.",
"Two distractors are randomly selected from words that share the same news category as the target word e. The third distractor is generated using our algorithm.",
"Hard: The learner has passed the Easy level test x = 6 times.",
"All three distractors are generated from the same news category, using our algorithm.",
"Evaluation The WordGap system (Knoop and Wilske, 2013) represents the most related prior work on automated distractor generation, and forms our baseline.",
"WordGap adopts a knowledge-based approach: selecting the synonyms of synonyms (also computed by WordNet) as distractors.",
"They first select the most frequently used word, w1, from the target word's synonym set, and then select the synonyms of w1, called s1.",
"Finally, WordGap selects the three most frequently-used words from s1 as distractors.",
"We conducted a human subject evaluation of distractor generation to assess its fitness for use.",
"The subjects were asked to rank the feasibility of a distractor (inclusive of the actual answer) from a given sentential context.",
"The contexts were sentences retrieved from actual news webpages, identical to WordNews's use case.",
"We randomly selected 50 sentences from recent news articles, choosing a noun or adjective from the sentence as the target word.",
"We show the original sentence (leaving the target word as blank) as the context, and display distractors as choices (see Figure 2 ).",
"Subjects were required to read the sentence and rank the distractors by plausibility: 1 (the original answer), 2 (most plausible alternative) to 7 (least plausible alternative).",
"We recruited 15 subjects from within our institution for the survey.",
"All of them are fluent English speakers, and half are native speakers.",
"We evaluated two scenarios, for two different purposes.",
"In both evaluations, we generate three distractors using each of the two systems, and add the original target word for validation (7 options in total, conforming to our ranking options of 1-7).",
"Since we have news category information, we wanted to check whether that information alone could improve distractor generation.",
"Evaluation 1 tests the WordGap baseline system versus a Random News system that uses random word selection.",
"It just uses the constraint that chosen distractors must conform to the news category (be classified to the news category of the target word).",
"In our Evaluation 2, we tested our Hard setup where our algorithm is used to generate all distractors against WordGap.",
"This evaluation aims to assess the efficacy of our algorithm over the baseline.",
"Results and Analysis Each question was answered by five different users.",
"We compute the average ranking for each choice.",
"A lower rating means a more plausible (harder) distractor.",
"The rating for all the target words is low (1.1 on average) validating their truth and implying that the subjects answered the survey seriously, assuring the validity of the evaluation.",
"For each question, we deem an algorithm to be the winner if its three distractors as a whole (the sum of three average ratings) are assessed to be more plausible than the distractors by its competitor.",
"We calculate the number of wins for each algorithm over the 50 questions in each evaluation.",
"We display the results of both evaluations in Table 4 and Table 5 .",
"We see that the WordGap baseline outperforms the random selection, constrained solely by news category, by 4 wins and a 0.26 lower average score.",
"This shows that word news category alone is insufficient for generating good distractors.",
"When a target word does not have a strong category tendency, e.g., \"venue\" and \"week\", the random news method cannot select highly plausible distractors.",
"In the second table, our distractor algorithm significantly betters the baseline in both number of wins (8 more) and average score (0.67 lower).",
"This further confirms that context and semantic information are complementary for distractor generation.",
"As we mentioned before, a good distractor should fit the reading context and have a certain level of difficulty.",
"Finally, in Table 6 we show the distractors generated for the target word \"lark\" in the example survey question (Figure 2 ).",
"Platform Viability and Usability Survey We have thus far described and evaluated two critical components that can benefit from capturing the learner's news article context.",
"In the larger context, we also need to check the viability of second language learning intertwined with news reading.",
"In a requirements survey prior to the prototype development, two-thirds of the respondents indicated that although they have interest in learning a second language, they only have only used language learning software infrequently (less than once per week) yet frequently read news, giving us motivation for our development.",
"Post-prototype, we conducted a summative survey to assess whether our prototype product satisfied the target niche, in terms of interest, usability and possible interference with normal reading activities.",
"We gathered 16 respondents, 15 of which were between the ages of 18-24.",
"11 (the majority) also claimed native Chinese language proficiency.",
"The respondents felt that the extension platform was a viable language learning platform (3.4 of 5; on a scale of 1 \"disagreement\" to 5 \"agreement\") and that they would like to try it when available for their language pair (3 of 5).",
"In our original prototype, we replaced the orig-inal English word with the Chinese translation.",
"While most felt that replacing the original English with the Chinese translation would not hamper their reading, they still felt a bit uncomfortable (3.7 of 5).",
"This finding prompted us to change the default learning tooltip behavior to underlining to hint at the tooltip presence.",
"Conclusion We described WordNews, a client extension and server backend that transforms the web browser into a second language learning platform.",
"Leveraging web-based machine translation APIs and a static dictionary, it offers a viable user-driven language learning experience by pairing an improved, context-sensitive tooltip definition service with the generation of context-sensitive multiple choice questions.",
"WordNews is potentially not confined to use in news websites; one respondent noted that they would like to use it on arbitrary websites, but currently we feel usable word sense disambiguation is difficult enough even in the restricted news domain.",
"We also note that respondents are more willing to use a mobile client for news reading, such that our future development work may be geared towards an independent mobile application, rather than a browser extension.",
"We also plan to conduct a longitudinal study with a cohort of second language learners to better evaluate Word-News' real-world effectiveness."
]
}
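The Lin-distance filtering used for distractor generation above (Equation 1, with the 0.1 plausibility threshold) can be sketched directly from the formula. The toy concept hierarchy and probabilities below are assumptions for illustration only, not the paper's actual WordNet data:

```python
import math

# Hypothetical concept hierarchy: child -> parent (illustration only).
PARENT = {"lark": "songbird", "sparrow": "songbird",
          "songbird": "bird", "penguin": "bird", "bird": "animal"}

# Hypothetical corpus probabilities P(c) of encountering each concept.
P = {"lark": 0.001, "sparrow": 0.002, "songbird": 0.01,
     "penguin": 0.003, "bird": 0.1, "animal": 0.5}

def ancestors(c):
    """Return c plus all of its ancestors, nearest first."""
    chain = [c]
    while c in PARENT:
        c = PARENT[c]
        chain.append(c)
    return chain

def lso(c1, c2):
    """Lowest common subsumer: the first ancestor shared by c1 and c2."""
    shared = set(ancestors(c2))
    for a in ancestors(c1):
        if a in shared:
            return a
    return None

def lin_similarity(c1, c2):
    """sim(c1, c2) = 2 * log P(lso(c1, c2)) / (log P(c1) + log P(c2))."""
    common = lso(c1, c2)
    if common is None:
        return 0.0
    return 2.0 * math.log(P[common]) / (math.log(P[c1]) + math.log(P[c2]))

def distractor_candidates(target, candidates, threshold=0.1):
    """Keep candidates deemed more similar than the 0.1 threshold, as in the paper."""
    return [c for c in candidates
            if c != target and lin_similarity(target, c) > threshold]
```

With real WordNet data, P(c) would come from information-content counts over a corpus, but the thresholding logic stays the same.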
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"5.",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.3.1",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"The WordNews Chrome Extension",
"News Categories",
"Fashion",
"Word Sense Disambiguation (WSD) Component",
"Bilingual Dictionary and Baseline",
"Approach 1: News Category",
"Approach 2: Part-of-Speech",
"Approaches 3-5: Machine Translation",
"Evaluation",
"Distractor Generation Component",
"Related Work",
"Approach",
"Evaluation",
"Results and Analysis",
"Platform Viability and Usability Survey",
"Conclusion"
]
}
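The substring-match and relaxed-match heuristics (Approaches 3 and 4 in the WSD section above) can be sketched as follows. The example sentence, its pre-computed segmentation, and the choice to return the segmented Bing word rather than the dictionary sense are assumptions for illustration, not the system's actual pipeline:

```python
def substring_match(bing_translation, candidates):
    """Approach 3: pick the dictionary sense with the longest substring
    match inside the full Bing sentence translation; None if nothing matches."""
    matches = [c for c in candidates if c in bing_translation]
    return max(matches, key=len) if matches else None

def relaxed_match(segmented_bing, candidates):
    """Approach 4: allow a segmented Bing word to be a superset of a
    dictionary sense; heuristically keep the last matched word."""
    last = None
    for word in segmented_bing:
        for c in candidates:
            if c in word:
                last = word
    return last

# Example 4 from Table 2: the dictionary lists 旅/旅程 for "trip",
# while Bing translates the word as 旅行.
senses = ["旅", "旅程"]
print(substring_match("奥巴马的旅行很有意义", senses))            # 旅 (substring hit)
print(relaxed_match(["奥巴马", "的", "旅行", "很", "有意义"], senses))  # 旅行 (relaxed hit)
```

The relaxed rule widens coverage beyond the lexicon, at the cost of the last-candidate ambiguity that Approach 5 (word alignment) later resolves.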
|
GEM-SciDuet-train-90#paper-1230#slide-8
|
Conclusion
|
WordNews: a Chrome extension enabling interactive vocabulary learning when reading online news
Word Sense Disambiguation based on Machine Translation
Distractor Generation based on news context and semantic similarity
Mobile client and longitudinal user study
|
WordNews: a Chrome extension enabling interactive vocabulary learning when reading online news
Word Sense Disambiguation based on Machine Translation
Distractor Generation based on news context and semantic similarity
Mobile client and longitudinal user study
|
[] |
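The document-frequency rule from the News Categories section above (a word is assigned to a category when its frequency there exceeds its average category document frequency by a tunable threshold δ) can be sketched as below. Interpreting δ as a multiplicative factor, and the sample counts, are assumptions; the paper does not fix the exact comparison form:

```python
def categorize(doc_freq, delta=1.5):
    """doc_freq: {word: {category: document frequency}}.
    Assign each word to every category whose frequency exceeds
    delta times its average across categories; a word may thus
    receive multiple categories, or none."""
    assignments = {}
    for word, per_cat in doc_freq.items():
        avg = sum(per_cat.values()) / len(per_cat)
        assignments[word] = [c for c, df in per_cat.items() if df > delta * avg]
    return assignments

# Hypothetical counts: "score" concentrates in Sports, "investment" in Finance,
# while "today" is spread evenly and gets no category.
df = {
    "score":      {"Sports": 90, "Finance": 10, "World": 20},
    "investment": {"Sports": 5,  "Finance": 60, "World": 10},
    "today":      {"Sports": 30, "Finance": 30, "World": 30},
}
print(categorize(df))
```

Words left uncategorized by this rule are exactly the weak-topic-tendency cases the evaluation blames for the category approach's low coverage.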
GEM-SciDuet-train-91#paper-1232#slide-0
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which have been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topic and can easily be integrated for search in Digital Libraries. The average precision for top-ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related di culties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency -inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three di↵erent recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a users search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by di↵erent occurrences of t and d. Variations of the basis term weighing process have been proposed, like normalization of document length or by scaling the tf values but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals or better journal names play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration e↵ects (Bradfords law of scattering) that appear typically in journal literature.",
"Bradfordizing defines di↵erent zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive e↵ect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and di↵ers greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations basing on author centrality can be successfully be used as query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the di↵erent modules and to o↵er an interactive web-based prototype.",
"In general these retrieval services can be applied in di↵erent query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three di↵erent assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r r + nr (1) for each topic, where r is the number of all relevant assessed recommendations and r+nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare case one recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"In average the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistically testing like t-test or Wilcoxon because 6 Data quality is mentioned by two researchers and assessed two times.",
"of our very small sample.",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendation from STR, JNR and ANR STR JNR ANR AP 0.743 0.728 0.749 AP@1 0.957 0.826 0.957 AP@2 0.826 0.848 0.864 AP@4 0.750 0.726 0.750 We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures ANR and STR are clearly better the JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4 ) in Table 2 we can see that all three recommendation services move closer together when more recommendation are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three di↵erent researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without PhD), 8 PhD students which had 1-4 years research experience and a small group of 3 postdocs with 4 and more years research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of researchers research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The di↵erent services each favor quite other -but still relevant -recommendations and relevance distribution di↵ers largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an authors work can quickly rate an authors name relevance for a specific topic.",
"In this context journal names are not that problematic because they published widely on di↵erent topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are unexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but show clearly that bibliometricenhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research e↵ort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
|
GEM-SciDuet-train-91#paper-1232#slide-0
|
Intro
|
Leibniz Institute for the Social Sciences
* Typical difficulties in searching digital libraries (DL)
Vagueness between search and indexing terms
Weak rankings based on term frequency (tf*idf), also others ...
Assumption I: a user's search (experience) should improve by using recommendation services (Mutschke et al., 2011), esp. in:
Assumption II: scholarly user's search with keywords, author names and journal names and use search tactics (Carevic & Mayr, 2016 to appear)
|
Leibniz Institute for the Social Sciences
* Typical difficulties in searching digital libraries (DL)
Vagueness between search and indexing terms
Weak rankings based on term frequency (tf*idf), also others ...
Assumption I: a user's search (experience) should improve by using recommendation services (Mutschke et al., 2011), esp. in:
Assumption II: scholarly user's search with keywords, author names and journal names and use search tactics (Carevic & Mayr, 2016 to appear)
|
[] |
GEM-SciDuet-train-91#paper-1232#slide-1
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which have been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topics and can easily be integrated for search in Digital Libraries. The average precision for top ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related di culties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency -inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three di↵erent recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a users search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by di↵erent occurrences of t and d. Variations of the basis term weighing process have been proposed, like normalization of document length or by scaling the tf values but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals or better journal names play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration e↵ects (Bradfords law of scattering) that appear typically in journal literature.",
"Bradfordizing defines di↵erent zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive e↵ect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and di↵ers greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations basing on author centrality can be successfully be used as query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the di↵erent modules and to o↵er an interactive web-based prototype.",
"In general these retrieval services can be applied in di↵erent query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three di↵erent assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r r + nr (1) for each topic, where r is the number of all relevant assessed recommendations and r+nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare case one recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"In average the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistically testing like t-test or Wilcoxon because 6 Data quality is mentioned by two researchers and assessed two times.",
"of our very small sample.",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendation from STR, JNR and ANR STR JNR ANR AP 0.743 0.728 0.749 AP@1 0.957 0.826 0.957 AP@2 0.826 0.848 0.864 AP@4 0.750 0.726 0.750 We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures ANR and STR are clearly better the JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4 ) in Table 2 we can see that all three recommendation services move closer together when more recommendation are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three di↵erent researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without PhD), 8 PhD students which had 1-4 years research experience and a small group of 3 postdocs with 4 and more years research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of researchers research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The di↵erent services each favor quite other -but still relevant -recommendations and relevance distribution di↵ers largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an authors work can quickly rate an authors name relevance for a specific topic.",
"In this context journal names are not that problematic because they published widely on di↵erent topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are unexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but show clearly that bibliometricenhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research e↵ort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
|
GEM-SciDuet-train-91#paper-1232#slide-1
|
Recommender Services
|
Leibniz Institute for the Social Sciences
IRM project at GESIS (Lüke et al., 2013) has developed:
- Search term recommender - STR (co-word analysis/Jaccard index)
- Journal name recommender - JNR (core journals/bradfordizing)
- Author name recommender - ANR (co-authorship analysis/betweenness centrality)
You type a query and get specific recommendations:
core journals: Soziale Systeme (105), Zeitschrift für ..., Zeitschrift für Rechtssoziologie (25)
central authors: Luhmann, Niklas; Luhmann, Hans-Jochen; Schimank, Uwe; Tyrell, Hartmann; Hartmann, Jutta; Fischedick, Manfred
|
Leibniz Institute for the Social Sciences
IRM project at GESIS (Lüke et al., 2013) has developed:
- Search term recommender - STR (co-word analysis/Jaccard index)
- Journal name recommender - JNR (core journals/bradfordizing)
- Author name recommender - ANR (co-authorship analysis/betweenness centrality)
You type a query and get specific recommendations:
core journals: Soziale Systeme (105), Zeitschrift für ..., Zeitschrift für Rechtssoziologie (25)
central authors: Luhmann, Niklas; Luhmann, Hans-Jochen; Schimank, Uwe; Tyrell, Hartmann; Hartmann, Jutta; Fischedick, Manfred
|
[] |
GEM-SciDuet-train-91#paper-1232#slide-2
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which have been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topic and can easily be integrated for search in Digital Libraries. The average precision for top ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related di culties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency -inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three di↵erent recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a users search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by di↵erent occurrences of t and d. Variations of the basis term weighing process have been proposed, like normalization of document length or by scaling the tf values but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals or better journal names play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration e↵ects (Bradfords law of scattering) that appear typically in journal literature.",
"Bradfordizing defines di↵erent zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive e↵ect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and di↵ers greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations basing on author centrality can be successfully be used as query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the di↵erent modules and to o↵er an interactive web-based prototype.",
"In general these retrieval services can be applied in di↵erent query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three di↵erent assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r r + nr (1) for each topic, where r is the number of all relevant assessed recommendations and r+nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare case one recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"In average the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistically testing like t-test or Wilcoxon because 6 Data quality is mentioned by two researchers and assessed two times.",
"of our very small sample.",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendation from STR, JNR and ANR STR JNR ANR AP 0.743 0.728 0.749 AP@1 0.957 0.826 0.957 AP@2 0.826 0.848 0.864 AP@4 0.750 0.726 0.750 We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures ANR and STR are clearly better the JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4 ) in Table 2 we can see that all three recommendation services move closer together when more recommendation are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three di↵erent researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without PhD), 8 PhD students which had 1-4 years research experience and a small group of 3 postdocs with 4 and more years research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of researchers research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The di↵erent services each favor quite other -but still relevant -recommendations and relevance distribution di↵ers largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an authors work can quickly rate an authors name relevance for a specific topic.",
"In this context journal names are not that problematic because they published widely on di↵erent topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are unexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but show clearly that bibliometricenhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research e↵ort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
|
GEM-SciDuet-train-91#paper-1232#slide-2
|
Case Study
|
Leibniz Institute for the Social Sciences
* 19 social sciences researchers (seniors, research staff and
PhD candidates) assessed topical relevance for STR, JNR and
ANR for their research topics/familiar field
23 topics have been assessed
[e.g. urban sociology, interviewer error, theory of action, atypical employment, ...]
They assessed 4-5 recommendations for each recommender
All recommendations were derived from the social sciences database SOLIS
|
Leibniz Institute for the Social Sciences
* 19 social sciences researchers (seniors, research staff and
PhD candidates) assessed topical relevance for STR, JNR and
ANR for their research topics/familiar field
23 topics have been assessed
[e.g. urban sociology, interviewer error, theory of action, atypical employment, ...]
They assessed 4-5 recommendations for each recommender
All recommendations were derived from the social sciences database SOLIS
|
[] |
GEM-SciDuet-train-91#paper-1232#slide-3
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which have been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topic and can easily be integrated for search in Digital Libraries. The average precision for top ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related di culties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency -inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three di↵erent recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a users search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by di↵erent occurrences of t and d. Variations of the basis term weighing process have been proposed, like normalization of document length or by scaling the tf values but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals or better journal names play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration e↵ects (Bradfords law of scattering) that appear typically in journal literature.",
"Bradfordizing defines di↵erent zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive e↵ect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and differs greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations based on author centrality can successfully be used as a query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the different modules and to offer an interactive web-based prototype.",
"In general these retrieval services can be applied in different query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three different assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r / (r + nr) (1) for each topic, where r is the number of all relevant assessed recommendations and r + nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare cases a recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"On average the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistical testing like the t-test or Wilcoxon because of our very small sample.",
"(Footnote 6: Data quality is mentioned by two researchers and assessed twice.)",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendations from STR, JNR and ANR. STR / JNR / ANR: AP 0.743 / 0.728 / 0.749; AP@1 0.957 / 0.826 / 0.957; AP@2 0.826 / 0.848 / 0.864; AP@4 0.750 / 0.726 / 0.750. We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures, ANR and STR are clearly better than JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4) in Table 2 we can see that all three recommendation services move closer together when more recommendations are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three different researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without a PhD), 8 PhD students who had 1-4 years research experience and a small group of 3 postdocs with 4 or more years research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of researchers research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The different services each favor rather different - but still relevant - recommendations and the relevance distribution differs largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an author's work can quickly rate the relevance of an author's name for a specific topic.",
"In this context journal names are not that problematic because journals publish widely on different topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are inexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but shows clearly that bibliometric-enhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research effort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
|
GEM-SciDuet-train-91#paper-1232#slide-3
|
Results I
|
Leibniz Institute for the Social Sciences
* >70% of the recommendations are relevant
Precision of ANR is slightly better than STR and JNR
* Top 1 recommendation of JNR is more often not relevant
|
Leibniz Institute for the Social Sciences
* >70% of the recommendations are relevant
Precision of ANR is slightly better than STR and JNR
* Top 1 recommendation of JNR is more often not relevant
|
[] |
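The evaluation described in this record computes a per-topic precision P = r / (r + nr) over the assessed recommendations and then averages it across topics at cutoffs 1, 2 and 4 (AP@1, AP@2, AP@4). A minimal sketch of that computation; the function names and the boolean-list representation are illustrative assumptions, not taken from the paper:

```python
def precision(assessed):
    # P = r / (r + nr): share of relevant items among all assessed ones.
    # `assessed` is a list of booleans (True = relevant), best-ranked first.
    return sum(assessed) / len(assessed)

def ap_at_k(topics, k):
    # Average precision at cutoff k: the mean of the per-topic precision
    # over the top-k recommendations, as in the reported AP@1, AP@2, AP@4.
    return sum(precision(t[:k]) for t in topics) / len(topics)
```

For example, two topics with relevance lists [True, False] and [True, True] yield an AP@2 of 0.75 under this reading of the metric.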
GEM-SciDuet-train-91#paper-1232#slide-4
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which had been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topic and can easily be integrated for search in Digital Libraries. The average precision for top-ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related difficulties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency - inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three different recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a users search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by different occurrences of t and d. Variations of the basic term weighting process have been proposed, like normalization of document length or scaling of the tf values, but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals, or rather journal names, play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration effects (Bradford's law of scattering) that appear typically in journal literature.",
"Bradfordizing defines different zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive effect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and differs greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations based on author centrality can successfully be used as a query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the different modules and to offer an interactive web-based prototype.",
"In general these retrieval services can be applied in different query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three different assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r / (r + nr) (1) for each topic, where r is the number of all relevant assessed recommendations and r + nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare cases a recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"On average the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistical testing like the t-test or Wilcoxon because of our very small sample.",
"(Footnote 6: Data quality is mentioned by two researchers and assessed twice.)",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendations from STR, JNR and ANR. STR / JNR / ANR: AP 0.743 / 0.728 / 0.749; AP@1 0.957 / 0.826 / 0.957; AP@2 0.826 / 0.848 / 0.864; AP@4 0.750 / 0.726 / 0.750. We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures, ANR and STR are clearly better than JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4) in Table 2 we can see that all three recommendation services move closer together when more recommendations are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three different researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without a PhD), 8 PhD students who had 1-4 years research experience and a small group of 3 postdocs with 4 or more years research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of researchers research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The different services each favor rather different - but still relevant - recommendations and the relevance distribution differs largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an author's work can quickly rate the relevance of an author's name for a specific topic.",
"In this context journal names are not that problematic because journals publish widely on different topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are inexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but shows clearly that bibliometric-enhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research effort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
|
GEM-SciDuet-train-91#paper-1232#slide-4
|
Results II
|
Leibniz Institute for the Social Sciences
Practitioners tend to assess author names more relevant
* Postdocs tend to assess journal names more relevant
|
Leibniz Institute for the Social Sciences
Practitioners tend to assess author names more relevant
* Postdocs tend to assess journal names more relevant
|
[] |
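The paper content in this record describes Bradfordizing as re-ranking a topical result set so that documents from the most productive ("core") journals come first. A minimal pure-Python sketch of that idea; the data layout and names are illustrative assumptions, not the sowiport implementation:

```python
from collections import Counter

def bradfordize(result_set):
    # Re-rank (doc_id, journal) pairs by journal productivity: documents
    # from journals that publish most often on the topic (the Bradford
    # core) are ranked first. Python's stable sort keeps the original
    # order within each journal zone.
    productivity = Counter(journal for _, journal in result_set)
    return sorted(result_set,
                  key=lambda doc: productivity[doc[1]],
                  reverse=True)
```

Journal name recommendation as evaluated in the study would then amount to reading off the journals at the top of this re-ranked list.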
GEM-SciDuet-train-91#paper-1232#slide-5
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which had been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topic and can easily be integrated for search in Digital Libraries. The average precision for top-ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related di culties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency -inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three different recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a user's search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by different occurrences of t and d. Variations of the basic term weighting process have been proposed, like normalization of document length or by scaling the tf values, but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals or better journal names play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration effects (Bradford's law of scattering) that appear typically in journal literature.",
"Bradfordizing defines different zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive effect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and differs greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations based on author centrality can successfully be used as a query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the different modules and to offer an interactive web-based prototype.",
"In general these retrieval services can be applied in different query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three different assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r / (r + nr) (1) for each topic, where r is the number of all relevant assessed recommendations and r+nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare cases a recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"On average, the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistical testing like t-test or Wilcoxon because of our very small sample.",
"(Footnote 6: Data quality is mentioned by two researchers and assessed two times.)",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendations from STR, JNR and ANR. Rows give (STR, JNR, ANR): AP = (0.743, 0.728, 0.749); AP@1 = (0.957, 0.826, 0.957); AP@2 = (0.826, 0.848, 0.864); AP@4 = (0.750, 0.726, 0.750). We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures, ANR and STR are clearly better than JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4) in Table 2 we can see that all three recommendation services move closer together when more recommendations are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three different researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without PhD), 8 PhD students who had 1-4 years of research experience and a small group of 3 postdocs with 4 or more years of research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of the researchers' research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The different services each favor quite different - but still relevant - recommendations, and the relevance distribution differs largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an author's work can quickly rate an author name's relevance for a specific topic.",
"In this context journal names are not that problematic because they published widely on different topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are inexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but shows clearly that bibliometric-enhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research effort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
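The Bradfordizing re-ranking described in Sect. 2.2 of the paper content above (documents from frequently-publishing "core" journals are ranked first) can be sketched in a few lines. This is a minimal illustration only; the function name and toy data are invented and not taken from the sowiport implementation:

```python
# Minimal sketch of Bradfordizing: count how often each journal occurs in
# the result set ("journal productivity") and rank documents from the most
# productive (core) journals first. Data and names are illustrative only.
from collections import Counter

def bradfordize(result_set):
    """result_set: list of (doc_id, journal_name) tuples."""
    productivity = Counter(journal for _, journal in result_set)
    return sorted(result_set, key=lambda doc: productivity[doc[1]], reverse=True)

docs = [("d1", "J-Core"), ("d2", "J-Rare"), ("d3", "J-Core"), ("d4", "J-Core")]
ranked = bradfordize(docs)  # documents from "J-Core" come first
```

A full implementation would additionally split the ranked journals into Bradford zones rather than sorting on raw counts alone.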
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
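The author-centrality re-ranking (Sect. 2.3 of the paper content above) builds a co-authorship network from the result set and ranks documents by the centrality of their authors. The paper uses betweenness centrality; the dependency-free sketch below substitutes degree centrality (number of distinct co-authors) as a simpler stand-in, with invented document IDs and author names:

```python
# Sketch of author-centrality re-ranking: build a co-authorship network
# from the result set and rank each document by its most central author.
# Degree centrality is used here as a simple stand-in for betweenness.
import itertools
from collections import defaultdict

def rerank_by_author_centrality(result_set):
    """result_set: list of (doc_id, [author, ...]) tuples."""
    coauthors = defaultdict(set)
    for _, authors in result_set:
        for a, b in itertools.combinations(authors, 2):
            coauthors[a].add(b)
            coauthors[b].add(a)
    # A document inherits the centrality of its most central author.
    return sorted(result_set,
                  key=lambda doc: max(len(coauthors[a]) for a in doc[1]),
                  reverse=True)

docs = [
    ("d1", ["A", "B"]),
    ("d2", ["B", "C"]),  # author B co-authored with A and C -> most central
    ("d3", ["D", "E"]),
]
ranked = rerank_by_author_centrality(docs)  # d3 (peripheral authors) ranks last
```

Replacing the degree score with betweenness (e.g. via a graph library) reproduces the measure actually used in the paper without changing the surrounding logic.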
|
GEM-SciDuet-train-91#paper-1232#slide-5
|
Conclusion Further Questions
|
Leibniz Institute for the Social Sciences
Precision values of recommendations from STR, JNR and
ANR are close together on a high level
Q: Would the result be similar in a real retrieval scenario?
Practitioners are favoring author name recommendations while postdocs are favoring journal name recommendations
Q: Are author names typically more distinctive features than journal names?
|
Leibniz Institute for the Social Sciences
Precision values of recommendations from STR, JNR and
ANR are close together on a high level
Q: Would the result be similar in a real retrieval scenario?
Practitioners are favoring author name recommendations while postdocs are favoring journal name recommendations
Q: Are author names typically more distinctive features than journal names?
|
[] |
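The evaluation measures from the record above (per-topic precision P = r / (r + nr), averaged over topics, with cut-offs AP@1, AP@2, AP@4) can be reproduced with a short sketch; the toy assessments below are invented for illustration:

```python
# Sketch of the evaluation: per-topic precision over binary relevance
# assessments, averaged across topics, optionally truncated at rank k.
def precision(assessments):
    """assessments: list of booleans, True = relevant."""
    return sum(assessments) / len(assessments) if assessments else 0.0

def average_precision_at_k(topics, k=None):
    """Mean per-topic precision; k=None uses all assessed recommendations."""
    truncated = [t[:k] if k else t for t in topics]
    return sum(precision(t) for t in truncated) / len(truncated)

# Two invented topics with 5 binary assessments each.
topics = [
    [True, True, False, True, True],   # P = 0.8
    [True, False, True, True, False],  # P = 0.6
]
ap = average_precision_at_k(topics)            # mean precision over topics
ap_at_1 = average_precision_at_k(topics, k=1)  # precision of the top recommendation
```

Note that "AP" in this paper denotes precision averaged over topics, not the rank-weighted average precision common in TREC-style ranked-retrieval evaluation.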
GEM-SciDuet-train-91#paper-1232#slide-6
|
1232
|
How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations?
|
In this paper we describe a case study where researchers in the social sciences (n=19) assess topical relevance for controlled search terms, journal names and author names which have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR) in this paper. The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics which had been named by them in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topics and can easily be integrated for search in Digital Libraries. The average precision for top ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs largely across topics and researcher types. Practitioners seem to favor author name recommendations while postdocs have rated author name recommendations the lowest. In the experiment the small postdoc group favors journal name recommendations.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115
],
"paper_content_text": [
"Introduction In metadata-driven Digital Libraries (DL) typically three major information retrieval (IR) related difficulties arise: (1) the vagueness between search and indexing terms, (2) the information overload by the amount of result records obtained by the information retrieval systems, and (3) the problem that pure term frequency based rankings, such as term frequency -inverse document frequency (tf-idf), provide results that often do not meet user needs [10] .",
"Search term suggestion or other domain-specific recommendation modules can help users -e.g.",
"in the social sciences [7] and humanities [4] -to formulate their queries by mapping the personal vocabularies of the users onto the often highly specialized vocabulary of a digital library.",
"A recent overview of recommender systems in DL can be found in [2] .",
"This strongly suggests the introduction of models in IR systems that rely more on the real research process and have therefore a greater potential for closing the gap between information needs of scholarly users and IR systems than conventional system-oriented approaches.",
"In this paper 1 we will present an approach to utilize specific information retrieval services [10] as enhanced search stratagems [1, 17, 5] and recommendation services within a scholarly IR environment.",
"In a typical search scenario a user first formulates his query, which can then be enriched by a Search Term Recommender that adds controlled descriptors from the corresponding document language to the query.",
"With this new query a search in a database can be triggered.",
"The search returns a result set which can be re-ranked.",
"Since search is an iterative procedure this workflow can be repeated many times till the expected result set is retrieved.",
"These iterative search steps are typically stored in a search sessions log.",
"The idea of this paper is to assess if topic-specific recommendation services which provide search related thesaurus term, journal name and author name suggestions are accepted by researchers.",
"In section 2 we will shortly introduce three different recommendation services: (1) co-word analysis and the derived concept of Search Term Recommendation (STR), (2) coreness of journals and the derived Journal Name Recommender (JNR) and (3) centrality of authors and the derived Author Name Recommender (ANR).",
"The basic concepts and an evaluation of the top-ranked recommendations are presented in the following sections.",
"We can show that the proposed recommendation services can easily be implemented within scholarly DLs.",
"In the conclusion we assume that a user's search should improve by using the proposed recommendation services when interacting with a scientific information system, but the relevance of these recommendations in a real interactive search task is still an open question (compare the experiences with the Okapi system [13] ).",
"Models for information retrieval enhancement The standard model of IR in current DLs is the tf-idf model which proposes a text-based relevance ranking.",
"As tf-idf is text-based, it assigns a weight to term t in document d which is influenced by different occurrences of t and d. Variations of the basic term weighting process have been proposed, like normalization of document length or by scaling the tf values, but the basic assumption stays the same.",
"We hypothesize that recommendation services which are situated in the search process can improve the search experience of users in a DL.",
"The recommendation services are outlined very shortly in the following section.",
"More details on these services can be found in [10] .",
"Search Term Recommendation Search Term Recommenders (STRs) are an approach to compensate the long known language problem in IR [3, 12, 7] : When searching an information system, a user has to come up with the \"appropriate\" query terms so that they best match the document language to get qualitative results.",
"STRs in this paper are based on statistical co-word analysis and build associations between free terms (i.e.",
"from title or abstract) and controlled terms (i.e.",
"from a thesaurus) which are used during a professional indexation of the documents (see \"Alternative Keywords\" in Figure 1 ).",
"The co-word analysis implies a semantic association between the free and the controlled terms.",
"The more often terms co-occur in the text the more likely it is that they share a semantic relation.",
"In our setup we use STR for search term recommendation where the original topical query of the researcher is expanded with semantically \"near\" terms 2 from the controlled vocabulary Thesaurus for the Social Sciences (TheSoz).",
"Recommending Journal Names Journals play an important role in the scientific communication process.",
"They appear periodically, they are topically focused, they have established standards of quality control and often they are involved in the academic gratification system.",
"Metrics like the famous impact factor are aggregated on the journal level.",
"In some disciplines journals are the main place for a scientific community to communicate and discuss new research results [11, 15] .",
"In addition, journals or better journal names play an important role in the search process (see e.g.",
"the famous search stratagem \"journal run\") [1, 8, 5] .",
"The underlying mechanism for recommending journal names (JNR) in this paper is called Bradfordizing [16] .",
"Bradfordizing is an alternative mechanism to re-rank journal articles according to core journals to bypass the problem of very large and unstructured result sets.",
"The approach of Bradfordizing is to use characteristic concentration effects (Bradford's law of scattering) that appear typically in journal literature.",
"Bradfordizing defines different zones of documents which are based on the frequency counts in a given document set.",
"Documents in core journals -journals which publish frequently on a topic -are ranked higher than documents which were published in journals from the following Bradford zones 3 .",
"In IR a positive effect on the search result can be assumed in favor of documents from core journals [8, 9] .",
"Bradfordizing is implemented as one re-ranking feature called \"Journal productivity\" in the digital library sowiport 4 [6] .",
"In our setup of the assessment we evaluated the journal name recommendations, namely the top-ranked 5 core journals after Bradfordizing.",
"Recommending Author Names Collaboration in science is mainly represented by co-authorships between two or more authors who write a publication together.",
"Transferred to a whole community, co-authorships form a co-authorship network reflecting the overall collaboration structure of a community.",
"The underlying mechanism for recommending author names (ANR) in this paper is the author centrality measure betweenness.",
"Author centrality is another way of re-ranking result sets (see Figure 2 ).",
"Here the concept of centrality in a network of authors is an additional approach for the problem of large and unstructured result sets.",
"The intention behind this ranking model is to make use of knowledge about the interaction and cooperation behavior in special fields of research.",
"The (social) status and strategic position of a person in a scientific community is used too.",
"The model is based on a network analytical view on a field of research and differs greatly from conventional text-oriented ranking methods like tf-idf.",
"A concrete criterion of relevance in this model is the centrality of authors from retrieved publications in a co-authorship network.",
"The model calculates a co-authorship network based on the result set to a specific query.",
"Centrality of each single author in this network is calculated by applying the betweenness measure and the documents in the result set are ranked according to the betweenness of their authors so that publications with very central authors are ranked higher in the result list [10, 9] .",
"From a recent study we know that many users are searching DL with author names [5] .",
"In addition, author name recommendations based on author centrality can successfully be used as a query expansion mechanism [14] .",
"Implementation All proposed services are implemented in a live information system using (1) the Solr search engine, (2) Grails Web framework to demonstrate the general feasibility of the approaches.",
"Both Bradfordizing and author centrality as rerank mechanism are implemented as plugins to the open source web framework Grails.",
"Grails is the glue to combine the different modules and to offer an interactive web-based prototype.",
"In general these retrieval services can be applied in different query phases.",
"In the following section we will describe a small case study with researchers using the recommendation services STR, JNR and ANR to find search terms, journal names and author names relevant to their research topics.",
"Assessment Study The assessment study involved 19 researchers in the social sciences who agreed to name one or two of their research topics and take part in a short online assessment exercise.",
"We have recruited the researchers (practitioners 5 , PhD students and PostDocs) via email and telephone and they were asked to qualify their primary research topic in the form of 1-3 typical search terms they would enter in a search box.",
"These search terms have been operationalized into a valid query for our prototype by us together with an individualized login for the single researcher.",
"Individualized assessment accounts were sent to the researchers via email for each topic and contained a link to the online assessment tool and a short description how to evaluate the recommendations.",
"All researchers were asked to assess the topical relevance of each recommendation in relationship to their research topic into relevant or not relevant (binary assessments).",
"All researchers got three different assessment screens, always in the same order with a maximum of 5 recommendations for each recommender on one screen: first all search term recommendations, second all author name recommendations and last all journal name recommendations.",
"For each query, researchers got a set of max.",
"15 recommendations.",
"This is the list of all 23 evaluated researcher topics: [east Europe; urban sociology; equal treatment; data quality; interviewer error; higher education research; evaluation research; information science; political sociology; party democracy; data quality (2) 6 , party system; factor structure; nonresponse; ecology; industrial sociology; sociology of culture; theory of action; atypical employment; lifestyle; Europeanization; survey design; societal change in the newly-formed German states].",
"Evaluation In the following section we describe the evaluation of the recorded assessments.",
"We calculated average precision AP for each recommender service.",
"The precision P of each service was calculated by P = r / (r + nr) (1) for each topic, where r is the number of all relevant assessed recommendations and r+nr is the number of all assessed recommendations (relevant and not relevant).",
"We wanted to keep the assessment exercise for the researchers very short and hence we limited the list of recommendations of each service to a maximum of 5 controlled terms, journal names and author names.",
"According to this restriction we decided to calculate AP@1, AP@2, AP@4 for each service.",
"In very rare cases a recommendation service generated just one or two recommendations.",
"Results In sum 19 researchers assessed 23 topics in the online assessment study.",
"This resulted in total 95 STR, 111 JNR and 107 ANR assessments (see Table 1 ).",
"On average, the researchers assessed 4.1 search term, 4.8 journal name and 4.6 author name recommendations per topic.",
"Table 2 shows the evaluation results of all STR, JNR and ANR assessments.",
"For this case study we did no statistical testing like t-test or Wilcoxon because of our very small sample.",
"(Footnote 6: Data quality is mentioned by two researchers and assessed two times.)",
"The following results should be read as plausibility tests without any statistical significance.",
"We just want to demonstrate here the indicative relevance of this kind of recommender systems for scholarly search systems.",
"Table 2 .",
"Evaluation of the assessments.",
"AP, AP@1, AP@2 and AP@4 for recommendations from STR, JNR and ANR. Rows give (STR, JNR, ANR): AP = (0.743, 0.728, 0.749); AP@1 = (0.957, 0.826, 0.957); AP@2 = (0.826, 0.848, 0.864); AP@4 = (0.750, 0.726, 0.750). We can see that the average precision AP of ANR (0.749) and STR (0.743) is slightly better than JNR (0.728).",
"Consulting the AP@1 measures, ANR and STR are clearly better than JNR.",
"That means that the first recommended author name or search term is rated more often relevant than the first journal name in a list of 4 or 5 recommendations.",
"Surprisingly JNR (0.848) is slightly better than STR (0.826) in AP@2.",
"If we look at the last row (AP@4 ) in Table 2 we can see that all three recommendation services move closer together when more recommendation are assessed.",
"Table 3 shows the average precision AP of STR, JNR and ANR for our three di↵erent researcher types (practitioners, PhD students and postdocs).",
"From the 19 researchers in our user study we group 8 researchers into the practitioners group (mostly information professionals without PhD), 8 PhD students which had 1-4 years research experience and a small group of 3 postdocs with 4 and more years research experience.",
"We can see clearly that the author name recommendations are rated highest by the practitioners (see AP of ANR = 0.836).",
"Surprisingly the postdocs have evaluated ANR much lower than the other two groups (see AP of ANR = 0.467).",
"In the experiment postdocs favor journal name recommendations.",
"PhD students rate all three recommenders more or less the same.",
"Conclusion In this small case study typical researchers in the social sciences are confronted with specific recommendations which were calculated on the basis of researchers research topics.",
"Looking at the precision values two important insights can be noted: (1) precision values of recommendations from STR, JNR and ANR are close together on a very high level -AP is close to 0.75 -and (2) each service retrieved a disjoint set of relevant recommendations.",
"The di↵erent services each favor quite other -but still relevant -recommendations and relevance distribution di↵ers largely across topics and researchers.",
"A distinction between researcher types shows that practitioners are favoring author name recommendations (ANR) while postdocs are favoring journal name recommendations precompiled by our recommender services.",
"This can be an artifact due to the small size of the postdoc group but this is also plausible.",
"In terms of research topics author names typically are more distinctive than journal names.",
"An experienced researcher (e.g.",
"postdoc) who is familiar with an authors work can quickly rate an authors name relevance for a specific topic.",
"In this context journal names are not that problematic because they published widely on di↵erent topics.",
"This seems to be the case in our small sample (see third row in Table 3 ).",
"PhD students who typically are unexperienced find all recommendations (terms, author names, journal names) helpful (see second row in Table 3 ).",
"The proposed models and derived recommendation services open up new viewpoints on the scientific knowledge space and also provide an alternative framework to structure and search domain-specific retrieval systems [10] .",
"In sum, this case study presents limited results but show clearly that bibliometricenhanced recommender services can support the retrieval process.",
"In a next step we plan to evaluate the proposed recommendation services in a larger document assessment task where the services are utilized as query expansion mechanisms [14] and interactive services [17] .",
"However, a lot of research e↵ort needs to be done to make more progress in coupling bibliometric-enhanced recommendation services with IR.",
"The major challenge that we see here is to consider also the dynamic mechanisms which form the structures and activities in question and their relationships to dynamic features in scholarly information retrieval.",
"Acknowledgment Our thanks go to all researchers in the study.",
"The work presented here was funded by DFG, grant no.",
"INST 658/6-1 and grant no.",
"SU 647/5-2."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Models for information retrieval enhancement",
"Search Term Recommendation",
"Recommending Journal Names",
"Recommending Author Names",
"Implementation",
"Assessment Study",
"Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
}
|
GEM-SciDuet-train-91#paper-1232#slide-6
|
Outlook
|
Leibniz Institute for the Social Sciences
Integrate different recommender systems In real retrieval tasks (search sessions)
Measure task completion rates or goal satisfaction
Use and evaluate recommenders for query expansion and as dynamic features in IR
Develop new measures of utility of recommender
|
Leibniz Institute for the Social Sciences
Integrate different recommender systems In real retrieval tasks (search sessions)
Measure task completion rates or goal satisfaction
Use and evaluate recommenders for query expansion and as dynamic features in IR
Develop new measures of utility of recommender
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-0
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases §3 §5 §4 of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"Exp.",
"PS Exp.",
"SS Non-Exp.",
"Arg1 Arg2 Arg1 Arg2 Arg1 Arg2 Connective Form We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"• Connective Category • Connective Precedes • Following Token • Initial Token • Path to Root • • • • Path to Connective • • • Path to Initial Token • • Preceding Token • • • • Production Rules • • • • Size • An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-0
|
In Short
|
I good results with classical pipeline
I explicit connectives and arguments: adapted approach from detection
of speculation and negation (Velldal et al. 2012, Read et al. 2012)
I cross-validation on training set
I sense disambiguation: ensemble classifier
I F1 27.77 on English blind test set
|
I good results with classical pipeline
I explicit connectives and arguments: adapted approach from detection
of speculation and negation (Velldal et al. 2012, Read et al. 2012)
I cross-validation on training set
I sense disambiguation: ensemble classifier
I F1 27.77 on English blind test set
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-1
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification, Explicit Connectives: Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations: According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: the two sentences (a) are adjacent; (b) are located in the same paragraph; (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification: Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location: For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking: Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: column groups Exp. PS, Exp. SS, and Non-Exp., each with Arg1 and Arg2 sub-columns; first row label 'Connective Form'.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 body residue: check marks indicating, per argument type, the selected feature types: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, and Size.] An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing: Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units; more specifically, we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations: The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent. Span in Table 1), meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification: In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser: The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC: Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008), a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015).",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001). W&L XGBoost: Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging: To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max ∑_{j=1}^{n} v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results, Overall Results: Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: in terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations: In isolation, the stipulation of non-explicit relations achieves an F1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only). Columns: WSJ Test Set (Arg1, Arg2, Both) and Blind Set (Arg1, Arg2, Both). Explicit (SS): .683, .817, .590; .647, .783, .519. Explicit (PS): .623, .663, .462; .611, .832, .505. Explicit (All): .572, .753, .474; .586, .782, .473. Non-explicit (All): .744, .743, .593; .640, .758, .539. Overall: .668, .749, .536; .617, .769, .509.",
"Arguments: Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However, an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor on the WSJ test set but better on the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification: The results of the sense classification subtask without error propagation are shown in Table 5.",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook: The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, compared to the previous state of the art) on this sub-problem is in no small part the reason for the overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-1
|
Architecture
|
Non-Explicit Relation Detection Non-Explicit Argument Ranking Non-Explicit Sense Classification
Figure: OPT system overview.
|
Non-Explicit Relation Detection Non-Explicit Argument Ranking Non-Explicit Sense Classification
Figure: OPT system overview.
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-2
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction: Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, question answering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Read et al., 2012).",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, the sense classifier described here has been developed specifically for OPT.",
"System Architecture: Our system overview is shown in Figure 1.",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification, Explicit Connectives: Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations: According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: the two sentences (a) are adjacent; (b) are located in the same paragraph; (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification: Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location: For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking: Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: column groups Exp. PS, Exp. SS, and Non-Exp., each with Arg1 and Arg2 sub-columns; first row label 'Connective Form'.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 body residue: check marks indicating, per argument type, the selected feature types: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, and Size.] An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing: Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units; more specifically, we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
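The prediction-merging step quoted in the paper text above (sum the ensemble members' probability vectors, take the argmax, and convert the hard-label XGBoost output into a pseudo-probability vector via the 1 − ε trick) can be sketched as follows. This is a minimal illustration: the sense inventory, the ε value handling, and all function names are assumptions for the demo, not taken from the released OPT system.

```python
# Hedged sketch of the prediction-merging step: each ensemble member
# contributes a probability vector over senses; the final label is the
# argmax of the element-wise sum. The XGBoost member only returns hard
# labels, so its vector assigns 1 - eps to the predicted class and
# spreads eps uniformly over the rest. SENSES and EPS are illustrative.

SENSES = ["Comparison", "Contingency", "Expansion", "Temporal"]
EPS = 0.1  # the paper tunes this epsilon on the development set

def hard_label_to_vector(label, senses=SENSES, eps=EPS):
    """Turn a hard classification into a pseudo-probability vector."""
    rest = eps / (len(senses) - 1)
    return [1.0 - eps if s == label else rest for s in senses]

def merge_predictions(prob_vectors, senses=SENSES):
    """Element-wise sum of the members' vectors, then argmax."""
    totals = [sum(col) for col in zip(*prob_vectors)]
    return senses[totals.index(max(totals))]

majority = [0.4, 0.3, 0.2, 0.1]                 # majority-class senser
lsvc     = [0.1, 0.6, 0.2, 0.1]                 # W&L LSVC
xgb      = hard_label_to_vector("Contingency")  # W&L XGBoost (hard label)
print(merge_predictions([majority, lsvc, xgb]))  # prints "Contingency"
```

Summing (rather than averaging) the vectors gives each member equal weight, so a confident hard-label member can outvote two uncertain probabilistic ones.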
|
GEM-SciDuet-train-92#paper-1236#slide-2
|
Explicit Connective Detection
|
I extends the work by Velldal et al. (2012) for identifying expressions of speculation and negation
I disambiguate closed class list of connectives (heads only)
I binary SVMlight classifier
|
I extends the work by Velldal et al. (2012) for identifying expressions of speculation and negation
I disambiguate closed class list of connectives (heads only)
I binary SVMlight classifier
|
[] |
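The slide above summarizes the connective detector: a closed-class inventory of (multi-)token connectives, with overlapping candidate matches resolved by deterministically keeping the longest sequence ('as long as' beats 'as'), as the paper describes. A minimal sketch of that candidate-matching step is below; the tiny inventory is illustrative, not the full PDTB connective list, and the downstream binary SVM disambiguation is omitted.

```python
# Hedged sketch of closed-class connective candidate matching: scan the
# token stream, and at each position prefer the longest inventory match.
# CONNECTIVES here is a toy subset, not the real training-set inventory.

CONNECTIVES = {("as",), ("as", "long", "as"), ("shortly", "after"), ("after",)}
MAX_LEN = max(len(c) for c in CONNECTIVES)

def match_candidates(tokens):
    """Return (start, end) spans of connective candidates, longest match first."""
    spans = []
    i = 0
    while i < len(tokens):
        best = None
        # try the longest window that still fits, then shorter ones
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            if tuple(t.lower() for t in tokens[i:i + n]) in CONNECTIVES:
                best = (i, i + n)
                break
        if best:
            spans.append(best)
            i = best[1]  # skip past the match so spans never overlap
        else:
            i += 1
    return spans

tokens = "He left shortly after , as long as I recall .".split()
print(match_candidates(tokens))  # prints [(2, 4), (5, 8)]
```

Each matched span would then be passed to the binary classifier, which decides whether the occurrence actually functions as a discourse connective.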
GEM-SciDuet-train-92#paper-1236#slide-3
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases §3 §5 §4 of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"Exp.",
"PS Exp.",
"SS Non-Exp.",
"Arg1 Arg2 Arg1 Arg2 Arg1 Arg2 Connective Form We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"• Connective Category • Connective Precedes • Following Token • Initial Token • Path to Root • • • • Path to Connective • • • Path to Initial Token • • Preceding Token • • • • Production Rules • • • • Size • An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-3
|
Classifier Features
|
I token and POS n-grams around the candidate (up to
I parent, sibling, path, etc. features over PTB-style parse trees
I feature tuning by ten-fold cross-validation on training set
I final model selection (among some thousand runs):
I prefer smaller models with less variation across folds
I test twelve candidate models against development set
I surface features up to 3 tokens before/after candidate
I full feature conjunction for self and parent categories
I limited conjunctions for siblings
I no connected context
|
I token and POS n-grams around the candidate (up to
I parent, sibling, path, etc. features over PTB-style parse trees
I feature tuning by ten-fold cross-validation on training set
I final model selection (among some thousand runs):
I prefer smaller models with less variation across folds
I test twelve candidate models against development set
I surface features up to 3 tokens before/after candidate
I full feature conjunction for self and parent categories
I limited conjunctions for siblings
I no connected context
|
[] |
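The model-selection policy described on the slide above (ten-fold cross-validation on the training set, preferring smaller models with less variation across folds) can be sketched as follows. This is an illustrative toy, not the system's code: the candidate names, feature counts, and per-fold F1 scores are invented.

```python
import statistics

# Each hypothetical candidate configuration carries its per-fold F1
# scores from ten-fold cross-validation plus its feature count.
candidates = [
    {"name": "ngrams-5", "features": 2_400_000,
     "fold_f1": [94.1, 93.8, 94.0, 93.9, 94.2, 93.7, 94.1, 93.9, 94.0, 93.8]},
    {"name": "ngrams-3+syntax", "features": 1_200_000,
     "fold_f1": [94.1, 94.0, 94.1, 93.9, 94.2, 94.0, 94.1, 94.0, 94.1, 93.9]},
]

def rank_key(cand):
    """Rank by higher mean F1 first; break ties by lower variance
    across folds, then by fewer features (smaller models preferred)."""
    scores = cand["fold_f1"]
    return (-statistics.mean(scores), statistics.pstdev(scores), cand["features"])

best = min(candidates, key=rank_key)
print(best["name"])  # the smaller, lower-variance model wins here
```

A shortlist selected this way would then be re-evaluated once against the held-out development set, as the slide describes.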
GEM-SciDuet-train-92#paper-1236#slide-5
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases §3 §5 §4 of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"Exp.",
"PS Exp.",
"SS Non-Exp.",
"Arg1 Arg2 Arg1 Arg2 Arg1 Arg2 Connective Form We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"• Connective Category • Connective Precedes • Following Token • Initial Token • Path to Root • • • • Path to Connective • • • Path to Initial Token • • Preceding Token • • • • Production Rules • • • • Size • An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
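The prediction-merging scheme described in the paper text above (summing the per-classifier sense-probability vectors and taking the arg max, with the hard XGBoost label converted into an ε-smoothed vector) can be sketched like this. The label set and the probability values are invented for illustration:

```python
SENSES = ["Comparison", "Contingency", "Expansion", "Temporal"]  # toy label set

def eps_smooth(predicted, senses, eps=0.1):
    """Turn a hard prediction into a probability vector: the predicted
    sense gets 1 - eps, the rest share eps uniformly (as done for the
    XGBoost component, which returns labels without probabilities)."""
    rest = eps / (len(senses) - 1)
    return [1.0 - eps if s == predicted else rest for s in senses]

def merge_predictions(vectors, senses):
    """Sum the per-classifier probability vectors component-wise and
    return the sense with the highest total score."""
    totals = [sum(v[i] for v in vectors) for i in range(len(senses))]
    return senses[totals.index(max(totals))]

# Two soft classifiers plus one eps-smoothed hard classifier.
v1 = [0.4, 0.3, 0.2, 0.1]             # e.g. the LIBLINEAR W&L model
v2 = [0.1, 0.5, 0.3, 0.1]             # e.g. the majority-class senser
v3 = eps_smooth("Expansion", SENSES)  # XGBoost hard label, smoothed

print(merge_predictions([v1, v2, v3], SENSES))  # -> Expansion
```

With ε = 0.1 and four senses, the smoothed vector assigns 0.9 to the predicted sense and about 0.033 to each of the other three, so a confident hard prediction can still outvote two moderately confident soft ones.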
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-5
|
Arguments
|
I based on work on the scope of speculation and negation (Read et al.,
I assumption: arguments basically correspond to phrases
I extract clausal constituents: S, SBAR, SQ
I SVMlight classifiers; ten-fold cross-validation on training set
|
I based on work on the scope of speculation and negation (Read et al.,
I assumption: arguments basically correspond to phrases
I extract clausal constituents: S, SBAR, SQ
I SVMlight classifiers; ten-fold cross-validation on training set
|
[] |
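Candidate generation as described on the slide above (collecting clausal constituents S, SBAR, SQ from a PTB-style parse and using their surface projections as argument candidates) could be sketched like this. The mini bracketing parser and the sample tree are illustrative only, not the system's actual implementation:

```python
CLAUSAL = {"S", "SBAR", "SQ"}  # clausal labels used as candidates

def tokenize(s):
    return s.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Parse a tokenized PTB-style bracketing (opening '(' already
    consumed) into nested (label, children) tuples; leaves are words."""
    label = tokens.pop(0)
    children = []
    while tokens[0] != ")":
        if tokens[0] == "(":
            tokens.pop(0)
            children.append(parse(tokens))
        else:
            children.append(tokens.pop(0))  # leaf word
    tokens.pop(0)  # consume ")"
    return (label, children)

def yield_words(node):
    """Surface projection: the left-to-right leaves under a node."""
    if isinstance(node, str):
        return [node]
    words = []
    for child in node[1]:
        words.extend(yield_words(child))
    return words

def clausal_candidates(node):
    """Collect surface projections of all S/SBAR/SQ constituents."""
    if isinstance(node, str):
        return []
    found = []
    if node[0] in CLAUSAL:
        found.append(" ".join(yield_words(node)))
    for child in node[1]:
        found.extend(clausal_candidates(child))
    return found

tree_str = ("(S (NP (PRP it)) (VP (VBD rained) "
            "(SBAR (IN because) (S (NP (NN winter)) (VP (VBD came))))))")
toks = tokenize(tree_str)
toks.pop(0)  # drop the outermost "("
tree = parse(toks)
print(clausal_candidates(tree))
```

Each candidate string would then be scored by the ranking function, and the top-ranked constituent's projection (after the editing heuristics) taken as the argument extent.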
GEM-SciDuet-train-92#paper-1236#slide-6
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases §3 §5 §4 of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
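The stipulation conditions (a)-(c) above can be sketched as follows (a simplified illustration; sentence and paragraph identifiers stand in for the character-offset-based paragraph detection, and the `connected` set stands in for the extracted explicit PS relations):

```python
def stipulate_non_explicit(sentences, paragraph_of, connected):
    """Posit non-explicit relations over adjacent sentence pairs.
    sentences: sentence ids in document order;
    paragraph_of: sentence id -> paragraph id (stand-in for
    character-offset-based paragraph detection);
    connected: sentence ids already serving as Arg1 of an explicit
    relation whose Arg2 is in the following sentence."""
    relations = []
    for s1, s2 in zip(sentences, sentences[1:]):   # (a) adjacency
        if paragraph_of[s1] != paragraph_of[s2]:   # (b) same paragraph
            continue
        if s1 in connected:                        # (c) not yet connected
            continue
        # Condition (d) is left to the downstream sense module.
        relations.append((s1, s2))
    return relations

# Sentences 0-2 share a paragraph; sentence 1 is already connected.
print(stipulate_non_explicit([0, 1, 2, 3], {0: 0, 1: 0, 2: 0, 3: 1}, {1}))
# [(0, 1)]
```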
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 lists the selected feature types per argument type: connective category, connective precedes, following token, initial token, path to root, path to connective, path to initial token, preceding token, production rules, and size.] An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
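The iterative pool-building procedure can be sketched as follows (a schematic illustration only: the toy scoring function replaces cross-validated ranking performance, and the all-folds-improve check is a crude stand-in for the Wilcoxon signed-rank test at p < .05):

```python
def greedy_feature_pool(feature_types, fold_scores, is_significant):
    """Iteratively build a pool of feature types, adding a type only if
    its contribution over the current pool is judged significant."""
    pool = []
    improved = True
    while improved:
        improved = False
        for ft in feature_types:
            if ft in pool:
                continue
            if is_significant(fold_scores(pool + [ft]), fold_scores(pool)):
                pool.append(ft)
                improved = True
    return pool

# Toy illustration: 'syntax' always helps, 'noise' never does.
def scores(pool):
    base = [0.50, 0.52, 0.51]          # stand-in per-fold scores
    bonus = 0.10 if "syntax" in pool else 0.0
    return [s + bonus for s in base]

def sig(new, old):                     # crude paired check, not Wilcoxon
    return all(n > o for n, o in zip(new, old))

print(greedy_feature_pool(["noise", "syntax"], scores, sig))  # ['syntax']
```

In the actual system, `fold_scores` would come from ten-fold cross-validation of the ranker and `is_significant` from a proper signed-rank test over the folds.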
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC: Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001). W&L XGBoost: Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max Σ_{j=1}^{n} v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
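The merging scheme, including the ε-smoothing of the hard XGBoost outputs, can be sketched as follows (sense labels and probability values are invented for illustration):

```python
def smooth(pred_sense, senses, eps=0.1):
    """Turn a hard classification into a probability vector: 1 - eps to
    the predicted sense, eps spread uniformly over the rest."""
    rest = eps / (len(senses) - 1)
    return {s: 1 - eps if s == pred_sense else rest for s in senses}

def merge(prob_vectors):
    """Sum per-classifier probability vectors and take the argmax sense."""
    total = {}
    for v in prob_vectors:
        for sense, p in v.items():
            total[sense] = total.get(sense, 0.0) + p
    return max(total, key=total.get)

senses = ["Comparison", "Contingency", "Expansion"]
v1 = {"Comparison": 0.5, "Contingency": 0.3, "Expansion": 0.2}
v2 = {"Comparison": 0.2, "Contingency": 0.6, "Expansion": 0.2}
v3 = smooth("Contingency", senses)   # hard XGBoost-style output
print(merge([v1, v2, v3]))           # Contingency
```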
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only); columns are WSJ test set Arg1/Arg2/Both, then blind set Arg1/Arg2/Both. Explicit (SS): .683 .817 .590 | .647 .783 .519; Explicit (PS): .623 .663 .462 | .611 .832 .505; Explicit (All): .572 .753 .474 | .586 .782 .473; Non-explicit (All): .744 .743 .593 | .640 .758 .539; Overall: .668 .749 .536 | .617 .769 .509.",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-6
|
Argument Position
|
Table : Position of Arg2 relative to Arg1.
- non-explicit relations: Arg1 is in previous sentence (PS) from Arg2
- explicit relations: classifier for PS or same sentence (SS)
- path from connective to root
- connective position in sentence (tertiles)
- POS bigram of connective and following token
|
Table : Position of Arg2 relative to Arg1.
- non-explicit relations: Arg1 is in previous sentence (PS) from Arg2
- explicit relations: classifier for PS or same sentence (SS)
- path from connective to root
- connective position in sentence (tertiles)
- POS bigram of connective and following token
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-7
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single- or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 lists the selected feature types per argument type: connective category, connective precedes, following token, initial token, path to root, path to connective, path to initial token, preceding token, production rules, and size.] An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008), a speed-optimized SVM (Boser et al., 1992) with a linear kernel.",
"In our derived classifier, we adopt all features of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups, similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015).",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001). W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max(Σ_{j=1}^{n} v_j), where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only). Each row gives WSJ Test Set then Blind Set scores, each as Arg1 Arg2 Both.",
"Explicit (SS): WSJ .683 .817 .590; Blind .647 .783 .519",
"Explicit (PS): WSJ .623 .663 .462; Blind .611 .832 .505",
"Explicit (All): WSJ .572 .753 .474; Blind .586 .782 .473",
"Non-explicit (All): WSJ .744 .743 .593; Blind .640 .758 .539. Overall: WSJ .668 .749 .536; Blind .617 .769 .509",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
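The prediction-merging step described in the paper text above (summing the ensemble members' per-sense probability vectors and taking the argmax, with an ε-smoothed vector standing in for the hard XGBoost output) can be sketched as follows. This is a minimal illustration, not the authors' code; the sense labels and probability values are invented.

```python
# Minimal sketch (not the authors' code) of the OPT prediction-merging step:
# sum the ensemble members' per-sense probability vectors and pick the sense
# with the highest total. For a hard classifier such as the XGBoost
# component, a score of 1 - eps goes to the predicted sense and eps is
# spread uniformly over the remaining senses.
SENSES = ["Comparison.Contrast", "Contingency.Cause", "Expansion.Conjunction"]

def hard_to_vector(pred_idx, n_senses, eps=0.1):
    """Turn a hard class prediction into an eps-smoothed probability vector."""
    vec = [eps / (n_senses - 1)] * n_senses
    vec[pred_idx] = 1.0 - eps
    return vec

def merge_predictions(prob_vectors):
    """Element-wise sum of the vectors; return the argmax sense index."""
    totals = [sum(col) for col in zip(*prob_vectors)]
    return totals.index(max(totals))

soft_a = [0.2, 0.5, 0.3]                 # e.g. the majority-class senser
soft_b = [0.1, 0.3, 0.6]                 # e.g. the W&L LSVC component
hard_c = hard_to_vector(1, len(SENSES))  # e.g. a hard XGBoost prediction

print(SENSES[merge_predictions([soft_a, soft_b, hard_c])])
```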
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-7
|
Argument Candidate Ranking
|
- ordinal ranking of clausal constituents
- iteratively build a pool of feature types
Table: Feature types used to describe candidate constituents for argument ranking (column groups: Exp. PS, Exp. SS, Non-Exp., each with Arg1 and Arg2 columns; example row: Path to Initial Token).
|
- ordinal ranking of clausal constituents
- iteratively build a pool of feature types
Table: Feature types used to describe candidate constituents for argument ranking (column groups: Exp. PS, Exp. SS, Non-Exp., each with Arg1 and Arg2 columns; example row: Path to Initial Token).
|
[] |
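The slide's ordinal-ranking idea (score every clausal constituent candidate and keep the best-scoring one as the argument span) can be illustrated with a toy linear scorer. The feature names and weights below are hypothetical; the actual system learns an ordinal ranking function with SVM-light over the feature types in the table above.

```python
# Toy illustration of candidate ranking: each clausal constituent is a
# feature vector, a linear function scores it, and the top-scoring
# constituent's yield is taken as the argument. Weights are invented.
WEIGHTS = [-0.7, 1.2, 0.5]  # hypothetical weights for: path-to-connective
                            # length, relative size, connective-precedes flag

def score(features):
    """Linear score of a candidate's feature vector."""
    return sum(w * f for w, f in zip(WEIGHTS, features))

candidates = {
    "S (matrix clause)": [1.0, 0.9, 1.0],
    "SBAR (subordinate)": [2.0, 0.4, 0.0],
}

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)
```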
GEM-SciDuet-train-92#paper-1236#slide-8
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 residue: column groups Exp. PS, Exp. SS, Non-Exp., each with Arg1 and Arg2 columns; first row label: Connective Form.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 row labels: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, Size; check marks indicate the argument types each feature is used for.] An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008), a speed-optimized SVM (Boser et al., 1992) with a linear kernel.",
"In our derived classifier, we adopt all features of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups, similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015).",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001). W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max(Σ_{j=1}^{n} v_j), where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only). Each row gives WSJ Test Set then Blind Set scores, each as Arg1 Arg2 Both.",
"Explicit (SS): WSJ .683 .817 .590; Blind .647 .783 .519",
"Explicit (PS): WSJ .623 .663 .462; Blind .611 .832 .505",
"Explicit (All): WSJ .572 .753 .474; Blind .586 .782 .473",
"Non-explicit (All): WSJ .744 .743 .593; Blind .640 .758 .539. Overall: WSJ .668 .749 .536; Blind .617 .769 .509",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data (yielding more reliable estimates of system performance than tuning against the much smaller development set) and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-8
|
Post Editing Heuristics
|
Arg1 Arg2 Arg1 Arg2
Table : Alignment of constituent yield with arguments (in SS or PS).
I initial alignment of full constituent yield with arguments is low
add conjunction (CC) preceding constituent (Arg1)
cut clause headed by connective (Arg1, explicit, SS)
cut constituent-final CC (Arg1)
cut constituent-final wh-determiner (Arg1)
cut constituent-initial CC (Arg2, explicit)
cut relative clause, i.e. SBAR initiated by WHNP/WHADVP
cut connective
cut initial and final punctuation
|
Arg1 Arg2 Arg1 Arg2
Table : Alignment of constituent yield with arguments (in SS or PS).
I initial alignment of full constituent yield with arguments is low
add conjunction (CC) preceding constituent (Arg1)
cut clause headed by connective (Arg1, explicit, SS)
cut constituent-final CC (Arg1)
cut constituent-final wh-determiner (Arg1)
cut constituent-initial CC (Arg2, explicit)
cut relative clause, i.e. SBAR initiated by WHNP/WHADVP
cut connective
cut initial and final punctuation
|
[] |
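The sense-ensemble mechanics described in the paper text above (a hard XGBoost label is smoothed into a probability vector with 1 − ε on the predicted sense, and the final label is the argmax of the summed per-classifier vectors) can be sketched as below; the function names and data shapes are illustrative assumptions, not the OPT code itself.

```python
def smooth_hard_prediction(label_idx, n_senses, eps=0.1):
    # Turn a hard class label into a probability vector:
    # 1 - eps on the predicted sense, eps spread uniformly over the rest
    # (illustrative stand-in for the epsilon-smoothing of XGBoost outputs).
    v = [eps / (n_senses - 1)] * n_senses
    v[label_idx] = 1.0 - eps
    return v

def merge_predictions(prob_vectors):
    # Sum the per-classifier probability vectors and return the argmax sense.
    totals = [sum(col) for col in zip(*prob_vectors)]
    return totals.index(max(totals))
```

For example, merging the vectors [0.6, 0.4] and [0.3, 0.7] yields the totals [0.9, 1.1], i.e. sense index 1.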
GEM-SciDuet-train-92#paper-1236#slide-9
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, question-answering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Read et al., 2012).",
"It is interesting to observe that an abstractly similar pipeline (disambiguating trigger expressions and then resolving their in-text 'scope') yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: columns Exp. PS, Exp. SS, and Non-Exp., each split into Arg1/Arg2; first row label: Connective Form] We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 feature-type rows: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, Size; bullets mark which argument types use each feature] An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015).",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max Σ_{j=1}^{n} v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4 (isolated argument extraction results; PS refers to the immediately preceding sentence only), columns WSJ Test Set (Arg1, Arg2, Both) and Blind Set (Arg1, Arg2, Both): Explicit (SS) .683 .817 .590 | .647 .783 .519; Explicit (PS) .623 .663 .462 | .611 .832 .505; Explicit (All) .572 .753 .474 | .586 .782 .473; Non-explicit (All) .744 .743 .593 | .640 .758 .539; Overall .668 .749 .536 | .617 .769 .509.",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F1.",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, compared to the previous state of the art) on this sub-problem is in no small part the reason for the overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra- and inter-utterance analysis and (b) offers hope that higher-quality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data (yielding more reliable estimates of system performance than tuning against the much smaller development set) and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-9
|
Argument Extraction Results
|
WSJ Test Set Blind Set
Arg1 Arg2 Both Arg1 Arg2 Both
Table : Argument extraction results, no error propagation.
|
WSJ Test Set Blind Set
Arg1 Arg2 Both Arg1 Arg2 Both
Table : Argument extraction results, no error propagation.
|
[] |
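The stipulation of non-explicit relations described in the record above (traverse sentence bigrams; require the pair to share a paragraph; skip first sentences already 'connected' by an explicit relation with Arg1 in the previous sentence; ignore condition (d)) can be sketched as follows; the function name and input shapes are assumptions for illustration, not the OPT implementation.

```python
def stipulate_non_explicit(sentences, paragraph_of, connected_first):
    # sentences: sentence ids in document order.
    # paragraph_of: sentence id -> paragraph id (derived from character offsets).
    # connected_first: ids already serving as Arg1 of an explicit relation
    # whose Arg2 is in the following sentence.
    relations = []
    for s1, s2 in zip(sentences, sentences[1:]):   # (a) adjacency
        if paragraph_of[s1] != paragraph_of[s2]:   # (b) same paragraph
            continue
        if s1 in connected_first:                  # (c) not yet 'connected'
            continue
        relations.append((s1, s2))                 # (d) is ignored
    return relations
```

With sentences 0-2 in one paragraph, sentence 3 in the next, and sentence 1 already connected, only the bigram (0, 1) survives all three checks.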
GEM-SciDuet-train-92#paper-1236#slide-10
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, question-answering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Read et al., 2012).",
"It is interesting to observe that an abstractly similar pipeline (disambiguating trigger expressions and then resolving their in-text 'scope') yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs. Implicit relations are disambiguated in the downstream sense module (§5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: columns Exp. PS, Exp. SS, and Non-Exp., each split into Arg1 and Arg2; the first feature row is Connective Form.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 feature-type rows residue: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, and Size, each marked for the argument types it applies to.]",
"An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent. Span in Table 1), meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold cross-validation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001).",
"W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max Σ_{j=1}^n v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"[Table 4, reconstructed; cells are Arg1 / Arg2 / Both F1, first for the WSJ test set, then for the blind set: Explicit (SS) .683 / .817 / .590 and .647 / .783 / .519; Explicit (PS) .623 / .663 / .462 and .611 / .832 / .505; Explicit (All) .572 / .753 / .474 and .586 / .782 / .473; Non-explicit (All) .744 / .743 / .593 and .640 / .758 / .539; Overall .668 / .749 / .536 and .617 / .769 / .509.]",
"Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
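The prediction-merging rule described in the sense-classification passage above (summing the three classifiers' probability vectors, taking the arg max, and ε-smoothing XGBoost's hard labels) can be sketched as follows. This is a minimal illustration: the toy sense inventory and the names `SENSES`, `xgboost_to_vector`, and `merge_predictions` are assumptions, not part of the released system.

```python
# Minimal sketch of the OPT sense-merging rule over a toy sense inventory.
SENSES = ["Comparison.Contrast", "Contingency.Cause", "Expansion.Conjunction"]

def xgboost_to_vector(predicted_sense, eps=0.1):
    """Turn XGBoost's hard label into a probability vector: score 1 - eps
    on the predicted sense, eps spread uniformly over the remaining senses."""
    rest = eps / (len(SENSES) - 1)
    return [1.0 - eps if s == predicted_sense else rest for s in SENSES]

def merge_predictions(vectors):
    """Sum the per-classifier probability vectors componentwise and
    return the sense with the highest total score (arg max of the sum)."""
    totals = [sum(col) for col in zip(*vectors)]
    return SENSES[totals.index(max(totals))]
```

For instance, merging a majority-class distribution, an LSVC distribution, and a smoothed XGBoost label amounts to `merge_predictions([v_majority, v_lsvc, xgboost_to_vector(label)])`.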
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
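The step-wise feature selection for the argument-identification rankers described in the record above (a feature type joins the pool only if it yields a significant cross-validation gain) can be sketched roughly like this. The paper uses a Wilcoxon signed-rank test (p < .05); the simple fold-win count below is a stand-in for that test, and `evaluate` plus the feature-type names are hypothetical.

```python
def is_improvement(old_scores, new_scores, min_wins=8):
    """Stand-in significance check: the candidate pool must beat the old
    pool on at least `min_wins` of the cross-validation folds."""
    wins = sum(new > old for old, new in zip(old_scores, new_scores))
    return wins >= min_wins

def build_feature_pool(candidates, evaluate):
    """Greedily grow a pool of feature types; `evaluate(pool)` must return
    a list of per-fold scores for a ranker trained with exactly `pool`."""
    pool = []
    scores = evaluate(pool)
    for feature_type in candidates:
        trial_scores = evaluate(pool + [feature_type])
        if is_improvement(scores, trial_scores):
            pool.append(feature_type)
            scores = trial_scores
    return pool
```

The greedy loop mirrors the paper's procedure of assessing each feature type's contribution when added to the current pool, rather than searching all permutations.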
GEM-SciDuet-train-92#paper-1236#slide-10
|
Sense Classification
|
- separate ensemble classifiers for explicit and non-explicit relations:
Wang & Lan (2015) LSVC: LIBLINEAR SVM classifier
Wang & Lan (2015) XGBoost: decision trees with gradient boosting, same features
- final prediction label picked from the sum of the individual classifier outputs
|
- separate ensemble classifiers for explicit and non-explicit relations:
Wang & Lan (2015) LSVC: LIBLINEAR SVM classifier
Wang & Lan (2015) XGBoost: decision trees with gradient boosting, same features
- final prediction label picked from the sum of the individual classifier outputs
|
[] |
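The non-explicit relation stipulation described in the record above (traverse adjacent sentence pairs, require the same paragraph, and skip pairs whose first sentence already serves as a previous-sentence Arg1 of an explicit connective) can be sketched as follows. The data structures are illustrative assumptions, not the Shared Task's JSON schema.

```python
def stipulate_non_explicit(sentences, paragraph_of, explicit_ps_arg1):
    """Posit non-explicit relations between sentence bigrams.
    `sentences`: sentence ids in document order;
    `paragraph_of`: maps sentence id -> paragraph id;
    `explicit_ps_arg1`: sentence ids already 'connected', i.e. serving as
    the previous-sentence Arg1 of a detected explicit connective."""
    relations = []
    for first, second in zip(sentences, sentences[1:]):   # (a) adjacency
        if paragraph_of[first] != paragraph_of[second]:   # (b) same paragraph
            continue
        if first in explicit_ps_arg1:                     # (c) not yet connected
            continue
        relations.append({"Arg1": first, "Arg2": second, "Type": "NonExplicit"})
    return relations
```

Condition (d) of the PDTB guidelines is deliberately absent here, matching the paper's decision to defer the EntRel vs. Implicit distinction to the downstream sense module.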
GEM-SciDuet-train-92#paper-1236#slide-11
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, the sense classifier described below has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single- or multi-token sequences (e.g. 'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs. Implicit relations are disambiguated in the downstream sense module (§5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: columns Exp. PS, Exp. SS, and Non-Exp., each split into Arg1 and Arg2; the first feature row is Connective Form.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 feature-type rows residue: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, and Size, each marked for the argument types it applies to.]",
"An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent. Span in Table 1), meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold cross-validation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001). W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max Σ_{j=1}^{n} v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4 (isolated argument extraction results; PS refers to the immediately preceding sentence only) reports, for Arg1/Arg2/Both on the WSJ test set and then the blind set: Explicit (SS) .683 .817 .590 | .647 .783 .519; Explicit (PS) .623 .663 .462 | .611 .832 .505; Explicit (All) .572 .753 .474 | .586 .782 .473; Non-explicit (All) .744 .743 .593 | .640 .758 .539; Overall .668 .749 .536 | .617 .769 .509.",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However, an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F1.",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, compared to the previous state of the art) on this sub-problem is in no small part the reason for the overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data (yielding more reliable estimates of system performance than tuning against the much smaller development set) and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-11
|
Sense Classification Results
|
WSJ Test Set Blind Set
System Exp Non-Exp All Exp Non-Exp All
Table : Isolated results for sense classification (the bottom model was not part of the submission).
|
WSJ Test Set Blind Set
System Exp Non-Exp All Exp Non-Exp All
Table : Isolated results for sense classification (the bottom model was not part of the submission).
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-12
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
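The sense-classification ensemble summarized here merges its three classifiers by summing their sense-probability vectors and taking the argmax; a hard classifier (the XGBoost component) is folded in by assigning 1 − ε to its predicted sense and spreading ε (0.1) uniformly over the remaining senses. A minimal sketch of that merging step, with illustrative function and variable names (not taken from the OPT code):

```python
def harden_to_probs(pred_idx, n_senses, eps=0.1):
    """Turn a hard class prediction into a probability vector:
    1 - eps on the predicted sense, eps spread uniformly over the rest."""
    v = [eps / (n_senses - 1)] * n_senses
    v[pred_idx] = 1.0 - eps
    return v

def merge_predictions(prob_vectors):
    """Sum the per-classifier sense-probability vectors and return
    the index of the sense with the highest summed score."""
    n = len(prob_vectors[0])
    totals = [sum(v[i] for v in prob_vectors) for i in range(n)]
    return max(range(n), key=totals.__getitem__)

# Toy example: three classifiers over four senses.
majority = [0.5, 0.2, 0.2, 0.1]   # majority-class senser
svm      = [0.1, 0.6, 0.2, 0.1]   # W&L LSVC
xgb      = harden_to_probs(1, 4)  # XGBoost hard prediction -> sense 1
print(merge_predictions([majority, svm, xgb]))  # -> 1
```

The uniform ε-smoothing keeps the hard classifier's vote comparable in magnitude to the probabilistic components while still letting the other classifiers overturn it when they agree on a different sense.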
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Read et al., 2012).",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, the sense classifier described below has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al. (2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"The candidate feature types include: Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, and Size. An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups, similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015).",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001). W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max Σ_{j=1}^{n} v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4 (isolated argument extraction results; PS refers to the immediately preceding sentence only) reports, for Arg1/Arg2/Both on the WSJ test set and then the blind set: Explicit (SS) .683 .817 .590 | .647 .783 .519; Explicit (PS) .623 .663 .462 | .611 .832 .505; Explicit (All) .572 .753 .474 | .586 .782 .473; Non-explicit (All) .744 .743 .593 | .640 .758 .539; Overall .668 .749 .536 | .617 .769 .509.",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However, an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F1.",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, compared to the previous state of the art) on this sub-problem is in no small part the reason for the overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data (yielding more reliable estimates of system performance than tuning against the much smaller development set) and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-12
|
Overall Results
|
I WSJ test set and blind test set
I compared to challenge in 2015 and 2016
I error propagation, automatic parses
|
I WSJ test set and blind test set
I compared to challenge in 2015 and 2016
I error propagation, automatic parses
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-13
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
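The argument-identification step described in this paper generates candidate arguments by collecting clausal constituents (S/SBAR/SQ) from the sentence parse tree, scores them with a learned SVM ranker, and takes the best candidate's surface projection as the argument span. The sketch below illustrates that candidate-generation-and-selection shape on a toy tree of (label, children-or-token) tuples, with a stub scoring function standing in for the learned ranker (all names are illustrative, not from the OPT code):

```python
CLAUSAL = {"S", "SBAR", "SQ"}  # clausal constituent labels used as candidates

def subtrees(tree):
    """Yield every internal (label, children) node of a (label, children|token) tuple tree."""
    label, children = tree
    if isinstance(children, list):
        yield tree
        for child in children:
            yield from subtrees(child)

def projection(tree):
    """Surface projection: the tokens covered by a subtree, left to right."""
    label, children = tree
    if isinstance(children, str):
        return [children]
    return [tok for child in children for tok in projection(child)]

def best_argument(tree, score):
    """Pick the highest-scoring clausal constituent and return its token span."""
    candidates = [t for t in subtrees(tree) if t[0] in CLAUSAL]
    return projection(max(candidates, key=score))

# Toy parse of "it rained because roads froze" with an embedded SBAR.
tree = ("S", [("NP", "it"),
              ("VP", [("V", "rained"),
                      ("SBAR", [("IN", "because"),
                                ("S", [("NP", "roads"), ("V", "froze")])])])])

# Stub ranker: prefer shorter clauses (the real system learns this function).
print(best_argument(tree, score=lambda t: -len(projection(t))))  # -> ['roads', 'froze']
```

In the actual system the scoring function is an ordinal-ranking SVM over syntactic and surface features, and the selected constituent is further adjusted by the constituent-editing heuristics before its projection is emitted.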
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, question-answering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Read et al., 2012).",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single- or multi-token sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: columns Exp. PS, Exp. SS, and Non-Exp., each split into Arg1 and Arg2; first row Connective Form.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"[Table 2 residue: feature types Connective Category, Connective Precedes, Following Token, Initial Token, Path to Root, Path to Connective, Path to Initial Token, Preceding Token, Production Rules, and Size, with bullets marking which argument types use each.]",
"An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent. Span in Table 1), meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) system, the winner of the previous iteration of the CoNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008), a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001).",
"W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction label ŷ_i for the input instance x_i as ŷ_i = arg max Σ_{j=1}^{n} v_j, where n is the number of classifiers in the ensemble (in our case three), and v_j denotes the output probability vector of the j-th predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − ε to the predicted sense class (with the ε-term determined on the development set and set to 0.1) and uniformly distributing the ε-weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F1) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive with the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al. Where earlier systems tend to drop by several F1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only); columns give Arg1, Arg2, and Both for the WSJ test set, then for the blind set.",
"Explicit (SS): .683 .817 .590 | .647 .783 .519.",
"Explicit (PS): .623 .663 .462 | .611 .832 .505.",
"Explicit (All): .572 .753 .474 | .586 .782 .473.",
"Non-explicit (All): .744 .743 .593 | .640 .758 .539.",
"Overall: .668 .749 .536 | .617 .769 .509.",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F1.",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the ε-term of the XGBoost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997), using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, compared to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-13
|
WSJ Test Set Blind Test Set
|
Table : Per-component breakdown of system performance.
|
Table : Per-component breakdown of system performance.
|
[] |
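The prediction-merging step described in the paper text (summing per-classifier probability vectors and taking the argmax, with the XGBoost member's hard label converted to a 1 − ε vector) can be sketched as follows. Sense names, the example vectors, and all function names here are illustrative assumptions, not the OPT release's actual code.

```python
# Sketch of the OPT prediction-merging step from the paper text:
# each ensemble member emits a probability vector over senses, the vectors
# are summed element-wise, and the highest-scoring sense wins. The XGBoost
# member only returns hard labels, so its vector assigns 1 - eps to the
# predicted sense and spreads eps uniformly over the rest (eps = 0.1, as
# in the paper). Sense inventory and vectors below are illustrative only.

SENSES = ["Comparison", "Contingency", "Expansion", "Temporal"]
EPS = 0.1

def hard_label_to_vector(label):
    """Approximate a probability vector from a hard classification."""
    n = len(SENSES)
    return [1.0 - EPS if s == label else EPS / (n - 1) for s in SENSES]

def merge_predictions(prob_vectors):
    """Element-wise sum of the member vectors, then argmax over senses."""
    totals = [sum(v[i] for v in prob_vectors) for i in range(len(SENSES))]
    return SENSES[totals.index(max(totals))]

majority = [0.1, 0.2, 0.6, 0.1]                 # majority-class senser
svm      = [0.2, 0.5, 0.2, 0.1]                 # W&L LIBLINEAR senser
xgb      = hard_label_to_vector("Contingency")  # W&L XGBoost (hard label)

print(merge_predictions([majority, svm, xgb]))  # prints "Contingency"
```

The XGBoost vote dominates here: even though the majority-class senser prefers Expansion, the summed scores favor Contingency once the near-one-hot XGBoost vector is added.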
GEM-SciDuet-train-92#paper-1236#slide-14
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, question-answering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Read et al., 2012).",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single- or multi-token sequences (e.g.",
"'as' vs. 'as long as').",
"In cases of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009), Lin et al.",
"(2014), and Wang & Lan (2015).",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of speculation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"[Table 2 header residue: columns Exp. PS, Exp. SS, and Non-Exp., each split into Arg1 and Arg2; first row Connective Form.]",
"We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002).",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"• Connective Category • Connective Precedes • Following Token • Initial Token • Path to Root • • • • Path to Connective • • • Path to Initial Token • • Preceding Token • • • • Production Rules • • • • Size • An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-14
|
Take Home Messages
|
I overall, the end-to-end problem is anything but solved
I adaptation of constituent ranking good fit for argument identification
I cross-validation has helped reduce over-fitting to WSJ data
I classifier ensemble improves sense prediction (post-submission results)
|
I overall, the end-to-end problem is anything but solved
I adaptation of constituent ranking good fit for argument identification
I cross-validation has helped reduce over-fitting to WSJ data
I classifier ensemble improves sense prediction (post-submission results)
|
[] |
GEM-SciDuet-train-92#paper-1236#slide-15
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases §3 §5 §4 of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"Exp.",
"PS Exp.",
"SS Non-Exp.",
"Arg1 Arg2 Arg1 Arg2 Arg1 Arg2 Connective Form We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"• Connective Category • Connective Precedes • Following Token • Initial Token • Path to Root • • • • Path to Connective • • • Path to Initial Token • • Preceding Token • • • • Production Rules • • • • Size • An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
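The prediction-merging scheme described in the record above (sum the ensemble's per-classifier sense-probability vectors, take the argmax, and smooth the XGBoost component's hard predictions with an ε-term) could be sketched as follows. This is a minimal illustrative sketch; the function names are assumptions, not from the paper.

```python
def smooth_hard_prediction(pred_idx, n_senses, eps=0.1):
    """Turn a hard classification into a probability vector:
    score 1 - eps on the predicted sense, with eps spread uniformly
    over the remaining senses (the scheme described for XGBoost)."""
    v = [eps / (n_senses - 1)] * n_senses
    v[pred_idx] = 1.0 - eps
    return v


def merge_predictions(prob_vectors):
    """Sum the classifiers' sense-probability vectors element-wise
    and return the index of the sense with the highest total score."""
    totals = [sum(col) for col in zip(*prob_vectors)]
    return max(range(len(totals)), key=totals.__getitem__)
```

With three classifiers in the ensemble, the merged label is simply the argmax of the summed vectors, so a confident minority classifier can still override two weakly-confident ones.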
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-15
|
Non Explicit Relation Detection
|
I non-explicit relation between sentences A and B, iff (PDTB):
(i) A and B are adjacent,
(ii) A and B are in the same paragraph,
(iii) A and B are not linked by an explicit connective, and
(iv) a coherence relation or an entity-based relation holds between them.
I traverse sentence bigrams (i), (ii)
I check for explicit connectives with Arg1 in PS (iii)
I NoRel (0.6% in PDTB) and AltLex (1.5%) are currently ignored (iv)
|
I non-explicit relation between sentences A and B, iff (PDTB):
(i) A and B are adjacent,
(ii) A and B are in the same paragraph,
(iii) A and B are not linked by an explicit connective, and
(iv) a coherence relation or an entity-based relation holds between them.
I traverse sentence bigrams (i), (ii)
I check for explicit connectives with Arg1 in PS (iii)
I NoRel (0.6% in PDTB) and AltLex (1.5%) are currently ignored (iv)
|
[] |
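The non-explicit stipulation procedure on the slide above (traverse sentence bigrams for conditions i–ii, skip sentences already connected by an explicit relation for iii, ignore iv) could be sketched roughly as follows. The input encoding (a paragraph id per sentence and a set of already-connected first-sentence indices) is an assumption for illustration.

```python
def stipulate_non_explicit(n_sentences, paragraph_of, connected_first):
    """Posit a non-explicit relation for each adjacent sentence pair.

    n_sentences     -- number of sentences in the document
    paragraph_of    -- paragraph id for each sentence index
    connected_first -- indices i already serving as Arg1 of an explicit
                       relation whose Arg2 is sentence i+1
    """
    relations = []
    for i in range(n_sentences - 1):                # (i) adjacent bigrams
        if paragraph_of[i] != paragraph_of[i + 1]:  # (ii) same paragraph only
            continue
        if i in connected_first:                    # (iii) not yet connected
            continue
        relations.append((i, i + 1))                # (iv) ignored, per the slide
    return relations
```

EntRel vs. Implicit disambiguation is left to the downstream sense classifier, so this step only emits sentence-pair candidates.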
GEM-SciDuet-train-92#paper-1236#slide-16
|
1236
|
OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing
|
The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120
],
"paper_content_text": [
"Introduction Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis.",
"For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective.",
"It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions.",
"Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) .",
"While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation Read et al., 2012) .",
"It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks.",
"At the same time, the original system has been substantially augmented for discourse parsing as outlined below.",
"There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"System Architecture Our system overview is shown in Figure 1 .",
"The individual modules interface through JSON files which resemble the desired output files of the Task.",
"Each module adds the information specified for it.",
"We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure.",
"Relation identification ( §3) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations.",
"Our argument identification module ( §4) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations.",
"Likewise, the sense classification module ( §5) employs separate ensemble classifiers for explicit and non-explicit relations.",
"Relation Identification Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al.",
"(2012) for identifying expressions of speculation and negation.",
"The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data.",
"Connectives can be single-or multitoken sequences (e.g.",
"'as' vs. 'as long as').",
"In cases §3 §5 §4 of overlapping connective candidates, OPT deterministically chooses the longest sequence.",
"The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'.",
"As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT.",
"Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) .",
"Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al.",
"(2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009 ), Lin et al.",
"(2014 , and Wang & Lan (2015) .",
"Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data.",
"During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009 ) within each group.",
"These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features.",
"Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data.",
"The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) .",
"This model has some 1.2 million feature types.",
"Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them.",
"We proceed straightforwardly: We traverse the sentence bigrams, following condition (a).",
"Paragraph boundaries are detected based on character offsets in the input text (b).",
"We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see §4).",
"If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram.",
"Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs.",
"Implicit relations are disambiguated in the downstream sense module ( §5).",
"We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"Argument Identification Our approach to argument identification is rooted in previous work on resolving the scope of spec- ulation and negation, in particular work by Read et al.",
"(2012) : We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument.",
"Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS).",
"However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015) , we attempt to learn a classification function to predict whether these are in SS or PS.",
"Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ).",
"Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect.",
"Exp.",
"PS Exp.",
"SS Non-Exp.",
"Arg1 Arg2 Arg1 Arg2 Arg1 Arg2 Connective Form We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) .",
"These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"• Connective Category • Connective Precedes • Following Token • Initial Token • Path to Root • • • • Path to Connective • • • Path to Initial Token • • Preceding Token • • • • Production Rules • • • • Size • An exhaustive search of all permutations of the above feature types requires significant resources.",
"Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05).",
"The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features.",
"Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ).",
"In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 .",
"We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2).",
"We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent.",
"We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses.",
"Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects.",
"Firstly, some arguments span sentence boundaries (see Sent.",
"Span in Table 1 ) meaning there can be no single aligned constituent.",
"Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type).",
"Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence.",
"Table 1 provides system upper-bounds taking each of these limitations into account.",
"Relation Sense Classification In order to assign senses to the predicted relations, we apply an ensemble-classification approach.",
"In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations.",
"Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing.",
"In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Naïve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel.",
"In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) .",
"Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) .",
"Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights.",
"This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g.",
"Expansion.Exception, only appear a dozen of times in the provided dataset.",
"For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane.",
"In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) .",
"For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously.",
"In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score.",
"More formally, we compute the prediction labelŷ i for the input instance x i aŝ y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor.",
"Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 − to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"Experimental Results Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets.",
"To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) .",
"Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric.",
"The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT.",
"For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8).",
"Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only.",
"False positives include NoRel and missing relations.",
"About half of the false negatives are relations within the same sentence (across a semicolon).",
"WSJ Test Set Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) .",
"683 .817 .590 .647 .783 .519 Explicit (PS) .",
"623 .663 .462 .611 .832 .505 Explicit (All) .",
"572 .753 .474 .586 .782 .473 Non-explicit (All) .",
"744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"Arguments Table 4 reports the isolated performance for argument identification.",
"Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2.",
"However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set.",
"This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"Sense Classification The results of the sense classification subtask without error propagation are shown in Table 5 .",
"As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 .",
"The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set.",
"The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data.",
"Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) , using the provided Word2Vec embeddings as input.",
"This ongoing work shows clear promise for substantive improvements in sense classification.",
"Conclusion & Outlook The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing.",
"Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture.",
"The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing.",
"This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing.",
"We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations.",
"Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data.",
"We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development."
]
}
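The longest-sequence tie-breaking for overlapping connective candidates, mentioned in the record above, could be sketched as follows. Representing candidates as half-open (start, end) token spans is an assumed encoding for illustration.

```python
def resolve_overlaps(candidates):
    """Given candidate connective spans as half-open (start, end) token
    offsets, keep only the longest span within each group of overlapping
    candidates, greedily favoring longer spans first."""
    kept = []
    # Consider longer candidates first; keep a span only if it does not
    # overlap any already-kept span.
    for span in sorted(candidates, key=lambda s: s[1] - s[0], reverse=True):
        if all(span[1] <= k[0] or span[0] >= k[1] for k in kept):
            kept.append(span)
    return sorted(kept)
```

Because Python's sort is stable, ties in length fall back to the input order, which keeps the resolution deterministic, matching the paper's requirement of a deterministic choice.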
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"System Architecture",
"Relation Identification",
"Argument Identification",
"Relation Sense Classification",
"Experimental Results",
"Conclusion & Outlook"
]
}
|
GEM-SciDuet-train-92#paper-1236#slide-16
|
Non Explicit Relation Detection Results
|
I module evaluation on gold standard explicit connectives
|
I module evaluation on gold standard explicit connectives
|
[] |
GEM-SciDuet-train-93#paper-1238#slide-0
|
1238
|
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
|
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228
],
"paper_content_text": [
"Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.",
"Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.",
"We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.",
"By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.",
"In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.",
"What does it mean for two ideas to compete in texts, quantitatively?",
"Consider, for example, the issue of immigration.",
"There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.",
"One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .",
"Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.",
"One question is whether or not these two ideas cooccur in the same documents.",
"In the example above, these narratives are used by distinct groups of people with different ideologies.",
"The fact that they don't cooccur is one clue that they may be in competition with each other.",
"However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.",
"Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.",
"Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).",
"We use topics from LDA (Blei et al., 2003) to represent ideas.",
"Each topic is named with a pair of words that are most strongly associated with the topic in LDA.",
"Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.",
"The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.",
"All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.",
"We explain the formal definition of strength in §2.",
"the U.S. during the cold war.",
"To capture these possibilities, we use prevalence correlation over time.",
"Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.",
"This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.",
"1 .",
"We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.",
"Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.",
"Friendship (correlated over time, likely to cooccur).",
"The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .",
"Head-to-head (anti-correlated over time, unlikely to cooccur).",
"\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.",
"This observation agrees with a report from Pew Research Center (Guskin, 2013) .",
"Tryst (anti-correlated over time, likely to cooccur).",
"The two off-diagonal examples use topics related to law enforcement.",
"Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.",
"This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).",
"2 Arms-race (correlated over time, unlikely to cooccur).",
"One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.",
"Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.",
"For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.",
"We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS as testbeds.",
"We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .",
"To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).",
"We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.",
"As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.",
"Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).",
"We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.",
"For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.",
"This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.",
"We also show relations between topics in ACL that center around machine translation.",
"Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.",
"We provide some concluding thoughts in §6.",
"Computational Framework The aim of our computational framework is to explore relations between ideas.",
"We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.",
"Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.",
"In the following, we introduce our formal definitions and datasets.",
"∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.",
"1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.",
"Eq.",
"2 is the Pearson correlation between two ideas' prevalence over time.",
"Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.",
"Our input is a collection of documents, each represented by a set of ideas and indexed by time.",
"We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .",
".",
".",
", D T }, where D t = {d t 1 , .",
".",
".",
", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.",
"Here T is the total number of timesteps, and N t is the number of documents at timestep t. It follows that the total number of documents N = T t=1 N t .",
"In order to formally capture the two dimensions above, we employ two commonly-used statistics.",
"First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.",
"1 in Fig.",
"2 .",
"Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.",
"Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.",
"2 in Fig.",
"2 .",
"Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).",
"The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.",
"We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.",
"(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.",
"We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .",
"Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.",
"• News articles.",
"We follow the strategy in Card et al.",
"(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.",
"We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.",
"Each of these corpora contains more than 25,000 articles.",
"Please refer to the supplementary material for details.",
"• Research papers.",
"We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.",
"3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.",
"The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.",
"In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.",
"• Topics.",
"We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. In all datasets, we set the number of topics to 50.",
"4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.",
"• Keywords.",
"We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.",
"(2008) .",
"We set the number of keywords to 100 for all corpora.",
"For news articles, the background corpus for each issue is comprised of all articles from the other four issues.",
"For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.",
"Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.",
"Refer to the supplementary material for a list of example keywords in each corpus.",
"In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.",
"(2013) .",
"Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.",
"In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.",
"Characterizing the Space of Relations To provide an overview of the four relation types in Fig.",
"1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.",
"In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.",
"We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.",
"Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.",
"We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.",
"(Scott, 2015) .",
"The plots along the axes show the marginal distribution of the corresponding dimension.",
"In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .",
"In these plots, we use topics to represent ideas.",
"their joint distribution.",
"Fig.",
"3 shows three examples: two from news articles and one from research papers.",
"We will also focus our case studies on these three corpora in §4.",
"The corresponding plots for keywords have been relegated to supplementary material due to space limitations.",
"Cooccurrence tends to be unimodal but not normal.",
"In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.",
"We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.",
"Prevalence correlation exhibits diverse distributions.",
"Pairwise prevalence correlation follows different distributions in news articles compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.",
"The dip test only rejects the unimodality hypothesis in NIPS.",
"None follow normal distributions based on D'Agostino's K 2 test.",
"Cooccurrence is positively correlated with prevalence correlation.",
"In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.",
"This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.",
"Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.",
"776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.",
"These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.",
"Here we compare the relative strength of extreme pairs in each dataset.",
"We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.",
"For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.",
"3.",
"This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.",
"The results are not sensitive to the choice of 25.",
"Fig.",
"4 shows the collective strength of the four types in all of our datasets.",
"The most common ordering is: friendship > head-to-head > arms-race > tryst.",
"The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.",
"In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.",
"This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.",
"We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.",
"We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.",
"In contrast, news stories are more self-contained and seek to employ consistent usage.",
"Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.",
"Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.",
"International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .",
"As a showcase, we consider a topic which encompasses much of the U.S. government's response to terrorism: \"federal, state\".",
"5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".",
"These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.",
"Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.",
"Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.",
"In particular, islam was rarely used in coverage of terrorism in the 1980s.",
"attention with the other, likely because they share the same underlying cause.",
"We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".",
"While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.",
"Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .",
"The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.",
"The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.",
"Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).",
"Fig.",
"5a shows the relations between the \"federal, state\" topic and four international topics.",
"Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.",
"Fig.",
"5b and Fig.",
"5c represent concrete examples in Fig.",
"5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".",
"In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.",
"When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.",
"In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.",
"6) .",
"It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.",
"The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.",
"This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.",
"6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.",
"We leave it to further investigation to confirm or reject this hypothesis.",
"To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.",
"Table 1 shows the results for three pairs above.",
"If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.",
"PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.",
"Our observation starts with a top tryst relation between latino and asian.",
"Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.",
"Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.",
"Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.",
"In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.",
"Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.",
"The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.",
"In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .",
"However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.",
"Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.",
"Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.",
"It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.",
"8) .",
"7 It is part of the strongest relation in all four types except tryst (ranked #5).",
"The full relation graph presents further patterns.",
"Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.",
"But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.",
"Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.",
"The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.",
"9 , which reveals additional detail.",
"Figure 9 : Relations between topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.",
"The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.",
"The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.",
"Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.",
"For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.",
"This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.",
"We illustrate our computational method by exploratory studies on news corpora and scientific research papers.",
"We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.",
"It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.",
"For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.",
"Our method is entirely observational.",
"It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.",
"In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.",
"Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.",
"There are many potential directions to improve our method to account for complex relations between ideas.",
"For instance, we assume that both ideas and relations are statically grounded in keywords or topics.",
"In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.",
"Similarly, new ideas show up and even the same idea may change over time and be represented by different words."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"6"
],
"paper_header_content": [
"Introduction",
"Computational Framework",
"Cooccurrence and Prevalence Correlation",
"Datasets and Representation of Ideas",
"Characterizing the Space of Relations",
"Empirical Distribution Properties",
"Relative Strength of Extreme Pairs",
"Exploratory Studies",
"International Relations in Terrorism",
"Ethnicity Keywords in Immigration",
"Relations between Topics in ACL",
"Concluding Discussion"
]
}
|
GEM-SciDuet-train-93#paper-1238#slide-0
|
Relations between ideas
|
undocumented immigrants rivals illegal alien
small government friends free market
word alignment friends machine translation
We have shown a framework to quantitatively
describe relations between ideas.
Anti-correlated Can we use them to effectively explore relations Correlated
|
undocumented immigrants rivals illegal alien
small government friends free market
word alignment friends machine translation
We have shown a framework to quantitatively
describe relations between ideas.
Anti-correlated Can we use them to effectively explore relations Correlated
|
[] |
GEM-SciDuet-train-93#paper-1238#slide-1
|
1238
|
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
|
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228
],
"paper_content_text": [
"Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.",
"Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.",
"We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.",
"By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.",
"In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.",
"What does it mean for two ideas to compete in texts, quantitatively?",
"Consider, for example, the issue of immigration.",
"There are two strongly competing narratives about the roughly 11 million people who are residing in the United States without permission.",
"One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .",
"Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.",
"One question is whether or not these two ideas cooccur in the same documents.",
"In the example above, these narratives are used by distinct groups of people with different ideologies.",
"The fact that they don't cooccur is one clue that they may be in competition with each other.",
"However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.",
"Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.",
"Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. and the U.S. during the cold war.",
"Figure 1: Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).",
"We use topics from LDA (Blei et al., 2003) to represent ideas.",
"Each topic is named with a pair of words that are most strongly associated with the topic in LDA.",
"Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.",
"The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.",
"All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.",
"We explain the formal definition of strength in §2.",
"To capture these possibilities, we use prevalence correlation over time.",
"Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.",
"This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig. 1.",
"We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.",
"Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.",
"Friendship (correlated over time, likely to cooccur).",
"The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .",
"Head-to-head (anti-correlated over time, unlikely to cooccur).",
"\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.",
"This observation agrees with a report from Pew Research Center (Guskin, 2013) .",
"Tryst (anti-correlated over time, likely to cooccur).",
"The two off-diagonal examples use topics related to law enforcement.",
"Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.",
"This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).",
"Arms-race (correlated over time, unlikely to cooccur).",
"One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.",
"Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.",
"For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.",
"We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS as testbeds.",
"We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .",
"To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).",
"We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.",
"As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.",
"Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).",
"We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.",
"For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.",
"This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.",
"We also show relations between topics in ACL that center around machine translation.",
"Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.",
"We provide some concluding thoughts in §6.",
"Computational Framework The aim of our computational framework is to explore relations between ideas.",
"We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.",
"Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.",
"In the following, we introduce our formal definitions and datasets.",
"∀x, y ∈ I: PMI(x, y) = log [ P̂(x, y) / (P̂(x) P̂(y)) ] = C + log [ (1 + Σ_t Σ_k 1{x ∈ d_k^t} · 1{y ∈ d_k^t}) / ( (1 + Σ_t Σ_k 1{x ∈ d_k^t}) · (1 + Σ_t Σ_k 1{y ∈ d_k^t}) ) ]  (1)",
"r̂(x, y) = [ Σ_t (P̂(x|t) − mean_t P̂(x|t)) (P̂(y|t) − mean_t P̂(y|t)) ] / [ sqrt(Σ_t (P̂(x|t) − mean_t P̂(x|t))²) · sqrt(Σ_t (P̂(y|t) − mean_t P̂(y|t))²) ]  (2)",
"Figure 2: Eq. 1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.",
"Eq. 2 is the Pearson correlation between two ideas' prevalence over time.",
"Our input is a collection of documents, each represented by a set of ideas and indexed by time.",
"We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D_1, ..., D_T}, where D_t = {d_1^t, ..., d_{N_t}^t} gives the collection of documents at timestep t, and each document, d_k^t, is represented as a subset of ideas in I.",
"Here T is the total number of timesteps, and N_t is the number of documents at timestep t. It follows that the total number of documents is N = Σ_{t=1}^{T} N_t.",
"In order to formally capture the two dimensions above, we employ two commonly-used statistics.",
"First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq. 1 in Fig. 2.",
"Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.",
"Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq. 2 in Fig. 2.",
"Positive r̂ indicates that two ideas have similar prevalence over time, while negative r̂ suggests two anti-correlated ideas (i.e., when one goes up, the other goes down).",
"The four types of relations in the introduction can now be obtained using PMI and r̂, which capture cooccurrence and prevalence correlation respectively.",
"We further define the strength of the relation between two ideas as the absolute value of the product of their PMI and r̂ scores: ∀x, y ∈ I, strength(x, y) = |PMI(x, y) × r̂(x, y)|  (3).",
"Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.",
"We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .",
"Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.",
"• News articles.",
"We follow the strategy in Card et al. (2015) to obtain news articles from LexisNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.",
"We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.",
"Each of these corpora contains more than 25,000 articles.",
"Please refer to the supplementary material for details.",
"• Research papers.",
"We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009); and the NIPS community from 1987 to 2016.",
"There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.",
"The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.",
"In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.",
"• Topics.",
"We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. In all datasets, we set the number of topics to 50.",
"Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.",
"• Keywords.",
"We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al. (2008).",
"We set the number of keywords to 100 for all corpora.",
"For news articles, the background corpus for each issue is comprised of all articles from the other four issues.",
"For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.",
"Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.",
"Refer to the supplementary material for a list of example keywords in each corpus.",
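A hedged sketch of the informative-Dirichlet-prior log-odds behind the keyword selection (Monroe et al., 2008): each word's log-odds in the target corpus versus a background corpus is z-scored, with pooled counts serving as the prior. The counts, function name, and `prior_weight` below are our own toy assumptions, not the paper's exact settings:

```python
# Sketch of z-scored log-odds with an informative Dirichlet prior.
# target / background map word -> count; the pooled counts act as the prior.
import math
from collections import Counter

def log_odds_z(target, background, prior_weight=10.0):
    prior = Counter(target) + Counter(background)
    p_total = sum(prior.values())
    # Prior pseudo-counts proportional to pooled frequency; a0 is their sum.
    alpha = {w: prior_weight * c / p_total for w, c in prior.items()}
    a0 = prior_weight
    n_t, n_b = sum(target.values()), sum(background.values())
    z = {}
    for w in prior:
        yt, yb, aw = target.get(w, 0), background.get(w, 0), alpha[w]
        delta = (math.log((yt + aw) / (n_t + a0 - yt - aw))
                 - math.log((yb + aw) / (n_b + a0 - yb - aw)))
        var = 1.0 / (yt + aw) + 1.0 / (yb + aw)  # approximate variance
        z[w] = delta / math.sqrt(var)
    return z

z = log_odds_z({"immigrant": 30, "the": 50}, {"tobacco": 30, "the": 50})
```

Words overrepresented in the target corpus get positive z-scores; the top-100 by score would form the keyword set I.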
"In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al. (2013).",
"Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.",
"In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.",
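The document-to-idea-set mapping described above can be sketched as follows (the threshold 0.01 and keyword-membership rule come from the text; the example distributions, tokens, and names are toy stand-ins):

```python
# Sketch: a document contains a topic if the topic's probability in its LDA
# distribution exceeds 0.01, and contains a keyword if the keyword occurs in
# the document's token list.
TOPIC_THRESHOLD = 0.01

def topics_as_ideas(doc_topic_dist):
    """doc_topic_dist: dict mapping topic id -> probability for one document."""
    return {t for t, p in doc_topic_dist.items() if p > TOPIC_THRESHOLD}

def keywords_as_ideas(tokens, keyword_list):
    """Keep only the corpus-level distinguishing keywords present in the doc."""
    return set(tokens) & set(keyword_list)

doc_dist = {0: 0.52, 1: 0.30, 2: 0.009, 3: 0.171}
keywords = ["immigrant", "undocumented", "alien", "border"]
tokens = ["the", "undocumented", "immigrant", "population", "grew"]
```

Topic 2 falls below the threshold and is dropped; only listed keywords survive the intersection, so each document ends up as a small subset of I.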
"Characterizing the Space of Relations To provide an overview of the four relation types in Fig. 1, we first examine the empirical distributions of the two statistics of interest across pairs of ideas.",
"In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.",
"We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.",
"Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.",
"We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then their joint distribution.",
"Figure 3: Joint distributions of cooccurrence and prevalence correlation, estimated with kernel density estimation (Scott, 2015).",
"The plots along the axes show the marginal distribution of the corresponding dimension.",
"In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10^{-40}.",
"In these plots, we use topics to represent ideas.",
"Fig. 3 shows three examples: two from news articles and one from research papers.",
"We will also focus our case studies on these three corpora in §4.",
"The corresponding plots for keywords have been relegated to supplementary material due to space limitations.",
"Cooccurrence tends to be unimodal but not normal.",
"In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.",
"We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985), though D'Agostino's K² test (D'Agostino et al., 1990) rejects normality in almost all cases.",
"Prevalence correlation exhibits diverse distributions.",
"Pairwise prevalence correlation follows different distributions in news articles compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.",
"The dip test only rejects the unimodality hypothesis in NIPS.",
"None follow normal distributions based on D'Agostino's K² test.",
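D'Agostino's K² statistic combines standardized sample skewness and kurtosis. A minimal stdlib sketch of those two moments is below; the full z-transformations (e.g., `scipy.stats.normaltest`) and the dip test (available in the `diptest` package) are omitted, and the sample arrays are toy data, not the paper's:

```python
# Sketch: biased sample skewness and excess kurtosis, the two moments that
# D'Agostino's K-squared normality test standardizes and combines.
import math

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0  # 0 for a normal distribution

symmetric = [-2, -1, -1, 0, 0, 0, 1, 1, 2]
right_skewed = [0, 0, 0, 1, 1, 2, 3, 5, 9]
```

A distribution far from zero on either moment would be flagged as non-normal, which is how the prevalence-correlation distributions here fail the test.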
"Cooccurrence is positively correlated with prevalence correlation.",
"In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.",
"This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.",
"Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.",
"Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.",
"These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.",
"Here we compare the relative strength of extreme pairs in each dataset.",
"We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.",
"For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq. 3.",
"This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.",
"The results are not sensitive to the choice of 25.",
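The quadrant assignment, Eq. 3, and the collective-strength heuristic can be sketched as follows (an assumed illustration, not the released implementation; the example PMI/r̂ pairs are toy values):

```python
# Sketch: classify each idea pair by the signs of PMI and r, score it with
# |PMI * r| (Eq. 3), and average the top-k strengths per relation type.
def relation_type(pmi, r):
    if pmi > 0 and r > 0:
        return "friendship"
    if pmi > 0 and r < 0:
        return "tryst"
    if pmi < 0 and r > 0:
        return "arms-race"
    return "head-to-head"  # includes boundary cases for simplicity

def strength(pmi, r):
    return abs(pmi * r)  # Eq. 3

def collective_strength(pairs, top_k=25):
    """pairs: iterable of (pmi, r). Returns {type: mean of top_k strengths}."""
    by_type = {}
    for pmi, r in pairs:
        by_type.setdefault(relation_type(pmi, r), []).append(strength(pmi, r))
    return {t: sum(sorted(s, reverse=True)[:top_k]) / min(top_k, len(s))
            for t, s in by_type.items()}

pairs = [(0.8, 0.9), (0.5, -0.7), (-0.6, 0.4), (-0.9, -0.8), (0.1, 0.1)]
```

With `top_k=25` on a real corpus this reproduces the bar heights compared in Fig. 4; the toy list above exercises all four quadrants.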
"Fig. 4 shows the collective strength of the four types in all of our datasets.",
"The most common ordering is: friendship > head-to-head > arms-race > tryst.",
"The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.",
"In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.",
"This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.",
"We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.",
"We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.",
"In contrast, news stories are more self-contained and seek to employ consistent usage.",
"Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.",
"Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.",
"International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .",
"As a showcase, we consider a topic which encompasses much of the U.S. government's response to terrorism: \"federal, state\".",
"We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".",
"These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.",
"Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing attention with the other, likely because they share the same underlying cause.",
"As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.",
"Figure 6: Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.",
"In particular, islam was rarely used in coverage of terrorism in the 1980s.",
"We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".",
"While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.",
"Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .",
"The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.",
"The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.",
"Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).",
"Fig. 5a shows the relations between the \"federal, state\" topic and four international topics.",
"Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.",
"Fig. 5b and Fig. 5c represent concrete examples in Fig. 5a: \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011, while \"iran, libya\" is negatively correlated with \"federal, state\".",
"In fact, more than 70% of terrorism news in the 80s contained the \"iran, libya\" topic.",
"When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.",
"In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig. 6).",
"It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.",
"The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.",
"This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.",
"Our observation suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.",
"We leave it to further investigation to confirm or reject this hypothesis.",
"To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.",
"Table 1 shows the results for three pairs above.",
"If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.",
"Table 1: ranks of the three pairs using only cooccurrence (PMI) or only prevalence correlation (Corr) — \"federal, state\" and \"afghanistan, taliban\" (#2 in arms-race): PMI 43, Corr 99; \"federal, state\" and \"iran, libya\" (#2 in head-to-head): PMI 36, Corr 56; arab and islam (#2 in tryst): PMI 106, Corr 1,494.",
"Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.",
"Our observation starts with a top tryst relation between latino and asian.",
"Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.",
"Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.",
"Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.",
"In particular, the strength of the relation with haitian is ranked #18 in head-to-head relations.",
"Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.",
"The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.",
"In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .",
"However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.",
"Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.",
"Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.",
"It turns out that \"machine translation\" is at a central position among top ranked relations in all four types (Fig. 8).",
"It is part of the strongest relation in all four types except tryst (ranked #5).",
"The full relation graph presents further patterns.",
"Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.",
"But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, forest methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.",
"Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.",
"The prevalence of each of these ideas in comparison to machine translation is shown in Fig. 9, which reveals additional detail.",
"Figure 9 : Relations between topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.",
"The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.",
"The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.",
"Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.",
"For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.",
"This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.",
"We illustrate our computational method by exploratory studies on news corpora and scientific research papers.",
"We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.",
"It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.",
"For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.",
"Our method is entirely observational.",
"It remains as a further stage of analysis to understand the underlying reasons that lead to these relations between ideas.",
"In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.",
"Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.",
"There are many potential directions to improve our method to account for complex relations between ideas.",
"For instance, we assume that both ideas and relations are statically grounded in keywords or topics.",
"In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.",
"Similarly, new ideas show up and even the same idea may change over time and be represented by different words."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"6"
],
"paper_header_content": [
"Introduction",
"Computational Framework",
"Cooccurrence and Prevalence Correlation",
"Datasets and Representation of Ideas",
"Characterizing the Space of Relations",
"Empirical Distribution Properties",
"Relative Strength of Extreme Pairs",
"Exploratory Studies",
"International Relations in Terrorism",
"Ethnicity Keywords in Immigration",
"Relations between Topics in ACL",
"Concluding Discussion"
]
}
|
GEM-SciDuet-train-93#paper-1238#slide-1
|
Main contributions
|
First quantitative framework to systematically describe relations between ideas
Demonstrate effective explorations with this framework on a wide range of datasets
undocumented immigrants rivals illegal alien
small government friends free market
|
First quantitative framework to systematically describe relations between ideas
Demonstrate effective explorations with this framework on a wide range of datasets
undocumented immigrants rivals illegal alien
small government friends free market
|
[] |
GEM-SciDuet-train-93#paper-1238#slide-2
|
1238
|
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
|
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228
],
"paper_content_text": [
"Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.",
"Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.",
"We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.",
"By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.",
"In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.",
"What does it mean for two ideas to compete in texts, quantitatively?",
"Consider, for example, the issue of immigration.",
"There are two strongly competing narratives about the roughly 11 million people who are residing in the United States without permission.",
"One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .",
"Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.",
"One question is whether or not these two ideas cooccur in the same documents.",
"In the example above, these narratives are used by distinct groups of people with different ideologies.",
"The fact that they don't cooccur is one clue that they may be in competition with each other.",
"However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.",
"Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.",
"Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. and the U.S. during the cold war.",
"Figure 1 (caption; panel labels include \"immigration, deportation\" and \"republican, party\"): Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).",
"We use topics from LDA (Blei et al., 2003) to represent ideas.",
"Each topic is named with a pair of words that are most strongly associated with the topic in LDA.",
"Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.",
"The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.",
"All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.",
"We explain the formal definition of strength in §2.",
"To capture these possibilities, we use prevalence correlation over time.",
"Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.",
"This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.",
"1 .",
"We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.",
"Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.",
"Friendship (correlated over time, likely to cooccur).",
"The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .",
"Head-to-head (anti-correlated over time, unlikely to cooccur).",
"\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.",
"This observation agrees with a report from Pew Research Center (Guskin, 2013) .",
"Tryst (anti-correlated over time, likely to cooccur).",
"The two off-diagonal examples use topics related to law enforcement.",
"Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.",
"This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).",
"Arms-race (correlated over time, unlikely to cooccur).",
"One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.",
"Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.",
"For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.",
"We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS as testbeds.",
"We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .",
"To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).",
"We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.",
"As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.",
"Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).",
"We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.",
"For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.",
"This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.",
"We also show relations between topics in ACL that center around machine translation.",
"Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.",
"We provide some concluding thoughts in §6.",
"Computational Framework The aim of our computational framework is to explore relations between ideas.",
"We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.",
"Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.",
"In the following, we introduce our formal definitions and datasets.",
"∀x, y ∈ I: PMI(x, y) = log[ P̂(x, y) / (P̂(x) P̂(y)) ] = C + log[ (1 + Σ_{t,k} 1{x ∈ d_k^t} · 1{y ∈ d_k^t}) / ( (1 + Σ_{t,k} 1{x ∈ d_k^t}) · (1 + Σ_{t,k} 1{y ∈ d_k^t}) ) ] (1); r̂(x, y) = Σ_t (P̂(x|t) − P̄(x|t)) (P̂(y|t) − P̄(y|t)) / ( √Σ_t (P̂(x|t) − P̄(x|t))² · √Σ_t (P̂(y|t) − P̄(y|t))² ) (2) Figure 2 : Eq.",
"1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.",
"Eq.",
"2 is the Pearson correlation between two ideas' prevalence over time.",
"Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.",
"Our input is a collection of documents, each represented by a set of ideas and indexed by time.",
"We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D_1, …, D_T}, where D_t = {d_1^t, …, d_{N_t}^t} gives the collection of documents at timestep t, and each document, d_k^t, is represented as a subset of ideas in I.",
"Here T is the total number of timesteps, and N_t is the number of documents at timestep t. It follows that the total number of documents N = Σ_{t=1}^{T} N_t.",
"In order to formally capture the two dimensions above, we employ two commonly-used statistics.",
"First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.",
"1 in Fig.",
"2 .",
"Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.",
"Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.",
"2 in Fig.",
"2 .",
"Positive r̂ indicates that two ideas have similar prevalence over time, while negative r̂ suggests two anti-correlated ideas (i.e., when one goes up, the other goes down).",
"The four types of relations in the introduction can now be obtained using PMI and r̂, which capture cooccurrence and prevalence correlation respectively.",
"We further define the strength of the relation between two ideas as the absolute value of the product of their PMI and r̂ scores: ∀x, y ∈ I, strength(x, y) = |PMI(x, y) × r̂(x, y)| (3).",
"Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.",
"We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .",
"Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.",
"• News articles.",
"We follow the strategy in Card et al.",
"(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.",
"We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.",
"Each of these corpora contains more than 25,000 articles.",
"Please refer to the supplementary material for details.",
"• Research papers.",
"We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009); and the NIPS community from 1987 to 2016.",
"There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.",
"The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.",
"In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.",
"• Topics.",
"We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. In all datasets, we set the number of topics to 50.",
"Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.",
"• Keywords.",
"We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.",
"(2008) .",
"We set the number of keywords to 100 for all corpora.",
"For news articles, the background corpus for each issue is comprised of all articles from the other four issues.",
"For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.",
"Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.",
"Refer to the supplementary material for a list of example keywords in each corpus.",
"In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.",
"(2013) .",
"Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.",
"In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.",
"Characterizing the Space of Relations To provide an overview of the four relation types in Fig.",
"1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.",
"In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.",
"We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.",
"Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.",
"We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then their joint distribution.",
"(Footnote fragment: ... our framework is to analyze relations between ideas, so this choice is not essential in this work.)",
"Figure 3 (caption fragment; kernel density estimation per Scott, 2015): The plots along the axes show the marginal distribution of the corresponding dimension.",
"In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10^−40 .",
"In these plots, we use topics to represent ideas.",
"Fig.",
"3 shows three examples: two from news articles and one from research papers.",
"We will also focus our case studies on these three corpora in §4.",
"The corresponding plots for keywords have been relegated to supplementary material due to space limitations.",
"Cooccurrence tends to be unimodal but not normal.",
"In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.",
"We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K² test (D'Agostino et al., 1990) rejects normality in almost all cases.",
"Prevalence correlation exhibits diverse distributions.",
"Pairwise prevalence correlation follows different distributions in news articles compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.",
"The dip test only rejects the unimodality hypothesis in NIPS.",
"None follow normal distributions based on D'Agostino's K² test.",
"Cooccurrence is positively correlated with prevalence correlation.",
"In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.",
"This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.",
"Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.",
"Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.",
"These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.",
"Here we compare the relative strength of extreme pairs in each dataset.",
"We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.",
"For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.",
"3.",
"This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.",
"The results are not sensitive to the choice of 25.",
"Fig.",
"4 shows the collective strength of the four types in all of our datasets.",
"The most common ordering is: friendship > head-to-head > arms-race > tryst.",
"The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.",
"In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.",
"This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.",
"We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.",
"We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.",
"In contrast, news stories are more self-contained and seek to employ consistent usage.",
"Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.",
"Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.",
"International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .",
"As a showcase, we consider a topic which encompasses much of the U.S. government's response to terrorism: \"federal, state\".",
"5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".",
"These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.",
"Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing attention with the other, likely because they share the same underlying cause.",
"(Footnote: As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.)",
"Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.",
"In particular, islam was rarely used in coverage of terrorism in the 1980s.",
"We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".",
"While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.",
"Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .",
"The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.",
"The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.",
"Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).",
"Fig.",
"5a shows the relations between the \"federal, state\" topic and four international topics.",
"Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.",
"Fig.",
"5b and Fig.",
"5c represent concrete examples in Fig.",
"5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, libya\" is negatively correlated with \"federal, state\".",
"In fact, more than 70% of terrorism news in the 80s contained the \"iran, libya\" topic.",
"When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.",
"In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.",
"6) .",
"It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.",
"The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.",
"This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.",
"Our observation suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.",
"We leave it to further investigation to confirm or reject this hypothesis.",
"To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.",
"Table 1 shows the results for three pairs above.",
"If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.",
"Table 1 (pair rank by cooccurrence alone vs. prevalence correlation alone): \"federal, state\" and \"afghanistan, taliban\" (#2 in arms-race): PMI rank 43, Corr rank 99; \"federal, state\" and \"iran, libya\" (#2 in head-to-head): PMI rank 36, Corr rank 56; arab and islam (#2 in tryst): PMI rank 106, Corr rank 1,494.",
"Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.",
"Our observation starts with a top tryst relation between latino and asian.",
"Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.",
"Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.",
"Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.",
"In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.",
"Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.",
"The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.",
"In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .",
"However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.",
"Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.",
"Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.",
"It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.",
"8) .",
"It is part of the strongest relation in all four types except tryst (ranked #5).",
"The full relation graph presents further patterns.",
"Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.",
"But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, forest methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.",
"Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.",
"The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.",
"9 , which reveals additional detail.",
"Figure 9 : Relations between topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.",
"The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.",
"The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.",
"Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.",
"For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.",
"This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.",
"We illustrate our computational method by exploratory studies on news corpora and scientific research papers.",
"We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.",
"It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.",
"For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.",
"Our method is entirely observational.",
"It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.",
"In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.",
"Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.",
"There are many potential directions to improve our method to account for complex relations between ideas.",
"For instance, we assume that both ideas and relations are statically grounded in keywords or topics.",
"In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.",
"Similarly, new ideas show up and even the same idea may change over time and be represented by different words."
]
}
|
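The paper content above maps each document to a set of present ideas, either topics above a probability cutoff or keywords contained in the text. A minimal illustrative sketch is below; the function names are hypothetical (not the authors' code), and only the 50-topic/0.01-threshold convention is taken from the text.

```python
def topic_ideas(doc_topic_dist, threshold=0.01):
    """Topics whose probability exceeds the threshold count as present.

    The paper uses 50 LDA topics and a 0.01 cutoff; the input here is any
    per-document topic distribution given as a list of probabilities."""
    return {t for t, p in enumerate(doc_topic_dist) if p > threshold}

def keyword_ideas(tokens, keyword_set):
    """A keyword idea is present iff the document contains that keyword."""
    return keyword_set & set(tokens)
```

Either representation yields the document-as-idea-set input that the cooccurrence and prevalence statistics operate on.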
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"6"
],
"paper_header_content": [
"Introduction",
"Computational Framework",
"Cooccurrence and Prevalence Correlation",
"Datasets and Representation of Ideas",
"Characterizing the Space of Relations",
"Empirical Distribution Properties",
"Relative Strength of Extreme Pairs",
"Exploratory Studies",
"International Relations in Terrorism",
"Ethnicity Keywords in Immigration",
"Relations between Topics in ACL",
"Concluding Discussion"
]
}
|
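The three statistics of the framework described in the paper content above (Eqs. 1-3) can be sketched directly from documents represented as sets of idea labels grouped by timestep. This is a simplified stand-in for the authors' implementation: the PMI constant C is taken to be log N so the add-one-smoothed counts reduce to probabilities, and the grouping granularity (e.g., years) is an assumption.

```python
import math

def pmi(x, y, docs):
    """Empirical PMI with add-one smoothing (Eq. 1).

    `docs` is a flat list of idea-sets, pooled over all timesteps."""
    both = sum(1 for d in docs if x in d and y in d)
    nx = sum(1 for d in docs if x in d)
    ny = sum(1 for d in docs if y in d)
    n = len(docs)
    # With C = log N, the smoothed counts become smoothed probabilities.
    return math.log((1 + both) * n / ((1 + nx) * (1 + ny)))

def prevalence_correlation(x, y, docs_by_year):
    """Pearson correlation of normalized document frequencies over time (Eq. 2)."""
    px = [sum(1 for d in ds if x in d) / len(ds) for ds in docs_by_year]
    py = [sum(1 for d in ds if y in d) / len(ds) for ds in docs_by_year]
    mx, my = sum(px) / len(px), sum(py) / len(py)
    cov = sum((a - mx) * (b - my) for a, b in zip(px, py))
    vx = math.sqrt(sum((a - mx) ** 2 for a in px))
    vy = math.sqrt(sum((b - my) ** 2 for b in py))
    return cov / (vx * vy)

def strength(x, y, docs_by_year):
    """Relation strength (Eq. 3): |PMI × r̂|."""
    flat = [d for ds in docs_by_year for d in ds]
    return abs(pmi(x, y, flat) * prevalence_correlation(x, y, docs_by_year))
```

The signs of the two statistics place a pair in one of the four quadrants of Figure 1; `strength` ranks pairs within a quadrant.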
GEM-SciDuet-train-93#paper-1238#slide-2
|
Using text to trace ideas
|
Our focus is on relations between ideas.
We will use standard approaches
Topics from latent Dirichlet allocation (Blei et al.) as ideas; Hall et al. 2008
Keywords as ideas (Monroe et al.)
Culturomics, Michel et al. 2011
|
Our focus is on relations between ideas.
We will use standard approaches
Topics from latent Dirichlet allocation (Blei et al.) as ideas; Hall et al. 2008
Keywords as ideas (Monroe et al.)
Culturomics, Michel et al. 2011
|
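The slide above lists keywords (Monroe et al.) as one way to represent ideas. A sketch of the underlying weighted log-odds with a Dirichlet prior follows; note the simplification that a single uniform pseudo-count `alpha` stands in for the informative prior, so this is illustrative rather than the exact model of Monroe et al. (2008).

```python
import math

def log_odds_with_prior(target_counts, background_counts, alpha=0.01):
    """Weighted log-odds of word use in a target corpus vs. a background corpus.

    Returns z-scores; large positive values mark words that distinguish the
    target corpus (candidate keyword ideas)."""
    vocab = set(target_counts) | set(background_counts)
    n1 = sum(target_counts.values())
    n2 = sum(background_counts.values())
    a0 = alpha * len(vocab)  # total pseudo-count over the vocabulary
    z = {}
    for w in vocab:
        y1 = target_counts.get(w, 0)
        y2 = background_counts.get(w, 0)
        # Log-odds-ratio delta between target and background.
        d = (math.log((y1 + alpha) / (n1 + a0 - y1 - alpha))
             - math.log((y2 + alpha) / (n2 + a0 - y2 - alpha)))
        # Approximate variance of the delta; z is the standardized score.
        var = 1.0 / (y1 + alpha) + 1.0 / (y2 + alpha)
        z[w] = d / math.sqrt(var)
    return z
```

In the paper's setup the background for each news issue is the pool of articles on the other four issues, and the top 100 words by this score become the keyword idea set.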
[] |
GEM-SciDuet-train-93#paper-1238#slide-3
|
1238
|
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
|
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
|
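The abstract above combines the signs of cooccurrence (PMI) and prevalence correlation into four relation types, and the paper body summarizes each type by averaging the strengths of its top 25 pairs. A hypothetical sketch of both steps, assuming pairs are given as (PMI, correlation) tuples:

```python
def relation_type(pmi_val, corr_val):
    """Map a pair's (PMI, prevalence correlation) signs to a quadrant."""
    if pmi_val >= 0:
        return "friendship" if corr_val >= 0 else "tryst"
    return "arms-race" if corr_val >= 0 else "head-to-head"

def collective_strength(pairs, k=25):
    """Average strength of the k strongest pairs within each relation type.

    `pairs` is a list of (pmi, corr) tuples; strength = |pmi * corr| (Eq. 3)."""
    by_type = {}
    for p, c in pairs:
        by_type.setdefault(relation_type(p, c), []).append(abs(p * c))
    return {t: sum(sorted(s, reverse=True)[:k]) / min(k, len(s))
            for t, s in by_type.items()}
```

Comparing the resulting per-type scores reproduces orderings like "friendship > head-to-head > arms-race > tryst" reported for most corpora.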
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228
],
"paper_content_text": [
"Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.",
"Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.",
"We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.",
"By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.",
"In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.",
"What does it mean for two ideas to compete in texts, quantitatively?",
"Consider, for example, the issue of immigration.",
"There are two strongly competing narratives about the roughly 11 million people who are residing in the United States without permission.",
"One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .",
"Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.",
"One question is whether or not these two ideas cooccur in the same documents.",
"In the example above, these narratives are used by distinct groups of people with different ideologies.",
"The fact that they don't cooccur is one clue that they may be in competition with each other.",
"However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.",
"Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.",
"Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).",
"We use topics from LDA (Blei et al., 2003) to represent ideas.",
"Each topic is named with a pair of words that are most strongly associated with the topic in LDA.",
"Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.",
"The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.",
"All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.",
"We explain the formal definition of strength in §2.",
"the U.S. during the cold war.",
"To capture these possibilities, we use prevalence correlation over time.",
"Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.",
"This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.",
"1 .",
"We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.",
"Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.",
"Friendship (correlated over time, likely to cooccur).",
"The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .",
"Head-to-head (anti-correlated over time, unlikely to cooccur).",
"\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.",
"This observation agrees with a report from Pew Research Center (Guskin, 2013) .",
"Tryst (anti-correlated over time, likely to cooccur).",
"The two off-diagonal examples use topics related to law enforcement.",
"Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.",
"This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).",
"2 Arms-race (correlated over time, unlikely to cooccur).",
"One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.",
"Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.",
"For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.",
"We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS as testbeds.",
"We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .",
"To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).",
"We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.",
"As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.",
"Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).",
"We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.",
"For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.",
"This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.",
"We also show relations between topics in ACL that center around machine translation.",
"Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.",
"We provide some concluding thoughts in §6.",
"Computational Framework The aim of our computational framework is to explore relations between ideas.",
"We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.",
"Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.",
"In the following, we introduce our formal definitions and datasets.",
"∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.",
"1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.",
"Eq.",
"2 is the Pearson correlation between two ideas' prevalence over time.",
"Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.",
"Our input is a collection of documents, each represented by a set of ideas and indexed by time.",
"We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .",
".",
".",
", D T }, where D t = {d t 1 , .",
".",
".",
", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.",
"Here T is the total number of timesteps, and N t is the number of documents at timestep t. It follows that the total number of documents N = T t=1 N t .",
"In order to formally capture the two dimensions above, we employ two commonly-used statistics.",
"First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.",
"1 in Fig.",
"2 .",
"Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.",
"Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.",
"2 in Fig.",
"2 .",
"Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).",
"The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.",
"We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.",
"(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.",
"We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .",
"Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.",
"• News articles.",
"We follow the strategy in Card et al.",
"(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.",
"We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.",
"Each of these corpora contains more than 25,000 articles.",
"Please refer to the supplementary material for details.",
"• Research papers.",
"We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.",
"3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.",
"The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.",
"In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.",
"• Topics.",
"We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. In all datasets, we set the number of topics to 50.",
"4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.",
"• Keywords.",
"We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.",
"(2008) .",
"We set the number of keywords to 100 for all corpora.",
"For news articles, the background corpus for each issue is comprised of all articles from the other four issues.",
"For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.",
"Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.",
"Refer to the supplementary material for a list of example keywords in each corpus.",
"In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.",
"(2013) .",
"Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.",
"In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.",
"Characterizing the Space of Relations To provide an overview of the four relation types in Fig.",
"1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.",
"In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.",
"We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.",
"Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.",
"We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.",
"(Scott, 2015) .",
"The plots along the axes show the marginal distribution of the corresponding dimension.",
"In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .",
"In these plots, we use topics to represent ideas.",
"their joint distribution.",
"Fig.",
"3 shows three examples: two from news articles and one from research papers.",
"We will also focus our case studies on these three corpora in §4.",
"The corresponding plots for keywords have been relegated to supplementary material due to space limitations.",
"Cooccurrence tends to be unimodal but not normal.",
"In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.",
"We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.",
"Prevalence correlation exhibits diverse distributions.",
"Pairwise prevalence correlation follows different distributions in news articles compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.",
"The dip test only rejects the unimodality hypothesis in NIPS.",
"None follow normal distributions based on D'Agostino's K 2 test.",
"Cooccurrence is positively correlated with prevalence correlation.",
"In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.",
"This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.",
"Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.",
"776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.",
"These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.",
"Here we compare the relative strength of extreme pairs in each dataset.",
"We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.",
"For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.",
"3.",
"This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.",
"The results are not sensitive to the choice of 25.",
"Fig.",
"4 shows the collective strength of the four types in all of our datasets.",
"The most common ordering is: friendship > head-to-head > arms-race > tryst.",
"The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.",
"In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.",
"This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.",
"We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.",
"We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.",
"In contrast, news stories are more self-contained and seek to employ consistent usage.",
"Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.",
"Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.",
"International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .",
"As a showcase, we consider a topic which encompasses much of the U.S. government's response to terrorism: \"federal, state\".",
"5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".",
"These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.",
"Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.",
"Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.",
"In particular, islam was rarely used in coverage of terrorism in the 1980s.",
"attention with the other, likely because they share the same underlying cause.",
"We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".",
"While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.",
"Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .",
"The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.",
"The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.",
"Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).",
"Fig.",
"5a shows the relations between the \"federal, state\" topic and four international topics.",
"Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.",
"Fig.",
"5b and Fig.",
"5c represent concrete examples in Fig.",
"5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".",
"In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.",
"When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.",
"In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.",
"6) .",
"It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.",
"The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.",
"This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.",
"6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.",
"We leave it to further investigation to confirm or reject this hypothesis.",
"To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.",
"Table 1 shows the results for three pairs above.",
"If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.",
"PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.",
"Our observation starts with a top tryst relation between latino and asian.",
"Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.",
"Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.",
"Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.",
"In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.",
"Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.",
"The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.",
"In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .",
"However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.",
"Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.",
"Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.",
"It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.",
"8) .",
"7 It is part of the strongest relation in all four types except tryst (ranked #5).",
"The full relation graph presents further patterns.",
"Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.",
"But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.",
"Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.",
"The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.",
"9 , which reveals additional detail.",
"Figure 9 : Relations between topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.",
"The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.",
"The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.",
"Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.",
"For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.",
"This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.",
"We illustrate our computational method by exploratory studies on news corpora and scientific research papers.",
"We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.",
"It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.",
"For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.",
"Our method is entirely observational.",
"It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.",
"In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.",
"Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.",
"There are many potential directions to improve our method to account for complex relations between ideas.",
"For instance, we assume that both ideas and relations are statically grounded in keywords or topics.",
"In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.",
"Similarly, new ideas show up and even the same idea may change over time and be represented by different words."
]
}
|
{
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"4.1",
"4.2",
"4.3",
"6"
],
"paper_header_content": [
"Introduction",
"Computational Framework",
"Cooccurrence and Prevalence Correlation",
"Datasets and Representation of Ideas",
"Characterizing the Space of Relations",
"Empirical Distribution Properties",
"Relative Strength of Extreme Pairs",
"Exploratory Studies",
"International Relations in Terrorism",
"Ethnicity Keywords in Immigration",
"Relations between Topics in ACL",
"Concluding Discussion"
]
}
|
GEM-SciDuet-train-93#paper-1238#slide-3
|
Quantitatively describe relations between ideas
|
Given a corpus of documents over time, each document consists of a set of ideas
undocumented immigrants rivals illegal alien
Cooccurrence does not capture which is winning or losing undocumented immigrants
|
Given a corpus of documents over time, each document consists of a set of ideas
undocumented immigrants rivals illegal alien
Cooccurrence does not capture which is winning or losing undocumented immigrants
|
[] |